WO2017009689A1 - Software-defined power management for network resources - Google Patents

Software-defined power management for network resources Download PDF

Info

Publication number
WO2017009689A1
WO2017009689A1 (PCT/IB2015/055362)
Authority
WO
WIPO (PCT)
Prior art keywords
service function
virtual
energy
compute
network
Prior art date
Application number
PCT/IB2015/055362
Other languages
French (fr)
Inventor
Catalin Meirosu
Tereza Cristina Melo De Brito Carvalho
Ana Carolina RIEKSTIN
Bruno Bastos RODRIGUES
Viviane TAVARES NASCIMENTO
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Ericsson Telecomunicacoes S.A.
Priority date
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ), Ericsson Telecomunicacoes S.A. filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/IB2015/055362 priority Critical patent/WO2017009689A1/en
Publication of WO2017009689A1 publication Critical patent/WO2017009689A1/en

Classifications

    • G06F1/3203 Power management, i.e. event-based initiation of a power-saving mode
    • G06F1/3209 Monitoring remote activity, e.g. over telephone lines or network connections
    • G06F1/3234 Power saving characterised by the action undertaken
    • G06F1/3278 Power saving in modem or I/O interface
    • H04L41/0833 Configuration setting characterised by the purposes of a change of settings for reduction of network energy consumption
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present invention generally relates to network energy management.
  • each data center is in itself a large amalgam of physical devices, e.g., computing devices, servers, routers, etc. and virtual devices, e.g., virtual machine, virtual router, virtual switch, etc.
  • Each of these devices consumes real electric power, which given the large size of the data center, is becoming a considerable expense associated with the data center.
  • the available studies indicate numbers that vary between 4% and 33% of the total data center's consumption. Some studies indicate that these numbers could go up to 50% of the total power if the utilization is low (about 15%) and many of the servers are shut down.
  • Energy savings in the network part, complementing those achieved in the computing part (e.g., servers) of the computing environment are thus highly desirable as an operation expense reduction, in addition to providing increased green credentials.
  • Figure 1 shows a compute environment 100 that includes two data centers 102 and 104 connected to each other through various links 108 and 110.
  • the compute environment includes infrastructure (i.e., hardware) but also software associated with the infrastructure.
  • a mechanism for coordinating the data transmitted between the various parts of the compute environment 100 may be located in the cloud 106.
  • Each link 108 and 110 may include one or more wire lines and/or one or more wireless channels.
  • the compute environment 100 may be structured as a compute part and a network part.
  • the compute part may include compute resources, e.g., servers 120A-E, storage devices 122A-C, processors, one or more virtual machines, etc.
  • the network part may include network resources, e.g., physical and/or virtual routers 130A-B, real or virtual ports 132A-B, real and/or virtual switches, mapping tables, etc.
  • the network part includes any physical or virtual resource that supports networking functions.
  • NFV: Network Function Virtualization
  • middle boxes, e.g., a physical node having one or more functionalities such as a firewall, network address translation, packet inspection, etc.
  • VNF: Virtual Network Function
  • GAL: Green Abstraction Layer
  • ETSI ES 203 237 v1.1.1, European Telecommunications Standards Institute
  • GAL defines a set of abstract data objects (known as energy- aware states) that describe the power management settings of energy management capabilities available on network nodes and at the network level itself.
  • GAL descriptions include the performance level achievable for each of the energy state configurations available and the time taken by each capability to be applied.
  • GAL uses a representation of how energy management elements are interconnected with each other within a network node.
  • GAL also defines a management interface between node-level control of energy management features, and a hierarchy of network-level control planes that may monitor and set higher-level policies.
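As a rough illustration (not the normative data model from ETSI ES 203 237), an energy-aware state can be represented as a record that carries the achievable performance level and the time each configuration takes to apply; all names and values below are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EnergyAwareState:
    """Illustrative stand-in for a GAL energy-aware state (not the ETSI schema)."""
    name: str
    power_watts: float        # power drawn in this state
    performance_level: float  # fraction of full performance achievable (0..1)
    transition_secs: float    # time taken to apply this configuration

def best_state(states, min_performance):
    """Pick the lowest-power state that still meets a performance floor."""
    eligible = [s for s in states if s.performance_level >= min_performance]
    return min(eligible, key=lambda s: s.power_watts)

# Hypothetical states advertised by one network node.
states = [
    EnergyAwareState("full", 300.0, 1.0, 0.0),
    EnergyAwareState("low-power", 180.0, 0.6, 0.5),
    EnergyAwareState("sleep", 20.0, 0.0, 2.0),
]
chosen = best_state(states, min_performance=0.5)
```

A network-level controller could run such a selection per node, subject to the higher-level policies the GAL management interface exposes.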
  • a GAL-enabled prototype that uses pre-standard definitions to achieve network energy savings is presented in Bolla et al., "A northbound interface for power management in next generation network devices," IEEE Communications Magazine, vol. 52, no. 1, pp. 149-157, January 2014.
  • the VirtualPower document (R. Nathuji and K. Schwan, "VirtualPower: coordinated power management in virtualized enterprise systems," SIGOPS Oper. Syst. Rev. 41, 8 (October 2007), 265-278) defines a set of soft states that virtual machines communicate to a hypervisor to indicate their desires for processor power saving, as well as a method for the hypervisor to combine soft states before applying them to the actual hardware CPU. Based on the VirtualPower document, VPM tokens are defined (see R. Nathuji and K. Schwan, "VPM tokens: virtual machine-aware power budgeting in datacenters").
  • VPM tokens are a token-based scheme for allocating power only to the compute part of a data center.
  • the power tokens act as a virtual currency system, allowing for virtual power budgeting within the compute resources of the data center.
  • Application of the VPM tokens shows that power savings in excess of 50% are possible when using an application-aware power budgeting strategy for compute resources.
  • PTEA: Per-task Energy Accounting
  • PTEM: Per-task Energy Metering
  • the concept of PTEA is based on two pillars: (1) the PTEM, which is based on models for each hardware component to derive the actual energy consumed by a given task, and (2) an energy accounting component, which modulates the PTEM results to derive the energy consumed by a task with a share of the hardware.
  • the PTEA mechanism calculates the usage of a hardware component by a given task at a point in time, e.g., cores in a multicore CPU, caches, etc.
  • a PowerAssure Inc. whitepaper termed Software-Defined Power allows administrators to automatically move workloads within and between data centers in order to optimize power consumption. More specifically, the software-defined power monitors and predicts application activity, dynamically adding and removing servers from the available pool as necessary. Unneeded servers are powered down, resulting in power savings, but they can be rapidly and automatically returned to service when required. Sophisticated algorithms ensure that adequate headroom is always available to ensure service levels are met, even in rapidly changing environments. This solution takes into account real-time and day-ahead pricing, Demand-Response events, power quality and reliability forecasts. The solution relies on PAR4 metrics (see PowerAssure Inc.
  • VPM tokens and VirtuaiPower solutions discussed above are designed for the compute part of the compute environment, e.g., managing a power budget with respect to a number of virtual machines accommodated on a set of physical servers. These solutions do not take into account the power consumption in the network part of the compute environment, and do not attempt to manage the energy saving features that might be available in the network part.
  • Per-task Energy Accounting and Metering provide a fine-grained view of the CPU energy usage, enabling a third party to accurately estimate the processor setup that leads to the lowest energy consumption. By itself, this technique does not manage the energy consumption, but only provides useful input to software that implements the energy management.
  • the GAL document enables an operator to control, using abstract parameters, the energy performance levels modeled for physical network nodes, and then, using additional intelligence not included in the standard, to manage the lower-level energy saving primitives.
  • GAL-enabled solutions have no application awareness (i.e., these solutions are blind to the Quality-of-Service (QoS) needs of applications).
  • the PowerAssure solution is centered on the compute part of the infrastructure (servers). PAR4 metrics are not defined for physical network nodes, and it is unclear to what extent they might be relevant when the CPU utilization is due to VNFs rather than standardized data center benchmark applications. VNFs are expected to have different usage patterns due, for example, to accelerator hardware and libraries such as Intel DPDK.
  • the PowerAssure solution does not appear to pro- actively distribute a network power budget between the applications that use VMs and network paths within the data center. In addition, there is no notion of managing energy consumed by network nodes outside the data center and used for inter-data center communications.
  • a method for power management in a compute environment having a compute part and a network part.
  • the method includes defining various energy policies (gp) for the network part, the energy policies having assigned a different amount of virtual currency; assigning, in a virtual power controller, based on an energy policy selected by a user, a service function required by the user, and network resources to be consumed by the service function, a corresponding amount of virtual currency; configuring a set of energy saving capabilities to be deployed on the network resources based on the energy policy selected by the user and the corresponding amount of virtual currency;
  • gp: energy policies
  • monitoring power consumption of the service function with a policy enforcement utility that is aware of the selected energy policy; and adjusting an amount of actual electric power used by network resources that support the service function depending on a result of the monitoring step.
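The claimed steps, assigning virtual currency per policy, configuring energy saving capabilities, then monitoring and adjusting, can be condensed into a short sketch; every class name, method, and number below is a hypothetical illustration, not an implementation from the patent:

```python
from dataclasses import dataclass

@dataclass
class EnergyPolicy:
    name: str            # e.g. "green", "standard"
    currency: int        # virtual-currency amount assigned by the operator
    capabilities: tuple  # energy-saving capabilities tied to this policy

class VirtualPowerController:
    """Assigns virtual currency and the capabilities to deploy (steps 1-3)."""

    def __init__(self, policies):
        self.policies = {p.name: p for p in policies}

    def assign_currency(self, policy_name, service_function, network_resources):
        # Grant currency proportional to the resources the service consumes
        # (a made-up rule; the patent leaves the exact formula open).
        policy = self.policies[policy_name]
        budget = policy.currency * len(network_resources)
        return budget, policy.capabilities

class PolicyEnforcementUtility:
    """Monitors consumption against the granted budget (steps 4-5)."""

    def __init__(self, policy_name, budget):
        self.policy_name = policy_name
        self.budget = budget
        self.consumed = 0.0

    def monitor(self, measured_power):
        self.consumed += measured_power
        return self.consumed <= self.budget

    def adjust(self):
        # Reduce the actual electric power used by the supporting resources,
        # e.g. by throttling traffic or deferring scheduling.
        return "throttle"

# Usage: grant a budget for a service function, then enforce it.
controller = VirtualPowerController([EnergyPolicy("green", 10, ("sleep-state",))])
budget, caps = controller.assign_currency("green", "vnf-firewall",
                                          ["router-1", "router-2"])
pep = PolicyEnforcementUtility("green", budget)
if not pep.monitor(measured_power=25.0):  # over budget
    action = pep.adjust()
```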
  • a compute resource that includes a storage device configured to store various energy policies (gp) for a network part of a compute environment, the energy policies having assigned a different amount of virtual currency.
  • the compute resource further includes a virtual power application configured to assign, based on an energy policy selected by a user, a service function required by the user, and network resources to be consumed by the service function, a corresponding amount of virtual currency.
  • the virtual power application is also configured to define a set of energy saving capabilities to be deployed on the network resources based on the energy policy selected by the user and the corresponding amount of virtual currency.
  • the compute resource further includes a policy enforcement utility.
  • the policy enforcement is configured to monitor power consumption of the service function, wherein the policy enforcement utility is aware of the selected energy policy, and adjust an amount of actual electric power used by network resources that support the service function depending on a result of the monitoring step.
  • Figure 1 is a schematic diagram of a compute environment having a compute part and a network part
  • Figure 2 is a schematic diagram of a compute environment in which a virtual power application is installed
  • Figure 3 is a signal diagram of commands exchanged by various entities of the compute environment for instantiating a service function and associated virtual power budget
  • Figure 4 is a schematic diagram of various modules that implement various entities of the compute environment
  • Figure 5 is a schematic diagram illustrating power token operations for renewal and usage policies
  • Figure 6 is a flowchart illustrating the process of calculating tokens associated with a power budget
  • Figure 7 is a flowchart illustrating a process of calculating power requirements and energy saving capabilities
  • Figure 8 is a schematic diagram of various processes that take place in the policy enforcement unit
  • Figure 9 is a flowchart of a method for managing VNF requests
  • Figure 10 is a flowchart of a method of power management in a compute environment; and
  • Figure 11 is a schematic illustration of a computing device that can implement one or more of the methods discussed herein.
  • An architecture of a compute environment that supports NFV includes at least three components.
  • the first component is the VNFs, which are software implementations of network functions that can be deployed on a Network Function Virtualization Infrastructure (NFVI).
  • the second component is the NFVI, which includes the totality of hardware and software components which build up the environment in which VNFs are deployed.
  • the NFVI can span across several locations, e.g., at different data centers.
  • the network part providing connectivity between these locations is regarded to be part of the NFVI.
  • the third component is the Network Functions Virtualization Management and Orchestration Architectural Framework (NFV-MANO Architectural Framework), which is the collection of all functional blocks, data repositories used by the functional blocks, and reference points and interfaces through which these functional blocks exchange information for the purpose of managing and orchestrating NFVI and VNFs.
  • a method is proposed herein that estimates a network power budget taking into account energy management capabilities of the nodes supporting the network and then automatically distributes this budget among service functions (e.g., applications and/or VNFs) according to policies defined by the operator of the compute environment.
  • This method may include a policy enforcement unit that maintains and/or enforces the allocated power budget, as well as the signaling associated with it.
  • the method may use an interface between infrastructure controllers within the data center and within the WAN, thus extending the power budget between multiple data centers for a much more realistic evaluation of the energy consumed.
  • the method may exhibit one or more of the following advantages:
  • this method can be combined with solutions such as the VPM tokens to provide additional power savings coming from the physical network part, reducing the energy costs of the operators.
  • Such a combined solution has an additional advantage, namely, it coordinates power savings between the compute and network parts and reduces the chances of conflicts, for example, network paths that connect a set of servers being transitioned to a sleep state at the same time as compute resources need to be woken up in order to handle a surge in compute demand;
  • a compute environment 200 includes a first data center 202 and a second data center 204.
  • the two data centers are connected to the cloud 206, where various hardware and software are located, as discussed later.
  • This simplified structure of the compute environment omits many parts to make the method easier to understand.
  • Data center 202 includes plural servers 210A-B (only two are shown in the figure for simplicity) that are connected to a physical router 212 for communication with the rest of the compute environment.
  • Data center 202 may also include a cloud controller 214 (sometimes called cloud orchestrator) that manages compute and network requirements in the data center.
  • a cloud controller as discussed later, may be implemented as software instructions in a processor.
  • a network controller 216 (which also may be implemented as software instructions in a processor) is responsible for allocating network resources in the data center. In one embodiment, the network controller 216 is also responsible for allocating energy efficiency functions (to be discussed later).
  • the data center 202 may also host a virtual power application 220 that acts as policy decision point (PDP) for the virtual software-defined power, as discussed later with regard to Figure 3.
  • the virtual power application may be implemented as software instructions in a processor.
  • the virtual power application effectively manages features that control the actual electric power used at least by the network resources.
  • the term "virtual" qualifying the "power application" indicates that service functions that are themselves called virtual, such as VNFs, are also managed.
  • the term "virtual" in this context does not imply something imaginary or abstract, but rather is an industry-accepted name for functions that run in a guest machine. In hardware virtualization, the host machine is the actual machine on which the virtualization takes place, and the guest machine is the virtual machine.
  • the terms “host” and “guest” are used to distinguish the software that runs on the physical machine from the software that runs on the virtual machine.
  • the virtual power application may be deployed in the cloud 206, or it may be distributed among various data centers and/or the cloud.
  • the virtual power application 220 is configured to also act as a policy decision maker in other data centers.
  • a VNF manager 222 may also be deployed in data center 202.
  • the VNF manager may be implemented as software instructions in a processor.
  • Local policy enforcement point (PEP) utility 230 may be implemented at each server 210 for enforcing the energy policy selected by a user. In one application, a PEP utility 232 may be implemented at each virtual machine 234 hosted by the server.
  • Other PEP utilities 238 may be implemented in the cloud 206 for enforcing the energy policy at a global level.
  • cloud 206 includes at least routers 240 and a network controller 242 that communicate with the network controllers from the data centers.
  • a PEP utility may include software that is run on a dedicated virtual machine or a physical machine.
  • the virtual power application 220 is configured to act as a source of virtual currency for various energy policies.
  • the energy policies may be stored in a local database and may be associated with each user.
  • Each energy policy has an assigned amount of currency.
  • the amount of currency may be defined by the compute environment's operator. For example, a user may select an energy policy that includes many energy-efficient features while another user may select an energy policy that includes no energy-efficient features. Depending on the number of energy-efficient features, different energy policies are associated with or include a different amount of the virtual currency.
  • the energy efficient features are configured after the user has selected an energy policy, as discussed later.
  • the virtual currency is represented by power tokens, a structure which is also discussed later.
  • FIG. 3 shows, at the top, the various "players" involved in this process. These players can be physical and/or virtual entities as now discussed.
  • a user 301 may be an enterprise, a government agency, an organization or a person.
  • User 301 sends in step 1 a request for service (i.e., service function) to a VNF/App Manager 322.
  • the VNF/App Manager 322 may be element 222 in Figure 2.
  • the service function may be an application or VNF
  • the manager 322 may be a VNF manager, an App manager, or both. For simplicity, in the following, this manager is called a VNF/App manager with the understanding that it can be either of them or both of them.
  • the request for service in step 1 may include an energy policy that the user has selected from the data center's operator in advance. This means that in step 0, the operator of the data center has defined various energy policies to be available to the users.
  • the energy policy may be one that includes or does not include energy-efficient functions. Because it is likely that, under the current trend of conserving energy, the user will try to use energy-efficient functions, the energy policy is called a green policy (gp). However, the operator may also offer energy policies that are less "green" or that include no energy-efficient functions.
  • One purpose of the currency used in this method is to manage the energy efficient functions.
  • the request from the user may also include requirements for compute and network resources and VNF type. In this regard, the VNF type may be determined by the service function.
  • when the user asks for a particular service function, there may be a one-to-one relation to the VNF type.
  • the provider has several VNF types that could perform that service function.
  • an automatic choice could be made by the provider, for example, taking into account the amount of resources requested for the service function and the type of green plan.
  • the provider could present a list of alternative VNF types to the user, which will then make a choice before submitting the final version of the request to the provider.
  • the VNF/App manager needs to figure out the implications of the user's request for the compute environment, i.e., which compute and network resources are needed to support the service function and the associated green policy required by the user.
  • the VNF/App manager requests in step 2, from the cloud controller 314 (corresponding to element 214 in Figure 2), details about the resources that will be needed in the data center, in the network, and outside the data center for supporting the user's request.
  • the cloud controller 314, which has access to all this information, allocates in step 3 compute resources and configures energy-efficient capabilities, based on the green plan which the user has selected.
  • the cloud controller 314 may not be aware of the network resources necessary to support the user's network requirements.
  • the cloud controller 314 asks in step 4 network controller 316 (element 216 in Figure 2) to estimate network resources needed by the service function for the given green plan.
  • the green plan features need to be known by the network controller for estimating the network resources.
  • the following simple example is provided for a better understanding of the invention. This example is not intended to limit the invention.
  • the network controller, when determining the network resources for supporting the service function, can allocate to this specific user network paths that tolerate delay, resulting in lower cost in running the network.
  • this green plan is likely to be cheaper than another time-sensitive green plan which is selected for transmitting movies from the cloud to the user.
  • the network controller has to allocate resources that are more expensive to run in order not to pause the movie while the user is watching.
  • the various entities coordinating the power budgets are application-aware (e.g., they know whether the application is time-sensitive or not), and thus the compute environment better serves the users.
  • Network controller 316 allocates in step 5 the network resources (e.g., virtual and/or physical routers, switches, etc.) based on the energy efficient functions found in the green plan and then sends this information to the cloud controller.
  • the cloud controller sends in step 6 to the VNF/App manager 322 a list of the compute and network resources that will support the user's request. With this information, the VNF/App manager 322 sends in step 7 a request for tokens to the virtual power application 320 (element 220 in Figure 2).
  • the VNF/App manager may also send in step 7 potential known conflicts between the compute and network resources of the user's request and already deployed compute and network resources.
  • the virtual power application determines the number of tokens to be granted to the user and sends this information in step 8 back to the VNF/App manager 322.
  • the number of granted tokens is calculated by taking into account the overall power request, as well as the energy efficient capabilities that the VNF advertises (such as which P-states it might be able to support on the compute resources, and PAR4 requirements on the physical network resources).
  • the tokens may be divided between compute and network virtual power tokens.
  • the granted tokens may include renewal and spend policies, as will be discussed later. For example, token buckets could be defined in such a way that they refill an empty or partly empty bucket automatically after a given time interval; how fast tokens are allowed to be consumed from the bucket could be defined in terms of committed and peak consumption rates, etc.
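A token bucket with automatic refill and a committed spend-rate cap, along the lines just described, might be sketched as follows; the class name and the rate model are assumptions for illustration, not the patented token format:

```python
class PowerTokenBucket:
    """Refills to capacity every `refill_interval` seconds; spending is
    capped by a committed rate (tokens per second of elapsed time)."""

    def __init__(self, capacity, refill_interval, committed_rate):
        self.capacity = capacity
        self.refill_interval = refill_interval
        self.committed_rate = committed_rate
        self.tokens = capacity
        self.last_refill = 0.0

    def _maybe_refill(self, now):
        # Refill an empty or partly empty bucket after the interval elapses.
        if now - self.last_refill >= self.refill_interval:
            self.tokens = self.capacity
            self.last_refill = now

    def spend(self, amount, now, elapsed):
        """Try to spend `amount` tokens; `elapsed` is the time since the
        last spend, which limits how fast tokens may be consumed."""
        self._maybe_refill(now)
        allowed = min(amount, self.committed_rate * elapsed, self.tokens)
        self.tokens -= allowed
        return allowed

# Usage: the committed rate, not the balance, limits a burst request.
bucket = PowerTokenBucket(capacity=100, refill_interval=60.0, committed_rate=5.0)
granted = bucket.spend(30, now=1.0, elapsed=2.0)  # capped at 5 * 2 = 10 tokens
```

A peak rate could be added as a second, higher cap applied over shorter intervals, mirroring committed/peak rate pairs in traffic policing.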
  • VNF/App manager 322 instantiates the tokens to be associated with the user's request, and sends in step 9 this information to the VNF/App control 340.
  • VNF/App control 340 is responsible for instantiating in step 10 the service function 342, i.e., either an application or a VNF instance.
  • step 10 may include instantiating only the application, only the VNF or both the application and the VNF.
  • more than one application or more than one VNF may be instantiated at this step. It is likely that more than one VNF is instantiated in the case where the service function is actually composed of several sub-functions, and each of these sub-functions would be implemented through a VNF.
  • Such decomposition is expected to be performed by the provider (either
  • the respective service function 342 acknowledges in step 10.1 to the VNF/App control that the function has been instantiated.
  • in step 11, when the VNF/App instances 342 start executing, they report to the VNF/App control 340, which relays their identifiers to the VNF/App manager 322. Now the VNF/App manager can install the tokens in the PEPs, which is discussed next. For example, the network PEP needs to know which flows' traffic should be counted towards the virtual power token, so it needs a way of identifying the flows.
  • Known OpenFlow, IETF I2RS, or IPFIX flow identifiers could be used, depending on the PEP implementation; for the compute PEPs, process IDs could be used to identify the instance; the format depends on the operating system used for the implementation (it is well documented in the literature).
  • the VNF/App control informs the VNF/App manager 322 that the service function has been instantiated and also provides service function identifiers so that the VNF/App manager 322 can track the plural service functions associated with the many users that usually use the data centers.
  • the identifier can be a simple code, similar to an ID, which is unique to each user. More sophisticated identifiers may be used to also provide information about the green policy, the size of the user, the equipment used by the user, etc.
  • the VNF/App manager 322, in anticipation of the new phase of enforcing the green policy (i.e., monitoring and adjusting accordingly the tokens assigned to the user based on the user's consumption), sends in step 12 the allocated amount of tokens to PEP utility 332 (element 232 in Figure 2).
  • the VNF/App manager 322 may also configure a set of energy saving capabilities associated with the green policy selected by the user. This configuration step may alternatively take place in step 3 or step 9 or it may be distributed over these steps. Thus, the configuration step may take place in the VNF/App control 340 or PEP 332 or cloud controller 314.
  • the PEP utility may reside at multiple points in the same server and it may be configured to monitor each virtual machine in the server and the users associated with that virtual machine.
  • PEP utility 332 acknowledges in step 12.1 that it is ready to monitor the user's service function in the data centers.
  • PEP utility 332, as discussed later, has access to the user's green policy, the user's requirements, and the compute and network resources assigned to service these requirements.
  • a PEP utility is located on each network-part forwarding node, whether physical or virtual, along the paths that the user may use.
  • VNF/App Manager 322 informs in step 13 VNF/App control 340 that both the instantiation and the monitoring infrastructure are in place, which results in the VNF/App manager 322 informing in step 14 the user that its requirements were met and are ready to be used.
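The instantiation signaling above (steps 1 through 14) can be condensed into one hypothetical flow; the stub controllers, method names, and return values below are invented for illustration and stand in for the entities of the signal diagram:

```python
class _Stub:
    """Minimal stand-ins so the flow can run end to end (all hypothetical)."""
    def allocate_compute(self, req): return {"servers": 2}
    def allocate_network(self, req): return {"paths": 1}
    def grant_tokens(self, req, compute, network):
        return {"compute": 50, "network": 30}
    def instantiate(self, req, tokens): return {"id": "vnf-1", "tokens": tokens}
    def install(self, tokens, instance): self.installed = tokens

def instantiate_with_budget(request, cloud_ctrl, net_ctrl, vpower, vnf_ctrl, pep):
    compute = cloud_ctrl.allocate_compute(request)            # step 3
    network = net_ctrl.allocate_network(request)              # steps 4-5
    tokens = vpower.grant_tokens(request, compute, network)   # steps 7-8
    instance = vnf_ctrl.instantiate(request, tokens)          # steps 9-11
    pep.install(tokens, instance)                             # step 12
    return instance                                           # steps 13-14: notify user

s = _Stub()
instance = instantiate_with_budget({"gp": "green"}, s, s, s, s, s)
```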
  • the method advances to the enforcement part, which is discussed still with regard to Figure 3.
  • the PEP utility measures energy usage of the user's activities supported by the data centers. For example, the PEP utility calculates the amount of real power consumed by the compute and/or network resources supporting a particular VNF or application requested by the user. In a server, this function could be achieved by counting the number of real CPU cycles used and at which frequency they were consumed. In a physical network node, this function can be evaluated based on a set of node power profiles (known in the art or measured prior to deploying this utility) that codify the dependency between the number of packets forwarded for the VNF or application, the type of processing that needed to be applied and the energy management functionality that might have been active or inactive during the interval.
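The two measurement approaches described above, counting CPU cycles on a server and consulting node power profiles on a network node, can be sketched as follows; all coefficients are made-up illustrative values, not measured profiles:

```python
def server_energy_joules(cycles, freq_hz, joules_per_cycle_at_freq):
    """Energy from counted CPU cycles consumed at a given frequency,
    using a per-frequency energy cost (hypothetical calibration table)."""
    return cycles * joules_per_cycle_at_freq[freq_hz]

def node_energy_joules(packets, processing_type, power_profile):
    """Energy from a node power profile: per-packet cost depends on the
    type of processing applied (and, in a fuller model, on whether energy
    management functionality was active during the interval)."""
    return packets * power_profile[processing_type]

# Hypothetical calibration data.
cycle_cost = {2_000_000_000: 8e-9, 1_000_000_000: 5e-9}  # J/cycle at 2 GHz, 1 GHz
profile = {"forwarding": 2e-6, "deep-inspection": 9e-6}   # J/packet

e_server = server_energy_joules(3_000_000, 1_000_000_000, cycle_cost)
e_node = node_energy_joules(10_000, "forwarding", profile)
```

The PEP would compare such per-interval estimates against the token budget installed for the VNF or application.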
  • the PEP utility determines in step 18 whether the used power is within the power budget specified in the tokens. If the result of this determination is that the power consumption is within the power budget, no action is taken. However, if the result is that the power consumption is outside the allocated power budget, e.g., has exceeded the allowed power, the PEP utility takes action in step 7 and also reports the action to the VNF/App manager. The action or actions taken during this step are now discussed.
  • the action may depend on the type of PEP utility. If the PEP utility is configured to handle compute resources, then it may take the following actions. If the granted tokens were consumed by the user, the VNF or application instance may not be scheduled to execute until new tokens become available, for example, through an automatic renewal mechanism to be discussed later. Alternatively, the VNF or application may be allowed to "borrow" from future utilization in case that resources are available. Another alternative may be to allow borrowing only at a pre-defined spend rate, which is a fraction of the spend rate specified in the token.
  • if the PEP utility is configured to handle network resources, it may take the following actions. If the granted tokens were consumed, the packets to be transmitted over the network part may be buffered for short intervals (meaning that they will still consume some power while waiting, but less than when being transmitted through the network). Alternatively, a functionality such as Weighted Random Early Discard could be activated on the network flows associated with the application or VNF in order to reduce the traffic. Those skilled in the art would imagine other actions for slowing down traffic.
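The network-side actions above (short-interval buffering and Weighted Random Early Discard) can be sketched as follows; all thresholds and names are illustrative assumptions, not part of the disclosed embodiments.

```python
def wred_drop_probability(queue_len, min_th=5, max_th=15, max_p=0.2):
    """Weighted Random Early Discard sketch: the drop probability grows
    linearly between the minimum and maximum queue thresholds."""
    if queue_len < min_th:
        return 0.0
    if queue_len >= max_th:
        return 1.0
    return max_p * (queue_len - min_th) / (max_th - min_th)

def network_enforcement_action(tokens_left, queue_len):
    """Choose among the actions discussed above: transmit normally while the
    budget lasts, otherwise buffer briefly or activate WRED-like discard."""
    if tokens_left > 0:
        return "transmit"
    return "buffer" if queue_len < 5 else "wred"
```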
  • the PEP utility handles both the compute and the network resources. Note that novel embodiments are discussed sometimes with regard to the network part. However, the novel concepts discussed herein may be adapted to apply to the compute part or to both the compute and network parts.
  • the action or actions discussed above may be implemented in various ways.
  • the action is implemented in an autonomous manner by the PEP or VNF/App control or other entity based on the configured energy saving capabilities.
  • an Energy Efficient Ethernet capability would be implemented in an autonomous manner by the PEP or VNF/App control or other entity based on the configured energy saving capabilities.
  • the two timers may be implemented anywhere in the compute environment and they are used for determining the sleep and awake modes.
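A toy model of the two timers, assuming Energy Efficient Ethernet style sleep and wake behavior; the state names and timer values are hypothetical illustrations.

```python
def eee_next_state(state, idle_us, sleep_timer_us=100):
    """Sketch of the two-timer behavior: an awake link transitions to sleep
    once it has been idle longer than the sleep timer; a sleeping link starts
    waking as soon as traffic arrives (idle time drops to zero), returning to
    awake after the separate wake timer elapses."""
    if state == "awake" and idle_us >= sleep_timer_us:
        return "sleep"
    if state == "sleep" and idle_us == 0:
        return "waking"
    return state
```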
  • the PEP utility could, for example, decide to force a link (e.g., network connection) to transition into a sleep mode if the power budget was exceeded for a certain amount of time.
  • the PEP utility could decide to force a CPU transition to a slower clock value, if that CPU executes a service function that went over the power budget.
  • Other energy efficient capabilities are discussed with regard to Figure 7.
  • the PEP utility 332 repeats steps 15-17 for constantly or continuously monitoring the power consumption in the corresponding compute and/or network resources.
  • the various entities discussed above, e.g., cloud controller, network control, VNF/App control, PEPs, VNF/App manager and virtual power application, may be implemented in hardware, virtual machines, software or a combination of them.
  • each of these entities may be implemented in its own controller.
  • the controller may be a physical one that has at least a CPU, memory, a bus connecting the CPU to the memory, and an input/output interface for communicating with the CPU. In another example, the controller is a guest virtual machine having a similar structure as the physical controller.
  • the computing device 400 may include a processor 402 that runs the cloud controller 314, network controller 316, VNF/App control 340, VNF/App manager 322, and the virtual power application 320. These entities can communicate with each other through logic 404 and they can also be stored in memory 406.
  • Memory 406 may also be configured to store various energy policies (gp) for the network part, the energy policies having been assigned a different amount of virtual currency.
  • a virtual machine 234 may be run on the server and it may host the PEP utility 332 and other elements, for example, a virtual router 408.
  • virtual power application 320 is not an abstract application; rather, it is a real application that manages the virtual currency (e.g., tokens) with the purpose of monitoring and handling the real power used by the underlying infrastructure.
  • FIG. 5 illustrates the power token operation for renewal and usage policies.
  • the triggering event is the expiration time of a token or tokens.
  • the token policies module 504 may be stored anywhere in the data center. In one application, the token policies module 504 is stored in the server where the PEP utility is stored.
  • the token renew policies utility 506 requests the renew policies of the user associated with the event 502, from the token database manager 508.
  • the token database manager 508 interacts with a plan utility 510, a tokens utility 512 and a token policy utility 514 for retrieving the necessary information.
  • the plan utility 510 is responsible for storing all the plans that can be selected by the users. Among these plans, there is the plan (green policy) selected by the user associated with event 502.
  • the tokens utility 512 stores the various amounts of tokens associated with each plan and token policy utility 514 stores the rules associated with each plan, tokens and user. For example, for a given plan and user, automatic renewal of tokens is allowed while for some other users, this is not permitted.
  • the token database manager 508 interacts with the plan utility 510, tokens utility 512 and token policy utility 514 for collecting corresponding data associated with the user. This information is relayed back to the token renew policies utility 506, which decides in step 520 whether to automatically renew the tokens. If the answer is yes, the automatic renewal takes place in step 522, and the PEP utility is informed about this result. If the answer is no, a request is forwarded to the user in step 524 to inform that new steps are needed for renewing the green policy. Another event 530 may arrive at the token policies module 504, while the amount of tokens is still available.
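The renewal decision of steps 520-524 can be sketched as follows; the plan field names are illustrative assumptions, not part of the disclosed embodiments.

```python
def handle_token_expiration(plan):
    """Sketch of the renewal flow: on a token-expiration event, either refill
    the budget automatically (step 522) or ask the user to renew the green
    policy manually (step 524)."""
    if plan.get("auto_renew", False):
        plan["network_tokens"] = plan["renew_amount"]  # step 522: refill
        plan["compute_tokens"] = plan["renew_amount"]
        return "renewed"               # PEP utility is informed of the refill
    return "user_action_required"      # step 524: user must renew the policy
```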
  • event 530 arrives at token usage policies utility 532, which decides, based on the information contained in the event and also on the information retrieved from the token database manager 508 (e.g., type of green policy, amount of tokens associated with the plan, etc.), whether to reduce usage in step 534 or to increase usage in step 536. Irrespective of the output of this step, the result is communicated to the PEP utility and other parties in the compute environment.
  • a token is the unit used to access resources in the compute environment.
  • Table 1 presents how the token is represented in a database, e.g., tokens utility 512 in Figure 5.
  • the table has: (1) a field for an identifier of the user's green plan associating the tokens to the user and a set of resources, (2) a field for the number representing the amount of network tokens, (3) a field for the number representing the amount of compute tokens, (4) a field holding the usage conditions, and (5) a field for the renewal conditions.
  • Those skilled in the art would know that the token's structure may be modified without affecting the embodiments discussed herein.
  • the number associated with the network and compute tokens may be equivalent to a number of Watts defined by the compute environment's operator. This relation between tokens unit and Watts can be adjusted to simplify the management of power in the compute environment. For instance, power profiles can be adjusted to consider one token to be equivalent to one Watt.
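The five token fields of Table 1, together with the operator-adjustable token-to-Watt relation described above, can be sketched as follows; all field and variable names are assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class TokenRecord:
    """One row of the token table described above (field names illustrative)."""
    green_plan_id: str        # (1) ties the tokens to a user and its resources
    network_tokens: int       # (2) amount of network tokens
    compute_tokens: int       # (3) amount of compute tokens
    usage_conditions: dict = field(default_factory=dict)    # (4) usage rules
    renewal_conditions: dict = field(default_factory=dict)  # (5) renewal rules

# Simplest operator-chosen ratio: one token is equivalent to one Watt.
TOKENS_PER_WATT = 1

def tokens_to_watts(tokens):
    return tokens / TOKENS_PER_WATT
```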
  • One example of usage policies, expressed in machine-readable code, is as follows:
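The machine-readable listing announced above is not reproduced in this text; the following is a purely illustrative reconstruction of what such a usage policy could look like, with all field names and values assumed.

```python
# Hypothetical usage policy: rules the PEP utility evaluates when an event
# (e.g., element 530 in Figure 5) arrives while tokens are still available.
USAGE_POLICY = {
    "plan_id": "green-plan-A",
    "rules": [
        {"condition": "office_hours", "action": "increase_usage",
         "limit_tokens": 200},
        {"condition": "budget_exhausted", "action": "reduce_usage",
         "spend_rate": 0.25},
    ],
}

def applicable_actions(policy, condition):
    """Return the actions a PEP should apply for the given condition."""
    return [r["action"] for r in policy["rules"] if r["condition"] == condition]
```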
  • Renewal policies (stored in element 514 and handled by utility 508 in Figure 5) describe conditions to automatically refill the token budget.
  • An example of renewal policies, expressed in machine-readable code, is as follows:
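As with the usage policies, the original listing is not reproduced here; the following is an illustrative reconstruction of a renewal policy, with all field names and values assumed.

```python
# Hypothetical renewal policy: conditions under which the token budget is
# automatically refilled, as described above.
RENEWAL_POLICY = {
    "plan_id": "green-plan-A",
    "auto_renew": True,
    "renew_interval_hours": 24,
    "renew_amount": {"network_tokens": 100, "compute_tokens": 150},
    "max_borrow_fraction": 0.1,  # borrowing from the future budget, if allowed
}

def should_auto_renew(policy, hours_since_last_renewal):
    """Decide whether the budget may be refilled without user interaction."""
    return (policy["auto_renew"]
            and hours_since_last_renewal >= policy["renew_interval_hours"])
```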
  • the virtual power application 320 in Figure 3 is responsible for calculating and providing tokens. This happens after step 7 in Figure 3.
  • the virtual power application 320 calculates the tokens based on (1) previous patterns of usage, (2) availability of actual power in the compute environment, and (3) energy efficiency features that might be associated with the user's selected plan. In one application, the virtual power application 320 does the above calculations in response to a first instantiation of a service function or upon receiving an event informing that a token ran out and had no automatic renewal policies.
  • FIG. 6 is a flowchart illustrating the process of calculating the tokens.
  • the modules and steps discussed now may be implemented in the virtual power application 320.
  • the token request 602 is received at a check request module 604.
  • the check request module 604 communicates with the token database manager 608 (which can be element 508 in Figure 5) for checking in the database the amount of available tokens (if any), and the power profiles associated with the compute and/or network resources necessary to support the user's required service function.
  • the database may include plan utility 610 that stores the plans offered by the compute environment's operator, token utility 612 that maintains the amount of available tokens, and token policy utility 614, which maintains power rules associated with various resources.
  • the check request module 604 calculates in step 620 the power required for the user's service function and the energy saving capabilities associated with the selected green policy. These calculations are discussed later with reference to Figure 7. The result of these calculations is compared in step 622 with the compute and network power capacity. If the power requirements from step 620 cannot be accommodated, the request from the user is denied. This is so for each request because the amount of tokens for compute and/or network resources cannot be higher than the resources' power capacity. This is verified by converting the amount of calculated tokens into Watts and comparing it to the resources' power capacity. If this value is higher than the capacity, the request is denied.
  • step 626 determines if the user's request is new. If the user request is not new, in step 628 the existing policies associated with the user are checked and a decision is made in step 630 whether to modify renew or usage policies. If there is no need to modify either of these policies, the user's request is granted in step 632. Otherwise, the policies in the token policy utility 614 are updated first and then the request is granted.
  • if in step 626 it is determined that a new request is received from the user, the process advances to step 634 for requesting initial tokens for the user. Before granting the request in step 632, the process
  • the virtual power application maps each option as a node in a tree, where the nodes represent available configurations for network, compute and other resources that might be used by the service function, each node being associated with a cost.
  • This tree is illustrated in Figure 7 and it starts with a plurality of plans 702A-X that are available for the user.
  • Plan 702A is a green plan that includes a high number of energy efficiency features while plan 702X includes no energy efficiency features.
  • the various costs associated with various compute resources 706 are calculated first. Note that compute resources 706 include various CPU frequencies, number of CPUs, storage devices, etc.
  • the cost of each combination of these compute resources is calculated and the results are summed in step 708. To these costs, more costs are added when other types of resources are considered. For example, Figure 7 shows that the cost associated with storage devices 710, cooling resources 712, and so on is added to the compute cost in step 714. Then, the various costs associated with network resources 718 are calculated in step 720. Figure 7 shows that for the network resources 718, the amount of bandwidth, packet loss, and VNF types may be some features to be considered. Other network specific features may be considered when calculating the cost.
  • the total cost of the compute and network resources is finalized in step 722, when the cost saving associated with the energy efficient features is subtracted from the actual cost of the compute and network resources. This cost is used before step 8 in Figure 3, for determining the amount of tokens to be assigned to the user.
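The cost roll-up of steps 708, 714, 720 and 722 can be sketched as follows; the function and its inputs are a minimal illustration with assumed values.

```python
def total_plan_cost(compute_costs, storage_cost, cooling_cost,
                    network_costs, energy_savings):
    """Sketch of the Figure 7 roll-up: sum compute-resource costs (step 708),
    add storage and cooling (step 714), add network-resource costs (step 720),
    then subtract the energy-efficiency savings (step 722)."""
    compute_total = sum(compute_costs)                         # step 708
    infra_total = compute_total + storage_cost + cooling_cost  # step 714
    network_total = sum(network_costs)                         # step 720
    return infra_total + network_total - energy_savings        # step 722
```

The resulting total would then drive the amount of tokens assigned to the user before step 8 in Figure 3.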
  • the PEP utility logic is now discussed with regard to Figure 8.
  • the PEP utility 332 receives requests 802 to enforce usage policies and has to ensure that the requested service does not use more energy than the available energy. For example, if a user has tokens available in the budget and a usage policy requires increasing the usage of a resource during office hours, the PEP utility has to verify, in terms of power capacity, whether it is possible to enforce the requested action.
  • Energy consumption forecasts can be provided by technologies such as PAR4 measures for servers, or watts-byte statistics for network nodes.
  • the PEP utility 332 may receive a request 802 for altering or enforcing allocated power or a request 804 for monitoring used power.
  • the request is analyzed in step 806 based on existing token policies as discussed with regard to Figure 5.
  • the PEP utility may obtain consumption information and tokens available from token database manager 508, also discussed with regard to Figure 5, and from consumption database manager 808.
  • Consumption database manager 808 has access to compute consumption information stored at 810 and network consumption stored at 812. This data may be the result of active measuring of the various resources of the compute environment, simulations, and/or previous consumption patterns.
  • a consumption metering module 814, which is based on known technologies for measuring power in the compute and network resources, writes information to the consumption database manager 808.
  • step 820 determines whether there are enough tokens for the user's service function. If the answer is yes, the method advances to step 822, where it is determined whether the network capacity is exceeded. If the answer is no, the method advances to step 824 where the compute capacity is checked. If the compute capacity is not exceeded, the method advances to step 828 and continues to enforce the usage policy. If there are not enough tokens in step 820, or the network capacity is exceeded in step 822 or the compute capacity is exceeded in step 824, the PEP utility takes action to reduce or deny usage in step 826. Two types of such actions have been discussed with regard to Figure 5.
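The chain of checks in steps 820, 822 and 824, and the resulting actions in steps 826 and 828, can be sketched as follows; parameter names are illustrative.

```python
def pep_decision(tokens_available, network_load, network_capacity,
                 compute_load, compute_capacity):
    """Sketch of the Figure 8 decision chain: check the token budget
    (step 820), the network capacity (step 822) and the compute capacity
    (step 824); enforce the policy (step 828) or reduce/deny usage (step 826)."""
    if tokens_available <= 0:
        return "reduce_or_deny"          # step 826: budget exhausted
    if network_load > network_capacity:
        return "reduce_or_deny"          # step 826: network capacity exceeded
    if compute_load > compute_capacity:
        return "reduce_or_deny"          # step 826: compute capacity exceeded
    return "enforce_usage_policy"        # step 828
```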
  • a VNF request 902 is received and it is accompanied by a green plan as discussed with regard to steps 1-8 in Figure 3.
  • the method retrieves information in step 904 about the capabilities of the compute environment and the various policies from managers 508 and 808. At the same time, information is retrieved from the VNF/App manager 322.
  • the information from the VNF/App manager 322 is now discussed. In step 906, the VNF/App manager 322 verifies whether there are conflicts between the user's requirements and the existing service functions.
  • this information is passed to VNF lifecycle manager 908 for determining whether to create, modify, destroy or get a status of the tokens associated with the service function.
  • the VNF lifecycle manager 908 also receives information from the token policies request 910 and from a check routes utility 912.
  • the check routes utility 912 verifies the available routes between various network resources.
  • the check conflicts utility 906 is configured to request tokens in step 914, from the virtual power application 320.
  • the check conflicts utility 906 is capable of advertising requirements and possible conflicts.
  • the VNF lifecycle manager 908 is responsible for managing the VNFs' lifecycle by performing tasks such as the creation 916, modification 918, destruction 920 and get status 922, all of which are related to management of conflicts. Such conflicts may involve two VNFs attempting to perform conflicting virtual power-related requests on the same resource or conflicting operations between energy efficiency features. The output from these steps is sent to a VNF database manager 924, which maintains a database 926 for all the VNFs.
  • the VNF/App manager 322 is shown in Figure 9 to have as input two types of requests, one request 902 forwarded from the cloud manager 314 (see step 6 in Figure 3) to request resources to deploy a new VNF and one request 910 from the PEP utility 332 (see step 12 in Figure 3) to modify an already deployed VNF.
  • Request 902 is used to verify the power requirements and possible conflicts in order to request tokens and request 910 is used to reduce or increase energy usage.
  • the first step is to check if tokens were already provided by the virtual power application for the user's service function. In case tokens were not provided, the VNF/App Manager verifies the service requirements (step 904) associated with the requested VNF and the power consumption estimates provided by the user or administrator. If the request for tokens is answered positively, the lifecycle manager 908 requests the routes from the network controller 316 in order to configure and deploy the VNF.
  • the virtual power scheme discussed above could be implemented as an extension to distributed systems kernels such as the Apache Mesos.
  • the virtual power application would be implemented in the Mesos Master, the Mesos Slaves would be extended with the PEP functionality and the communication between the Master and the Slaves extended to allow for the transfer of the tokens.
  • This alternative embodiment illustrates the generality of the solution proposed in the enclosed embodiments and the capability to implement such solution in the existing compute environments.
  • a method of power management in a compute environment having a compute part and a network part is now discussed with regard to Figure 10.
  • the method includes a step 1000 of defining various energy policies (gp) for the network part, the energy policies having been assigned different amounts of virtual currency; a step 1002 of assigning, in a virtual power controller, based on an energy policy selected by a user, a service function required by the user, and network resources to be consumed by the service function, a corresponding amount of virtual currency; a step 1004 of configuring a set of energy saving capabilities to be implemented on the network resources based on the energy policy selected by the user and the corresponding amount of virtual currency; a step 1006 of monitoring power consumption of the service function with a policy enforcement utility that is aware of the selected energy policy; and a step 1008 of adjusting an amount of actual electric power used by network resources that support the service function depending on a result of the monitoring step.
  • the energy management embodiments discussed above may be implemented in an apparatus and/or method that uses a computing device during a planning phase to calculate a power budget based on device and virtual network function models that include energy management features.
  • the same computing device may still be used during the planning phase to define a set of tokens associated with this power budget.
  • the same computing device may, during a provisioning stage, distribute the tokens to PEPs either simultaneously with or shortly after compute and/or network resources were allocated for the virtual network function, such that when a VNF is instantiated, the PEPs associated with it already have the assigned tokens.
  • the same or another computing device may be configured to monitor adherence to the power budget by receiving power consumption readings from compute environment's nodes (e.g., servers and switches) and service functions (e.g., applications or VNF) and to take action by adjusting the number of tokens or token renewal policies in case the power consumption exceeds a pre-configured limit.
  • a green policy PEP utility may be associated with an infrastructure resource (server or network node), which, at a provisioning stage, receives one or more tokens associated with a virtual network function or application.
  • the same green policy PEP utility may continuously read measurements of the energy consumption of the VNF or application on the infrastructure resource, compare the measurements with the content of the token associated with the network function, and allow or deny resource usage in case the token usage has been exceeded.
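One iteration of this read-compare-allow/deny cycle can be sketched as follows, assuming the one-token-per-Watt relation mentioned earlier; the field names are illustrative assumptions.

```python
def pep_monitor_step(measured_watts, token_record, tokens_per_watt=1):
    """One iteration of the monitoring loop: compare a power reading against
    the token budget and allow or deny resource usage accordingly."""
    budget_watts = token_record["compute_tokens"] / tokens_per_watt
    if measured_watts <= budget_watts:
        return "allow"
    token_record["compute_tokens"] = 0   # budget exceeded: mark tokens spent
    return "deny"
```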
  • a computing device 1100 is illustrated in Figure 11 and it includes a processor 1102 that is connected through a bus 1104 to a storage device 1106.
  • Computing device 1100 may also include an input/output interface 1108 through which data can be exchanged with the processor and/or storage device.
  • a keyboard, mouse or other device may be connected to the input/output interface 1108 to send commands to the processor and/or to collect data stored in storage device or to provide data necessary to the processor.
  • a monitor 1110 may be attached to bus 1104 for displaying the parts of the compute environment and associated power consumption.
  • the disclosed embodiments provide a computing device and method for managing power in a network part of a compute environment. It should be understood that this description is not intended to limit the invention. On the contrary, the embodiments are intended to cover alternatives, modifications and equivalents, which are included in the spirit and scope of the invention as defined by the appended claims. Further, in the detailed description of the embodiments, numerous specific details are set forth in order to provide a comprehensive understanding of the claimed invention. However, one skilled in the art would understand that various embodiments may be practiced without such specific details.
  • the embodiments may be embodied in a wireless communication device, a telecommunication network, as a method or in a computer program product. Accordingly, the embodiments may take the form of an entirely hardware embodiment or an embodiment combining hardware and software aspects. Further, the embodiments may take the form of a computer program product stored on a computer-readable storage medium having computer-readable instructions embodied in the medium. Any suitable computer-readable medium may be utilized, including hard disks, CD-ROMs, digital versatile discs (DVD), optical storage devices, or magnetic storage devices such as a floppy disk or magnetic tape. Other non-limiting examples of computer readable media include flash-type memories or other known memories.
  • DPDK (Data Plane Development Kit): a set of libraries and drivers for fast packet processing

Abstract

A method of power management in a compute environment (200) having a compute part and a network part. The method includes a step of defining various energy policies (gp) for the network part, the energy policies having been assigned different amounts of virtual currency; a step of assigning, in a virtual power controller (320), based on an energy policy selected by a user (301), a service function (342) required by the user, and network resources (408) to be consumed by the service function, a corresponding amount of virtual currency; a step of configuring a set of energy saving capabilities to be deployed on the network resources (408) based on the energy policy selected by the user and the corresponding amount of virtual currency; a step of monitoring power consumption of the service function (342) with a policy enforcement utility (332) that is aware of the selected energy policy; and a step of adjusting an amount of actual electric power used by network resources (408) that support the service function (342) depending on a result of the monitoring step.

Description

Software-Defined Power Management for Network Resources
TECHNICAL FIELD
[0001] The present invention generally relates to network energy
management, cloud computing, software-defined power for network resources, network function virtualization and, more particularly, to mechanisms and techniques for allocating and/or enforcing power usage associated with a network part of a compute environment.
BACKGROUND
[0002] An explosion of computing devices, stationary and portable, is currently underway. These computing devices include, but are not limited to, smartphones, laptops, tablets, phablets, etc. Demand for these devices is made more acute by social media's rapid evolution, which encourages a need to communicate and be in contact with others. Social applications, which are consumed by the users today, together with other applications and capabilities required by businesses, have grown so much that the amount of power and computing for supporting these applications requires dedicated data centers. Various content and/or social media providers, including mobile communication providers, have developed their own data centers, which are located around the globe. Cloud networking was developed to address the need of running these applications in the distributed data centers, away from the users.
[0003] However, each data center is in itself a large amalgam of physical devices, e.g., computing devices, servers, routers, etc., and virtual devices, e.g., virtual machines, virtual routers, virtual switches, etc. Each of these devices consumes real electric power, which, given the large size of the data center, is becoming a considerable expense associated with the data center. In this regard, although there is no consensus in the literature with respect to how much electricity the network part consumes in a data center, the available studies indicate numbers that vary between 4% and 33% of the total data center's consumption. There are some studies that indicate that these numbers could go up to 50% of the total power if the utilization is low (about 15%) and many of the servers are shut down. Energy savings in the network part, complementing those achieved in the computing part (e.g., servers) of the computing environment, are thus highly desirable as an operation expense reduction, in addition to providing increased green credentials.
[0004] For clarity, the compute part and the network part of a compute environment are illustrated in Figure 1. Figure 1 shows a compute environment 100 that includes two data centers 102 and 104 connected to each other through various links 108 and 110. The compute environment includes infrastructure (i.e., hardware) but also software associated with the infrastructure. A mechanism for coordinating the data transmitted between the various parts of the compute environment 100 may be located in the cloud 106. Each link 108 and 110 may include one or more wire lines and/or one or more wireless channels. The compute environment 100 may be structured as a compute part and a network part. The compute part may include compute resources, e.g., servers 120A-E, storage devices 122A-C, processors, one or more virtual machines, etc., while the network part may include network resources, e.g., physical and/or virtual routers 130A-B, real or virtual ports 132A-B, real and/or virtual switches, mapping tables, etc. In general, the network part includes any physical or virtual resource that supports networking functions.
[0005] It is noted that the transition towards Network Function Virtualization (NFV) will replace a significant number of middle boxes (e.g., a physical node having one or more functionalities such as a firewall, network address translation, packet inspection, etc.) in the network part of the computing environment 100 with Virtual Network Functions (VNF) executed on the compute part. This means that power consumption in the network part will be more intimately related to the power consumption on the compute part within the data center, distributed
telecommunication cloud, mobile telecommunication provider, etc.
[0006] To provide an interface of power management capabilities to the compute environment, a Green Abstraction Layer (GAL) is being standardized at the European Telecommunications Standards Institute (ETSI, see, for example, ETSI ES 203 237 v. 1.1.1). GAL defines a set of abstract data objects (known as energy-aware states) that describe the power management settings of energy management capabilities available on network nodes and at the network level itself. GAL descriptions include the performance level achievable for each of the energy state configurations available and the time taken by each capability to be applied. GAL uses a representation of how energy management elements are interconnected with each other within a network node. GAL also defines a management interface between node-level control of energy management features, and a hierarchy of network-level control planes that may monitor and set higher-level policies. A GAL-enabled prototype that uses pre-standard definitions to achieve network energy savings is presented in Bolla et al., "A northbound interface for power management in next generation network devices," Communications Magazine, IEEE, vol. 52, no. 1, pp. 149-157, January 2014.
[0007] As a way to manage the power required by virtual instances, the Virtual Power document (R. Nathuji and K. Schwan, "VirtualPower: coordinated power management in virtualized enterprise systems," SIGOPS Oper. Syst. Rev. 41, 8 (October 2007), 265-278) defines a set of soft states that virtual machines communicate to a hypervisor to indicate their desires for processor power saving, as well as a method for the hypervisor to combine soft states before applying them to the actual hardware CPU. Based on the Virtual Power document, VPM tokens are defined (see, R. Nathuji and K. Schwan, "VPM tokens: virtual machine-aware power budgeting in datacenters," in Proceedings of the 17th international symposium on High performance distributed computing, ACM, New York, NY, USA, 119-128, 2008). VPM tokens are a token-based scheme for allocating power only to the compute part in a data center. The power tokens act as a virtual currency system, allowing for virtual power budgeting within the compute resources of the data center. Application of the VPM tokens shows that power savings in excess of 50% are possible when using an application-aware power budgeting strategy for compute resources.
[0008] Per-task Energy Accounting (PTEA) and Per-task Energy Metering (PTEM) (Liu et al., "Per-task Energy Accounting in Computing Systems," Computer Architecture Letters, IEEE, vol. 13, no. 2, July-December 2014) are technologies that enable tracking and metering the amount of energy consumed by a given task on shared hardware. The concept of PTEA is based on two pillars: (1) PTEM, which is based on models for each hardware component to derive the actual energy consumed by a given task, and (2) an energy metering component, which modulates the PTEM results to derive the energy consumed by a task with a share of the hardware. The PTEA mechanism calculates the usage of a hardware component by a given task at a point in time, e.g., cores in a multicore CPU, caches, etc.
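The modulation of per-component energy models by a task's share of the hardware can be illustrated with a minimal sketch; the component names, energy values and utilization shares below are illustrative assumptions, not values prescribed by PTEA/PTEM:

```python
# Hypothetical per-task energy accounting in the spirit of PTEM/PTEA:
# per-component energy models yield total joules per component, and each
# task is attributed a share proportional to its measured utilization.

def per_task_energy(components, task_shares):
    """components: {name: total_joules}; task_shares: {name: fraction 0..1}."""
    return sum(joules * task_shares.get(name, 0.0)
               for name, joules in components.items())

usage = {"core0": 12.0, "l2_cache": 3.0, "dram": 5.0}   # modeled energy (J), assumed
share = {"core0": 0.5, "l2_cache": 0.25, "dram": 0.1}   # task's utilization share, assumed
print(per_task_energy(usage, share))  # 7.25
```

A real implementation would derive the shares from hardware performance counters rather than supplying them directly.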
[0009] PowerAssure Inc. designed a solution (see "Software Defined Power: Isolating Applications from Power Problems to Maximize Availability," a PowerAssure Inc. whitepaper) termed Software-Defined Power that allows administrators to automatically move workloads within and between data centers in order to optimize power consumption. More specifically, the software-defined power solution monitors and predicts application activity, dynamically adding and removing servers from the available pool as necessary. Unneeded servers are powered down, resulting in power savings, but they can be rapidly and automatically returned to service when required. Sophisticated algorithms ensure that adequate headroom is always available so that service levels are met, even in rapidly changing environments. This solution takes into account real-time and day-ahead pricing, Demand-Response events, power quality and reliability forecasts. The solution relies on PAR4 (see the PowerAssure Inc. whitepaper above) measurements on the servers to forecast how much energy would be consumed by a server under a particular set of workloads. On the network side, the system relies on power measurements being made available through standard protocols such as SNMP and widely-used management tools such as IBM Tivoli, HP OpenView and Nagios to supply this data.
[0010] The VPM tokens and VirtualPower solutions discussed above are designed for the compute part of the compute environment, e.g., managing a power budget with respect to a number of virtual machines accommodated on a set of physical servers. These solutions do not take into account the power consumption in the network part of the compute environment, and do not attempt to manage the energy saving features that might be available in the network part.
[0011] Further, the above-discussed solutions are limited to a single data center, and they do not account for communications between multiple data centers. In addition, distributing VNFs throughout the data center infrastructure creates interdependencies between the energy consumption of the resources in the compute part, and these interdependencies cannot be predicted without knowledge of the virtual network level that employs the VNFs.
[0012] Per-task Energy Accounting and Metering provide a fine-grained view of the CPU energy usage, enabling a third party to accurately estimate the processor setup that leads to the lowest energy consumption. By itself, this technique does not manage the energy consumption, but only provides useful input to software that implements the energy management.
[0013] The GAL document enables an operator to control, using abstract parameters, the energy performance levels modeled for physical network nodes, and then, using additional intelligence not included in the standard, to manage the lower-level energy saving primitives. However, GAL-enabled solutions have no application awareness (i.e., these solutions are blind to the Quality-of-Service (QoS) requirements of an application) and handle all network-sent data in the same way, with the potential of providing QoS levels lower than expected. As such, current GAL-based solutions cannot address the needs of VNFs executing on standard compute resources instead of physical network nodes.

[0014] A direct combination of GAL and VP tokens, apart from requiring an extension of the token concept from compute resources to network resources, which is not straightforward in itself, would still not provide application-awareness or virtual network function-awareness in the network. This is so because the network energy saving primitives being managed are not application-aware and GAL has no means to provide application-awareness.
[0015] The PowerAssure solution is centered on the compute part of the infrastructure (servers). PAR4 metrics are not defined for physical network nodes, and it is unclear to what extent they might be relevant when the CPU utilization is due to VNFs rather than standardized data center benchmark applications. VNFs are expected to have different usage patterns due, for example, to accelerator hardware and libraries such as Intel DPDK. The PowerAssure solution does not appear to proactively distribute a network power budget between the applications that use VMs and network paths within the data center. In addition, there is no notion of managing energy consumed by network nodes outside the data center and used for inter-data center communications.
[0016] Thus, there is a need to provide methods and compute environments that offer energy management tools not only for the compute part but also for the network part, especially when the network part involves virtual network functions.
SUMMARY
[0017] As compute environments become more complex and more flexible in terms of their hardware configurations, software configurations and the presence of virtual devices and virtual functions, and as all these resources consume a significant amount of electrical power, there is a need to monitor the network part in terms of energy consumption, and to provide network energy management capabilities that take into consideration the virtual part of the network and the interaction between the virtual functions and the physical devices, and that are also application aware.
[0018] Thus, in one embodiment, a method is presented for power management in a compute environment having a compute part and a network part. The method includes defining various energy policies (gp) for the network part, the energy policies having assigned a different amount of virtual currency; assigning, in a virtual power controller, based on an energy policy selected by a user, a service function required by the user, and network resources to be consumed by the service function, a corresponding amount of virtual currency; configuring a set of energy saving capabilities to be deployed on the network resources based on the energy policy selected by the user and the corresponding amount of virtual currency;
monitoring power consumption of the service function with a policy enforcement utility that is aware of the selected energy policy; and adjusting an amount of actual electric power used by network resources that support the service function depending on a result of the monitoring step.
[0019] According to another embodiment, there is a compute resource that includes a storage device configured to store various energy policies (gp) for a network part of a compute environment, the energy policies having assigned a different amount of virtual currency. The compute resource further includes a virtual power application configured to assign, based on an energy policy selected by a user, a service function required by the user, and network resources to be consumed by the service function, a corresponding amount of virtual currency. The virtual power application is also configured to define a set of energy saving capabilities to be deployed on the network resources based on the energy policy selected by the user and the corresponding amount of virtual currency. The compute resource further includes a policy enforcement utility. The policy enforcement utility is configured to monitor power consumption of the service function, wherein the policy enforcement utility is aware of the selected energy policy, and to adjust an amount of actual electric power used by network resources that support the service function depending on a result of the monitoring step.
[0020] In still another embodiment, there is a computer readable medium including computer executable instructions, wherein the instructions, when executed by a computer, implement the above noted method.
[0021] Thus, it is an object to overcome some of the deficiencies discussed in the Background section and to provide a method and compute environment in which the network part is monitored from an energy point of view and various energy policies are implemented and/or enforced. One or more of the embodiments discussed herein advantageously provides such a mechanism.

BRIEF DESCRIPTION OF THE DRAWINGS
[0022] The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate one or more embodiments and, together with the description, explain these embodiments. In the drawings:
[0023] Figure 1 is a schematic diagram of a compute environment having a compute part and a network part;
[0024] Figure 2 is a schematic diagram of a compute environment in which a virtual power application is installed;
[0025] Figure 3 is a signal diagram of commands exchanged by various entities of the compute environment for instantiating a service function and associated virtual power budget;
[0026] Figure 4 is a schematic diagram of various modules that implement various entities of the compute environment;
[0027] Figure 5 is a schematic diagram illustrating power token operations for renewal and usage policies;
[0028] Figure 6 is a flowchart illustrating the process of calculating tokens associated with a power budget;
[0029] Figure 7 is a flowchart illustrating a process of calculating power requirements and energy saving capabilities;
[0030] Figure 8 is a schematic diagram of various processes that take place in the policy enforcement unit;
[0031] Figure 9 is a flowchart of a method for managing VNF requests;
[0032] Figure 10 is a flowchart of a method of power management in a compute environment; and

[0033] Figure 11 is a schematic illustration of a computing device that can implement one or more of the methods discussed herein.
DETAILED DESCRIPTION
[0034] The following description of the embodiments refers to the accompanying drawings. The same reference numbers in different drawings identify the same or similar elements. The following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims. The following embodiments are discussed, for simplicity, with regard to the terminology and structure of a compute environment that includes data centers having a managing module located in the cloud. However, the embodiments discussed herein are not limited to data centers or data centers connected to the cloud, but they may be applied to other systems that involve hardware running one or more network functionalities.
[0035] Reference throughout the specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with an embodiment is included in at least one embodiment of the present invention. Thus, the appearance of the phrases "in one embodiment" or "in an embodiment" in various places throughout the specification is not necessarily all referring to the same embodiment. Further, the particular features, structures or characteristics may be combined in any suitable manner in one or more embodiments or claims.
[0036] An architecture of a compute environment that supports NFV includes at least three components. The first component is the VNFs, which are software implementations of network functions that can be deployed on a Network Function Virtualization Infrastructure (NFVI). The second component is the NFVI, which includes the totality of hardware and software components that build up the environment in which VNFs are deployed. The NFVI can span across several locations, e.g., at different data centers. The network part providing connectivity between these locations is regarded to be part of the NFVI. The third component is the Network Functions Virtualization Management and Orchestration architectural framework (NFV-MANO Architectural Framework), which is the collection of all functional blocks, data repositories used by the functional blocks, and reference points and interfaces through which these functional blocks exchange information for the purpose of managing and orchestrating the NFVI and VNFs.
[0037] According to an embodiment, a method is proposed herein that estimates a network power budget taking into account the energy management capabilities of the nodes supporting the network, and then automatically distributes this budget among service functions (e.g., applications and/or VNFs) according to policies defined by the operator of the compute environment. This method may include a policy enforcement unit that maintains and/or enforces the allocated power budget, as well as the signaling associated with it. The method may use an interface between infrastructure controllers within the data center and within the WAN, thus extending the power budget between multiple data centers for a much more realistic evaluation of the energy consumed.
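The budget distribution described above can be sketched as follows; the function names, service function labels and weights are illustrative assumptions (actual weights would come from the operator-defined policies):

```python
# Hypothetical sketch: distribute an estimated network power budget among
# service functions in proportion to operator-defined policy weights.
def distribute_budget(total_watts, weights):
    """weights: {service_function: policy_weight}; returns per-function share (W)."""
    total = sum(weights.values())
    return {sf: total_watts * w / total for sf, w in weights.items()}

shares = distribute_budget(1000.0, {"vnf_fw": 2, "vnf_lb": 1, "app_video": 2})
print(shares)  # {'vnf_fw': 400.0, 'vnf_lb': 200.0, 'app_video': 400.0}
```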
[0038] As will be seen from the following embodiments, the method may exhibit one or more of the following advantages:
- Allows transparent control, by the operator, of the power consumed by the compute and network parts within an administrative domain (multiple data centers and/or the links between them). This means that the operator can optimize the costs for energy consumed by the nodes;
- Allows the operator to strike a balance between QoS and energy savings, which may result in passing some of the savings to the customers;

- From a technical perspective, this method can be combined with solutions such as the VP tokens to provide additional power savings coming from the physical network part, reducing the energy costs of the operators. Such a combined solution has an additional advantage, namely, it coordinates power savings between the compute and network parts and reduces the chances of conflicts, for example, network paths that connect a set of servers being transitioned to a sleep state at the same time as compute resources need to be woken up in order to handle a surge in compute demand;
- Features are to be implemented in software on virtual switches and on network controllers, making them relatively easy to deploy and control with a variety of protocols (e.g., OpenFlow extensions, OVSDB, SNMP);
- Only a small number of features within the edge switch and its associated communications might require standardization (in order to interoperate with equipment from other manufacturers), but it is possible to avoid this by making them available as open source via initiatives such as OpenDaylight and the Open vSwitch project.
[0039] According to an embodiment illustrated in Figure 2, a compute environment 200 includes a first data center 202 and a second data center 204. The two data centers are connected to the cloud 206, where various hardware and software are located, as discussed later. This simplified structure of the compute environment omits many parts in order to make the method easier to understand. Data center 202 includes plural servers 210A-B (only two are shown in the figure for simplicity) that are connected to a physical router 212 for
communicating with the cloud and other data centers. Data center 202 may also include a cloud controller 214 (sometimes called cloud orchestrator) that manages compute and network requirements in the data center. A cloud controller, as discussed later, may be implemented as software instructions in a processor. A network controller 216 (which also may be implemented as software instructions in a processor) is responsible for allocating network resources in the data center. In one embodiment, the network controller 216 is also responsible for allocating energy efficiency functions (to be discussed later).
[0040] According to this embodiment, the data center 202 may also host a virtual power application 220 that acts as a policy decision point (PDP) for the virtual software-defined power, as discussed later with regard to Figure 3. The virtual power application may be implemented as software instructions in a processor. The virtual power application effectively manages features that control the actual electric power used at least by the network resources. The term "virtual" qualifying the "power application" indicates that service functions that are called virtual, such as the VNFs, are also managed. The term "virtual" in this context does not imply something imaginary or abstract, but rather is an industry-accepted name for functions that run in a guest machine. In hardware virtualization, the host machine is the actual machine on which the virtualization takes place, and the guest machine is the virtual machine. Traditionally, the terms "host" and "guest" are used to distinguish the software that runs on the physical machine from the software that runs on the virtual machine.
[0041] In one embodiment, the virtual power application may be deployed in the cloud 206, or it may be distributed among various data centers and/or the cloud. In another application, the virtual power application 220 is configured to also act as a policy decision maker in other data centers. A VNF manager 222 may also be deployed in data center 202. The VNF manager may be implemented as software instructions in a processor. A local policy enforcement point (PEP) utility 230 may be implemented at each server 210 for enforcing the energy policy selected by a user. In one application, a PEP utility 232 may be implemented at each virtual machine 234 hosted by the server. Other PEP utilities 238 may be implemented in the cloud 206 for enforcing the energy policy at a global level. Note that cloud 206 includes at least routers 240 and a network controller 242 that communicate with the network controllers from the data centers. A PEP utility may include software that runs on a dedicated virtual machine or a physical machine.
[0042] The virtual power application 220 is configured to act as a source of virtual currency for various energy policies. The energy policies may be stored in a local database and they may be associated with each user. Each energy policy has an assigned amount of currency. The amount of currency may be defined by the compute environment's operator. For example, a user may select an energy policy that includes many energy efficient features while another user may select an energy policy that includes no energy efficient features. Depending on the number of energy efficient features, different energy policies are associated with or include a different amount of the virtual currency. The energy efficient features are configured after the user has selected an energy policy, as discussed later. In one embodiment, the virtual currency is represented by power tokens, a structure that is also discussed later.
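By way of a hypothetical illustration, an operator-defined catalog of energy policies with their assigned virtual currency could be represented as follows; the policy names, feature names and token amounts are all assumptions made for the example, not values prescribed by the method:

```python
# Assumed catalog: each energy ("green") policy lists its energy-saving
# features and an operator-assigned amount of virtual currency (tokens).
GREEN_POLICIES = {
    "deep_green":  {"features": ["sleep_links", "cpu_p_states", "delay_tolerant"],
                    "tokens": 100},
    "balanced":    {"features": ["cpu_p_states"], "tokens": 250},
    "performance": {"features": [], "tokens": 500},  # no energy-saving features
}

def tokens_for(policy_name):
    """Look up the virtual currency assigned to a selected policy."""
    return GREEN_POLICIES[policy_name]["tokens"]

print(tokens_for("balanced"))  # 250
```

Note the deliberate inversion: the greener the policy, the fewer tokens it grants, since tokens bound the actual electric power a service function may consume.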
[0043] The process 300 of deploying a service function and the steps associated with it are now discussed with regard to Figure 3. Figure 3 shows, at the top, the various "players" involved in this process. These players can be physical and/or virtual entities, as now discussed. A user 301 may be an enterprise, a government agency, an organization or a person. User 301 sends in step 1 a request for service (i.e., a service function) to a VNF/App Manager 322. The VNF/App Manager 322 may be element 222 in Figure 2. Note that because the service function may be an application or a VNF, the manager 322 may be a VNF manager, an App Manager or both. For simplicity, in the following, this manager is called a VNF/App Manager with the understanding that it can be either of them or both of them.
[0044] The request for service in step 1 may include an energy policy that the user has selected from the data center's operator in advance. This means that in step 0, the operator of the data center has defined various energy policies to be available to the users. The energy policy may or may not include energy efficient functions. Because it is likely that, under the current trend of conserving energy, the user will try to use energy efficient functions, the energy policy is called a green policy (gp). However, the operator may also offer energy policies that are less "green" or that include no energy efficient functions. One purpose of the currency used in this method is to manage the energy efficient functions. The request from the user may also include requirements for compute and network resources and the VNF type. In this regard, the VNF type may be determined by the service function. Thus, when the user asks for a particular service function, there is a one-to-one relation to the VNF type. In a more complex scenario, the user asks for a service function but the provider has several VNF types that could perform that service function. In this case, an automatic choice could be made by the provider, for example, taking into account the amount of resources requested for the service function and the type of green plan. Alternatively, the provider could present a list of alternative VNF types to the user, who will then make a choice before submitting the final version of the request to the provider.
[0045] In response to this request, the VNF/App manager needs to figure out the implications of the user's request for the compute environment, i.e., which compute and network resources are needed to support the service function and associated green policy required by the user. To accomplish this goal, the VNF/App manager requests in step 2, from the cloud controller 314 (corresponding to element 214 in Figure 2), details about the resources that will be needed in the data center, in the network and outside the data center for supporting the user's request. The cloud controller 314, which has access to all this information, allocates in step 3 compute resources and configures energy efficient capabilities, based on the green plan which the user has selected. However, the cloud controller 314 may not be aware of the network resources necessary to support the user's network requirements.
[0046] Thus, to fill in this gap in information, the cloud controller 314 asks in step 4 network controller 316 (element 216 in Figure 2) to estimate the network resources needed by the service function for the given green plan. Note that the green plan features need to be known by the network controller for estimating the network resources. To understand why the green plan needs to be known, the following simple example is provided for a better understanding of the invention. This example is not intended to limit the invention. Suppose that the user needs to transmit mainly data that is not time sensitive. For this reason, the green plan selected by the user allows time delay in the data transmission. Thus, the network controller, when determining the network resources for supporting the service function, can allocate to this specific user network paths that accommodate delay, resulting in less cost in running the network. For this reason, this green plan is likely to be cheaper than a time-sensitive green plan, such as one selected for transmitting movies from the cloud to the user. For such a time-sensitive green plan, the network controller has to allocate resources that are more expensive to run in order to not pause the movie while the user is watching. This example also indicates that, according to this method, the various entities coordinating the power budgets are application-aware (e.g., aware of whether the application is time sensitive or not), and thus, the compute environment better serves the users.
[0047] Network controller 316 allocates in step 5 the network resources (e.g., virtual and/or physical routers, switches, etc.) based on the energy efficient functions found in the green plan and then sends this information to the cloud controller. The cloud controller sends in step 6 to the VNF/App manager 322 a list of the compute and network resources that will support the user's requests. With this information, the VNF/App manager 322 sends in step 7 a request for tokens to the virtual power application 320 (element 220 in Figure 2). The VNF/App manager may also send in step 7 potential known conflicts between the compute and network resources of the user's request and already deployed compute and network resources. The virtual power application, based on the user's selected green plan, the required service function, and the compute and/or network resources determined by the cloud controller, determines the number of tokens to be granted to the user and sends this information in step 8 back to the VNF/App manager 322.
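One way the granted tokens might be divided between compute and network virtual power tokens can be illustrated as follows; the proportional split and the resource-unit counts are assumptions for the example (an actual implementation could also weigh the energy efficient capabilities the VNF advertises):

```python
# Hypothetical split of a per-policy token grant between the compute and
# network parts, in proportion to the resources the cloud controller estimated.
def grant_tokens(policy_tokens, compute_units, network_units):
    """Return compute and network virtual power token amounts."""
    total_units = compute_units + network_units
    compute_tokens = policy_tokens * compute_units / total_units
    return {"compute": compute_tokens, "network": policy_tokens - compute_tokens}

print(grant_tokens(100, compute_units=3, network_units=1))
# {'compute': 75.0, 'network': 25.0}
```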
[0048] In one application, the number of granted tokens is calculated by taking into account the overall power request, as well as the energy efficient capabilities that the VNF advertises (such as which P-states it might be able to support on the compute resources, and PAR4 requirements on the physical network resources). The tokens may be divided between compute and network virtual power tokens. The granted tokens may include renewal and spend policies, as will be discussed later. For example, token buckets could be defined in a way that they refill an empty or partly empty bucket automatically after a given time interval; how fast tokens are allowed to be consumed from the bucket could be defined in terms of committed and peak consumption rates, etc.
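The renewal and spend policies described above can be sketched as a token bucket; all capacities, intervals and rates below are assumed values chosen for illustration:

```python
# Token-bucket sketch of the renewal/spend policies: the bucket refills on a
# fixed interval, and spending is capped by a committed rate (sustained) or a
# peak rate (short bursts), whichever applies.
class PowerTokenBucket:
    def __init__(self, capacity, refill_interval_s, committed_rate, peak_rate):
        self.capacity = capacity
        self.tokens = capacity
        self.refill_interval_s = refill_interval_s
        self.committed_rate = committed_rate  # sustained spend (tokens/s)
        self.peak_rate = peak_rate            # burst spend (tokens/s)

    def refill(self):
        # Called every refill_interval_s: top the bucket back up to capacity.
        self.tokens = self.capacity

    def spend(self, amount, elapsed_s, bursting=False):
        # Grant at most what the rate cap and the remaining tokens allow.
        rate_cap = (self.peak_rate if bursting else self.committed_rate) * elapsed_s
        allowed = min(amount, rate_cap, self.tokens)
        self.tokens -= allowed
        return allowed  # tokens actually granted this interval

bucket = PowerTokenBucket(capacity=100, refill_interval_s=60,
                          committed_rate=2.0, peak_rate=5.0)
print(bucket.spend(30, elapsed_s=10))                 # 20.0 (committed rate cap)
print(bucket.spend(30, elapsed_s=10, bursting=True))  # 30 (within peak cap of 50)
```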
[0049] At this time, the VNF/App manager 322 instantiates the tokens to be associated with the user's request, and sends in step 9 this information to the VNF/App control 340. VNF/App control 340 is responsible for instantiating in step 10 the service function 342, i.e., either an application or a VNF instance. Note that step 10 may include instantiating only the application, only the VNF, or both the application and the VNF. Also note that more than one application or more than one VNF may be instantiated at this step. It is likely that more than one VNF is instantiated in the case when the service function is actually composed of several sub-functions, and each of these sub-functions would be implemented through a VNF. Such decomposition is expected to be performed by the provider (either automatically, or manually), but this is beyond the scope of this invention.
[0050] After the instantiation process (i.e., the process of creating the application and/or VNF), the respective service function 342 acknowledges in step 10.1 to the VNF/App control that the function has been instantiated.
[0051] Still with regard to step 10, when the VNF/App instances 342 start executing, they report to the VNF/App control 340, which relays their identifiers to the VNF/App manager 322. Now, the VNF/App manager can install the tokens in the PEPs, which is discussed next. For example, the network PEP needs to know which flows' traffic should be counted towards the virtual power token, so it needs a way of identifying the flows. Known OpenFlow, IETF I2RS or IPFIX flow identifiers could be used, depending on the PEP implementation; for the compute PEPs, process IDs could be used to identify the instance; the format depends on the operating system used for the implementation (it is well documented in the literature).
[0052] In step 11, the VNF/App control informs the VNF/App manager 322 that the service function has been instantiated and also provides service function identifiers so that the VNF/App manager 322 can track the plural service functions associated with the many users that usually use the data centers. The identifier can be a simple code, similar to an ID, which is unique to each user. More sophisticated identifiers may be used to also provide information about the green policy, the size of the user, the equipment used by the user, etc.
[0053] At this stage, the instantiation phase of the method has been achieved, as the user's requests for compute and network resources have been honored in the compute environment. Thus, the VNF/App manager 322, in anticipation of the new phase of enforcing the green policy (i.e., monitoring and adjusting accordingly the tokens assigned to the user based on the user's consumption), sends in step 12 the allocated amount of tokens to PEP utility 332 (element 232 in Figure 2). In the same step, the VNF/App manager 322 may also configure a set of energy saving capabilities associated with the green policy selected by the user. This configuration step may alternatively take place in step 3 or step 9, or it may be distributed over these steps. Thus, the configuration step may take place in the VNF/App control 340, PEP 332 or cloud controller 314.
[0054] The PEP utility may reside at multiple points in the same server and it may be configured to monitor each virtual machine in the server and the users associated with that virtual machine. PEP utility 332 acknowledges in step 12.1 that it is ready to monitor the user's service function in the data centers. PEP utility 332, as discussed later, has access to the user's green policy, the user's requirements, and the compute and network resources assigned to service these requirements. In one embodiment, a PEP utility is located on each forwarding node of the network part, whether physical or virtual, along the paths that the user may use. When the PEP utilities are in place, the VNF/App Manager 322 informs in step 13 VNF/App control 340 that both the instantiation and monitoring infrastructure are in place, which results in the VNF/App manager 322 informing in step 14 the user that its requirements were met and are ready to be used.
[0055] Next, the method advances to the enforcement part, which is discussed still with regard to Figure 3. In step 15, the PEP utility measures the energy usage of the user's activities supported by the data centers. For example, the PEP utility calculates the amount of real power consumed by the compute and/or network resources supporting a particular VNF or application requested by the user. In a server, this function could be achieved by counting the number of real CPU cycles used and at which frequency they were consumed. In a physical network node, this function can be evaluated based on a set of node power profiles (known in the art or measured prior to deploying this utility) that codify the dependency between the number of packets forwarded for the VNF or application, the type of processing that needed to be applied, and the energy management functionality that might have been active or inactive during the interval.
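The two estimation approaches described above (counting CPU cycles in a server, and applying a node power profile in a physical network node) can be sketched as follows; the per-cycle cost and profile values are illustrative assumptions, not measured data:

```python
def server_energy(cycles_used, joules_per_cycle):
    """Compute-side estimate: real CPU cycles consumed, weighted by an
    assumed per-cycle energy cost at the frequency they ran at."""
    return cycles_used * joules_per_cycle

def network_energy(packets_forwarded, profile):
    """Network-side estimate from a node power profile: a baseline that
    depends on which energy management features were active during the
    interval, plus a per-packet forwarding cost."""
    return profile["base_joules"] + packets_forwarded * profile["joules_per_packet"]

# Assumed profile for a node with energy management active during the interval.
profile = {"base_joules": 50.0, "joules_per_packet": 1e-4}
print(server_energy(3e9, 2e-9))          # ~6.0 J
print(network_energy(100_000, profile))  # ~60.0 J
```

A practical deployment would keep one profile per node type and per energy management state, selected according to which features were active in the measured interval.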
[0056] Based on these calculations in step 15, the PEP utility determines in step 16 whether the used power is within the power budget specified in the tokens. If the result of this determination is that the power consumption is within the power budget, no action is taken. However, if the result is that the power consumption is outside the allocated power budget, e.g., has exceeded the allowed power, the PEP utility takes action in step 17 and also reports the action to the VNF/App manager. The action or actions taken during this step are now discussed.
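Steps 15-17 form a monitoring loop that can be sketched as follows; the function names are assumptions, with `measure`, `act` and `report` standing in for the PEP utility's measurement, enforcement and reporting mechanisms:

```python
# Sketch of one iteration of the PEP monitoring loop (steps 15-17).
def pep_cycle(measure, budget, act, report):
    used = measure()            # step 15: measure energy usage
    if used <= budget:
        return "within_budget"  # step 16: within budget, no action
    act()                       # step 17: throttle / buffer / reschedule
    report(used - budget)       # ...and report the overrun to the manager
    return "over_budget"

log = []
print(pep_cycle(lambda: 40.0, 50.0, lambda: None, log.append))  # within_budget
print(pep_cycle(lambda: 70.0, 50.0, lambda: None, log.append))  # over_budget
```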
[0057] In one application, the action may depend on the type of PEP utility. If the PEP utility is configured to handle compute resources, then it may take the following actions. If the granted tokens were consumed by the user, the VNF or application instance may not be scheduled to execute until new tokens become available, for example, through an automatic renewal mechanism to be discussed later. Alternately, the VNF or application may be allowed to "borrow" from future utilization in case resources are available. Another alternative may be to allow borrowing only at a pre-defined spend rate, which is a fraction of the spend rate specified in the token.
[0058] If the PEP utility is configured to handle network resources, it may take the following actions. If the granted tokens were consumed, the packets to be transmitted over the network part may be buffered for short intervals (meaning that they will still consume some power while waiting, but less than when being transmitted through the network). Alternatively, a functionality such as Weighted Random Early Discard could be activated on the network flows associated with the application or VNF in order to reduce the traffic. Those skilled in the art could envision other actions for slowing down traffic.
[0059] In one application, the PEP utility handles both the compute and the network resources. Note that novel embodiments are discussed sometimes with regard to the network part. However, the novel concepts discussed herein may be adapted to apply to the compute part or to both the compute and network parts.
[0060] The action or actions discussed above may be implemented in various ways. In one application, the action is implemented in an autonomous manner by the PEP, the VNF/App control or another entity, based on the configured energy saving capabilities. For example, an Energy Efficient Ethernet capability would autonomously change between sleep and awake modes, depending on two timers that could be configured as needed. The two timers may be implemented anywhere in the compute environment and they are used for determining the sleep and awake modes.
[0081] In another application, the PEP utility could, for example, decide to force a link (e.g., network connection) to transition into a sleep mode if the power budget was exceeded for a certain amount of time. Alternatively, the PEP utility could decide to force a CPU transition to a slower clock value, if that CPU executes a service function that went over the power budget. Other energy efficient capabilities are discussed with regard to Figure 7.
[0082] Returning to the method discussed with regard to Figure 3, the PEP utility 332 repeats steps 15-17 for constantly or continuously monitoring the power consumption in the corresponding compute and/or network resources. [0083] The various entities discussed above, e.g., cloud controller, network control, VNF/App control, PEPs, VNF/App manager and virtual power application, may be implemented in hardware, virtual machines, software or a combination of them. For example, each of these entities may be implemented in its own controller. The controller may be a physical one that has at least a CPU, a memory, a bus connecting the CPU to the memory, and an input/output interface for communicating with the CPU. In another example, the controller is a guest virtual machine having a similar structure to the physical controller.
[0084] In still another example, illustrated in Figure 4, these various entities are implemented as software functions in a physical or virtual computing device. The computing device 400, e.g., server 210 from Figure 2, may include a processor 402 that runs the cloud controller 314, network controller 316, VNF/App control 340, VNF/App manager 322, and the virtual power application 320. These entities can communicate with each other through logic 404 and they can also be stored in memory 406. Memory 406 may also be configured to store various energy policies (gp) for the network part, the energy policies having been assigned a different amount of virtual currency. A virtual machine 234 may be run on the server and it may host the PEP utility 332 and other elements, for example, a virtual router 408. Note that the virtual power application 320 is not an abstract application; it is a real application that manages the virtual currency (e.g., tokens) with the purpose of monitoring and handling the real power used by the underlying infrastructure.
[0085] Regarding the enforcement of energy policies, Figure 5 illustrates the power token operation for renewal and usage policies. There are some events that are triggered by the power usage in the network part. These events require the PEP utility to take some actions, which are now discussed. For simplicity, suppose that the triggering event is the expiration of a token or tokens. In other words, suppose that the tokens allocated by the virtual power application 320 based on the user's selected green policy, i.e., the tokens allocated for a given user's request, have run out. This event 502 is routed to the token policies module 504, which may be stored anywhere in the data center. In one application, the token policies module 504 is stored in the server where the PEP utility is stored. The token renewal policies utility 506 requests the renewal policies of the user associated with the event 502 from the token database manager 508. The token database manager 508 interacts with a plan utility 510, a tokens utility 512 and a token policy utility 514 for retrieving the necessary information. The plan utility 510 is responsible for storing all the plans that can be selected by the users. Among these plans, there is the plan (green policy) selected by the user associated with event 502. The tokens utility 512 stores the various amounts of tokens associated with each plan, and the token policy utility 514 stores the rules associated with each plan, tokens and user. For example, for a given plan and user, automatic renewal of tokens is allowed, while for some other users, this is not permitted. Thus, based on the request from the token renewal policies utility 506, the token database manager 508 interacts with the plan utility 510, tokens utility 512 and token policy utility 514 for collecting corresponding data associated with the user.
This information is relayed back to the token renewal policies utility 506, which decides in step 520 whether to automatically renew the tokens. If the answer is yes, the automatic renewal takes place in step 522, and the PEP utility is informed about this result. If the answer is no, a request is forwarded to the user in step 524 to inform the user that new steps are needed for renewing the green policy. [0088] Another event 530 may arrive at the token policies module 504 while an amount of tokens is still available. Suppose in this scenario that the user's VNF was granted additional tokens, but the event 530 is associated with a power increase necessary to be implemented by the user because, for example, the user is changing its off-hours operation to office-hours operation. This change requires an increase in power as the user's employees will start using more compute and network resources. Thus, event 530 arrives at the token usage policies utility 532, which decides, based on the information contained in the event and also on the information retrieved from the token database manager 508 (e.g., type of green policy, amount of tokens associated with the plan, etc.), whether to reduce usage in step 534 or to increase usage in step 536. Irrespective of the output of this step, the result is communicated to the PEP utility and other parties in the compute environment (e.g., the virtual power application) to take appropriate action.
[0087] The structure of the currency, e.g., a token, is now discussed. A token is the unit used to access resources in the compute environment. Table 1 presents how the token is represented in a database, e.g., tokens utility 512 in Figure 5. The table has: (1) a field for an identifier of the user's green plan associating the tokens to the user and a set of resources, (2) a field for the number representing the amount of network tokens, (3) a field for the number representing the amount of compute tokens, (4) a field holding the usage conditions, and (5) a field for the renewal conditions. Those skilled in the art would know that the token's structure may be modified without affecting the embodiments discussed herein.
Table 1 - Tokens Database Representation
greenPlanId     integer
networkToken    integer
computeToken    integer
usagePolicy     TokenPolicy
renewalPolicy   TokenPolicy
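The Table 1 record can be rendered as a minimal in-memory structure. Field names follow the table; reducing the TokenPolicy fields to plain strings is an assumption made to keep the sketch self-contained.

```python
from dataclasses import dataclass

@dataclass
class TokenRecord:
    green_plan_id: int    # links the tokens to a user's green plan and resources
    network_tokens: int   # budget for network resources
    compute_tokens: int   # budget for compute resources
    usage_policy: str     # conditions governing how tokens are spent
    renewal_policy: str   # conditions for automatic refill

rec = TokenRecord(1, 100, 200, "green 1 office-hours cap", "auto-refill after 2h")
print(rec.network_tokens + rec.compute_tokens)  # 300
```

In the database of Figure 5, the two policy fields would instead reference rows in the token policy utility 514.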
[0088] The number associated with the network and compute tokens may be equivalent to a number of Watts defined by the compute environment's operator. This relation between token units and Watts can be adjusted to simplify the management of power in the compute environment. For instance, power profiles can be adjusted to consider one token to be equivalent to one Watt.
[0069] The usage policies (e.g., element 514 in Figure 5) noted above describe events (e.g., element 530 in Figure 5) that forward requests to increase or decrease the usage of resources, considering a constraint or a combination of constraints, such as time and the amount of available tokens in the budget. One example of usage policies, expressed as machine-readable rules, is as follows:
// the following two policies define the decrease rate for the token consumption during and outside office hours, effectively limiting the resource usage during office hours for users subscribing to a type "green 1" plan
[time of day == 17:00; if green_plan == "green 1"; token_decrease_rate = 1]
[time of day == 08:00; if green_plan == "green 1"; token_decrease_rate = 3]
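The two rules could be evaluated as follows: during office hours (between the 08:00 and 17:00 rules) the higher decrease rate applies, effectively limiting usage for "green 1" subscribers. The tuple layout and the "latest matching rule wins" scheme are assumptions made for illustration.

```python
# (hour, plan, decrease_rate) — mirrors the two policy rules above.
USAGE_POLICIES = [
    (17, "green 1", 1),  # after 17:00: relaxed decrease rate
    (8, "green 1", 3),   # after 08:00 (office hours): faster decrease
]

def token_decrease_rate(hour, plan):
    """Pick the decrease rate from the latest matching rule of the day."""
    rate = 0
    for rule_hour, rule_plan, rule_rate in sorted(USAGE_POLICIES):
        if plan == rule_plan and hour >= rule_hour:
            rate = rule_rate
    return rate

print(token_decrease_rate(10, "green 1"))  # 3 — office hours
print(token_decrease_rate(18, "green 1"))  # 1 — evening
```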
[0070] Renewal policies (stored in element 514 and handled by utility 508 in Figure 5) describe conditions to automatically refill the token budget. An example of renewal policies, expressed as machine-readable rules, is as follows:
//the following policy refills a "green 1" token with 100 units in case such a refill was not done for a 2-hour time interval [tokens == 0; if green_plan == "green 1" and last_increase_interval > 2 hours; tokens = 100]
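The single renewal rule above amounts to a small conditional; the function and parameter names are assumptions.

```python
def maybe_refill(tokens, plan, hours_since_last_refill):
    """Auto-refill an empty "green 1" budget, at most once every 2 hours."""
    if tokens == 0 and plan == "green 1" and hours_since_last_refill > 2:
        return 100  # refill amount taken from the policy rule
    return tokens

print(maybe_refill(0, "green 1", 3.0))  # 100
print(maybe_refill(0, "green 1", 1.0))  # 0 — too soon to refill
```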
[0071] The virtual power application 320 in Figure 3 is responsible for calculating and providing tokens. This happens after step 7 in Figure 3. The virtual power application 320 calculates the tokens based on (1) previous patterns of usage, (2) availability of actual power in the compute environment, and (3) energy efficiency features that might be associated with the user's selected plan. In one application, the virtual power application 320 performs the above calculations in response to a first instantiation of a service function or upon receiving an event informing that a token ran out and had no automatic renewal policies.
[0072] Figure 6 is a flowchart illustrating the process of calculating the tokens. The modules and steps discussed now may be implemented in the virtual power application 320. The token request 602 is received at a check request module 604. After receiving the token request 602, the check request module 604 communicates with the token database manager 608 (which can be element 508 in Figure 5) for checking in the database the amount of available tokens (if any) and the power profiles associated with the compute and/or network resources necessary to support the user's required service function. The database may include the plan utility 610, which stores the plans offered by the compute environment's operator, the token utility 612, which maintains the amount of available tokens, and the token policy utility 614, which maintains power rules associated with various resources. Based on the information obtained from utilities 610 to 614, the check request module 604 calculates in step 620 the power required for the user's service function and the energy saving capabilities associated with the selected green policy. These calculations are discussed later with reference to Figure 7. The result of these calculations is compared in step 622 with the compute and network power capacity. If the power requirements from step 620 cannot be accommodated, the request from the user is denied. This is so for each request because the amount of tokens for compute and/or network resources cannot be higher than the resources' power capacity. This is verified by converting the amount of calculated tokens into Watts and comparing it to the resources' power capacity; if this value is higher than the capacity, the request is denied. However, if the power requirements from step 620 can be met by the compute environment, the process advances to step 626 to determine if the user's request is new.
If the user request is not new, in step 628 the existing policies associated with the user are checked and a decision is made in step 630 whether to modify the renewal or usage policies. If there is no need to modify either of these policies, the user's request is granted in step 632. Otherwise, the policies in the token policy utility 614 are updated first and then the request is granted.
[0073] However, if it is determined in step 626 that a new request is received from the user, the process advances to step 634 for requesting initial tokens for the user. Before granting the request in step 632, the process communicates with the token database manager 608 for verifying that tokens are available.
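The capacity comparison in Figure 6 reduces to converting the requested tokens into Watts and checking the result against the resources' power capacity. The one-token-per-Watt conversion follows the relation noted in paragraph [0088]; the function name and numbers are illustrative.

```python
WATTS_PER_TOKEN = 1.0  # operator-defined relation between tokens and Watts

def admit_request(requested_tokens, capacity_watts):
    """Deny any request whose token budget exceeds the power capacity."""
    return requested_tokens * WATTS_PER_TOKEN <= capacity_watts

print(admit_request(80, 100.0))   # True  — within capacity, request proceeds
print(admit_request(120, 100.0))  # False — request denied
```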
[0074] The step 620 of calculating power requirements and energy saving capabilities is now discussed with regard to Figure 7. For calculating the power requirements associated with a given service function, the virtual power application maps each option as a node in a tree, where the nodes represent available configurations for network, compute and other resources that might be used by the service function, each node being associated with a cost. This tree is illustrated in Figure 7 and it starts with a plurality of plans 702A-X that are available to the user. Plan 702A is a green plan that includes a high number of energy efficiency features, while plan 702X includes no energy efficiency features. The various costs associated with various compute resources 706 are calculated first. Note that compute resources 706 include various CPU frequencies, numbers of CPUs, storage devices, etc. The cost of each combination of these compute resources is calculated and summed in step 708. To these costs, more costs are added when other types of resources are considered. For example, Figure 7 shows that the cost associated with storage devices 710, cooling resources 712, and so on is added to the compute cost in step 714. Then, the various costs associated with network resources 718 are calculated in step 720. Figure 7 shows that for the network resources 718, the amount of bandwidth, packet loss, and VNF types may be some features to be considered. Other network specific features may be considered when calculating the cost.
[0075] The total cost of the compute and network resources is finalized in step 722, when the cost savings associated with the energy efficient features are subtracted from the actual cost of the compute and network resources. This cost is used before step 8 in Figure 3 for determining the amount of tokens to be assigned to the user.
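The cost roll-up of Figure 7 can be sketched as one running sum over the tree's branches; all numbers and names are illustrative assumptions.

```python
def total_cost(compute_costs, other_costs, network_costs, efficiency_savings):
    """Collapse steps 708/714/720/722 of Figure 7 into one running sum."""
    cost = sum(compute_costs)         # step 708: CPU frequency, CPU count, ...
    cost += sum(other_costs)          # step 714: storage, cooling, ...
    cost += sum(network_costs)        # step 720: bandwidth, packet loss, VNF types
    return cost - efficiency_savings  # step 722: subtract green-plan savings

print(total_cost([10, 5], [3], [7, 2], 4))  # 23
```

A fully green plan (702A) would carry the largest `efficiency_savings`; a plan with no energy efficiency features (702X) would subtract nothing.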
[0076] The PEP utility logic is now discussed with regard to Figure 8. The PEP utility 332 receives requests 802 to enforce usage policies and has to ensure that the requested service does not use more energy than the available energy. For example, if a user has tokens available in the budget and a usage policy requires increasing the usage of a resource during office hours, the PEP utility has to verify, in terms of power capacity, whether it is possible to enforce the requested action. Energy consumption forecasts can be provided by technologies such as PAR4 measures for servers, or watts-byte statistics for network nodes.
[0077] Returning to Figure 8, the PEP utility 332 may receive a request 802 for altering or enforcing allocated power or a request 804 for monitoring used power. The request is analyzed in step 806 based on existing token policies, as discussed with regard to Figure 5. In addition, in step 808 the PEP utility may obtain consumption information and available tokens from the token database manager 508, also discussed with regard to Figure 5, and from the consumption database manager 808. The consumption database manager 808 has access to compute consumption information stored at 810 and network consumption information stored at 812. This data may be the result of active measuring of the various resources of the compute environment, simulations, and/or previous consumption patterns. A consumption metering module 814, which is based on known technologies for measuring power in the compute and network resources, writes information to the consumption database manager 808.
[0078] Based on all this information, the method advances to step 820, where it is determined whether there are enough tokens for the user's service function. If the answer is yes, the method advances to step 822, where it is determined whether the network capacity is exceeded. If the answer is no, the method advances to step 824, where the compute capacity is checked. If the compute capacity is not exceeded, the method advances to step 828 and continues to enforce the usage policy. If there are not enough tokens in step 820, or the network capacity is exceeded in step 822, or the compute capacity is exceeded in step 824, the PEP utility takes action to reduce or deny usage in step 826. Two types of such actions have been discussed with regard to Figure 5.
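The decision chain of steps 820-826 can be sketched as three guards evaluated in order; the function signature and return labels are assumptions.

```python
def pep_decision(tokens_left, network_load, network_cap, compute_load, compute_cap):
    """Figure 8, steps 820-826: enforce the usage policy only if tokens
    remain and neither power capacity would be exceeded."""
    if tokens_left <= 0:
        return "reduce-or-deny"      # step 826: token budget exhausted
    if network_load > network_cap:
        return "reduce-or-deny"      # step 826: network capacity exceeded
    if compute_load > compute_cap:
        return "reduce-or-deny"      # step 826: compute capacity exceeded
    return "enforce-usage-policy"    # step 828: continue enforcement

print(pep_decision(50, 0.6, 1.0, 0.4, 1.0))  # enforce-usage-policy
print(pep_decision(0, 0.6, 1.0, 0.4, 1.0))   # reduce-or-deny
```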
[0079] The logic behind the VNF/App manager 322 in Figure 3 is now discussed with regard to Figure 9. Note that the left part of the schematic diagram in Figure 9 has been discussed with regard to Figures 5 and 8. For this reason, this part is not described herein. A VNF request 902 is received and it is accompanied by a green plan, as discussed with regard to steps 1-8 in Figure 3. The method retrieves information in step 904 about the capabilities of the compute environment and the various policies from managers 508 and 808. At the same time, information is retrieved from the VNF/App manager 322. The information from the VNF/App manager 322 is now discussed. In step 906, the VNF/App manager 322 verifies whether there are conflicts between the user's requirements and the existing service functions. This information is passed to the VNF lifecycle manager 908 for determining whether to create, modify, destroy or get a status of the tokens associated with the service function. The VNF lifecycle manager 908 also receives information from the token policies request 910 and from a check routes utility 912. The check routes utility 912 verifies the available routes between various network resources.
[0080] The check conflicts utility 906 is configured to request tokens in step 914 from the virtual power application 320. The check conflicts utility 906 is capable of advertising requirements and possible conflicts. The VNF lifecycle manager 908 is responsible for managing the VNFs' lifecycle by performing tasks such as creation 916, modification 918, destruction 920 and get status 922, all of which are related to the management of conflicts. Such conflicts may involve two VNFs attempting to perform conflicting virtual power-related requests on the same resource or conflicting operations between energy efficiency features. The output from these steps is sent to a VNF database manager 924, which maintains a database 926 for all the VNFs.
[0081] The VNF/App manager 322 is shown in Figure 9 to have as input two types of requests, one request 902 forwarded from the cloud manager 314 (see step 6 in Figure 3) to request resources to deploy a new VNF and one request 910 from the PEP utility 332 (see step 12 in Figure 3) to modify an already deployed VNF. Request 902 is used to verify the power requirements and possible conflicts in order to request tokens and request 910 is used to reduce or increase energy usage.
[0082] To deploy a new VNF, the first step is to check whether tokens were already provided by the virtual power application for the user's service function. In case tokens were not provided, the VNF/App manager verifies the service requirements (step 904) associated with the requested VNF and the power consumption estimates provided by the user or administrator. If the request for tokens is answered positively, the lifecycle manager 908 requests the routes from the network controller 316 in order to configure and deploy the VNF.
[0083] As an alternative embodiment, the virtual power scheme discussed above could be implemented as an extension to distributed systems kernels such as Apache Mesos. The virtual power application would be implemented in the Mesos Master, the Mesos Slaves would be extended with the PEP functionality, and the communication between the Master and the Slaves would be extended to allow for the transfer of the tokens. This alternative embodiment illustrates the generality of the solution proposed in the enclosed embodiments and the capability to implement such a solution in existing compute environments. [0084] A method of power management in a compute environment having a compute part and a network part is now discussed with regard to Figure 10. The method includes a step 1000 of defining various energy policies (gp) for the network part, the energy policies having assigned a different amount of virtual currency, a step 1002 of assigning, in a virtual power controller, based on an energy policy selected by a user, a service function required by the user, and network resources to be consumed by the service function, a corresponding amount of virtual currency, a step 1004 of configuring a set of energy saving capabilities to be implemented on the network resources based on the energy policy selected by the user and the corresponding amount of virtual currency, a step 1006 of monitoring power consumption of the service function with a policy enforcement utility that is aware of the selected energy policy, and a step 1008 of adjusting an amount of actual electric power used by network resources that support the service function depending on a result of the monitoring step.
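The five steps of the Figure 10 method can be tied together in one compact control loop. The policy dictionary layout, the callback signatures, and the cap-setting rule are all assumptions for illustration.

```python
def manage_power(policies, selected, service_fn, measure_power, set_power_cap):
    """One pass of the Figure 10 method for a single service function."""
    budget = policies[selected]["tokens"]      # steps 1000-1002: currency per policy
    features = policies[selected]["features"]  # step 1004: energy-saving capabilities
    used = measure_power(service_fn)           # step 1006: PEP monitoring
    if used > budget:                          # step 1008: adjust real power draw
        set_power_cap(service_fn, budget)
    return features, used

policies = {"green 1": {"tokens": 50, "features": ["EEE-sleep"]}}
caps = {}
feats, used = manage_power(policies, "green 1", "vnf-a",
                           lambda fn: 80, caps.__setitem__)
print(caps)  # {'vnf-a': 50} — usage exceeded the budget, so a cap was applied
```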
[0085] The energy management embodiments discussed above may be implemented in an apparatus and/or method that uses a computing device during a planning phase to calculate a power budget based on device and virtual network function models that include energy management features. The same computing device may also be used during the planning phase to define a set of tokens associated with this power budget. The same computing device may, during a provisioning stage, distribute the tokens to PEPs either simultaneously with or shortly after compute and/or network resources are allocated for the virtual network function, such that when a VNF is instantiated, the PEPs associated with it already have the assigned tokens. [0088] The same or another computing device, during the operational phase, may be configured to monitor adherence to the power budget by receiving power consumption readings from the compute environment's nodes (e.g., servers and switches) and service functions (e.g., applications or VNFs) and to take action by adjusting the number of tokens or the token renewal policies in case the power consumption exceeds a pre-configured limit.
[0087] A green policy PEP utility may be associated with an infrastructure resource (server or network node), which, at a provisioning stage, receives one or more tokens associated with a virtual network function or application.
[0088] The same green policy PEP utility, during the operational phase, may continuously read measurements of the energy consumption of the VNF or application on the infrastructure resource, compare the measurements with the content of the token associated with the network function, and allow or deny resource usage in case the token usage has been exceeded.
[0089] The above discussed methods may be implemented in software, hardware or a combination of the two in a computing device of the compute environment. A computing device 1100 is illustrated in Figure 11 and it includes a processor 1102 that is connected through a bus 1104 to a storage device 1106. Computing device 1100 may also include an input/output interface 1108 through which data can be exchanged with the processor and/or storage device. For example, a keyboard, mouse or other device may be connected to the input/output interface 1108 to send commands to the processor, to collect data stored in the storage device, or to provide data necessary to the processor. A monitor 1110 may be attached to bus 1104 for displaying the parts of the compute environment and the associated power consumption.
[0090] The disclosed embodiments provide a computing device and method for managing power in a network part of a compute environment. It should be understood that this description is not intended to limit the invention. On the contrary, the embodiments are intended to cover alternatives, modifications and equivalents, which are included in the spirit and scope of the invention as defined by the appended claims. Further, in the detailed description of the embodiments, numerous specific details are set forth in order to provide a comprehensive understanding of the claimed invention. However, one skilled in the art would understand that various embodiments may be practiced without such specific details.
[0091] As will also be appreciated by one skilled in the art, the embodiments may be embodied in a wireless communication device, a telecommunication network, as a method or in a computer program product. Accordingly, the embodiments may take the form of an entirely hardware embodiment or an embodiment combining hardware and software aspects. Further, the embodiments may take the form of a computer program product stored on a computer-readable storage medium having computer-readable instructions embodied in the medium. Any suitable computer-readable medium may be utilized, including hard disks, CD-ROMs, digital versatile discs (DVD), optical storage devices, or magnetic storage devices such as a floppy disk or magnetic tape. Other non-limiting examples of computer-readable media include flash-type memories or other known memories.
[0092] Although the features and elements of the present embodiments are described in the embodiments in particular combinations, each feature or element can be used alone without the other features and elements of the embodiments or in various combinations with or without other features and elements disclosed herein. The methods or flow charts provided in the present application may be implemented in a computer program, software or firmware tangibly embodied in a computer-readable storage medium for execution by a specifically programmed computer or processor.

Abbreviation Explanation
CPU Central Processing Unit
DPDK Data Plane Development Kit is a set of libraries and drivers for fast packet processing
ETSI European Telecommunications Standards Institute
GAL Green Abstraction Layer
OPEX Operational Expenses
OVSDB Open vSwitch Database
PDP Policy Decision Point
PEP Policy Enforcement Point
PTEA Per-Task Energy Accounting
PTEM Per-Task Energy Metering
QoS Quality of Service
SLA Service Level Agreement
SNMP Simple Network Management Protocol
VNF Virtual Network Function
VM Virtual Machine
VPA Virtual Power Application
WAN Wide Area Network

Claims

WHAT IS CLAIMED IS:
1. A method of power management in a compute environment (200) having a compute part and a network part, the method comprising:
defining (1000) various energy policies (gp) for the network part, the energy policies having assigned a different amount of virtual currency;
assigning (1002), in a virtual power controller (320), based on an energy policy selected by a user (301), a service function (342) required by the user, and network resources (408) to be consumed by the service function, a corresponding amount of virtual currency;
configuring (1004) a set of energy saving capabilities to be deployed on the network resources (408) based on the energy policy selected by the user and the corresponding amount of virtual currency;
monitoring (1006) power consumption of the service function (342) with a policy enforcement utility (332) that is aware of the selected energy policy; and
adjusting (1008) an amount of actual electric power used by network resources (408) that support the service function (342) depending on a result of the monitoring step.
2. The method of Claim 1, wherein the virtual power controller assigns the amount of virtual currency to the service function for the entire compute environment.
3. The method of Claim 2, wherein the compute environment includes plural data centers and a cloud component interconnecting the plural data centers.
4. The method of Claim 1, wherein the policy enforcement utility is located in a local node of the compute environment, where the service function is instantiated.
5. The method of Claim 1, further comprising:
calculating in the virtual power controller usage and/or renewal of the virtual currency for the selected energy policy.
6. The method of Claim 5, further comprising:
enforcing with the policy enforcement utility the usage and/or renewal of the virtual currency of the service function.
7. The method of Claim 1, wherein the step of assigning comprises:
assigning, in the virtual power controller, the corresponding amount of virtual currency based not only on the network resources that support the service function, but also based on compute resources that support the service function.
8. The method of Claim 7, wherein the compute resources include at least one of a processor and a server.
9. The method of Claim 1, wherein the service function is an application or a virtual network function.
10. The method of Claim 1, wherein the network resources include at least one of a virtual switch, a physical switch, a virtual router, and a physical router.
11. The method of Claim 1, wherein the step of assigning further comprises: assigning in an autonomous manner the corresponding amount of virtual currency by the energy saving capabilities; or
assigning in the policy enforcement utility the corresponding amount of virtual currency.
12. A compute resource (400) comprising:
a storage device (406) configured to store (1000) various energy policies (gp) for a network part of a compute environment (200), the energy policies having assigned a different amount of virtual currency;
a virtual power application (320) configured to assign (1002), based on an energy policy selected by a user (301), a service function (342) required by the user, and network resources (408) to be consumed by the service function, a
corresponding amount of virtual currency, and also configured to define (1004) a set of energy saving capabilities to be deployed on the network resources (408) based on the energy policy selected by the user and the corresponding amount of virtual currency; and
a policy enforcement utility (332) configured to
monitor (1006) power consumption of the service function (342), wherein the policy enforcement utility (332) is aware of the selected energy policy, and adjust (1008) an amount of actual electric power used by network resources (408) that support the service function (342) depending on a result of the monitoring step.
13. The compute resource of Claim 12, wherein the virtual power controller assigns the amount of virtual currency to the service function for the entire compute environment.
14. The compute resource of Claim 13, wherein the compute environment includes plural data centers and a cloud component interconnecting the plural data centers.
15. The compute resource of Claim 12, wherein the policy enforcement utility is located in a local node of the compute environment, where the service function is instantiated.
16. The compute resource of Claim 12, wherein the virtual power controller calculates usage and/or renewal of the virtual currency for the selected energy policy.
17. The compute resource of Claim 16, wherein the policy enforcement utility enforces the usage and/or renewal of the virtual currency of the service function.
18. The compute resource of Claim 12, wherein the virtual power controller is configured to assign the corresponding amount of virtual currency based not only on the network resources that support the service function, but also based on compute resources that support the service function.
19. The compute resource of Claim 12, wherein the service function is an application or a virtual network function.
20. A computer readable medium, including computer executable instructions, wherein the instructions, when executed by a computer, implement a method of power management in a compute environment (200) having a compute part and a network part, the method comprising:
defining (1000) various energy policies (gp) for the network part, each energy policy having assigned to it a different amount of virtual currency;
assigning (1002), in a virtual power controller (320), based on an energy policy selected by a user (301), a service function (342) required by the user, and network resources (408) to be consumed by the service function, a corresponding amount of virtual currency;
configuring (1004) a set of energy saving capabilities to be deployed on the network resources (408) based on the energy policy selected by the user and the corresponding amount of virtual currency;
monitoring (1006) power consumption of the service function (342) with a policy enforcement utility (332) that is aware of the selected energy policy; and adjusting (1008) an amount of actual electric power used by network resources (408) that support the service function (342) depending on a result of the monitoring step.
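The method of claim 20 amounts to a budget-then-enforce loop: a controller converts the selected energy policy and the resources a service function will consume into a virtual-currency budget, and an enforcement utility monitors consumption against that budget and throttles actual electric power when the budget is exhausted. The sketch below is a hypothetical illustration only; all class names (`EnergyPolicy`, `VirtualPowerController`, `PolicyEnforcementUtility`), the proportional currency assignment, and the 20% throttle factor are assumptions, not details taken from the patent.

```python
# Hypothetical sketch of the claimed power-management loop.
# All names and numeric choices are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class EnergyPolicy:
    """A named energy policy with its assigned virtual-currency budget (gp)."""
    name: str
    virtual_currency: int  # budget in arbitrary "green points"


class VirtualPowerController:
    """Assigns virtual currency to a service function based on the selected
    policy and the network resources it will consume (claims 12 and 20)."""

    def __init__(self, policies):
        self.policies = {p.name: p for p in policies}

    def assign(self, policy_name, resource_units):
        policy = self.policies[policy_name]
        # Simple proportional split: currency scales with resources consumed.
        return policy.virtual_currency * resource_units


class PolicyEnforcementUtility:
    """Monitors a service function's power draw and adjusts the actual
    electric-power cap when consumption exceeds the assigned budget."""

    def __init__(self, budget, power_cap_watts):
        self.budget = budget
        self.power_cap_watts = power_cap_watts

    def monitor_and_adjust(self, consumed):
        # Deduct consumption from the virtual-currency budget; once the
        # budget is exhausted, lower the power cap (claim 20's "adjusting
        # an amount of actual electric power").
        self.budget -= consumed
        if self.budget < 0:
            self.power_cap_watts *= 0.8  # throttle by 20% as an example
        return self.power_cap_watts


policies = [EnergyPolicy("eco", 10), EnergyPolicy("performance", 100)]
controller = VirtualPowerController(policies)
budget = controller.assign("eco", resource_units=5)   # 10 gp x 5 units = 50
enforcer = PolicyEnforcementUtility(budget, power_cap_watts=200.0)
```

Feeding the enforcer a consumption sample larger than the remaining budget drives the cap down, which mirrors the claim's dependency of the power adjustment on the monitoring result.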
PCT/IB2015/055362 2015-07-15 2015-07-15 Software-defined power management for network resources WO2017009689A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IB2015/055362 WO2017009689A1 (en) 2015-07-15 2015-07-15 Software-defined power management for network resources

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2015/055362 WO2017009689A1 (en) 2015-07-15 2015-07-15 Software-defined power management for network resources

Publications (1)

Publication Number Publication Date
WO2017009689A1 true WO2017009689A1 (en) 2017-01-19

Family

ID=53776908

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2015/055362 WO2017009689A1 (en) 2015-07-15 2015-07-15 Software-defined power management for network resources

Country Status (1)

Country Link
WO (1) WO2017009689A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822722A (en) * 2021-09-28 2021-12-21 北京沃东天骏信息技术有限公司 Virtual resource distribution control method and device and server
EP3972195A1 * 2020-09-16 2022-03-23 Deutsche Telekom AG A power saving hardware node allowing for a high performance and high availability of a virtual network function

Citations (2)

Publication number Priority date Publication date Assignee Title
US20110161696A1 (en) * 2009-12-24 2011-06-30 International Business Machines Corporation Reducing energy consumption in a cloud computing environment
US20140298052A1 (en) * 2013-03-29 2014-10-02 Telefonaktiebolaget L M Ericsson (pubI) Energy efficiency in software defined networks

Non-Patent Citations (6)

Title
BOLLA RAFFAELE ET AL: "A northbound interface for power management in next generation network devices", IEEE COMMUNICATIONS MAGAZINE, IEEE SERVICE CENTER, PISCATAWAY, US, vol. 52, no. 1, 1 January 2014 (2014-01-01), pages 149-157, XP011537114, ISSN: 0163-6804, [retrieved on 20140112], DOI: 10.1109/MCOM.2014.6710077 *
BOLLA: "A northbound interface for power management in next generation network devices", COMMUNICATIONS MAGAZINE, IEEE, vol. 52, no. 1, January 2014 (2014-01-01), pages 149-157, XP011537114, DOI: 10.1109/MCOM.2014.6710077
LIU ET AL.: "Per-task Energy Accounting in Computing Systems", COMPUTER ARCHITECTURE LETTERS, IEEE, vol. 13, no. 2, July 2014 (2014-07-01), XP011568651, DOI: 10.1109/L-CA.2013.24
R. NATHUJI; K. SCHWAN: "VPM tokens: virtual machine-aware power budgeting in datacenters", PROCEEDINGS OF THE 17TH INTERNATIONAL SYMPOSIUM ON HIGH PERFORMANCE DISTRIBUTED COMPUTING, ACM, 2008, pages 119-128
R. NATHUJI; K. SCHWAN: "VirtualPower: coordinated power management in virtualized enterprise systems", SIGOPS OPER. SYST. REV., October 2007 (2007-10-01), pages 265-278
RIPAL NATHUJI ET AL: "VPM tokens: virtual machine-aware power budgeting in datacenters", CLUSTER COMPUTING, KLUWER ACADEMIC PUBLISHERS, BO, vol. 12, no. 2, 23 January 2009 (2009-01-23), pages 189-203, XP019728509, ISSN: 1573-7543, DOI: 10.1007/S10586-009-0077-Z *

Similar Documents

Publication Publication Date Title
Mohamed et al. Software-defined networks for resource allocation in cloud computing: A survey
Masdari et al. An overview of virtual machine placement schemes in cloud computing
US10530678B2 (en) Methods and apparatus to optimize packet flow among virtualized servers
Jennings et al. Resource management in clouds: Survey and research challenges
US11206193B2 (en) Method and system for provisioning resources in cloud computing
Corradi et al. VM consolidation: A real case based on OpenStack Cloud
Akhter et al. Energy aware resource allocation of cloud data center: review and open issues
Angel et al. End-to-end performance isolation through virtual datacenters
Wuhib et al. Dynamic resource allocation with management objectives—Implementation for an OpenStack cloud
Yousaf et al. Cost analysis of initial deployment strategies for virtualized mobile core network functions
US20110239010A1 (en) Managing power provisioning in distributed computing
Baccarelli et al. Q*: Energy and delay-efficient dynamic queue management in TCP/IP virtualized data centers
WO2014077904A1 (en) Policy enforcement in computing environment
WO2017010922A1 (en) Allocation of cloud computing resources
WO2017009689A1 (en) Software-defined power management for network resources
Lago et al. SinergyCloud: A simulator for evaluation of energy consumption in data centers and hybrid clouds
Akhter et al. Energy-aware virtual machine selection method for cloud data center resource allocation
Lucanin et al. A cloud controller for performance-based pricing
Carrega et al. Energy-aware consolidation scheme for data center cloud applications
Tesfatsion et al. PerfGreen: performance and energy aware resource provisioning for heterogeneous clouds
Liu et al. Virtual resource provision with enhanced QoS in cloud platforms
Ghribi et al. Exact and heuristic graph-coloring for energy efficient advance cloud resource reservation
Werner et al. Environment, services and network management for green clouds
Carrega et al. Coupling energy efficiency and quality for consolidation of cloud workloads
Gholamhosseinian et al. Cloud computing and sustainability: Energy efficiency aspects

Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 15745562; Country of ref document: EP; Kind code of ref document: A1

REG Reference to national code
Ref country code: BR; Ref legal event code: B01A; Ref document number: 112016007964; Country of ref document: BR

ENP Entry into the national phase
Ref document number: 112016007964; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20160411

NENP Non-entry into the national phase
Ref country code: DE

122 Ep: pct application non-entry in european phase
Ref document number: 15745562; Country of ref document: EP; Kind code of ref document: A1