WO2017019035A1 - Monitoring network utilization - Google Patents

Monitoring network utilization

Info

Publication number
WO2017019035A1
Authority
WO
WIPO (PCT)
Prior art keywords
switch module
load balanced
utilization
plurality
processor
Prior art date
2015-07-28
Application number
PCT/US2015/042434
Other languages
French (fr)
Inventor
Justin E. York
Michael Stearns
Andrew Brown
Original Assignee
Hewlett Packard Enterprise Development Lp
Priority date
2015-07-28
Filing date
2015-07-28
Publication date
Application filed by Hewlett Packard Enterprise Development Lp filed Critical Hewlett Packard Enterprise Development Lp
Priority to PCT/US2015/042434
Publication of WO2017019035A1

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 - Arrangements for monitoring or testing packet switching networks
    • H04L 43/08 - Monitoring based on specific metrics
    • H04L 43/0876 - Network utilization
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 - Network-specific arrangements or communication protocols supporting networked applications
    • H04L 67/10 - Network-specific arrangements or communication protocols supporting networked applications in which an application is distributed across nodes in the network
    • H04L 67/1002 - Network-specific arrangements or communication protocols supporting networked applications in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers, e.g. load balancing
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 - Arrangements for maintenance or administration or management of packet switching networks
    • H04L 41/06 - Arrangements for maintenance or administration or management of packet switching networks involving management of faults or events or alarms
    • H04L 41/0654 - Network fault recovery
    • H04L 41/0659 - Network fault recovery by isolating the faulty entity
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 - Arrangements for monitoring or testing packet switching networks
    • H04L 43/16 - Arrangements for monitoring or testing packet switching networks using threshold monitoring

Abstract

Example implementations relate to monitoring network utilization. For example, monitoring network utilization may include a non-transitory machine readable medium storing instructions executable by a processor to cause the processor to: determine, using a switch module, a utilization baseline for a load balanced server among a plurality of load balanced servers; monitor, using the switch module, network utilization of the plurality of load balanced servers; and send a communication from the switch module to a chassis manager, instructing the chassis manager to power on or power off a load balanced server among the plurality of load balanced servers, based on the monitored network utilization and the determined utilization baseline.

Description

MONITORING NETWORK UTILIZATION

Background

[0001] Devices on a network may communicate and exchange information. As an important tool for providing network services, a server may process large amounts of data. To meet the demand for processing large amounts of data, multiple servers may be packaged together in a server blade enclosure.

Brief Description of the Drawings

[0002] Figure 1 illustrates an example system for monitoring network utilization, according to the present disclosure.

[0003] Figures 2A-2D are block diagrams of an example system for monitoring network utilization, according to the present disclosure.

[0004] Figure 3 illustrates an example method according to the present disclosure.

Detailed Description

[0005] Hyperscale computing systems can span thousands of servers. Often, the amount of revenue generated per server is only slightly higher than the total acquisition and operating cost of the server. In these environments, profitability is greatly influenced by any change in the operating cost of the servers. One way to reduce operating costs is to automate as many activities as possible. Another is to reduce power consumption.

[0006] Some server systems may simply leave all servers powered on all of the time, relying on features of the central processing unit (CPU) and other microcontrollers to operate in a lower power state when idle. While this lowers power consumption, there is still overhead for the processor, memory, storage and other infrastructure that can keep power consumption significantly above 0 Watts.

[0007] Some server systems may install agent software onto the servers to monitor the utilization of the workload applications. Software agents are capable of powering servers off. However, using software agents to power on a server that is already powered off may introduce potential security holes or add additional infrastructure into the server system. Further, software agents would have to be written for every conceivable operating system that a user of the server system might use. As such, software agents may be undesirable because of the possibility of security vulnerabilities and added workload due in part to occasional software updates.

[0008] In contrast, monitoring network utilization according to the present disclosure provides an automated mechanism for monitoring the network utilization of a pool of load balanced servers and using that utilization as an input to an algorithm that determines whether to power a server off or power an additional server on. Powering off underutilized servers saves power and increases utilization on the remaining pool of load balanced servers. Powering on additional servers when utilization reaches a certain level allows increased demand to be satisfied while meeting performance requirements.

[0009] Monitoring network utilization according to the present disclosure lowers the operational cost of providing workloads in a server system, such as a hyperscale server environment. Specifically, monitoring network utilization according to the present disclosure lowers the power consumption of large populations of servers by closely tying network utilization to the number of servers that are powered on at any given time. When the system is under heavy utilization, more servers are powered on to provide the required level of system performance. When the system is under lower utilization, more servers are powered off, lowering electrical consumption while maintaining application response times and performance.

[0010] Figure 1 is a block diagram of an example system 100 for monitoring network utilization, according to the present disclosure. System 100 may include at least one computing device that is capable of communicating with at least one remote system. In the example of Figure 1, system 100 includes a processor 101 and a machine-readable storage medium 103. Although the following descriptions refer to a single processor and a single machine-readable storage medium, the descriptions may also apply to a system with multiple processors and multiple machine-readable storage mediums. In such examples, the instructions may be distributed across (e.g., stored on) multiple machine-readable storage mediums and may be distributed across (e.g., executed by) multiple processors.

[0011] Processor 101 may be one or more central processing units (CPUs), microprocessors, and/or other hardware devices suitable for retrieval and execution of instructions stored in machine-readable storage medium 103. In the particular embodiment shown in Figure 1, processor 101 may receive, determine, and send instructions 105, 107, 109 for monitoring network utilization. As an alternative or in addition to retrieving and executing instructions, processor 101 may include one or more electronic circuits comprising a number of electronic components for performing the functionality of one or more of the instructions in machine-readable storage medium 103. With respect to the executable instruction representations (e.g., boxes) described and shown herein, it should be understood that part or all of the executable instructions and/or electronic circuits included within one box may, in alternate embodiments, be included in a different box shown in the figures or in a different box not shown.

[0012] Machine-readable storage medium 103 may be any electronic, magnetic, optical, or other physical storage device that stores executable instructions. Thus, machine-readable storage medium 103 may be, for example, Random Access Memory (RAM), an Electrically-Erasable Programmable Read-Only Memory (EEPROM), a storage drive, an optical disc, and the like. Machine-readable storage medium 103 may be disposed within system 100, as shown in Figure 1. In this situation, the executable instructions may be "installed" on the system 100. Alternatively, machine-readable storage medium 103 may be a portable, external, or remote storage medium, for example, that allows system 100 to download the instructions from the portable/external/remote storage medium. In this situation, the executable instructions may be part of an "installation package". As described herein, machine-readable storage medium 103 may be encoded with executable instructions for monitoring network utilization.

[0013] Referring to Figure 1, determine a utilization baseline instructions 105, when executed by a processor (e.g., 101), may cause system 100 to determine, using a switch module, a utilization baseline for a plurality of load balanced servers. Given that the system 100 is a hyperscale system, each of the plurality of load balanced servers may run the same workload. Therefore, to determine a utilization baseline for the plurality of load balanced servers, the switch module need only determine a utilization baseline for a single load balanced server. In some examples, the instructions executable by the processor to cause the processor to determine the utilization baseline include instructions executable by the processor to cause the processor to determine a lower utilization threshold. Similarly, the instructions executable by the processor to cause the processor to determine the utilization baseline may include instructions executable by the processor to cause the processor to determine an upper utilization threshold. Depending on the workload being run, a specific utilization baseline can be measured and used as a configuration point for deciding when to power on or power off servers in the server system.

[0014] For example, if the baseline measurement shows that a fully utilized server for a specific workload consumes 20% of the bandwidth available on its network port, then a value slightly below that threshold (e.g., at least 1% below) can be set as a trigger for turning on additional servers. If, for example, all of the servers in the server system are running at 19% to 20% traffic utilization, the switch module may determine that one more server should be powered on to absorb additional work requests and ensure sufficient performance of the system.

[0015] Likewise, if the servers are using significantly less than 20% of total bandwidth, the switch module may determine that one server should be powered off. In this example, if a number of servers are being measured at 13% utilization, one server may be powered off. Assuming utilization stays below the threshold, additional servers may be powered off.
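
To make the threshold logic of paragraphs [0013]-[0015] concrete, the following is a minimal sketch of how the decision could be expressed. It is illustrative only: the 20% baseline, the 1% margin, the 13% lower bound, and the function names are assumptions drawn from the example above, not the claimed implementation.

```python
# Illustrative sketch only: thresholds derived from the measured baseline in the
# example above; the switch module of the disclosure could apply any comparable rule.

FULLY_UTILIZED_BASELINE = 0.20                     # a fully utilized server uses 20% of its port bandwidth
UPPER_THRESHOLD = FULLY_UTILIZED_BASELINE - 0.01   # trigger slightly (e.g., 1%) below the baseline
LOWER_THRESHOLD = 0.13                             # assumed lower bound for powering a server off


def evaluate_pool(per_server_utilization):
    """Return 'power_on', 'power_off', or 'no_change' for a pool of load balanced servers.

    per_server_utilization: one bandwidth fraction (0.0-1.0) per powered-on server.
    """
    if all(u >= UPPER_THRESHOLD for u in per_server_utilization):
        return "power_on"      # every server is near the fully utilized baseline
    if all(u <= LOWER_THRESHOLD for u in per_server_utilization):
        return "power_off"     # the pool is underutilized; one server can be shed
    return "no_change"


# Servers running at 19% to 20% of port bandwidth: bring one more server on.
print(evaluate_pool([0.19, 0.20, 0.195]))   # -> power_on
# Servers measured at roughly 13% utilization: one server may be powered off.
print(evaluate_pool([0.13, 0.12, 0.125]))   # -> power_off
```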

[0016] Monitor network utilization instructions 107, when executed by a processor (e.g., 101), may cause system 100 to monitor, using the switch module, network utilization of the plurality of load balanced servers. For example, as discussed in relation to Figures 2A-2D, a plurality of traffic counters can monitor the traffic flow to and/or from each of the plurality of servers.

[0017] Power on or power off instructions 109, when executed by a processor (e.g., 101), may cause system 100 to send a communication from the switch module and/or a separate management entity to a chassis manager, instructing the chassis manager to power on or power off a load balanced server among the plurality of load balanced servers, based on the monitored network utilization and the determined utilization baseline. For example, the instructions may be executable by the processor to cause the processor to power off the load balanced server using the chassis manager, in response to a determination, by the switch module, that the monitored network utilization is at or below the lower utilization threshold. Additionally and/or alternatively, the power on or power off instructions 109 may comprise instructions executable by the processor to cause the processor to power on the load balanced server using the chassis manager, in response to a determination by the switch module that the monitored network utilization is at or above the upper utilization threshold.

[0018] In some examples, the system 100 may further include instructions executable by the processor 101 to power off a first load balanced server using the chassis manager and to power off a second load balanced server using the switch module, in response to a determination by the switch module that the monitored network utilization is at or below the lower utilization threshold after a threshold period of time. For example, a first server may be powered off, and network utilization may be monitored for a threshold period of time (e.g., 10 seconds, 30 seconds, 1 minute, etc.). After the completion of the threshold period of time, a second server may be powered off if the network utilization is still at or below the lower utilization threshold. Similarly, if the network utilization is at or above the upper utilization threshold after the threshold period of time, the first server may be turned back on and/or no change may be made.
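
The stepwise behaviour in paragraph [0018] (power off one server, wait a threshold period, then re-evaluate) could be sketched as below. The helper callables and the 30-second window are assumptions for illustration; they stand in for the switch module's measurement and for the power control communication sent to the chassis manager.

```python
import time

LOWER_THRESHOLD = 0.13       # assumed lower utilization threshold
THRESHOLD_PERIOD_S = 30      # e.g., 10 seconds, 30 seconds, or 1 minute, set by an administrator


def shed_servers(powered_on, measure_utilization, chassis_power_off):
    """Power servers off one at a time while pool utilization stays at or below the threshold.

    powered_on: list of identifiers of servers that are currently powered on.
    measure_utilization: callable returning the pool's current utilization (0.0-1.0).
    chassis_power_off: callable asking the chassis manager to power off the given server.
    """
    while len(powered_on) > 1 and measure_utilization() <= LOWER_THRESHOLD:
        chassis_power_off(powered_on.pop())   # power off one server via the chassis manager
        time.sleep(THRESHOLD_PERIOD_S)        # wait the threshold period before re-evaluating
```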

[0019] Figures 2A-2D are block diagrams of an example system 202 for monitoring network utilization, according to the present disclosure. The system 202 may include a network comprising a plurality of load balanced servers (204-1, 204-2, 204-3, ..., 204-N, referred to collectively as the plurality of load balanced servers 204). As used herein, a load balanced server refers to a server in a system in which all servers in a given load balanced domain are running the same workload and can be powered on and off interchangeably to service application requests. Such a system enables a server to be powered on or off without altering the server workload, thereby leading to significant operational savings by powering off underutilized servers.

[0020] Further, the system 202 may include a switch module 206 coupled to the plurality of load balanced servers 204, the switch module 206 to monitor network utilization of the plurality of load balanced servers 204. The switch module 206 may include a number of components allowing it to inspect packets and make routing determinations. For instance, the switch module 206 may include a switch application specific integrated circuit (ASIC) 210. The switch ASIC 210 may include a plurality of traffic counters 212-1, 212-2, 212-3, ..., 212-P (collectively referred to herein as traffic counters 212) that count transmission and receipt of packets by the servers. The traffic counters 212 may be hardware, or hardware and instructions, to track the utilization of a server over time. Put another way, each server may be associated with a particular traffic counter which monitors transmission and receipt of packets by that particular server. The switch ASIC 210 may be able to manage the traffic counters 212 with no drop in the forwarding performance of the switch module 206. Over a threshold period of time, which may be specified by an administrator, traffic averages, peak network usage, and decreased network usage can be tallied. As illustrated in Figures 2A-2D, the switch module 206 may include a memory, such as random access memory (RAM) 214, to store server utilization data and routing tables, among other information. While RAM is provided herein as an example form of data storage, examples are not so limited, and system 202 may include other forms of memory. The collected network usage data can be used as input to a power management algorithm utilized by the switch module 206 and designed to maintain optimal efficiency (e.g., latency) and power usage levels within the server system.
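
As one way to picture how the per-server traffic counters 212 could feed the power management algorithm, the sketch below samples a hypothetical byte counter over an interval and tallies averages and peaks. The accessor name and the 10 Gb/s port capacity are assumptions; a real switch ASIC exposes its own counter registers.

```python
import time

PORT_CAPACITY_BYTES_PER_S = 1_250_000_000   # assumed 10 Gb/s server-facing port


def sample_utilization(port, read_byte_counter, interval_s=1.0):
    """Fraction of port bandwidth used over interval_s, from a monotonically increasing byte counter."""
    start = read_byte_counter(port)          # bytes transmitted and received so far
    time.sleep(interval_s)
    end = read_byte_counter(port)
    return (end - start) / (PORT_CAPACITY_BYTES_PER_S * interval_s)


def tally(samples):
    """Traffic averages, peak usage, and decreased (minimum) usage over a monitoring window."""
    return {
        "average": sum(samples) / len(samples),
        "peak": max(samples),
        "minimum": min(samples),
    }
```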

[0021] As illustrated in Figures 2A-2D, the system 202 may include a chassis manager 208 coupled to the switch module 206. The chassis manager 208 may power on or power off a load balanced server among the plurality of load balanced servers 204 in response to receipt of a power control communication from the switch module 206. As used herein, a power control communication refers to a signal instructing the power on or power off of at least one load balanced server. While Figures 2A-2D illustrate a single switch module 206, examples are not so limited and the system 202 may include a plurality of switch modules.

[0022] In some examples, the switch module 206 may be implemented by, and incorporated in, a chassis manager 208. However, examples are not so limited, and the switch module 206 and chassis manager 208 may be separate and distinct components of system 202. Furthermore, the switch module 206 may include a traffic counter for each of the respective plurality of load balanced servers 204, the traffic counters to monitor network traffic to and/or from the plurality of load balanced servers 204 over a period of time. For instance, one traffic counter may monitor network traffic to and/or from load balanced server 204-1, and another traffic counter may monitor network traffic to and/or from load balanced server 204-2. In some examples, the monitored network utilization corresponds to a second network utilization value and the network utilization baseline corresponds to a first network utilization value, the system 202 further comprising the switch module 206 to determine a third network utilization value in response to the chassis manager 208 powering on or powering off a first load balanced server among the plurality of load balanced servers 204.

[0023] In some examples, the switch module 206 may generate a network utilization baseline, and the switch module 206 may send the power control communication to the chassis manager 208 in response to a determination by the switch module 206 that the monitored network utilization deviated from the network utilization baseline.

[0024] As illustrated in Figure 2A, each of the plurality of load balanced servers 204 may be powered on. Given that the system 202 is a hyperscale system, each of the plurality of load balanced servers may run the same workload and may be powered on and off interchangeably to service application requests. While Figures 2A-2D illustrate 4 load balanced servers in system 202, examples are not so limited and the system 202 may include more or fewer load balanced servers than illustrated. As illustrated in Figure 2A, 1660 watts may be consumed servicing 300 transactions per second, using 5.53 watts per transaction. This arrangement may correlate to 22% network throughput with a "medium" (e.g., 69%) level of activity on each load balanced server.

[0025] Similarly, as illustrated in Figure 2B, a load balanced server among the plurality of load balanced servers 204 may be powered off. For example, load balanced server 204-N may be powered off, while load balanced servers 204-1, 204-2, and 204-3 remain powered on. In such an example, 1440 watts may be consumed in servicing 300 transactions per second, using 4.80 watts per transaction. This arrangement may correlate to 36% network throughput with a relatively "high" (e.g., 92%) level of activity on each load balanced server.

[0026] In yet another example, illustrated in Figure 2C, network utilization may increase, as determined by the switch module 206. For instance, 1500 watts may be consumed servicing 330 transactions per second, using 4.54 watts per transaction. In such an example, the 39% network utilization correlates to maximally utilized servers (e.g., all servers that are powered on are at 100% utilization). In such an example, load balanced server 204-N is powered off, and the remaining load balanced servers are powered on. However, if network utilization increases further, latency issues may result. Put another way, additional network utilization may result in decreased performance and increased processing time for the remaining load balanced servers. In such instances, it may be beneficial to power on an additional load balanced server (e.g., power on load balanced server 204-N) to ensure latency does not increase. As illustrated in Figure 2D, load balanced server 204-N may be powered back on (e.g., using the switch module 206 and/or the chassis manager 208), and 1720 watts may be consumed servicing 330 transactions per second, using 5.21 watts per transaction. Such an arrangement may result in 76% utilization of each load balanced server.
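
The watts-per-transaction figures quoted for Figures 2A-2D follow directly from the stated power draw and transaction rates; the short check below reproduces them (the scenario labels are paraphrases of the description).

```python
# Power draw (watts) and transaction rate (transactions per second) for each scenario.
scenarios = {
    "Figure 2A: four servers powered on": (1660, 300),
    "Figure 2B: one server powered off":  (1440, 300),
    "Figure 2C: demand rises, three on":  (1500, 330),
    "Figure 2D: fourth server back on":   (1720, 330),
}

for label, (watts, tps) in scenarios.items():
    print(f"{label}: {watts / tps:.2f} W per transaction")
# Prints roughly 5.53, 4.80, 4.55, and 5.21 W per transaction,
# matching paragraphs [0024]-[0026] (4.54 vs 4.55 is a rounding difference).
```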

[0027] Figure 3 illustrates an example method 311 according to the present disclosure. At 313, the method 311 may include generating a utilization baseline using network utilization data for a plurality of load balanced servers. In some examples, generating a utilization baseline may include collecting the network utilization data for the plurality of load balanced servers during a learning phase executed prior to the monitor phase, and identifying a low utilization threshold when all applications execute a low level of functionality. Similarly, the method 311 can include collecting the network utilization data for the plurality of load balanced servers during a learning phase executed prior to the monitor phase and identifying a high utilization threshold when all applications execute a high level of functionality.

[0028] At 315, the method 311 may include monitoring network utilization of the plurality of load balanced servers during a monitor phase. At 317, the method 311 may include comparing the monitored network utilization to the generated utilization baseline. Further, at 319, the method 311 may include adjusting power allocation to the plurality of load balanced servers based on the comparison.
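
A minimal sketch of how the learning phase and monitor phase of method 311 could fit together is shown below; the callables are hypothetical stand-ins for the switch module's sample collection, its utilization measurement, and the power control communication to the chassis manager.

```python
def learn_baseline(collect_samples):
    """Learning phase (prior to the monitor phase): derive low and high utilization thresholds."""
    low_activity = collect_samples(level="low")    # all applications executing a low level of functionality
    high_activity = collect_samples(level="high")  # all applications executing a high level of functionality
    return {"low": max(low_activity), "high": min(high_activity)}


def monitor_step(baseline, measure_utilization, adjust_power):
    """Monitor phase: compare monitored utilization to the baseline and adjust power allocation."""
    utilization = measure_utilization()
    if utilization >= baseline["high"]:
        adjust_power("power_on")     # demand is high: power on another load balanced server
    elif utilization <= baseline["low"]:
        adjust_power("power_off")    # demand is low: power off a load balanced server
```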

[0029] In the foregoing detailed description of the present disclosure, reference is made to the accompanying drawings that form a part hereof, and in which is shown by way of illustration how examples of the disclosure may be practiced. These examples are described in sufficient detail to enable those of ordinary skill in the art to practice the examples of this disclosure, and it is to be understood that other examples may be utilized and that process, electrical, and/or structural changes may be made without departing from the scope of the present disclosure.

[0030] The figures herein follow a numbering convention in which the first digit corresponds to the drawing figure number and the remaining digits identify an element or component in the drawing. Elements shown in the various figures herein can be added, exchanged, and/or eliminated so as to provide a number of additional examples of the present disclosure. In addition, the proportion and the relative scale of the elements provided in the figures are intended to illustrate the examples of the present disclosure, and should not be taken in a limiting sense. As used herein, the designators "N" and "P", particularly with respect to reference numerals in the drawings, indicate that a number of the particular feature so designated can be included with examples of the present disclosure. The designators can represent the same or different numbers of the particular features. Further, as used herein, "a number of" an element and/or feature can refer to one or more of such elements and/or features.

[0031] As used herein, "logic" is an alternative or additional processing resource to perform a particular action and/or function, etc., described herein, which includes hardware, e.g., various forms of transistor logic, application specific integrated circuits (ASICs), etc., as opposed to computer executable instructions, e.g., software, firmware, etc., stored in memory and executable by a processor.

Claims

What is claimed:
1. A non-transitory machine readable medium storing instructions executable by a processor to cause the processor to:
determine, using a switch module, a utilization baseline for a load balanced server among a plurality of load balanced servers;
monitor, using the switch module, network utilization of the plurality of load balanced servers; and
send a communication from the switch module to a chassis manager, instructing the chassis manager to power on or power off a load balanced server among the plurality of load balanced servers, based on the monitored network utilization and determined utilization baseline.
2. The medium of claim 1, wherein the instructions executable by the processor to cause the processor to determine, using the switch module, the utilization baseline include instructions executable by the processor to cause the processor to determine, using the switch module, a lower utilization threshold.
3. The medium of claim 2, further comprising instructions executable by the processor to cause the processor to:
send a communication from the switch module to the chassis manager, instructing the chassis manager to power off the load balanced server in response to a determination, by the switch module, that the monitored network utilization is at or below the lower utilization threshold.
4. The medium of claim 1, wherein the instructions executable by the processor to cause the processor to determine, using the switch module, the utilization baseline include instructions executable by the processor to cause the processor to determine, using the switch module, an upper utilization threshold.
5. The medium of claim 4, further comprising instructions executable by the processor to cause the processor to:
send a communication from the switch module to the chassis manager, instructing the chassis manager to power on the load balanced server in response to a determination by the switch module that the monitored network utilization is at or above the upper utilization threshold.
6. The medium of claim 1, further comprising instructions executable by the processor to cause the processor to send a communication from the switch module to the chassis manager, instructing the chassis manager to power off a first load balanced server and power off a second load balanced server in response to a determination by the switch module that the monitored network utilization is at or below the lower utilization threshold after a threshold period of time.
7. A system for monitoring network utilization, the system comprising:
a network comprising a plurality of load balanced servers;
a switch module coupled to the plurality of load balanced servers, the switch module to:
determine a utilization baseline for a load balanced server among the plurality of load balanced servers; and
monitor network utilization of the plurality of load balanced servers; and
a chassis manager coupled to the switch module, the chassis manager to power on or power off a load balanced server among the plurality of load balanced servers in response to receipt of a power control communication from the switch module, the power control communication based at least in part on the determined utilization baseline and the monitored network utilization.
8. The system of claim 7, wherein the switch module is implemented by, and incorporated in, the chassis manager.
9. The system of claim 7, wherein the switch module includes a traffic counter for each of the respective plurality of load balanced servers, the traffic counters to monitor network traffic to the plurality of load balanced servers over a period of time.
10. The system of claim 7, further comprising:
the switch module to send the power control communication to the chassis manager in response to a determination by the switch module that the monitored network utilization deviated from the network utilization baseline.
11. The system of claim 7, wherein the monitored network utilization corresponds to a second network utilization value and the network utilization baseline corresponds to a first network utilization value, the system further comprising:
the switch module to determine a third network utilization value in response to the chassis manager powering on or powering off a first load balanced server among the plurality of load balanced servers.
12. The system of claim 11, further comprising the chassis manager to power on or power off a second load balanced server among the plurality of load balanced servers based on the determined third network utilization value.
13. A method for monitoring network utilization, the method comprising:
generating, using a switch module coupled to a plurality of load balanced servers, a utilization baseline using network utilization data for the plurality of load balanced servers;
monitoring, using the switch module, network utilization of the plurality of load balanced servers during a monitor phase;
comparing, using the switch module, the monitored network utilization to the generated utilization baseline; and
adjusting power allocation, using the switch module, to the plurality of load balanced servers based on the comparison.
14. The method of claim 13, further comprising:
collecting, using the switch module, the network utilization data for the plurality of load balanced servers during a learning phase executed prior to the monitor phase; and
identifying, using the switch module, a low utilization threshold when all applications execute a low level of functionality.
15. The method of claim 13, further comprising:
collecting, using the switch module, the network utilization data for the plurality of load balanced servers during a learning phase executed prior to the monitor phase; and
identifying, using the switch module, a high utilization threshold when all applications execute a high level of functionality.

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2015/042434 WO2017019035A1 (en) 2015-07-28 2015-07-28 Monitoring network utilization

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2015/042434 WO2017019035A1 (en) 2015-07-28 2015-07-28 Monitoring network utilization

Publications (1)

Publication Number Publication Date
WO2017019035A1 true WO2017019035A1 (en) 2017-02-02

Family

ID=57884981

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/042434 WO2017019035A1 (en) 2015-07-28 2015-07-28 Monitoring network utilization

Country Status (1)

Country Link
WO (1) WO2017019035A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008269249A (en) * 2007-04-19 2008-11-06 Ffc Ltd Power controller, virtual server management system, power supply control method, and power supply control program
JP2010086145A (en) * 2008-09-30 2010-04-15 Hitachi East Japan Solutions Ltd Distributed processing system
KR20100113383A (en) * 2009-04-13 2010-10-21 주식회사 엔씨소프트 Electric power management system for server and method thereof
JP2012252602A (en) * 2011-06-03 2012-12-20 Nippon Telegr & Teleph Corp <Ntt> Server management system, server management device, server management method and server management program
US20140129863A1 (en) * 2011-06-22 2014-05-08 Nec Corporation Server, power management system, power management method, and program


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15899827

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase in:

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15899827

Country of ref document: EP

Kind code of ref document: A1