EP3062192A1 - Dynamic server power capping - Google Patents

Dynamic server power capping

Info

Publication number
EP3062192A1
Authority
EP
European Patent Office
Prior art keywords
power
servers
power consumption
rack
group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP15177919.6A
Other languages
German (de)
French (fr)
Other versions
EP3062192B1 (en)
Inventor
Li-Tsung Chen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Quanta Computer Inc
Original Assignee
Quanta Computer Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Quanta Computer Inc
Publication of EP3062192A1
Application granted
Publication of EP3062192B1
Legal status: Active

Classifications

    • G06F 1/26 — Power supply means, e.g. regulation thereof
    • G06F 1/3203 — Power management, i.e. event-based initiation of a power-saving mode
    • G06F 1/3206 — Monitoring of events, devices or parameters that trigger a change in power modality
    • G06F 1/3209 — Monitoring remote activity, e.g. over telephone lines or network connections
    • G06F 1/3234 — Power saving characterised by the action undertaken
    • G06F 9/5094 — Allocation of resources, e.g. of the central processing unit [CPU], where the allocation takes into account power or heat criteria
    • G06F 2209/504 — Indexing scheme: resource capping
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the disclosure generally relates to server power management.
  • Data centers are often composed of multiple racks of servers.
  • the server racks are configured with power supply units that can safely supply up to a maximum amount of power to the servers in the rack. If one of the power supply units fails, the maximum amount of power that can be safely supplied by the remaining power supply units may decrease. If the servers draw more than the maximum amount of power that the working power supply units can safely supply, the working power supply units may fail and/or the servers in the rack may be damaged.
  • a system administrator can specify a maximum amount of power for a data center or a rack of servers. If the data center or rack consumes more than a threshold amount of power, the data center or rack can be put into power capping mode. Once in power capping mode, servers in the data center or rack can be power capped. The amount of power at which a server is capped can be dynamically determined based on the amount of discretionary power available to distribute among the power capped servers in the rack or data center and the amount of power that the server consumes relative to other servers in the data center or rack. Once power consumption in the data center or rack falls below a threshold level, power capping for the data center or rack can be disabled.
  • Particular implementations provide at least the following advantages: individual servers in a data center or rack can operate at maximum power while the data center or rack operates below a specified warning threshold; servers in the data center or rack are prevented from drawing too much power and causing power supply unit failures once the data center or rack is put into power capping mode; power capping mode can be automatically enabled; power capping power levels can be automatically determined and set for each server in the data center or rack.
  • FIG. 1 illustrates an example system 100 for managing power consumption of a data center.
  • system 100 can provide a mechanism by which a system administrator can configure server power capping at a data center or at a server rack using server management software 102.
  • system 100 can include a management device 104.
  • management device 104 can be a laptop computer, desktop computer, tablet device or other handheld device (e.g., smartphone).
  • Management device 104 can be configured with management software 102.
  • a server or data center administrator can interact with management software 102 to specify power capping parameters for data center 110 and/or server racks 120, 130, and 140.
  • data center 110 can correspond to a group of servers organized into one or more racks 120, 130 and/or 140.
  • Racks 120, 130 and/or 140 can include one or more servers, power supply units, switches, and/or rack management controllers, as described further below.
  • a rack of servers can be considered a subgroup or subset of the data center servers.
  • management device 104 can send power capping parameters to data center 110 and/or server racks 120, 130 and 140 through network 106.
  • network 106 can be a local area network, a wide area network, or the Internet.
  • the power capping parameters can include, for example, a power limit for data center 110, a power limit for server racks 120, 130 and/or 140, or a power limit for one or more servers (e.g., server 124) in a server rack.
  • a data center, server rack, and/or individual server can automatically enter a power capping mode based on the server administrator specified power capping parameters. For example, when a system administrator specifies a power limit for a group of servers (e.g., data center, rack, etc.), power capping mode can be dynamically started (e.g., entered, begun) for the group of servers when the total power consumption for the group of servers approaches the specified power limit (e.g., exceeds a threshold percentage of the specified power limit). Power capping mode can be dynamically exited (e.g., terminated, canceled) for the group of servers when the total power consumption for the group of servers drops below a threshold amount of the power limit set for the group of servers. Power capping for each individual server can be dynamically adjusted based on the individual server power requirements as compared to other servers in the group of servers, as described in more detail below.
  • system 100 can include data center 110.
  • data center 110 can include server racks 120, 130 and/or 140.
  • Server rack 120 can include multiple servers 124.
  • Server rack 120 can include power shelf 122.
  • Power shelf 122 can include one or more power supply units.
  • Power shelf 122 can include a rack management controller, as described below.
  • a server administrator can use management software 102 to specify power capping parameters for data center 110.
  • a server administrator can specify a power limit (e.g., maximum power rating, power consumption limit, etc.) for data center 110 through a graphical user interface (FIG. 3) of management software 102.
  • management software 102 can receive power usage statistics from each server rack 120, 130 and 140.
  • a rack management controller in each rack 120, 130 and 140 can monitor the power use of each rack and report power consumption metrics to management software 102.
  • management software 102 can connect to the rack management controller over a network connection to receive power consumption metrics from the rack management controller.
  • management software 102 can compare the power consumption of each rack to the power limit specified for data center 110. For example, management software 102 can sum the power consumption metrics received for each rack to determine the total power consumption for data center 110. If the total power consumption for data center 110 is greater than a threshold value (e.g., 90% of the specified data center power limit), then management software 102 can cause data center 110 to enter power capping mode. Management software 102 can monitor the total power consumption for data center 110 and cause data center 110 to exit power capping mode when the total power consumption for data center 110 falls below a threshold value (e.g., 70% of the specified data center power limit). Different threshold values can thus be used to determine when to start and stop power capping mode for data center 110.
  • management software 102 can automatically determine power capping limits for server racks 120, 130 and/or 140 in data center 110. For example, management software 102 can limit the power consumption for racks in data center 110 by setting power consumption limits for the racks when data center 110 is in power capping mode.
  • management software 102 can determine the amount of discretionary power available to data center 110.
  • Discretionary data center power can, for example, correspond to the amount of power that can be distributed among server racks in data center 110 that have power capping enabled.
  • server rack 120 and server rack 130 can have power capping enabled, while server rack 140 is not power capped. Since server rack 140 is not power capped, server rack 140 will be able to draw as much power as it needs from the data center without any artificial limit (e.g., cap).
  • discretionary data center power (ddcPower) can be calculated by subtracting the power consumption of non-capped racks (nRacks), e.g., racks with power capping disabled (e.g., rack 140), and the expected power consumption of racks that are currently powered off (oRacks) from the administrator-specified power limit (dcLimit) for the data center (e.g., ddcPower = dcLimit - (nRacks + oRacks)). The discretionary data center power calculation should include powered off racks because the racks may not be power capped when the racks are subsequently powered on.
  • management software 102 can distribute the discretionary data center power to the power-capped racks (e.g., rack 120, rack 130). For example, management software 102 can determine the power limit for each power-capped rack (rLimit) based on the relative power consumption of each power-capped rack in data center 110. The discretionary data center power can be distributed based on a proportion of a power capped rack's power consumption relative to the total power consumption of the power capped racks in data center 110, for example.
  • management software 102 can determine the maximum power consumption of rack 120.
  • the maximum power consumption (rPowerConsumption) can be determined based on empirical data. For example, management software 102 can collect power consumption metrics from rack 120 over time (e.g., historical data) and calculate an average of the historical power consumption or select the highest power consumption metric collected.
  • the maximum power consumption can be configured based on the specifications of the rack and/or the servers therein.
  • the maximum power consumption can be an administrator-specified value.
  • management software 102 can determine the maximum power consumption of all power-capped racks in data center 110. For example, management software 102 can sum (e.g., total) the maximum power consumption values for all power-capped racks in data center 110 to determine the total power consumption (tdcPowerConsumption) of power-capped racks in data center 110.
  • management software 102 can send the power capping limit to rack 120.
  • management software 102 can send the power capping limit to the rack management controller of rack 120.
  • rack 120 can automatically manage power consumption based on the rack power limit calculated by management software 102.
  • the rack management controller of rack 120 can manage power consumption of rack 120 using the power capping techniques described herein below.
  • FIG. 2 illustrates a system 200 for managing power consumption of a server rack.
  • System 200 illustrates additional details of system 100, described above.
  • management software 102 can send a rack power capping limit (rLimit) to rack 120.
  • For example, management software 102 can automatically determine the rack power capping limit when data center power capping is performed, as described above.
  • Management software 102 can receive input from a server administrator specifying a power capping limit for rack 120.
  • management software 102 can send the rack power capping limit for rack 120 to rack management controller (RMC) 204.
  • RMC 204 can be a rack-level processor configured to monitor and control various functions of rack 120.
  • RMC 204 can be housed in power shelf 122 of rack 120.
  • RMC 204 can monitor the health of the power supply units and mitigate power failures by turning on backup battery units (not shown).
  • RMC 204 can monitor temperatures within rack 120 and turn on fans, turn off servers, or adjust power consumption to reduce temperatures and protect components.
  • RMC 204 can monitor power consumption of servers 210 and 220.
  • RMC 204 can be communicatively coupled to service controller (SC) 214 of server 210 and service controller (SC) 224 of server 220.
  • Service controller 214 and service controller 224 can be Baseboard Management Controllers (BMC) or Management Engine (ME), for example.
  • Service controllers 214 and 224 can monitor the power consumption of servers 210 and 220, using known mechanisms.
  • RMC 204 can periodically get power consumption metrics from service controllers 214 and 224 so that RMC 204 can monitor the power consumption of servers 210 and 220 and manage the power consumption of rack 120.
  • RMC 204 can, in turn, send the power consumption metrics to management software 102 so that a server administrator can monitor the power consumption of the servers and implement power capping policies.
  • RMC 204 can compare the power consumption of servers 210 and 220 to the rack power capping limit (rLimit) received from management software 102 to determine when to start power capping mode for rack 120. For example, RMC 204 can compare the power consumption of each server 210, 220 to the power limit specified by management software 102 for rack 120. RMC 204 can add up the power consumption metrics received from service controller 214 and service controller 224 for each server 210 and 220 in rack 120 to determine the total power consumption for rack 120. Total power consumption for rack 120 can be calculated by adding (e.g., summing, totaling) up the power consumption for all servers in rack 120, for example.
  • if the total power consumption for rack 120 is greater than a threshold value (e.g., 90% of the specified rack power limit), RMC 204 can automatically enter power capping mode for rack 120.
  • RMC 204 can monitor the total power consumption for rack 120 and exit power capping mode for rack 120 when the total power consumption for rack 120 falls below a threshold value (e.g., 70% of the specified rack power limit). Different threshold values can thus be used to determine when to enter and exit power capping mode for rack 120.
  • RMC 204 can automatically determine power capping limits for servers 210 and 220 in rack 120 when rack 120 enters power capping mode. For example, RMC 204 can limit the power consumption for servers in rack 120 by setting power consumption limits for the servers when rack 120 is in power capping mode. To determine power capping limits for servers 210 and 220 in rack 120, RMC 204 can determine the amount of discretionary power available to rack 120. Discretionary rack power can, for example, correspond to the amount of power that can be distributed among servers in rack 120 that have power capping enabled.
  • discretionary rack power (drPower) can be calculated by subtracting the power consumption of non-capped servers (nServers), e.g., servers with power capping disabled, and the expected power consumption of servers that are currently powered off (oServers) from the power limit (rLimit) specified for rack 120 (e.g., drPower = rLimit - (nServers + oServers)).
  • the discretionary rack power calculation should include powered off servers because the servers may not be power capped when the servers are powered on and may, therefore, draw too much power from the system.
  • RMC 204 can distribute the discretionary rack power to the power capped servers. For example, RMC 204 can determine the power limit (sLimit) for each power-capped server based on the relative power consumption of each power-capped server in rack 120. The discretionary rack power can be distributed to the power-capped servers based on a proportion of a power-capped server's power consumption relative to the total power consumption of the power-capped servers in rack 120. The power limit (sLimit) for a server can be the proportional amount of the discretionary rack power distributed to the server.
  • RMC 204 can determine the maximum power consumption of server 210.
  • the maximum power consumption (sPowerConsumption) of server 210 can be based on empirical data. For example, RMC 204 can collect power consumption metrics (historical data) from service controller 214 and service controller 224 over time. RMC 204 can determine the maximum power consumption by calculating an average of the historical power consumption, or by selecting the highest power consumption metric among the collected metrics.
  • the maximum power consumption of server 210 can be based on the specifications of the server and/or the CPUs therein. The maximum power consumption of server 210 can be an administrator-specified value.
  • RMC 204 can determine the maximum power consumption of all power-capped servers in rack 120. For example, RMC 204 can add up (e.g., total) the maximum power consumption values for all power-capped servers in rack 120 to determine the total power consumption (trPowerConsumption) of power-capped servers in rack 120.
  • RMC 204 can send the power capping limit to server 210.
  • RMC 204 can send the power capping limit to service controller 214 of server 210.
  • Service controller 214 can then adjust various components (e.g., CPU 212) of server 210 so that the server consumes no more power than the specified power capping limit for server 210.
  • service controller 214 can automatically manage power consumption of server 210 based on the server power limit calculated by RMC 204. For example, service controller 214 can adjust the configuration (e.g., p-state, power state, operating state, operating frequency) of CPU 212 to reduce the frequency at which CPU 212 operates and thereby reduce the amount of power consumed by server 210.
  • configuration e.g., p-state, power state, operating state, operating frequency
  • the power capping limit can be calculated by RMC 204 for server 220 in a similar manner as server 210.
  • RMC 204 can send a power capping limit to service controller 224, and service controller 224 can adjust various components (e.g., CPU 222) of server 220 so that server 220 consumes less power than the RMC-specified power capping limit.
  • Service controller 224 can automatically manage power consumption of server 220 based on the server power limit calculated by RMC 204. For example, service controller 224 can adjust the configuration (e.g., p-state) of CPU 222 to reduce the frequency at which CPU 222 operates and thereby reduce the amount of power consumed by server 220.
  • management software 102 can perform the functions of RMC 204 described above.
  • rack 120 may not include RMC 204.
  • management software 102 can receive power consumption metrics directly from service controllers 214 and 224, determine power capping limits for servers 210 and 220, and communicate the determined power capping limits to service controllers 214 and 224, as described above with reference to FIG. 2 and RMC 204.
  • FIG. 3 illustrates an example graphical user interface (GUI) 300 for managing power consumption of a data center.
  • GUI 300 can be an interface of management software 102, as described above.
  • a server administrator can provide input to GUI 300 to enable power capping for data centers, racks and/or servers.
  • the server administrator can provide input to GUI 300 to specify power consumption limits for data centers and racks that will be used to perform data center, rack and server power capping, as described above.
  • GUI 300 can include graphical element 302 for enabling power capping in a data center.
  • a server administrator (i.e., user) can select graphical element 302 to turn power capping on and off for data center 110.
  • GUI 300 can include graphical element 304 for specifying a power limit for data center 110.
  • the server administrator can select from graphical element 304 (e.g., from a pull-down menu, list, etc.) a value indicating an amount of power consumption that the data center cannot exceed.
  • the server administrator can enter text into graphical element 304 that indicates a maximum power value.
  • When power capping is enabled in data center 110, data center 110 will enter power capping mode when the power consumption of the data center exceeds a threshold value (e.g., 90% of the data center power limit). Data center 110 will exit power capping mode when the power consumption of the data center drops below a threshold value (e.g., 70% of the data center power limit).
  • GUI 300 can include graphical element 306 for specifying power capping parameters for a server rack.
  • graphical element 306 can represent server rack 120 in data center 110.
  • GUI 300 can include a graphical element 306 corresponding to each rack in data center 110, for example.
  • graphical element 306 can include graphical element 308 for enabling power capping in a server rack.
  • a server administrator can select graphical element 308 to turn power capping on and off for server rack 120.
  • Graphical element 306 can include graphical element 310 for specifying a power limit for server rack 120.
  • the server administrator can select from graphical element 310 (e.g., a pull-down menu, list, etc.) a value indicating an amount of power consumption that server rack 120 cannot exceed.
  • the server administrator can enter text into graphical element 310 that indicates a maximum power value for server rack 120.
  • When power capping is enabled in server rack 120, server rack 120 will enter power capping mode when the power consumption of server rack 120 exceeds a threshold value (e.g., 90% of the rack power limit). Server rack 120 will exit power capping mode when the power consumption of server rack 120 drops below a threshold value (e.g., 70% of the rack power limit).
  • graphical element 306 can include graphical elements 312-322 for enabling and disabling power capping for servers in a server rack.
  • graphical elements 312-322 can correspond to servers (e.g., server 210, server 220) in server rack 120.
  • a server administrator can select individual graphical elements 312-322 (e.g., check boxes, switches, toggles, etc.) to enable or disable power capping for the corresponding server.
  • graphical element 306 can indicate the maximum power consumption for each server in a rack.
  • graphical element 306 can present graphical elements 312-322 that identify a corresponding server (e.g., server 210, server 220, etc.) in rack 120 and the maximum amount of power that each server consumes.
  • rack 120 can use the maximum power consumption values to determine how to distribute the rack's discretionary power to the servers in the rack.
  • discretionary power can be calculated by subtracting the power consumption of non-power capped servers and the power consumption of powered off servers from the power limit of the rack.
  • rack 120 includes three power capped servers (e.g., Server 1, Server 2, Server 3) corresponding to selected (e.g., checked box) graphical elements 312, 314, and 316.
  • Rack 120 includes two non-power capped servers (e.g., Server 4, Server 5) corresponding to graphical elements 318 and 320.
  • Rack 120 includes one powered off server (e.g., Server 6) corresponding to graphical element 322.
  • the power consumption of the two non-power capped servers (Server 4: 400 Watts; Server 5: 325 Watts) and the power consumption of the powered off server (Server 6: 400 Watts) can be subtracted from the power limit of rack 120 (2,000 Watts).
  • the discretionary power for rack 120 is 875 Watts (e.g., 2,000 - (400 + 325 + 400)).
  • the discretionary power for rack 120 can be distributed to the power capped servers (e.g., Server 1, Server 2, Server 3) based on the relative amount of power consumed by each server.
  • the amount of discretionary power allocated to Server 1 is 338 Watts (e.g., 875 * (600 / 1550)).
  • the amount of discretionary power allocated to Server 2 is 254 Watts (e.g., 875 * (450 / 1550)).
  • the amount of discretionary power allocated to Server 3 is 282 Watts (e.g., 875 * (500 / 1550)).
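  • As an illustration only, the arithmetic of this example can be checked with a few lines of Python. The wattages and the 2,000 Watt rack limit are the example values above; integer truncation of the proportional shares is assumed:

```python
# Check the discretionary-power example above (values from FIG. 3).
rack_limit = 2000
non_capped = [400, 325]   # Server 4 and Server 5 (power capping disabled)
powered_off = [400]       # Server 6 (powered off)
capped = {"Server 1": 600, "Server 2": 450, "Server 3": 500}

discretionary = rack_limit - (sum(non_capped) + sum(powered_off))
assert discretionary == 875

total_capped = sum(capped.values())  # 1550 Watts
for name, watts in capped.items():
    print(name, int(discretionary * watts / total_capped), "Watts")
# Server 1 338 Watts, Server 2 254 Watts, Server 3 282 Watts
```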
  • a similar process can be used to determine the discretionary power for the data center and the allocation of discretionary power to racks within the data center, as described above with reference to FIG. 1 .
  • FIG. 4 illustrates an example process 400 for dynamic server power capping.
  • process 400 can be used to manage server power consumption at the data center level and/or at the server rack level.
  • a computing device can monitor the power usage for a group of servers.
  • a server group can correspond to a data center.
  • a server group can correspond to a server rack.
  • Management device 104 can be configured with management software 102 to monitor the power usage of servers within data center 110.
  • Rack management controller 204 can be configured to monitor the power usage of servers within rack 120.
  • the computing device can determine that the power usage for the group of servers exceeds a first threshold level. For example, the management device 104 can determine that the power usage for data center 110 exceeds 90% of the power limit for data center 110.
  • Rack management controller 204 can determine that the power usage for rack 120 exceeds 90% of the power limit for rack 120.
  • the computing device can cause the group of servers to enter power capping mode. For example, in response to determining that the power usage for data center 110 exceeds the first threshold, management device 104 can cause data center 110 to enter power capping mode. Similarly, in response to determining that the power usage for rack 120 is greater than the first threshold, rack management controller 204 can cause rack 120 to enter power capping mode.
  • the computing device can determine the amount of discretionary power available for power capping.
  • management device 104 can determine the amount of discretionary power available to distribute to power-capped server racks within data center 110.
  • Rack management controller 204 can determine the amount of discretionary power available to distribute to power-capped servers within server rack 120.
  • the computing device can allocate the discretionary power.
  • management device 104 can allocate the discretionary data center power to power capped racks within data center 110 according to the proportional amount of power consumed by each rack within data center 110, as described above.
  • Rack management controller 204 can allocate the discretionary rack power to servers within rack 120 according to the proportional amount of power consumed by each server within rack 120, as described above. The amount of discretionary power allocated to a rack or server becomes the power limit for that rack or server, for example.
  • the computing device can determine that power usage for a group of servers is below a second threshold level.
  • management device 104 can monitor power consumption of racks within data center 110 and detect when the power consumption of the racks drops below a second threshold level (e.g., 70% of the data center power limit).
  • Rack management controller 204 can monitor power consumption of servers within rack 120 and detect when the power consumption of the servers drops below a second threshold level (e.g., 70% of the rack power limit).
  • the computing device can cause the group of servers to exit power capping mode. For example, in response to detecting that the power consumption of the racks within data center 110 has dropped below the second threshold level, management device 104 can remove the power limits from the racks within data center 110 so that the power consumption of the racks in data center 110 is no longer limited. Similarly, in response to detecting that the power consumption of the servers within rack 120 has dropped below the second threshold level, rack management controller 204 can remove the power limits from the servers within rack 120 so that the power consumption of the servers in rack 120 is no longer limited.
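  • The monitoring, capping, and un-capping steps of process 400 can be summarized as a control loop. The sketch below is illustrative only and assumes hypothetical member objects (for racks or servers) exposing read_power(), powered_on(), capping_enabled(), expected_off_power(), set_limit(), and clear_limit() as stand-ins for the RMC and service-controller interfaces:

```python
import time

def power_capping_loop(members, limit, enter_pct=0.90, exit_pct=0.70, poll_s=5):
    """Sketch of process 400 for a group of racks or servers."""
    capping = False
    while True:
        # Monitor total power usage for the group.
        total = sum(m.read_power() for m in members if m.powered_on())
        if not capping and total > enter_pct * limit:
            # Enter power capping mode: compute and allocate discretionary power.
            capping = True
            on = [m for m in members if m.powered_on()]
            capped = [m for m in on if m.capping_enabled()]
            uncapped = sum(m.read_power() for m in on if not m.capping_enabled())
            off = sum(m.expected_off_power() for m in members if not m.powered_on())
            discretionary = limit - (uncapped + off)
            total_capped = sum(m.read_power() for m in capped)
            for m in capped:
                # A proportional share of the discretionary power becomes the limit.
                m.set_limit(discretionary * m.read_power() / total_capped)
        elif capping and total < exit_pct * limit:
            # Exit power capping mode: remove the limits.
            capping = False
            for m in members:
                m.clear_limit()
        time.sleep(poll_s)
```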
  • FIG. 5 is a block diagram of an example system architecture 500 implementing the features and processes of FIGS. 1-4 .
  • the architecture 500 can be implemented on any electronic device that runs software applications derived from compiled instructions, including without limitation personal computers, servers, smart phones, media players, electronic tablets, game consoles, email devices, etc.
  • the architecture 500 can include one or more processors 502, one or more input devices 504, one or more display devices 506, one or more network interfaces 508 and one or more computer-readable mediums 510. Each of these components can be coupled by bus 512.
  • Display device 506 can be any known display technology, including but not limited to display devices using Liquid Crystal Display (LCD) or Light Emitting Diode (LED) technology.
  • Processor(s) 502 can use any known processor technology, including but not limited to graphics processors and multi-core processors.
  • Input device 504 can be any known input device technology, including but not limited to a keyboard (including a virtual keyboard), mouse, track ball, and touch-sensitive pad or display.
  • Bus 512 can be any known internal or external bus technology, including but not limited to ISA, EISA, PCI, PCI Express, NuBus, USB, Serial ATA or FireWire.
  • Computer-readable medium 510 can be any medium that participates in providing instructions to processor(s) 502 for execution, including without limitation, non-volatile storage media (e.g., optical disks, magnetic disks, flash drives, etc.) or volatile media (e.g., SDRAM, ROM, etc.).
  • the computer-readable medium (e.g., storage devices, mediums, and memories) can be non-transitory; non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.
  • Computer-readable medium 510 can include various instructions for implementing an operating system 514 (e.g., Mac OS®, Windows®, Linux).
  • the operating system 514 can be multi-user, multiprocessing, multitasking, multithreading, real-time and the like.
  • the operating system 514 performs basic tasks, including but not limited to: recognizing input from input device 504; sending output to display device 506; keeping track of files and directories on computer-readable medium 510; controlling peripheral devices (e.g., disk drives, printers, etc.) which can be controlled directly or through an I/O controller; and managing traffic on bus 512.
  • Network communications instructions 516 can establish and maintain network connections (e.g., software for implementing communication protocols, such as TCP/IP, HTTP, Ethernet, etc.).
  • operating system 514 can perform at least some of the processes described in reference to FIGS. 1-4 , above.
  • a graphics processing system 518 can include instructions that provide graphics and image processing capabilities.
  • the graphics processing system 518 can implement the processes described with reference to FIGS. 1-4 .
  • Application(s) 520 can be an application that uses or implements the processes described in reference to FIGS. 1-4 .
  • applications 520 can include management software 102.
  • Application(s) 520 can include rack management controller software for managing power within a rack.
  • Application(s) 520 can include service controller software for managing power consumption within a server.
  • Service controller 522 can be a controller that operates independently of processor(s) 502 and/or operating system 514. In some implementations, service controller 522 can be powered and operational before processor(s) 502 are powered on and operating system 514 is loaded into processor(s) 502. For example, service controller 522 can provide for pre-OS management of the computing device through a dedicated network interface or other input device.
  • service controller 522 can be a baseboard management controller (BMC) that monitors device sensors (e.g., voltages, temperature, fans, etc.), logs events for failure analysis, provides LED guided diagnostics, performs power management, and/or provides remote management capabilities through an intelligent platform management interface (IPMI), keyboard, video, and mouse (KVM) redirection, serial over LAN (SOL), and/or other interfaces.
  • service controller 522 can implement the processes described with reference to FIGS. 1-4 above.
  • service controller 522 can be configured to manage power consumption within a server.
  • the described features can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device.
  • a computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result.
  • a computer program can be written in any form of programming language (e.g., Objective-C, Java), including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • Suitable processors for the execution of a program of instructions include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors or cores, of any kind of computer.
  • a processor will receive instructions and data from a read-only memory or a random access memory or both.
  • the essential elements of a computer are a processor for executing instructions and one or more memories for storing instructions and data.
  • a computer will also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks.
  • Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • the processor and the memory can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).
  • the features can be implemented on a computer having a display device such as a CRT (cathode ray tube) or LCD (liquid crystal display) monitor for displaying information to the user and a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer.
  • the features can be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of them.
  • the components of the system can be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, e.g., a LAN, a WAN, and the computers and networks forming the Internet.
  • the computer system can include clients and servers.
  • a client and server are generally remote from each other and typically interact through a network.
  • the relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
  • An API can define one or more parameters that are passed between a calling application and other software code (e.g., an operating system, library routine, function) that provides a service, that provides data, or that performs an operation or a computation.
  • the API can be implemented as one or more calls in program code that send or receive one or more parameters through a parameter list or other structure based on a call convention defined in an API specification document.
  • a parameter can be a constant, a key, a data structure, an object, an object class, a variable, a data type, a pointer, an array, a list, or another call.
  • API calls and parameters can be implemented in any programming language.
  • the programming language can define the vocabulary and calling convention that a programmer will employ to access functions supporting the API.
  • an API call can report to an application the capabilities of a device running the application, such as input capability, output capability, processing capability, power capability, communications capability, etc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Power Sources (AREA)
  • Remote Monitoring And Control Of Power-Distribution Networks (AREA)

Abstract

In some implementations, a system administrator can specify a maximum amount of power for a data center or a rack of servers. If the data center or rack consumes more than a threshold amount of power, the data center or rack can be put into power capping mode. Once in power capping mode, servers in the data center or rack can be power capped. The amount of power at which a server is capped can be dynamically determined based on the amount of discretionary power available to distribute among the power capped servers in the rack or data center and the amount of power that the server consumes relative to other servers in the data center or rack. Once power consumption in the data center or rack falls below a threshold level, power capping for the data center or rack can be disabled.

Description

    TECHNICAL FIELD
  • The disclosure generally relates to server power management.
  • BACKGROUND
  • Data centers are often composed of multiple racks of servers. The server racks are configured with power supply units that can safely supply up to a maximum amount of power to the servers in the rack. If one of the power supply units fails, the maximum amount of power that can be safely supplied by the remaining power supply units may decrease. If the servers draw more than the maximum amount of power that the working power supply units can safely supply, the working power supply units may fail and/or the servers in the rack may be damaged.
  • SUMMARY
  • In some implementations, a system administrator can specify a maximum amount of power for a data center or a rack of servers. If the data center or rack consumes more than a threshold amount of power, the data center or rack can be put into power capping mode. Once in power capping mode, servers in the data center or rack can be power capped. The amount of power at which a server is capped can be dynamically determined based on the amount of discretionary power available to distribute among the power capped servers in the rack or data center and the amount of power that the server consumes relative to other servers in the data center or rack. Once power consumption in the data center or rack falls below a threshold level, power capping for the data center or rack can be disabled.
  • Particular implementations provide at least the following advantages: individual servers in a data center or rack can operate at maximum power while the data center or rack operates below a specified warning threshold; servers in the data center or rack are prevented from drawing too much power and causing power supply unit failures once the data center or rack is put into power capping mode; power capping mode can be automatically enabled; power capping power levels can be automatically determined and set for each server in the data center or rack.
  • Details of one or more implementations are set forth in the accompanying drawings and the description below. Other features, aspects, and potential advantages will be apparent from the description and drawings, and from the claims.
  • DESCRIPTION OF DRAWINGS
    • FIG. 1 illustrates an example system for managing power consumption of a data center.
    • FIG. 2 illustrates a system for managing power consumption of a server rack.
    • FIG. 3 illustrates an example graphical user interface for managing power consumption of a data center.
    • FIG. 4 illustrates an example process for dynamic server power capping.
    • FIG. 5 is a block diagram of an example system architecture implementing the features and processes of FIGS. 1-4.
  • Like reference symbols in the various drawings indicate like elements.
  • DETAILED DESCRIPTION
  • FIG. 1 illustrates an example system 100 for managing power consumption of a data center. For example, system 100 can provide a mechanism by which a system administrator can configure server power capping at a data center or at a server rack using server management software 102.
  • In some implementations, system 100 can include a management device 104. For example, management device 104 can be a laptop computer, desktop computer, tablet device or other handheld device (e.g., smartphone). Management device 104 can be configured with management software 102. In some implementations, a server or data center administrator can interact with management software 102 to specify power capping parameters for data center 110 and/or server racks 120, 130, and 140. For example, data center 110 can correspond to a group of servers organized into one or more racks 120, 130 and/or 140. Racks 120, 130 and/or 140 can include one or more servers, power supply units, switches, and/or rack management controllers, as described further below. A rack of servers can be considered a subgroup or subset of the data center servers.
  • In some implementations, management device 104 can send power capping parameters to data center 110 and/or server racks 120, 130 and 140 through network 106. For example, network 106 can be a local area network, a wide area network, or the Internet. The power capping parameters can include, for example, a power limit for data center 110, a power limit for server racks 120, 130 and/or 140, or a power limit for one or more servers (e.g., server 124) in a server rack.
  • In some implementations, a data center, server rack, and/or individual server can automatically enter a power capping mode based on the server administrator specified power capping parameters. For example, when a system administrator specifies a power limit for a group of servers (e.g., data center, rack, etc.), power capping mode can be dynamically started (e.g., entered, begun) for the group of servers when the total power consumption for the group of servers approaches the specified power limit (e.g., exceeds a threshold percentage of the specified power limit). Power capping mode can be dynamically exited (e.g., terminated, canceled) for the group of servers when the total power consumption for the group of servers drops below a threshold amount of the power limit set for the group of servers. Power capping for each individual server can be dynamically adjusted based on the individual server power requirements as compared to other servers in the group of servers, as described in more detail below.
  • In some implementations, system 100 can include data center 110. For example, data center 110 can include server racks 120, 130 and/or 140. Server rack 120 can include multiple servers 124. Server rack 120 can include power shelf 122. Power shelf 122 can include one or more power supply units. Power shelf 122 can include a rack management controller, as described below.
  • In some implementations, a server administrator can use management software 102 to specify power capping parameters for data center 110. For example, a server administrator can specify a power limit (e.g., maximum power rating, power consumption limit, etc.) for data center 110 through a graphical user interface (FIG. 3) of management software 102.
  • In some implementations, management software 102 can receive power usage statistics from each server rack 120, 130 and 140. For example, a rack management controller in each rack 120, 130 and 140 can monitor the power use of each rack and report power consumption metrics to management software 102. For example, management software 102 can connect to the rack management controller over a network connection to receive power consumption metrics from the rack management controller.
  • In some implementations, management software 102 can compare the power consumption of each rack to the power limit specified for data center 110. For example, management software 102 can sum the power consumption metrics received for each rack to determine the total power consumption for data center 110. If the total power consumption for data center 110 is greater than a threshold value (e.g., 90% of the specified data center power limit), then management software 102 can cause data center 110 to enter power capping mode. Management software 102 can monitor the total power consumption for data center 110 and cause data center 110 to exit power capping mode when the total power consumption for data center 110 falls below a threshold value (e.g., 70% of the specified data center power limit). Different threshold values can thus be used to determine when to start and stop power capping mode for data center 110.
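  • The enter/exit comparison above can be expressed as a simple two-threshold (hysteresis) check; a minimal sketch, assuming the 90%/70% example thresholds:

```python
ENTER_PCT = 0.90  # enter capping above 90% of the power limit
EXIT_PCT = 0.70   # exit capping below 70% of the power limit

def update_capping_mode(total_power, power_limit, in_capping_mode):
    """Return the new capping-mode flag for a data center or rack."""
    if not in_capping_mode and total_power > ENTER_PCT * power_limit:
        return True
    if in_capping_mode and total_power < EXIT_PCT * power_limit:
        return False
    return in_capping_mode
```

Using a lower exit threshold than entry threshold keeps the mode from toggling rapidly when consumption hovers near the limit.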
  • In some implementations, management software 102 can automatically determine power capping limits for server racks 120, 130 and/or 140 in data center 110. For example, management software 102 can limit the power consumption for racks in data center 110 by setting power consumption limits for the racks when data center 110 is in power capping mode.
  • In some implementations, to determine power capping limits for server racks 120, 130, and/or 140 in data center 110, management software 102 can determine the amount of discretionary power available to data center 110. Discretionary data center power can, for example, correspond to the amount of power that can be distributed among server racks in data center 110 that have power capping enabled. For example, server rack 120 and server rack 130 can have power capping enabled, while server rack 140 is not power capped. Since server rack 140 is not power capped, server rack 140 will be able to draw as much power as it needs from the data center without any artificial limit (e.g., cap).
  • In some implementations, discretionary data center power (ddcPower) can be calculated by subtracting the power consumption of non-capped racks (nRacks), e.g., racks with power capping disabled (e.g., rack 140), and the expected power consumption of racks that are currently powered off (oRacks) from the administrator-specified power limit (dcLimit) for the data center (e.g., ddcPower = dcLimit - (nRacks + oRacks)). The discretionary data center power calculation should include powered off racks because the racks may not be power capped when the racks are subsequently powered on.
  • In some implementations, management software 102 can distribute the discretionary data center power to the power-capped racks (e.g., rack 120, rack 130). For example, management software 102 can determine the power limit for each power-capped rack (rLimit) based on the relative power consumption of each power-capped rack in data center 110. The discretionary data center power can be distributed based on a proportion of a power capped rack's power consumption relative to the total power consumption of the power capped racks in data center 110, for example.
  • In some implementations, management software 102 can determine the maximum power consumption of rack 120. The maximum power consumption (rPowerConsumption) can be determined based on empirical data. For example, management software 102 can collect power consumption metrics from rack 120 over time (e.g., historical data) and calculate an average of the historical power consumption or select the highest power consumption metric collected. The maximum power consumption can also be configured based on the specifications of the rack and/or the servers therein, or can be an administrator-specified value.
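  • A minimal sketch of the two empirical options just described, assuming samples_watts holds periodically collected power consumption metrics:

```python
def estimate_max_power(samples_watts, use_peak=True):
    """Estimate rPowerConsumption from historical metrics (watts)."""
    if use_peak:
        return max(samples_watts)                       # highest metric collected
    return sum(samples_watts) / len(samples_watts)      # historical average
```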
  • In some implementations, management software 102 can determine the maximum power consumption of all power-capped racks in data center 110. For example, management software 102 can sum (e.g., total) the maximum power consumption values for all power-capped racks in data center 110 to determine the total power consumption (tdcPowerConsumption) of power-capped racks in data center 110.
  • In some implementations, management software 102 can determine the power capping limit for rack 120 (rLimit) by multiplying the discretionary data center power (ddcPower) by a ratio of rack 120 power consumption (rPowerConsumption) to total data center power consumption (tdcPowerConsumption). For example, management software 102 can determine the rack power limit by calculating rLimit = ddcPower * (rPowerConsumption / tdcPowerConsumption).
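  • Combining the two formulas, a sketch of the data-center-level calculation using the text's variable names; capped_racks is assumed to map each power-capped rack to its rPowerConsumption in watts:

```python
def rack_limits(dc_limit, n_racks_power, o_racks_power, capped_racks):
    """Compute rLimit for each power-capped rack in the data center."""
    ddc_power = dc_limit - (n_racks_power + o_racks_power)  # discretionary power
    tdc = sum(capped_racks.values())                        # tdcPowerConsumption
    return {rack: ddc_power * (watts / tdc)                 # rLimit per rack
            for rack, watts in capped_racks.items()}
```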
  • In some implementations, once the power capping limit for rack 120 is calculated, management software 102 can send the power capping limit to rack 120. For example, management software 102 can send the power capping limit to the rack management controller of rack 120. In some implementations, rack 120 can automatically manage power consumption based on the rack power limit calculated by management software 102. For example, the rack management controller of rack 120 can manage power consumption of rack 120 using the power capping techniques described herein below.
  • FIG. 2 illustrates a system 200 for managing power consumption of a server rack. System 200 illustrates additional details of system 100, described above. In some implementations, management software 102 can send a rack power capping limit (rLimit) to rack 120. For example, management software 102 can automatically determine the rack power capping limit when data center power capping is performed, as described above. Management software 102 can receive input from a server administrator specifying a power capping limit for rack 120.
  • In some implementations, management software 102 can send the rack power capping limit for rack 120 to rack management controller (RMC) 204. For example, RMC 204 can be a rack-level processor configured to monitor and control various functions of rack 120, and can be housed in power shelf 122 of rack 120. RMC 204 can monitor the health of the power supply units and mitigate power failures by turning on backup battery units (not shown). RMC 204 can monitor temperatures within rack 120 and turn on fans, turn off servers, or adjust power consumption to reduce temperatures and protect components.
  • In some implementations, RMC 204 can monitor power consumption of servers 210 and 220. For example, RMC 204 can be communicatively coupled to service controller (SC) 214 of server 210 and service controller (SC) 224 of server 220. Service controller 214 and service controller 224 can be Baseboard Management Controllers (BMCs) or Management Engines (MEs), for example. Service controllers 214 and 224 can monitor the power consumption of servers 210 and 220 using known mechanisms. RMC 204 can periodically get power consumption metrics from service controllers 214 and 224 so that RMC 204 can monitor the power consumption of servers 210 and 220 and manage the power consumption of rack 120. RMC 204 can, in turn, send the power consumption metrics to management software 102 so that a server administrator can monitor the power consumption of the servers and implement power capping policies.
  • In some implementations, RMC 204 can compare the power consumption of servers 210 and 220 to the rack power capping limit (rLimit) received from management software 102 to determine when to start power capping mode for rack 120. For example, RMC 204 can compare the power consumption of each server 210, 220 to the power limit specified by management software 102 for rack 120. RMC 204 can add up the power consumption metrics received from service controller 214 and service controller 224 for each server 210 and 220 in rack 120 to determine the total power consumption for rack 120. Total power consumption for rack 120 can be calculated by adding (e.g., summing, totaling) up the power consumption for all servers in rack 120, for example. If the total power consumption for rack 120 is greater than a threshold value (e.g., 90% of the specified rack power limit), then RMC 204 can automatically enter power capping mode for rack 120. RMC 204 can monitor the total power consumption for rack 120 and exit power capping mode for rack 120 when the total power consumption for rack 120 falls below a threshold value (e.g., 70% of the specified rack power limit). Different threshold values can thus be used to determine when to enter and exit power capping mode for rack 120.
  • In some implementations, RMC 204 can automatically determine power capping limits for servers 210 and 220 when rack 120 enters power capping mode. For example, RMC 204 can limit the power consumption of servers in rack 120 by setting power consumption limits for the servers while rack 120 is in power capping mode. To determine power capping limits for servers 210 and 220 in rack 120, RMC 204 can determine the amount of discretionary power available to rack 120. Discretionary rack power can, for example, correspond to the amount of power that can be distributed among servers in rack 120 that have power capping enabled.
  • In some implementations, discretionary rack power (drPower) can be calculated by subtracting the power consumption of non-capped servers (nServers), e.g., servers with power capping disabled, and the expected power consumption of servers that are currently powered off (oServers) from the power limit (rLimit) specified for rack 120 by the administrator or by management software 102 (e.g., drPower = rLimit - (nServers + oServers)). The discretionary rack power calculation should include powered off servers because those servers may not be power capped when they are powered on and may, therefore, draw too much power from the system.
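  • A minimal sketch of this calculation, using the variable names defined above (the function name and parameter names are illustrative assumptions):

```python
def discretionary_rack_power(r_limit: float,
                             non_capped_watts: float,
                             powered_off_watts: float) -> float:
    """drPower = rLimit - (nServers + oServers).

    non_capped_watts: measured consumption of servers with capping disabled.
    powered_off_watts: expected consumption of currently powered-off servers,
    counted because they are not yet capped when they power back on.
    """
    return r_limit - (non_capped_watts + powered_off_watts)
```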
  • In some implementations, RMC 204 can distribute the discretionary rack power to the power capped servers. For example, RMC 204 can determine the power limit (sLimit) for each power-capped server based on the relative power consumption of each power-capped server in rack 120. The discretionary rack power can be distributed to the power-capped servers based on a proportion of a power-capped server's power consumption relative to the total power consumption of the power-capped servers in rack 120. The power limit (sLimit) for a server can be the proportional amount of the discretionary rack power distributed to the server.
  • In some implementations, RMC 204 can determine the maximum power consumption of server 210. The maximum power consumption (sPowerConsumption) of server 210 can be based on empirical data. For example, RMC 204 can collect power consumption metrics (historical data) from service controller 214 and service controller 224 over time. RMC 204 can determine the maximum power consumption by calculating an average of the historical power consumption. Alternatively, RMC 204 can determine the maximum power consumption by selecting the largest metric among the collected power consumption metrics. The maximum power consumption of server 210 can also be based on the specifications of the server and/or the CPUs therein, or can be an administrator-specified value.
  • In some implementations, RMC 204 can determine the maximum power consumption of all power-capped servers in rack 120. For example, RMC 204 can add up (e.g., total) the maximum power consumption values for all power-capped servers in rack 120 to determine the total power consumption (trPowerConsumption) of the power-capped servers in rack 120.
  • In some implementations, RMC 204 can determine the power capping limit for server 210 (sLimit) by multiplying the discretionary rack power (drPower) by the ratio of server 210 power consumption (sPowerConsumption) to total rack power consumption (trPowerConsumption). For example, RMC 204 can determine the server power limit by calculating sLimit = drPower * (sPowerConsumption / trPowerConsumption).
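  • The proportional distribution can be sketched for all power-capped servers at once; the function name and the dictionary-based interface below are illustrative assumptions:

```python
def allocate_server_limits(dr_power: float,
                           capped_watts: dict[str, float]) -> dict[str, float]:
    """Split drPower among power-capped servers in proportion to each server's
    consumption (sLimit = drPower * sPowerConsumption / trPowerConsumption)."""
    tr_power = sum(capped_watts.values())  # trPowerConsumption
    return {server: dr_power * (watts / tr_power)
            for server, watts in capped_watts.items()}
```

  Because the per-server ratios sum to one, the allocated limits sum to exactly the discretionary rack power.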
  • In some implementations, once the power capping limit for server 210 is calculated, RMC 204 can send the power capping limit to server 210. For example, RMC 204 can send the power capping limit to service controller 214 of server 210. Service controller 214 can then adjust various components (e.g., CPU 212) of server 210 so that the server consumes no more power than the specified power capping limit for server 210. In some implementations, service controller 214 can automatically manage power consumption of server 210 based on the server power limit calculated by RMC 204. For example, service controller 214 can adjust the configuration (e.g., p-state, power state, operating state, operating frequency) of CPU 212 to reduce the frequency at which CPU 212 operates and thereby reduce the amount of power consumed by server 210.
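  • The mechanism a service controller uses to lower CPU frequency is platform specific, but the effect can be approximated on a Linux host through the standard cpufreq sysfs interface. The sketch below is an illustrative analogue (requiring root privileges), not the service controller implementation described above.

```python
from pathlib import Path

def cap_cpu_frequency(max_khz: int) -> None:
    """Lower the maximum operating frequency of every CPU core (in kHz),
    roughly analogous to a service controller adjusting CPU p-states."""
    for cpufreq in Path("/sys/devices/system/cpu").glob("cpu[0-9]*/cpufreq"):
        (cpufreq / "scaling_max_freq").write_text(str(max_khz))

# Example: cap_cpu_frequency(1_200_000)  # limit all cores to 1.2 GHz
```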
  • The power capping limit for server 220 can be calculated by RMC 204 in a similar manner as for server 210. For example, RMC 204 can send a power capping limit to service controller 224, and service controller 224 can adjust various components (e.g., CPU 222) of server 220 so that server 220 consumes less power than the RMC-specified power capping limit. Service controller 224 can automatically manage power consumption of server 220 based on the server power limit calculated by RMC 204. For example, service controller 224 can adjust the configuration (e.g., p-state) of CPU 222 to reduce the frequency at which CPU 222 operates and thereby reduce the amount of power consumed by server 220.
  • In some implementations, management software 102 can perform the functions of RMC 204 described above. For example, rack 120 may not include RMC 204. Thus, in some implementations, management software 102 can receive power consumption metrics directly from service controllers 214 and 224, determine power capping limits for servers 210 and 220, and communicate the determined power capping limits to service controllers 214 and 224, as described above with reference to FIG. 2 and RMC 204.
  • FIG. 3 illustrates an example graphical user interface (GUI) 300 for managing power consumption of a data center. For example, GUI 300 can be an interface of management software 102, as described above. A server administrator can provide input to GUI 300 to enable power capping for data centers, racks and/or servers. The server administrator can provide input to GUI 300 to specify power consumption limits for data centers and racks that will be used to perform data center, rack and server power capping, as described above.
  • In some implementations, GUI 300 can include graphical element 302 for enabling power capping in a data center. For example, a server administrator (i.e., user) can select graphical element 302 to enable or disable power capping in data center 110. GUI 300 can include graphical element 304 for specifying a power limit for data center 110. For example, the server administrator can select a value from graphical element 304 (e.g., a pull-down menu, list, etc.) indicating an amount of power consumption that the data center cannot exceed. Alternatively, the server administrator can enter text into graphical element 304 that indicates a maximum power value. When power capping is enabled in data center 110, data center 110 will enter power capping mode when the power consumption of the data center exceeds a threshold value (e.g., 90% of the data center power limit). Data center 110 will exit power capping mode when the power consumption of the data center drops below a second threshold value (e.g., 70% of the data center power limit).
  • In some implementations, GUI 300 can include graphical element 306 for specifying power capping parameters for a server rack. For example, graphical element 306 can represent server rack 120 in data center 110. GUI 300 can include a graphical element 306 corresponding to each rack in data center 110, for example.
  • In some implementations, graphical element 306 can include graphical element 308 for enabling power capping in a server rack. For example, a server administrator can select graphical element 308 to turn power capping for server rack 120 on and off. Graphical element 306 can include graphical element 310 for specifying a power limit for server rack 120. For example, the server administrator can select a value from graphical element 310 (e.g., a pull-down menu, list, etc.) indicating an amount of power consumption that server rack 120 cannot exceed. Alternatively, the server administrator can enter text into graphical element 310 that indicates a maximum power value for server rack 120. When power capping is enabled in server rack 120, server rack 120 will enter power capping mode when the power consumption of server rack 120 exceeds a threshold value (e.g., 90% of the rack power limit). Server rack 120 will exit power capping mode when the power consumption of server rack 120 drops below a second threshold value (e.g., 70% of the rack power limit).
  • In some implementations, graphical element 306 can include graphical elements 312-322 for enabling and disabling power capping for servers in a server rack. For example, graphical elements 312-322 can correspond to servers (e.g., server 210, server 220) in server rack 120. A server administrator can select individual graphical elements 312-322 (e.g., check boxes, switches, toggles, etc.) to enable or disable power capping for the corresponding server.
  • In some implementations, graphical element 306 can indicate the maximum power consumption for each server in a rack. For example, graphical element 306 can present graphical elements 312-322 that identify a corresponding server (e.g., server 210, server 220, etc.) in rack 120 and the maximum amount of power that each server consumes. When rack 120 is in power capping mode, rack 120 can use the maximum power consumption values to determine how to distribute the rack's discretionary power to the servers in the rack.
  • For example and as described above, discretionary power can be calculated by subtracting the power consumption of non-power capped servers and the power consumption of powered off servers from the power limit of the rack. Referring to graphical element 306, rack 120 includes three power capped servers (e.g., Server 1, Server 2, Server 3) corresponding to selected (e.g., checked box) graphical elements 312, 314, and 316. Rack 120 includes two non-power capped servers (e.g., Server 4, Server 5) corresponding to graphical elements 318 and 320. Rack 120 includes one powered off server (e.g., Server 6) corresponding to graphical element 322. To calculate discretionary power for rack 120 (e.g., Rack 1), the power consumption of the two non-power capped servers (Server 4: 400 Watts; Server 5: 325 Watts) and the power consumption of the powered off server (Server 6: 400 Watts) can be subtracted from the power limit of rack 120 (2,000 Watts). Thus, the discretionary power for rack 120 is 875 Watts (e.g., 2,000 - (400 + 325 + 400)).
  • Continuing the example above, the discretionary power for rack 120 can be distributed to the power capped servers (e.g., Server 1, Server 2, Server 3) based on the relative amount of power consumed by each server. For example, the amount of discretionary power allocated to Server 1 can be calculated by multiplying the discretionary power (875 Watts) by the ratio of the server's power consumption (e.g., 600 Watts) to total power consumption of all power capped servers in rack 120 (e.g., 600 Watts + 450 Watts + 500 Watts = 1550 Watts). Thus, the amount of discretionary power allocated to Server 1 is 338 Watts (e.g., 875 * (600 / 1550)). The amount of discretionary power allocated to Server 2 is 254 Watts (e.g., 875 * (450 / 1550)). The amount of discretionary power allocated to Server 3 is 282 Watts (e.g., 875 * (500 / 1550)).
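  • The arithmetic of this example can be checked directly; the snippet below simply reproduces the FIG. 3 numbers discussed above:

```python
r_limit = 2000.0                                  # rack power limit (Watts)
dr_power = r_limit - (400 + 325 + 400)            # 875.0 W discretionary
capped = {"Server 1": 600, "Server 2": 450, "Server 3": 500}
tr_power = sum(capped.values())                   # 1550 W total capped demand
limits = {s: dr_power * (w / tr_power) for s, w in capped.items()}
# limits -> {'Server 1': 338.7..., 'Server 2': 254.0..., 'Server 3': 282.2...}
# which truncate to the 338 W, 254 W, and 282 W allocations above.
```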
  • A similar process can be used to determine the discretionary power for the data center and the allocation of discretionary power to racks within the data center, as described above with reference to FIG. 1.
  • FIG. 4 illustrates an example process 400 for dynamic server power capping. For example, process 400 can be used to manage server power consumption at the data center level and/or at the server rack level.
  • At step 402, a computing device can monitor the power usage for a group of servers. For example, a server group can correspond to a data center or to a server rack. Management device 104 can be configured with management software 102 to monitor the power usage of servers within data center 110. Rack management controller 204 can be configured to monitor the power usage of servers within rack 120.
  • At step 404, the computing device can determine that the power usage for the group of servers exceeds a first threshold level. For example, the management device 104 can determine that the power usage for data center 110 exceeds 90% of the power limit for data center 110. Rack management controller 204 can determine that the power usage for rack 120 exceeds 90% of the power limit for rack 120.
  • At step 406, the computing device can cause the group of servers to enter power capping mode. For example, in response to determining that the power usage for data center 110 exceeds the first threshold, management device 104 can cause data center 110 to enter power capping mode. Similarly, in response to determining that the power usage for rack 120 is greater than the first threshold, rack management controller 204 can cause rack 120 to enter power capping mode.
  • At step 408, the computing device can determine the amount of discretionary power available for power capping. For example, management device 104 can determine the amount of discretionary power available to distribute to power capped server racks within data center 110. Rack management controller 204 can determine the amount of discretionary power available to distribute to power capped servers within server rack 120.
  • At step 410, the computing device can allocate the discretionary power. For example, management device 104 can allocate the discretionary data center power to power capped racks within data center 110 according to the proportional amount of power consumed by each rack within data center 110, as described above. Rack management controller 204 can allocate the discretionary rack power to servers within rack 120 according to the proportional amount of power consumed by each server within rack 120, as described above. The amount of discretionary power allocated to a rack or server becomes the power limit for that rack or server, for example.
  • At step 414, the computing device can determine that power usage for a group of servers is below a second threshold level. For example, management device 104 can monitor power consumption of racks within data center 110 and detect when the power consumption of the racks drops below a second threshold level (e.g., 70% of the data center power limit). Rack management controller 204 can monitor power consumption of servers within rack 120 and detect when the power consumption of the servers drops below a second threshold level (e.g., 70% of the rack power limit).
  • At step 416, the computing device can cause the group of servers to exit power capping mode. For example, in response to detecting that the power consumption of the racks within data center 110 has dropped below the second threshold level, management device 104 can remove the power limits from the racks within data center 110 so that the power consumption of the racks in data center 110 is no longer limited. Similarly, in response to detecting that the power consumption of the servers within rack 120 has dropped below the second threshold level, rack management controller 204 can remove the power limits from the servers within rack 120 so that the power consumption of the servers in rack 120 is no longer limited.
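  • Steps 402-416 can be summarized in a single control loop. The sketch below assumes hypothetical read_power and apply_limit callbacks standing in for the service controller/RMC interfaces; it is an outline of process 400 under those assumptions, not a definitive implementation.

```python
import time
from typing import Callable, Optional

def run_power_capping(read_power: Callable[[str], float],
                      apply_limit: Callable[[str, Optional[float]], None],
                      group_limit: float,
                      capped_servers: list[str],
                      uncapped_watts: float,
                      powered_off_watts: float,
                      poll_secs: float = 5.0) -> None:
    """Outline of process 400 for one server group (a rack or a data center)."""
    capping = False
    while True:
        watts = {s: read_power(s) for s in capped_servers}   # step 402: monitor
        total = sum(watts.values()) + uncapped_watts
        if not capping and total > 0.90 * group_limit:       # steps 404, 406
            capping = True
            dr_power = group_limit - (uncapped_watts + powered_off_watts)  # 408
            tr_power = sum(watts.values())
            for server, w in watts.items():                  # step 410: allocate
                apply_limit(server, dr_power * (w / tr_power))
        elif capping and total < 0.70 * group_limit:         # steps 414, 416
            capping = False
            for server in capped_servers:
                apply_limit(server, None)                    # remove the limits
        time.sleep(poll_secs)
```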
  • Example System Architecture
  • FIG. 5 is a block diagram of an example system architecture 500 implementing the features and processes of FIGS. 1-4. The architecture 500 can be implemented on any electronic device that runs software applications derived from compiled instructions, including without limitation personal computers, servers, smart phones, media players, electronic tablets, game consoles, email devices, etc. In some implementations, the architecture 500 can include one or more processors 502, one or more input devices 504, one or more display devices 506, one or more network interfaces 508 and one or more computer-readable mediums 510. Each of these components can be coupled by bus 512.
  • Display device 506 can be any known display technology, including but not limited to display devices using Liquid Crystal Display (LCD) or Light Emitting Diode (LED) technology. Processor(s) 502 can use any known processor technology, including but not limited to graphics processors and multi-core processors. Input device 504 can be any known input device technology, including but not limited to a keyboard (including a virtual keyboard), mouse, track ball, and touch-sensitive pad or display. Bus 512 can be any known internal or external bus technology, including but not limited to ISA, EISA, PCI, PCI Express, NuBus, USB, Serial ATA or FireWire.
  • Computer-readable medium 510 can be any medium that participates in providing instructions to processor(s) 502 for execution, including without limitation, non-volatile storage media (e.g., optical disks, magnetic disks, flash drives, etc.) or volatile media (e.g., SDRAM, ROM, etc.). The computer-readable medium (e.g., storage devices, mediums, and memories) can include, for example, a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.
  • Computer-readable medium 510 can include various instructions for implementing an operating system 514 (e.g., Mac OS®, Windows®, Linux). The operating system 514 can be multi-user, multiprocessing, multitasking, multithreading, real-time and the like. The operating system 514 performs basic tasks, including but not limited to: recognizing input from input device 504; sending output to display device 506; keeping track of files and directories on computer-readable medium 510; controlling peripheral devices (e.g., disk drives, printers, etc.) which can be controlled directly or through an I/O controller; and managing traffic on bus 512. Network communications instructions 516 can establish and maintain network connections (e.g., software for implementing communication protocols, such as TCP/IP, HTTP, Ethernet, etc.). In some implementations, operating system 514 can perform at least some of the processes described in reference to FIGS. 1-4, above.
  • A graphics processing system 518 can include instructions that provide graphics and image processing capabilities. For example, the graphics processing system 518 can implement the processes described with reference to FIGS. 1-4. Application(s) 520 can be an application that uses or implements the processes described in reference to FIGS. 1-4. For example, applications 520 can include management software 102. Application(s) 520 can include rack management controller software for managing power within a rack. Application(s) 520 can include service controller software for managing power consumption within a server.
  • Service controller 522 can be a controller that operates independently of processor(s) 502 and/or operating system 514. In some implementations, service controller 522 can be powered and operational before processor(s) 502 are powered on and operating system 514 is loaded into processor(s) 502. For example, service controller 522 can provide for pre-OS management of the computing device through a dedicated network interface or other input device. For example, service controller 522 can be a baseboard management controller (BMC) that monitors device sensors (e.g., voltages, temperature, fans, etc.), logs events for failure analysis, provides LED guided diagnostics, performs power management, and/or provides remote management capabilities through an intelligent platform management interface (IPMI), keyboard, video, and mouse (KVM) redirection, serial over LAN (SOL), and/or other interfaces. Service controller 522 can implement the processes described with reference to FIGS. 1-4 above. For example, service controller 522 can be configured to manage power consumption within a server.
  • The described features can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. A computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language (e.g., Objective-C, Java), including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • Suitable processors for the execution of a program of instructions include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors or cores, of any kind of computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memories for storing instructions and data. Generally, a computer will also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).
  • To provide for interaction with a user, the features can be implemented on a computer having a display device such as a CRT (cathode ray tube) or LCD (liquid crystal display) monitor for displaying information to the user and a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer.
  • The features can be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of them. The components of the system can be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, e.g., a LAN, a WAN, and the computers and networks forming the Internet.
  • The computer system can include clients and servers. A client and server are generally remote from each other and typically interact through a network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
  • One or more features or steps of the disclosed embodiments can be implemented using an API. An API can define one or more parameters that are passed between a calling application and other software code (e.g., an operating system, library routine, function) that provides a service, that provides data, or that performs an operation or a computation.
  • The API can be implemented as one or more calls in program code that send or receive one or more parameters through a parameter list or other structure based on a call convention defined in an API specification document. A parameter can be a constant, a key, a data structure, an object, an object class, a variable, a data type, a pointer, an array, a list, or another call. API calls and parameters can be implemented in any programming language. The programming language can define the vocabulary and calling convention that a programmer will employ to access functions supporting the API.
  • In some implementations, an API call can report to an application the capabilities of a device running the application, such as input capability, output capability, processing capability, power capability, communications capability, etc.
  • A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made. For example, other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other implementations are within the scope of the following claims.

Claims (15)

  1. A method comprising:
    obtaining a first power limit for a group of servers;
    calculating a first threshold value based on the first power limit, where the first threshold value is less than the first power limit;
    determining that power consumption for the group of servers exceeds the first threshold value; and
    causing the group of servers to enter power capping mode when the power consumption for the group of servers exceeds the first threshold value.
  2. The method of claim 1, further comprising:
    determining an amount of discretionary power available to the group of servers;
    determining that a particular server in the group of servers has power capping enabled; and
    allocating a portion of the discretionary power to the particular server.
  3. The method of claim 2, further comprising:
    allocating the portion of the discretionary power to the particular server based on the particular server's power consumption, and/or
    allocating the portion of the discretionary power to the particular server based on the particular server's power consumption relative to a total power consumption of other power capped servers in the group of servers.
  4. The method of claim 2, wherein the allocated portion of the discretionary power is a power consumption limit for the particular server and preferably further comprising:
    sending the power consumption limit to the particular server, wherein the particular server's operating parameters are adjusted so that the particular server consumes less power than specified by the power consumption limit.
  5. The method of claim 1, further comprising:
    monitoring power consumption for the group of servers;
    determining that the power consumption is below a second threshold value; and
    causing the group of servers to exit power capping mode when the power consumption for the group of servers drops below the second threshold value.
  6. A non-transitory computer-readable medium including one or more sequences of instructions which, when executed by one or more processors, causes:
    obtaining a first power limit for a group of servers;
    calculating a first threshold value based on the first power limit, where the first threshold value is less than the first power limit;
    determining that power consumption for the group of servers exceeds the first threshold value; and
    causing the group of servers to enter power capping mode when the power consumption for the group of servers exceeds the first threshold value.
  7. The non-transitory computer-readable medium of claim 6, wherein the instructions cause:
    determining an amount of discretionary power available to the group of servers;
    determining that a particular server in the group of servers has power capping enabled; and
    allocating a portion of the discretionary power to the particular server.
  8. The non-transitory computer-readable medium of claim 6, wherein the instructions cause:
    allocating the portion of the discretionary power to the particular server based on the particular server's power consumption, and/or
    allocating the portion of the discretionary power to the particular server based on the particular server's power consumption relative to a total power consumption of other power capped servers in the group of servers.
  9. The non-transitory computer-readable medium of claim 6, wherein the allocated portion of the discretionary power is a power consumption limit for the particular server and wherein preferably the instructions cause:
    sending the power consumption limit to the particular server, wherein the particular server's operating parameters are adjusted so that the particular server consumes less power than specified by the power consumption limit.
  10. The non-transitory computer-readable medium of claim 6, wherein the instructions cause:
    monitoring power consumption for the group of servers;
    determining that the power consumption is below a second threshold value; and
    causing the group of servers to exit power capping mode when the power consumption for the group of servers drops below the second threshold value.
  11. A system comprising:
    one or more processors; and
    a non-transitory computer-readable medium including one or more sequences of instructions which, when executed by the one or more processors, causes:
    obtaining a first power limit for a group of servers;
    calculating a first threshold value based on the first power limit, where the first threshold value is less than the first power limit;
    determining that power consumption for the group of servers exceeds the first threshold value; and
    causing the group of servers to enter power capping mode when the power consumption for the group of servers exceeds the first threshold value.
  12. The system of claim 11, wherein the instructions cause:
    determining an amount of discretionary power available to the group of servers;
    determining that a particular server in the group of servers has power capping enabled; and
    allocating a portion of the discretionary power to the particular server.
  13. The system of claim 12, wherein the instructions cause:
    allocating the portion of the discretionary power to the particular server based on the particular server's power consumption, and/or
    allocating the portion of the discretionary power to the particular server based on the particular server's power consumption relative to a total power consumption of other power capped servers in the group of servers.
  14. The system of claim 12, wherein the allocated portion of the discretionary power is a power consumption limit for the particular server and wherein the instructions preferably cause:
    sending the power consumption limit to the particular server, wherein the particular server's operating parameters are adjusted so that the particular server consumes less power than specified by the power consumption limit.
  15. The system of claim 11, wherein the instructions cause:
    monitoring power consumption for the group of servers;
    determining that the power consumption is below a second threshold value; and
    causing the group of servers to exit power capping mode when the power consumption for the group of servers drops below the second threshold value.
EP15177919.6A 2015-02-25 2015-07-22 Dynamic server power capping Active EP3062192B1 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14/631,024 US9250684B1 (en) 2015-02-25 2015-02-25 Dynamic power capping of a subset of servers when a power consumption threshold is reached and allotting an amount of discretionary power to the servers that have power capping enabled

Publications (2)

Publication Number Publication Date
EP3062192A1 true EP3062192A1 (en) 2016-08-31
EP3062192B1 EP3062192B1 (en) 2019-07-03

Family

ID=53794013

Family Applications (1)

Application Number Title Priority Date Filing Date
EP15177919.6A Active EP3062192B1 (en) 2015-02-25 2015-07-22 Dynamic server power capping

Country Status (5)

Country Link
US (1) US9250684B1 (en)
EP (1) EP3062192B1 (en)
JP (1) JP6243939B2 (en)
CN (1) CN105912090B (en)
TW (1) TWI566084B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11157067B2 (en) 2019-12-14 2021-10-26 International Business Machines Corporation Power shifting among hardware components in heterogeneous system
EP3916554A1 (en) * 2020-05-27 2021-12-01 Google LLC A throughput-optimized, quality-of-service aware power capping system

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9979674B1 (en) * 2014-07-08 2018-05-22 Avi Networks Capacity-based server selection
US9804937B2 (en) * 2014-09-08 2017-10-31 Quanta Computer Inc. Backup backplane management control in a server rack system
US20170212569A1 (en) * 2014-10-31 2017-07-27 Hewlett Packard Enterprise Development Lp Load discovery
US9575540B1 (en) * 2015-07-31 2017-02-21 Hon Hai Precision Industry Co., Ltd. Power consumption management device, system and method thereof
US10509456B2 (en) * 2016-05-06 2019-12-17 Quanta Computer Inc. Server rack power management
US10126798B2 (en) * 2016-05-20 2018-11-13 Dell Products L.P. Systems and methods for autonomously adapting powering budgeting in a multi-information handling system passive chassis environment
US10437303B2 (en) * 2016-05-20 2019-10-08 Dell Products L.P. Systems and methods for chassis-level view of information handling system power capping
CN106371546A (en) * 2016-08-30 2017-02-01 浪潮电子信息产业股份有限公司 Method and device for limiting power dissipation of whole cabinet
KR101830859B1 (en) 2016-10-27 2018-02-21 이상수 Method of diagnosing energy being consumed in internet data center by using virtual building model
CN109388488B (en) * 2017-08-02 2022-11-25 联想企业解决方案(新加坡)有限公司 Power allocation in computer system
US10884469B2 (en) * 2018-09-14 2021-01-05 Quanta Computer Inc. Method and system for dynamically allocating and optimizing power resources
CN110633169B (en) * 2019-01-07 2020-09-22 北京聚通达科技股份有限公司 Backup computer storage system
US10740089B1 (en) * 2019-03-12 2020-08-11 Dell Products L.P. System and method for power supply unit firmware update
US11934242B2 (en) * 2019-12-08 2024-03-19 Nvidia Corporation Intelligent data center including N independent coolable clusters having respective actual power demand (P ac) adjustable at, above or below an ostensible power demand (P os)
CN113625858A (en) * 2020-05-09 2021-11-09 鸿富锦精密电子(天津)有限公司 Data center energy-saving device and method
US11429176B2 (en) * 2020-05-14 2022-08-30 Dell Products L.P. Intelligent and predictive optimization of power needs across virtualized environments
CN111914002B (en) * 2020-08-11 2023-08-18 中国工商银行股份有限公司 Machine room resource information processing method and device and electronic equipment
CN112817746B (en) * 2021-01-15 2023-01-10 浪潮电子信息产业股份有限公司 CPU power adjusting method, device, equipment and readable storage medium
US20240211423A1 (en) * 2022-12-22 2024-06-27 Lenovo Enterprise Solutions (Singapore) Pte. Ltd. Auto switching video by power usage
US11863388B1 (en) * 2023-03-31 2024-01-02 Cisco Technology, Inc. Energy-aware traffic forwarding and loop avoidance

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070186121A1 (en) * 2006-02-07 2007-08-09 Fujitsu Limited Server system, power control method, and computer product
US20130268791A1 (en) * 2012-04-06 2013-10-10 Fujitsu Limited Information processing apparatus, control method, and computer-readable recording medium

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7467311B2 (en) * 2005-06-09 2008-12-16 International Business Machines Corporation Distributed system and method for managing power usage among server data processing systems
US7461274B2 (en) 2005-08-23 2008-12-02 International Business Machines Corporation Method for maximizing server utilization in a resource constrained environment
US7702931B2 (en) 2006-06-27 2010-04-20 Hewlett-Packard Development Company, L.P. Adjusting power budgets of multiple servers
US10339227B1 (en) 2007-06-08 2019-07-02 Google Llc Data center design
TW201017370A (en) * 2008-10-23 2010-05-01 Inventec Corp Multi-function power supply
CN102395937B (en) 2009-04-17 2014-06-11 惠普开发有限公司 Power capping system and method
US8504690B2 (en) * 2009-08-07 2013-08-06 Broadcom Corporation Method and system for managing network power policy and configuration of data center bridging
US8478451B2 (en) 2009-12-14 2013-07-02 Intel Corporation Method and apparatus for dynamically allocating power in a data center
WO2012036688A1 (en) 2010-09-16 2012-03-22 Hewlett-Packard Development Company, L.P. Power capping system
US8838286B2 (en) * 2010-11-04 2014-09-16 Dell Products L.P. Rack-level modular server and storage framework
TW201221987A (en) * 2010-11-23 2012-06-01 Inventec Corp Detection method for configuration of power supply units and detection system using the same
US8868936B2 (en) 2010-11-29 2014-10-21 Cisco Technology, Inc. Dynamic power balancing among blade servers in a chassis
JP5673167B2 (en) * 2011-02-07 2015-02-18 日本電気株式会社 Power management method for electrical equipment
US20120226922A1 (en) * 2011-03-04 2012-09-06 Zhikui Wang Capping data center power consumption
JP5663383B2 (en) * 2011-04-18 2015-02-04 株式会社日立製作所 Blade server power control method and system
US8661279B2 (en) 2011-07-19 2014-02-25 Hewlett-Packard Development Company, L.P. Power capping using C-states
CN103186216A (en) * 2011-12-29 2013-07-03 英业达科技有限公司 Cabinet server system and power supply control method thereof
TWI571733B (en) * 2012-01-10 2017-02-21 廣達電腦股份有限公司 Server rack system and power management method applicable thereto
US9201486B2 (en) 2012-02-23 2015-12-01 Cisco Technology, Inc. Large scale dynamic power budget adjustments for optimizing power utilization in a data center
US9189045B2 (en) 2012-10-08 2015-11-17 Dell Products L.P. Power management system


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11157067B2 (en) 2019-12-14 2021-10-26 International Business Machines Corporation Power shifting among hardware components in heterogeneous system
EP3916554A1 (en) * 2020-05-27 2021-12-01 Google LLC A throughput-optimized, quality-of-service aware power capping system
US11599184B2 (en) 2020-05-27 2023-03-07 Google Llc Throughput-optimized, quality-of-service aware power capping system
US11966273B2 (en) 2020-05-27 2024-04-23 Google Llc Throughput-optimized, quality-of-service aware power capping system

Also Published As

Publication number Publication date
TWI566084B (en) 2017-01-11
JP2016157443A (en) 2016-09-01
CN105912090A (en) 2016-08-31
US9250684B1 (en) 2016-02-02
EP3062192B1 (en) 2019-07-03
JP6243939B2 (en) 2017-12-06
CN105912090B (en) 2019-04-19
TW201631435A (en) 2016-09-01

Similar Documents

Publication Publication Date Title
US9250684B1 (en) Dynamic power capping of a subset of servers when a power consumption threshold is reached and allotting an amount of discretionary power to the servers that have power capping enabled
US9588571B2 (en) Dynamic power supply management
US9519515B2 (en) Remediating gaps between usage allocation of hardware resource and capacity allocation of hardware resource
EP3016226B1 (en) Preventing device power on after unrecoverable error
US10289183B2 (en) Methods and apparatus to manage jobs that can and cannot be suspended when there is a change in power allocation to a distributed computer system
US7539881B2 (en) System and method for dynamically adjusting power caps for electronic components based on power consumption
US9251027B2 (en) Information handling system performance optimization system
US20160306678A1 (en) Automatic Analytical Cloud Scaling of Hardware Using Resource Sub-Cloud
US20150134831A1 (en) Management method and apparatus
US10282236B2 (en) Dynamic load balancing for data allocation to servers
WO2016051203A2 (en) Benchmarking mobile devices
US20110016337A1 (en) Software-based power capping
US10037214B2 (en) Determining power capping policies for a computer device
US8612998B2 (en) Coordinating device and application break events for platform power saving
US20160378557A1 (en) Task allocation determination apparatus, control method, and program
CN103229125A (en) Dynamic power balancing among blade servers in chassis
US10114438B2 (en) Dynamic power budgeting in a chassis
US20210089109A1 (en) Management of turbo states based upon user presence
US11625088B2 (en) Peripheral interface power allocation
US9195514B2 (en) System and method for managing P-states and C-states of a system
US9904563B2 (en) Processor management
US20130179890A1 (en) Logical device distribution in a storage system
WO2022062937A1 (en) Task scheduling method and apparatus, and computer system
US20140173617A1 (en) Dynamic task completion scaling of system resources for a battery operated device
WO2019153993A1 (en) Video playback energy consumption control

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20160315

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20180226

REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Ref document number: 602015033009

Country of ref document: DE

Free format text: PREVIOUS MAIN CLASS: G06F0001260000

Ipc: G06F0009500000

RIC1 Information provided on ipc code assigned before grant

Ipc: G06F 1/32 20060101ALI20181128BHEP

Ipc: G06F 9/50 20060101AFI20181128BHEP

Ipc: G06F 1/26 20060101ALI20181128BHEP

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20190121

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

Ref country code: AT

Ref legal event code: REF

Ref document number: 1151813

Country of ref document: AT

Kind code of ref document: T

Effective date: 20190715

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602015033009

Country of ref document: DE

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20190703

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1151813

Country of ref document: AT

Kind code of ref document: T

Effective date: 20190703

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191003

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191104

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190703

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190703

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191003

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190703

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190703

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190703

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190703

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190703

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190703

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190703

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190703

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191103

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190703

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20191004

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190703

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20190731

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190703

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190703

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190703

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190703

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190703

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190703

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190731

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190722

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200224

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190731

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190703

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190731

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190703

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602015033009

Country of ref document: DE

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG2D Information on lapse in contracting state deleted

Ref country code: IS

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190722

26N No opposition filed

Effective date: 20200603

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190703

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190703

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20150722

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190703

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190703

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20240530

Year of fee payment: 10

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20240611

Year of fee payment: 10

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20240529

Year of fee payment: 10