US20180088608A1 - Thermal capacity management - Google Patents

Thermal capacity management

Info

Publication number
US20180088608A1
Authority
US
United States
Prior art keywords
cabinet
cabinets
data center
user
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/819,318
Inventor
Mahmoud I. Ibrahim
Saurabh K. Shrivastava
Robert E. Wilcox
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Panduit Corp
Original Assignee
Panduit Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Panduit Corp filed Critical Panduit Corp
Priority to US15/819,318
Publication of US20180088608A1
Status: Abandoned

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D23/00 Control of temperature
    • G05D23/19 Control of temperature characterised by the use of electric means
    • G05D23/1917 Control of temperature characterised by the use of electric means using digital means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 Constructional details or arrangements
    • G06F1/20 Cooling means
    • G06F1/206 Cooling means comprising thermal management
    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05K PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
    • H05K7/00 Constructional details common to different types of electric apparatus
    • H05K7/20 Modifications to facilitate cooling, ventilating, or heating
    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05K PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
    • H05K7/00 Constructional details common to different types of electric apparatus
    • H05K7/20 Modifications to facilitate cooling, ventilating, or heating
    • H05K7/20709 Modifications to facilitate cooling, ventilating, or heating for server racks or cabinets; for data centers, e.g. 19-inch computer racks
    • H05K7/20763 Liquid cooling without phase change
    • H05K7/2079 Liquid cooling without phase change within rooms for removing heat from cabinets
    • H ELECTRICITY
    • H05 ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05K PRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
    • H05K7/00 Constructional details common to different types of electric apparatus
    • H05K7/20 Modifications to facilitate cooling, ventilating, or heating
    • H05K7/20709 Modifications to facilitate cooling, ventilating, or heating for server racks or cabinets; for data centers, e.g. 19-inch computer racks
    • H05K7/20836 Thermal management, e.g. server temperature control
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Thermal Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Automation & Control Theory (AREA)
  • Cooling Or The Like Of Electrical Apparatus (AREA)
  • Central Air Conditioning (AREA)
  • Air Conditioning Control Device (AREA)

Abstract

Embodiments of the present disclosure generally relate to the field of thermal capacity management within data centers. In an embodiment, the present disclosure describes a method that uses temperature measurements to provide real-time capacity usage information in a given data center and uses that information to perform moves/adds/changes with a particular level of confidence.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of, and claims the benefit of priority to, U.S. patent application Ser. No. 14/474,496, filed on Sep. 2, 2014 (now allowed), and U.S. Provisional Patent Application No. 61/873,632, filed on Sep. 4, 2013, which are incorporated herein by reference in their entireties.
  • FIELD OF INVENTION
  • Embodiments of the present invention generally relate to the field of thermal capacity management within data centers and, more specifically, to methods and systems which provide feedback based on thermal information associated with parts of a data center.
  • BACKGROUND
  • Data centers are often designed with a projected capacity that is usually more than twice the capacity utilized on the first day of operation. Consequently, over time, equipment within the data center gets updated, replaced, and added as necessitated by operational needs. Given the changes that take place during the life of a data center, it is important to know which locations can be considered safe for new equipment. The safety factor is dictated not only by the available rack unit (RU) spaces within cabinets, the cabinet weight limits, and power availability, but, more importantly, by the available cooling (thermal) capacity at a given cabinet.
  • Various systems directed to thermal capacity management have been developed. However, data center managers' continued need for new ways of evaluating the thermal capacity of a data center, and of understanding how electronic equipment impacts that capacity, creates a need for new and improved systems and methods in this field.
  • SUMMARY
  • Accordingly, at least some embodiments of the present invention are generally directed to systems and methods for providing feedback information based on thermal and power variables.
  • In an embodiment, the present invention is a method comprising the steps of using temperature measurements and power meter readings to provide real-time capacity usage information for a given data center.
  • In another embodiment, the present invention is a system for managing cooling capacity within a data center or within a subset of a data center, where the system includes at least one processor and a computer readable medium connected to the at least one processor. The computer readable medium includes instructions for collecting information from a plurality of cabinets, the information including an inlet temperature, a maximum allowable cabinet temperature, and a supply air temperature, where the collected temperatures are used to calculate a value Theta for each of the plurality of cabinets. The computer readable medium further includes instructions for determining whether any of the calculated Theta values indicates that any of the plurality of cabinets' inlet temperatures is at least one of below, at, and above the respective maximum allowable cabinet temperature. The computer readable medium further includes instructions for determining whether, based on any of the calculated Theta values, performed cooling capacity management will satisfy a user confidence level, and if the confidence level is satisfied, for distributing the remaining cooling capacity over the plurality of cabinets.
  • In yet another embodiment, the present invention is a non-transitory computer readable storage medium including a sequence of instructions stored thereon for causing a computer to execute a method for managing cooling capacity within a data center. The method includes collecting cabinet information from each of a plurality of cabinets, the cabinet information including an inlet temperature, a maximum allowable cabinet temperature, and a supply air temperature. The method also includes collecting a total power consumption for the plurality of cabinets. The method also includes collecting a total cooling capacity for the plurality of cabinets. The method also includes deriving a remaining cooling capacity for the plurality of cabinets. The method also includes for each of the plurality of cabinets calculating a θ value, each of the calculated θ values being calculated at least in part from the respective collected cabinet information. And the method also includes for each of the calculated θ values determining whether any one of the plurality of cabinets' inlet temperatures is at least one of below, at, and above the respective maximum allowable cabinet temperatures, where if any one of the inlet temperatures is at least one of at and above the respective maximum allowable cabinet temperatures, providing a first alarm, and where if all of the inlet temperatures are below the respective maximum allowable cabinet temperatures, determining whether each of the calculated θ values is at least one of below, at, and above a user-defined θ value, where if any one of the calculated θ values is at least one of at and above the user-defined θ value, providing a second alarm, and where if all of the calculated θ values are below the user-defined θ value, distributing the remaining cooling capacity over the plurality of cabinets.
  • These and other features, aspects, and advantages of the present invention will become better understood with reference to the following drawings, description, and any claims that may follow.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates a flow chart representative of an embodiment of the present invention.
  • FIG. 2 illustrates the correlation between a confidence percentage and a Theta value.
  • FIG. 3 illustrates an executed embodiment of the present invention.
  • DETAILED DESCRIPTION
  • Referring now to FIG. 1, this figure illustrates a flowchart showing the steps performed in one embodiment of the present invention. In the initial step 100, cabinet inlet temperatures Tmax,i are obtained from the available cabinets within the data center. This can be achieved by monitoring temperature sensors installed within the cabinets. Where only one sensor is installed, only one temperature reading can be obtained; where multiple sensors are installed in a cabinet, it is preferable to use the maximum recorded temperature as the Tmax,i value. Alternatively, average values may be used.
  • In the next step 105, power consumption values Pi are obtained from the available cabinets. One way of obtaining the necessary real-time power readings is to collect power usage information from power outlet units (POUs) which are typically installed in data center cabinets. Each POU provides a total power usage reading for the respective cabinet. Adding the available POU readings from each of the cabinets present within a data center or within a subset of a data center provides the total power usage value ΣPi for the respective data center or for a respective subset of that data center.
  • In the next step 110, the total cooling capacity of a data center or of a subset of a data center is calculated. This can be done using manufacturer-supplied data, such as the rated capacity of the cooling equipment within the data center. The rated capacities of the cooling equipment within the data center, or within a subset of the data center, are summed to obtain the total cooling capacity available at the current cooling equipment set-point.
  • Having the total power usage, it is then possible to determine the remaining cooling capacity in step 115. To do this, the total power usage ΣPi calculated in step 105 is subtracted from the total cooling capacity calculated in step 110; the resulting value, Pcool, is the remaining cooling capacity. (A short code sketch of steps 100 through 115 follows.)
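  • For concreteness, the bookkeeping of steps 100 through 115 can be sketched in Python as below. The Cabinet structure and function names are illustrative assumptions rather than part of the patent; only the arithmetic (summing POU readings and subtracting from the rated cooling capacity) comes from the text.

    from dataclasses import dataclass

    @dataclass
    class Cabinet:
        name: str
        t_max_inlet: float  # maximum measured inlet temperature, deg C (step 100)
        power_kw: float     # POU power reading for this cabinet, kW (step 105)

    def remaining_cooling_capacity(cabinets, rated_cooling_kw):
        """Steps 105-115: P_cool = total rated cooling capacity - total power usage."""
        total_power = sum(c.power_kw for c in cabinets)  # sum of P_i (step 105)
        total_cooling = sum(rated_cooling_kw)            # rated capacities summed (step 110)
        return total_cooling - total_power               # P_cool (step 115)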
  • Next, it is necessary to calculate a non-dimensional parameter Theta (θ). This parameter is computed for at least one cabinet, and preferably for every cabinet in a data center or a subset of a data center. For every cabinet, Theta is calculated using the maximum inlet cabinet temperature Tmax,i, maximum allowable temperature TAllowable, and the supply air temperature TSAT of the air being supplied by the cooling equipment, where θ is derived using the following equation:
  • θ = (Tmax,i − TSAT) / (TAllowable − TSAT)
  • The maximum allowable temperature TAllowable can be obtained from the manufacturer's specification, or it may be set to any value deemed appropriate by the user. The supply air temperature TSAT of the air being supplied by the cooling equipment can be measured at or near the equipment supplying the cooling air, or at any position upstream of the cabinet that is deemed to provide an accurate representation of the temperature of the supplied air.
  • Theta can be described as the temperature gradient between the inlet temperature Tmax,i of each cabinet and the supply air temperature TSAT, with respect to a maximum allowable temperature TAllowable. A Theta value of zero indicates that the cabinet inlet temperature is at the supply air temperature (no gradient). A Theta value of one indicates that the cabinet inlet temperature is at the allowable temperature, and a value above one indicates a cabinet inlet temperature above the allowable temperature.
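  • As a minimal sketch (the function name is an assumption), the per-cabinet θ computation is:

    def theta(t_max_inlet, t_allowable, t_sat):
        """theta = (T_max,i - T_SAT) / (T_Allowable - T_SAT).

        0 means the inlet is at the supply air temperature (no gradient),
        1 means the inlet is at the allowable temperature, and values
        above 1 mean the inlet exceeds the allowable temperature.
        """
        return (t_max_inlet - t_sat) / (t_allowable - t_sat)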
  • As shown in step 125, the calculated Theta value is used to determine the next course of action. If any one cabinet inlet temperature is at or above a set allowable temperature (evidenced by a Theta value equal to or greater than 1), the system determines that there is no additional cooling capacity available on any of the cabinets until the inlet temperature is brought back below the allowable temperature. To notify the user of the potential risk of overheating, an alarm may be signaled to the user, as shown in step 130. This may be done in any number of suitable ways and can include electronic, visual, aural, or any other appropriate methods of delivery. In one embodiment, the user receives a message within the data center management software, where the message provides a map-like representation of the data center with any of the problematic cabinets highlighted in a certain color. In a variation of this embodiment all the cabinets may be highlighted such that any cabinet having Theta ≥ 1 appears red, any cabinet having 0 < Theta < 1 appears yellow, and any cabinet having Theta = 0 appears green. Once the user has received an alarm, he may undertake the necessary action to remedy the problem. As illustrated in step 135, the present invention may provide the user with potential ways to fix the issues causing the alarm. This may include, without limitation, suggestions to check the blanking panels, add perforated tiles, and/or change the cooling unit set-point.
  • If all the cabinet inlet temperatures are below the allowable temperature (evidenced by all the calculated Theta values remaining below 1), the present invention compares the calculated Theta values against a predefined θuser value. The θuser value corresponds to a specific user-confidence percentage, and the predefined correlation between the two is derived through a number of Computational Fluid Dynamics (CFD) models that are representative of actual data centers (as explained later in the specification). The plot in FIG. 2 shows the confidence value in the cooling capacity management method for different theta values. If, for example, the user-specified Theta (θuser) is 0.3 or below, the cooling capacity management method represented by the flow chart of FIG. 1 is likely to work 100% of the time, keeping a safe thermal environment for the IT equipment. However, if θuser is 0.6, the cooling capacity management method represented by the flow chart of FIG. 1 is likely to work 70% of the time. Note that when a certain confidence level is selected, that level correlates to the highest possible value of θuser that still corresponds to the selected confidence level. Therefore, for example, if a confidence level of 100% is selected, the θuser value used in the execution of the present invention will be 0.3 instead of 0.1.
  • Thus, if in step 140 it is determined that the calculated Theta values for a set of cabinets, or for all the cabinets within a data center, fall below the predefined value θuser, the present invention distributes the remaining cooling capacity Pcool over said cabinets in step 145 and provides the user with a confidence percentage that the executed distribution will work successfully. If, however, any of the calculated Theta values is equal to or greater than θuser, the present invention outputs an alarm (similar to the alarm of step 130) in step 150. This alarm can signal to the user that cooling capacity management in accordance with the present invention would not achieve a sufficient confidence percentage. (The decision flow of steps 125 through 150 is sketched in code below.)
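  • The decision flow of steps 125 through 150 might be organized as follows. This is a sketch under stated assumptions: the return values and alarm wording are invented for illustration, and the even split in step 145 is only one of the distribution schemes the text permits.

    def manage_capacity(cabinets, t_allowable, t_sat, theta_user, p_cool_kw):
        """Sketch of steps 125-150 of FIG. 1."""
        thetas = [theta(c.t_max_inlet, t_allowable, t_sat) for c in cabinets]

        if any(th >= 1.0 for th in thetas):
            # Step 130: some inlet is at or above the allowable temperature,
            # so no additional capacity is available (remedies in step 135).
            return {"alarm": "overheating risk"}

        if any(th >= theta_user for th in thetas):
            # Step 150: the selected confidence level cannot be satisfied.
            return {"alarm": "confidence level not satisfied"}

        # Step 145: distribute the remaining capacity (even split shown).
        share = p_cool_kw / len(cabinets)
        return {c.name: share for c in cabinets}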
  • Note that the predefined θuser value can be set by the user by way of selecting a desired confidence level, wherein based on the selected confidence level, the present invention determines the appropriate θuser value. Thus, if the user had determined that the appropriate confidence percentage was no less than ˜85%, the present invention would translate that percentage into a θuser value of 0.4 and use that value in step 140.
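  • A small helper can translate a requested confidence level into θuser. Only the three (confidence, θ) pairs stated in this description (100% at 0.3, ˜85% at 0.4, 70% at 0.6) are encoded here; an actual implementation would use the full CFD-derived curve of FIG. 2.

    # (confidence %, theta_user) pairs as stated in the description of FIG. 2.
    _CURVE = [(100.0, 0.3), (85.0, 0.4), (70.0, 0.6)]

    def theta_user_for_confidence(confidence_pct):
        """Return the largest theta_user that still meets the requested confidence."""
        eligible = [t for pct, t in _CURVE if pct >= confidence_pct]
        if not eligible:
            raise ValueError("requested confidence exceeds the curve's range")
        return max(eligible)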
  • As noted previously, the correlation between the θuser value and the confidence level is developed via a number of Computational Fluid Dynamics (CFD) models that are representative of real data centers. The CFD models are run for different conditions, changing a number of key variables such as supply air temperature, cabinet power, and the types of IT equipment. For each case, the CFD models are run with different air ratios (AR); in an embodiment, these range from 0.8 to 2. Air ratio is defined as the ratio between the airflow supplied by the cooling units and the total airflow required by the IT equipment.
  • For each CFD run, the maximum cabinet inlet temperatures are monitored. If a cabinet maximum inlet temperature exceeds a specified allowable temperature, thermal capacity is not managed. If all cabinet inlet temperatures are below the allowable temperatures, capacity is managed by distributing the available cooling capacity among all the cabinets equally. The model is then rerun using the new managed capacity for different ARs. Theta is calculated per cabinet for the baseline run with the minimum AR that provided safe cabinet inlet temperatures. The maximum Theta value is used for the percent confidence value in the present invention.
  • This process is repeated for the remaining CFD models and cases. The maximum Theta values are collected to provide the overall percent confidence in the present invention. The percent confidence gives the user a barometer of confidence in the approach used for capacity management among the cabinets, for a given set of theta values in their data center.
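  • The calibration loop just described might look roughly like the sketch below. The CFD-model interface (run(), inlet_temps, t_sat) is entirely hypothetical; only the selection logic (minimum safe air ratio, per-cabinet maximum θ, pooling across models) follows the text.

    def calibrate_confidence_curve(cfd_models, air_ratios, t_allowable):
        """Collect per-case maximum theta values underlying the FIG. 2 curve."""
        max_thetas = []
        for model in cfd_models:
            safe = None
            for ar in sorted(air_ratios):
                run = model.run(ar)  # hypothetical CFD interface
                if all(t < t_allowable for t in run.inlet_temps):
                    safe = run       # minimum AR giving safe inlet temperatures
                    break
            if safe is None:
                continue             # thermal capacity is not managed for this case
            max_thetas.append(max(theta(t, t_allowable, safe.t_sat)
                                  for t in safe.inlet_temps))
        return max_thetas            # pooled maxima yield the percent-confidence curve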
  • An example of how a system in accordance with the present invention may be used is shown in FIG. 3. This figure illustrates two data center layouts (one being the current layout and one being the projected layout) and provides a user input interface where the user may select a particular confidence level. In the currently described embodiment the selection of the confidence level is done by way of a slider which ranges from “optimistic” to “conservative,” with “conservative” being most confident and “optimistic” being least confident. However, a particular confidence level may be inputted in any number of ways, including without limitation manual entry of a number or automatic entry based on at least one other factor. Having the necessary temperature values, the system calculates the maximum Theta value to be 0.05. Given that this value is below 1, the present invention proceeds to the next step without triggering an alarm. The 0.05 Theta value is then compared to the θuser value derived from the selected confidence level percentage. In the described embodiment, the selected confidence level percentage is ˜100%, which translates to a θuser value of 0.3. Since the maximum Theta is not greater than or equal to the θuser value, the system proceeds, yet again without triggering an alarm, to distribute the remaining cooling capacity evenly over all the cabinets under consideration. In this case, the remaining cooling capacity is distributed evenly, and thus each cabinet receives an additional 5.73 kW of cooling capacity. In alternate embodiments, alternate distribution schemes may be implemented.
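  • Tying the sketches above together on numbers loosely modeled on the FIG. 3 walkthrough (the cabinet count, temperatures, and capacities below are placeholders; the patent states only the 0.05 maximum θ, the ˜100% confidence selection, and the 5.73 kW per-cabinet result):

    # Hypothetical inputs for illustration only.
    cabs = [Cabinet(f"cab-{i}", t_max_inlet=19.0, power_kw=4.0) for i in range(8)]
    theta_u = theta_user_for_confidence(100.0)   # -> 0.3, per FIG. 2
    p_cool = remaining_cooling_capacity(cabs, rated_cooling_kw=[40.0, 40.0])
    print(manage_capacity(cabs, t_allowable=32.0, t_sat=18.0,
                          theta_user=theta_u, p_cool_kw=p_cool))
    # Max theta is (19 - 18) / (32 - 18) ~= 0.07 < 0.3, so the remaining
    # 48 kW is split evenly: 6 kW of additional cooling capacity per cabinet.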
  • Note that the mention of the “data center” should not be interpreted as referring only to an entire data center, as it may refer only to a subset of a data center. Accordingly, references to a “data center” throughout this application and the claims may be understood to refer to the entire data center and/or to a subset of a data center.
  • Embodiments of the present invention may be implemented using at least one computer. At least some of the operations described above may be codified in computer readable instructions such that these operations may be executed by the computer. The computer may be a stationary device (e.g., a server) or a portable device (e.g., a laptop). The computer includes a processor, memory, and one or more drives or storage devices. The storage devices and their associated computer storage media provide storage of computer readable instructions, data structures, program modules, and other non-transitory information for the computer. Storage devices include any device capable of storing non-transitory data, information, or instructions, such as: memory chip storage including RAM, ROM, EEPROM, EPROM, or any other type of flash memory device; magnetic storage devices including hard or floppy disks and magnetic tape; optical storage devices such as CD-ROM, BD-ROM, and Blu-ray™ discs; and holographic storage devices.
  • The computer may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer. The remote computer may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and may include many if not all of the elements described above relative to computer. Networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet. For example, in the subject matter of the present application, a computer may comprise the source machine from which data is being migrated, and the remote computer may comprise the destination machine. Note, however, that source and destination machines need not be connected by a network or any other means, but instead, data may be migrated via any media capable of being written by the source platform and read by the destination platform or platforms. When used in a LAN or WLAN networking environment, a computer is connected to the LAN through a network interface or an adapter. When used in a WAN networking environment, a computer typically includes a network interface card or other means for establishing communications over the WAN to environments such as the Internet. It will be appreciated that other means of establishing a communications link between the computers may be used.
  • Those having skill in the art will recognize that the state of the art has progressed to the point where there is little distinction left between hardware and software implementations of aspects of systems; the use of hardware or software is generally (but not always, in that in certain contexts the choice between hardware and software can become significant) a design choice representing cost vs. efficiency tradeoffs. Those having skill in the art will appreciate that there are various vehicles by which processes and/or systems and/or other technologies described herein can be effected (e.g., hardware, software, and/or firmware), and that the preferred vehicle will vary with the context in which the processes and/or systems and/or other technologies are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle; alternatively, if flexibility is paramount, the implementer may opt for a mainly software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware. Hence, there are several possible vehicles by which the processes and/or devices and/or other technologies described herein may be effected, none of which is inherently superior to the others, in that any vehicle to be utilized is a choice dependent upon the context in which the vehicle will be deployed and the specific concerns (e.g., speed, flexibility, or predictability) of the implementer, any of which may vary. Those skilled in the art will recognize that optical aspects of implementations will typically employ optically-oriented hardware, software, and/or firmware.
  • Note that while this invention has been described in terms of several embodiments, these embodiments are non-limiting (regardless of whether they have been labeled as exemplary or not), and there are alterations, permutations, and equivalents, which fall within the scope of this invention. Additionally, the described embodiments should not be interpreted as mutually exclusive, and should instead be understood as potentially combinable if such combinations are permissive. It should also be noted that there are many alternative ways of implementing the methods and apparatuses of the present invention. It is therefore intended that claims that may follow be interpreted as including all such alterations, permutations, and equivalents as fall within the true spirit and scope of the present invention.

Claims (11)

We claim:
1. A method for data center thermal capacity management, comprising:
collecting cabinet information from a plurality of cabinets in a data center, the cabinet information including at least an inlet temperature and a maximum allowable cabinet temperature for each of the plurality of cabinets;
deriving a remaining cooling capacity for the data center;
for each cabinet among the plurality of cabinets, calculating a θ value using at least the collected cabinet information for the cabinet;
determining that all of the inlet temperatures for the plurality of cabinets are below their respective maximum allowable cabinet temperatures, and in response, determining whether each of the calculated θ values is below, at, or above a user-defined θ value;
if any one of the calculated θ values is at or above the user-defined θ value, providing an alarm; and
if all of the calculated θ values are below the user-defined θ value, distributing the derived remaining cooling capacity among the plurality of cabinets.
2. The method of claim 1, wherein distributing the derived remaining cooling capacity among the plurality of cabinets comprises:
distributing the derived remaining cooling capacity evenly among the plurality of cabinets.
3. The method of claim 1, comprising:
collecting a total power usage for the plurality of cabinets.
4. The method of claim 3, wherein collecting the total power usage for the plurality of cabinets comprises:
collecting total power usage readings from each cabinet among the plurality of cabinets; and
summing each of the collected total power usage readings for each cabinet among the plurality of cabinets to obtain the total power usage for the plurality of cabinets.
5. The method of claim 4, wherein collecting the total power usage reading from a cabinet comprises:
collecting the total power usage reading from a power outlet unit installed in the cabinet.
6. The method of claim 3, comprising:
calculating a total cooling capacity for the data center.
7. The method of claim 6, wherein calculating the total cooling capacity for the data center comprises:
obtaining a rated capacity of cooling equipment within the data center; and
summing the obtained rated capacities of the cooling equipment within the data center to obtain the total cooling capacity for the data center.
8. The method of claim 7, wherein deriving the remaining cooling capacity for the data center comprises:
subtracting the total power usage for the plurality of cabinets from the calculated total cooling capacity for the data center.
9. A method for data center thermal capacity management, comprising:
collecting an inlet temperature and a maximum allowable cabinet temperature for each cabinet among a plurality of cabinets in a data center;
for each cabinet among the plurality of cabinets, calculating a θ value using at least the collected inlet temperature and maximum allowable cabinet temperature for the cabinet;
determining whether any of the calculated θ values is greater than or equal to 1;
if any of the calculated θ values is greater than or equal to 1:
providing a first alarm to a user of a potential overheating risk; and
providing the user with suggestions for correcting the overheating risk;
if none of the calculated θ values is greater than or equal to 1, determining whether any of the calculated θ values is greater than or equal to a user-defined θ value;
if any one of the calculated θ values is greater than or equal to the user-defined θ value, providing a second alarm to the user to adjust the user-defined θ value; and
if none of the calculated θ values is greater than or equal to the user-defined θ value, distributing any remaining cooling capacity among the plurality of cabinets.
10. The method of claim 9, wherein collecting an inlet temperature for a cabinet among a plurality of cabinets in a data center comprises:
recording temperatures from a plurality of temperature sensors installed in the cabinet; and
selecting a maximum recorded temperature among the recorded temperatures as the inlet temperature for the cabinet.
11. The method of claim 9, wherein collecting an inlet temperature for a cabinet among a plurality of cabinets in a data center comprises:
recording temperatures from a plurality of temperature sensors installed in the cabinet; and
averaging the recorded temperatures to obtain the inlet temperature for the cabinet.
US15/819,318 2013-09-04 2017-11-21 Thermal capacity management Abandoned US20180088608A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/819,318 US20180088608A1 (en) 2013-09-04 2017-11-21 Thermal capacity management

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201361873632P 2013-09-04 2013-09-04
US14/474,496 US9851726B2 (en) 2013-09-04 2014-09-02 Thermal capacity management
US15/819,318 US20180088608A1 (en) 2013-09-04 2017-11-21 Thermal capacity management

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US14/474,496 Continuation US9851726B2 (en) 2013-09-04 2014-09-02 Thermal capacity management

Publications (1)

Publication Number Publication Date
US20180088608A1 (en) 2018-03-29

Family

ID=52584324

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/474,496 Active 2036-06-24 US9851726B2 (en) 2013-09-04 2014-09-02 Thermal capacity management
US15/819,318 Abandoned US20180088608A1 (en) 2013-09-04 2017-11-21 Thermal capacity management

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14/474,496 Active 2036-06-24 US9851726B2 (en) 2013-09-04 2014-09-02 Thermal capacity management

Country Status (4)

Country Link
US (2) US9851726B2 (en)
EP (1) EP3042259B1 (en)
JP (2) JP6235149B2 (en)
WO (1) WO2015034859A1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2741088C (en) 2008-10-21 2017-07-11 Raritan Americas, Inc. Methods of achieving cognizant power management
US20130204593A1 (en) * 2012-01-31 2013-08-08 Panduit Corp. Computational Fluid Dynamics Systems and Methods of Use Thereof
US11507262B2 (en) 2017-02-22 2022-11-22 Ciena Corporation Methods and systems for managing optical network services including capacity mining and scheduling
CN107038100B (en) * 2017-03-22 2021-06-01 深圳市共济科技股份有限公司 Real-time capacity display method and system for data center
US11101884B1 (en) 2020-03-24 2021-08-24 Ciena Corporation Localizing anomalies of a fiber optic network within the context of a geographic map
US11812589B2 (en) * 2021-05-12 2023-11-07 Nvidia Corporation Intelligent refrigerant distribution unit for datacenter cooling systems

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070174024A1 (en) * 2005-05-02 2007-07-26 American Power Conversion Corporation Methods and systems for managing facility power and cooling
US7276915B1 (en) * 2005-02-01 2007-10-02 Sprint Communications Company L.P. Electrical service monitoring system
US20110203785A1 (en) * 2009-08-21 2011-08-25 Federspiel Corporation Method and apparatus for efficiently coordinating data center cooling units

Family Cites Families (121)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6776707B2 (en) * 1998-12-30 2004-08-17 Engineering Equipment And Services, Inc. Computer cabinet
US6754816B1 (en) 2000-10-26 2004-06-22 Dell Products L.P. Scalable environmental data calculation method customized by system configuration
US7143300B2 (en) 2001-07-25 2006-11-28 Hewlett-Packard Development Company, L.P. Automated power management system for a network of computers
US7065740B2 (en) 2001-08-24 2006-06-20 Microsoft Corporation System and method to automate the management of computer services and programmable devices
US7213065B2 (en) 2001-11-08 2007-05-01 Racemi, Inc. System and method for dynamic server allocation and provisioning
US7020586B2 (en) 2001-12-17 2006-03-28 Sun Microsystems, Inc. Designing a data center
US7061763B2 (en) 2002-01-29 2006-06-13 Telefonaktiebolaget Lm Ericsson (Publ) Cabinet cooling
US7210048B2 (en) 2003-02-14 2007-04-24 Intel Corporation Enterprise power and thermal management
US7350186B2 (en) 2003-03-10 2008-03-25 International Business Machines Corporation Methods and apparatus for managing computing deployment in presence of variable workload
US7051946B2 (en) * 2003-05-29 2006-05-30 Hewlett-Packard Development Company, L.P. Air re-circulation index
US7236896B2 (en) 2003-09-30 2007-06-26 Hewlett-Packard Development Company, L.P. Load management in a power system
US8145731B2 (en) 2003-12-17 2012-03-27 Hewlett-Packard Development Company, L.P. System and method for determining how many servers of at least one server configuration to be included at a service provider's site for supporting an expected workload
GB2411259A (en) 2004-02-19 2005-08-24 Global Datact Man Ltd Computer Asset Management System and Method
US7197433B2 (en) 2004-04-09 2007-03-27 Hewlett-Packard Development Company, L.P. Workload placement among data centers based on thermal efficiency
US7219507B1 (en) 2004-04-21 2007-05-22 Winbond Electronics Corporation Configurable, nonlinear fan control for system-optimized autonomous cooling
US7810341B2 (en) 2004-04-22 2010-10-12 Hewlett-Packard Development Company, L.P. Redundant upgradeable, modular data center cooling apparatus
US7031870B2 (en) * 2004-05-28 2006-04-18 Hewlett-Packard Development Company, L.P. Data center evaluation using an air re-circulation index
US7155318B2 (en) * 2004-11-05 2006-12-26 Hewlett-Packard Development Company, Lp. Air conditioning unit control to reduce moisture varying operations
US7792757B2 (en) 2004-11-17 2010-09-07 Iron Mountain Incorporated Systems and methods for risk based information management
US7523092B2 (en) 2004-12-14 2009-04-21 International Business Machines Corporation Optimization of aspects of information technology structures
US7426453B2 (en) 2005-01-14 2008-09-16 Hewlett-Packard Development Company, L.P. Workload placement based upon CRAC unit capacity utilizations
US8041967B2 (en) 2005-02-15 2011-10-18 Hewlett-Packard Development Company, L.P. System and method for controlling power to resources based on historical utilization data
US7805473B2 (en) 2005-03-23 2010-09-28 Oracle International Corporation Data center management systems and methods
JP4596945B2 (en) 2005-03-24 2010-12-15 富士通株式会社 Data center demand forecasting system, demand forecasting method and demand forecasting program
US7669431B2 (en) 2005-04-07 2010-03-02 Hewlett-Packard Development Company, L.P. Cooling provisioning for heat generating devices
US7885795B2 (en) 2005-05-02 2011-02-08 American Power Conversion Corporation Methods and systems for managing facility power and cooling
US7881910B2 (en) 2005-05-02 2011-02-01 American Power Conversion Corporation Methods and systems for managing facility power and cooling
US7644148B2 (en) 2005-05-16 2010-01-05 Hewlett-Packard Development Company, L.P. Historical data based workload allocation
US8175906B2 (en) 2005-08-12 2012-05-08 International Business Machines Corporation Integrating performance, sizing, and provisioning techniques with a business process
US20070067296A1 (en) 2005-08-19 2007-03-22 Malloy Patrick J Network capacity planning
US20070100685A1 (en) 2005-10-31 2007-05-03 Sbc Knowledge Ventures, L.P. Portfolio infrastructure management method and system
US8672732B2 (en) 2006-01-19 2014-03-18 Schneider Electric It Corporation Cooling system and method
US20070198383A1 (en) 2006-02-23 2007-08-23 Dow James B Method and apparatus for data center analysis and planning
US7941801B2 (en) 2006-03-07 2011-05-10 Oracle America, Inc. Method and system for provisioning a virtual computer and scheduling resources of the provisioned virtual computer
WO2007139559A1 (en) 2006-06-01 2007-12-06 Exaflop LLC Controlled warm air capture
US7619868B2 (en) 2006-06-16 2009-11-17 American Power Conversion Corporation Apparatus and method for scalable power distribution
US7606014B2 (en) 2006-06-16 2009-10-20 American Power Conversion Corporation Apparatus and method for scalable power distribution
US7949992B2 (en) 2006-06-27 2011-05-24 International Business Machines Corporation Development of information technology system
US8046765B2 (en) 2006-07-25 2011-10-25 Hewlett-Packard Development Company, L.P. System and method for determining allocation of resource access demands to different classes of service based at least in part on permitted degraded performance
US7769843B2 (en) 2006-09-22 2010-08-03 HyPerformix, Inc. Apparatus and method for capacity planning for data center server consolidation and workload reassignment
US20080140469A1 (en) 2006-12-06 2008-06-12 International Business Machines Corporation Method, system and program product for determining an optimal configuration and operational costs for implementing a capacity management service
EP2123140B1 (en) 2007-01-24 2016-09-07 Schneider Electric IT Corporation System and method for evaluating equipment rack cooling performance
US7676280B1 (en) 2007-01-29 2010-03-09 Hewlett-Packard Development Company, L.P. Dynamic environmental management
US7861102B1 (en) 2007-04-30 2010-12-28 Hewlett-Packard Development Company, L.P. Unified power management architecture
US8046767B2 (en) 2007-04-30 2011-10-25 Hewlett-Packard Development Company, L.P. Systems and methods for providing capacity management of resource pools for servicing workloads
US7839401B2 (en) 2007-05-10 2010-11-23 International Business Machines Corporation Management of enterprise systems and applications using three-dimensional visualization technology
AU2008255030B2 (en) 2007-05-15 2014-02-20 Schneider Electric IT Corporation Methods and systems for managing facility power and cooling
US8291411B2 (en) 2007-05-21 2012-10-16 International Business Machines Corporation Dynamic placement of virtual machines for managing violations of service level agreements (SLAs)
US7739388B2 (en) 2007-05-30 2010-06-15 International Business Machines Corporation Method and system for managing data center power usage based on service commitments
US8095488B1 (en) 2007-12-31 2012-01-10 Symantec Corporation Method and apparatus for managing configurations
US7940504B2 (en) 2007-06-21 2011-05-10 American Power Conversion Corporation Apparatus and method for scalable power distribution
US8094452B1 (en) 2007-06-27 2012-01-10 Exaflop LLC Cooling and power grids for data center
US8890505B2 (en) 2007-08-28 2014-11-18 Causam Energy, Inc. System and method for estimating and providing dispatchable operating reserve energy capacity through use of active load management
US7642914B2 (en) 2007-09-14 2010-01-05 International Business Machines Corporation Auto-locating system and method for data center mapping and monitoring
US7818499B2 (en) 2007-09-18 2010-10-19 Hitachi, Ltd. Methods and apparatuses for heat management in storage systems
US8411439B1 (en) 2007-09-28 2013-04-02 Exaflop LLC Cooling diversity in data centers
CN101933019A (en) 2007-10-29 2010-12-29 American Power Conversion Corporation Electrical efficiency measurement for data centers
US8131515B2 (en) 2007-11-20 2012-03-06 Hewlett-Packard Development Company, L.P. Data center synthesis
US7979250B2 (en) 2007-12-05 2011-07-12 International Business Machines Corporation Method of laying out a data center using a plurality of thermal simulators
US7832925B2 (en) 2007-12-05 2010-11-16 International Business Machines Corporation Apparatus and method for simulating heated airflow exhaust of an electronics subsystem, electronics rack or row of electronics racks
US8457938B2 (en) 2007-12-05 2013-06-04 International Business Machines Corporation Apparatus and method for simulating one or more operational characteristics of an electronics rack
US20090164811A1 (en) 2007-12-21 2009-06-25 Ratnesh Sharma Methods for analyzing environmental data in an infrastructure
US8122149B2 (en) 2007-12-28 2012-02-21 Microsoft Corporation Model-based datacenter management
US8438125B2 (en) 2008-02-12 2013-05-07 Accenture Global Services Limited System for assembling behavior models of technology components
US8395621B2 (en) 2008-02-12 2013-03-12 Accenture Global Services Limited System for providing strategies for increasing efficiency of data centers
JP4883491B2 (en) * 2008-02-13 2012-02-22 Hitachi Plant Technologies, Ltd. Electronic equipment cooling system
US7933739B2 (en) 2008-03-13 2011-04-26 International Business Machines Corporation Automated analysis of datacenter layout using temperature sensor positions
US8140195B2 (en) 2008-05-30 2012-03-20 International Business Machines Corporation Reducing maximum power consumption using environmental control settings
US8053926B2 (en) 2008-06-16 2011-11-08 American Power Conversion Corporation Methods and systems for managing facility power and cooling
WO2009154623A1 (en) 2008-06-19 2009-12-23 Hewlett-Packard Development Company, L.P. Capacity planning
US7958219B2 (en) 2008-06-19 2011-06-07 Dell Products L.P. System and method for the process management of a data center
US8223025B2 (en) * 2008-06-26 2012-07-17 Exaflop LLC Data center thermal monitoring
US8306794B2 (en) 2008-06-26 2012-11-06 International Business Machines Corporation Techniques for thermal modeling of data centers to improve energy efficiency
US8346398B2 (en) 2008-08-08 2013-01-01 Siemens Industry, Inc. Data center thermal performance optimization using distributed cooling systems
CA2731668A1 (en) 2008-08-15 2010-02-18 EDSA Micro Corporation A method for predicting power usage effectiveness and data center infrastructure efficiency within a real-time monitoring system
JP5309815B2 (en) 2008-09-09 2013-10-09 Fujitsu Limited Power supply management apparatus and power supply management method
US7984151B1 (en) 2008-10-09 2011-07-19 Google Inc. Determining placement of user data to optimize resource utilization for distributed systems
CA2741088C (en) 2008-10-21 2017-07-11 Raritan Americas, Inc. Methods of achieving cognizant power management
US8392928B1 (en) 2008-10-28 2013-03-05 Hewlett-Packard Development Company, L.P. Automated workload placement recommendations for a data center
US20100111105A1 (en) 2008-10-30 2010-05-06 Ken Hamilton Data center and data center design
CN102099790B (en) 2008-10-30 2012-12-19 Hitachi, Ltd. Operation management apparatus of information processing system
US8209056B2 (en) 2008-11-25 2012-06-26 American Power Conversion Corporation System and method for assessing and managing data center airflow and energy usage
US20120133510A1 (en) 2010-11-30 2012-05-31 Panduit Corp. Physical infrastructure management system having an integrated cabinet
US8306935B2 (en) 2008-12-22 2012-11-06 Panduit Corp. Physical infrastructure management system
US7990710B2 (en) 2008-12-31 2011-08-02 VS Acquisition Co. LLC Data center
US8046468B2 (en) 2009-01-26 2011-10-25 VMware, Inc. Process demand prediction for distributed power and resource management
US8560677B2 (en) 2009-02-13 2013-10-15 Schneider Electric IT Corporation Data center control
US8689017B2 (en) 2009-03-12 2014-04-01 Cisco Technology, Inc. Server power manager and method for dynamically managing server power consumption
US8355890B2 (en) 2009-05-08 2013-01-15 American Power Conversion Corporation System and method for predicting maximum cooler and rack capacities in a data center
US8249825B2 (en) 2009-05-08 2012-08-21 American Power Conversion Corporation System and method for predicting cooling performance of arrangements of equipment in a data center
JP5290044B2 (en) * 2009-05-11 2013-09-18 NTT Facilities, Inc. Air conditioner monitoring system and air conditioner monitoring method
US8380942B1 (en) 2009-05-29 2013-02-19 Amazon Technologies, Inc. Managing data storage
US7975165B2 (en) 2009-06-25 2011-07-05 VMware, Inc. Management of information technology risk using virtual infrastructures
US8214327B2 (en) 2009-07-13 2012-07-03 International Business Machines Corporation Optimization and staging method and system
US8631411B1 (en) 2009-07-21 2014-01-14 The Research Foundation for the State University of New York Energy aware processing load distribution system and method
US8250198B2 (en) 2009-08-12 2012-08-21 Microsoft Corporation Capacity planning for data center services
US8332670B2 (en) 2009-09-23 2012-12-11 Hitachi, Ltd. Method and apparatus for discovery and detection of relationship between device and power distribution outlet
US8370674B2 (en) * 2009-09-25 2013-02-05 Intel Corporation Method and apparatus for reducing server power supply size and cost
US8286442B2 (en) 2009-11-02 2012-10-16 Exaflop LLC Data center with low power usage effectiveness
US8433547B2 (en) 2009-12-03 2013-04-30 Schneider Electric IT Corporation System and method for analyzing nonstandard facility operations within a data center
US8224993B1 (en) 2009-12-07 2012-07-17 Amazon Technologies, Inc. Managing power consumption in a data center
US8402140B2 (en) 2010-01-13 2013-03-19 NEC Laboratories America, Inc. Methods and apparatus for coordinated energy management in virtualized data centers
US8959217B2 (en) 2010-01-15 2015-02-17 Joyent, Inc. Managing workloads and hardware resources in a cloud resource
US8655610B2 (en) 2010-03-24 2014-02-18 International Business Machines Corporation Virtual machine placement for minimizing total energy cost in a datacenter
US8712950B2 (en) 2010-04-29 2014-04-29 Microsoft Corporation Resource capacity monitoring and reporting
US9207993B2 (en) 2010-05-13 2015-12-08 Microsoft Technology Licensing, LLC Dynamic application placement based on cost and availability of energy in datacenters
US8171142B2 (en) 2010-06-30 2012-05-01 VMware, Inc. Data center inventory management using smart racks
JP2012038250A (en) * 2010-08-11 2012-02-23 NEC Corp. Temperature prediction system for electronic device and temperature prediction method
WO2012047746A2 (en) 2010-10-04 2012-04-12 Avocent System and method for monitoring and managing data center resources in real time
JP5662102B2 (en) * 2010-10-25 2015-01-28 Fujitsu Limited Air conditioning system
US8949091B2 (en) 2011-03-09 2015-02-03 Tata Consultancy Services Limited Method and system for thermal management by quantitative determination of cooling characteristics of data center
US8392575B1 (en) 2011-03-31 2013-03-05 Amazon Technologies, Inc. Clustered device dispersion in a multi-tenant environment
KR101483127B1 (en) 2011-03-31 2015-01-22 KT Corporation Method and apparatus for data distribution reflecting the resources of cloud storage system
US8762522B2 (en) 2011-04-19 2014-06-24 Cisco Technology, Inc. Coordinating data center compute and thermal load based on environmental data forecasts
US9506815B2 (en) 2011-06-27 2016-11-29 Hewlett Packard Enterprise Development LP Temperature band operation logging
US8725307B2 (en) 2011-06-28 2014-05-13 Schneider Electric IT Corporation System and method for measurement aided prediction of temperature and airflow values in a data center
US9295183B2 (en) 2011-09-16 2016-03-22 Tata Consultancy Services Limited Method and system for real time monitoring, prediction, analysis and display of temperatures for effective thermal management in a data center
US8880225B2 (en) * 2011-10-18 2014-11-04 International Business Machines Corporation Data center cooling control
US8842433B2 (en) * 2011-11-17 2014-09-23 Cisco Technology, Inc. Environmental control for module housing electronic equipment racks
US9043035B2 (en) 2011-11-29 2015-05-26 International Business Machines Corporation Dynamically limiting energy consumed by cooling apparatus
CA2878560C (en) * 2012-07-09 2019-06-11 Ortronics, Inc. Ventilating system for an electrical equipment cabinet and associated methods

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7276915B1 (en) * 2005-02-01 2007-10-02 Sprint Communications Company L.P. Electrical service monitoring system
US20070174024A1 (en) * 2005-05-02 2007-07-26 American Power Conversion Corporation Methods and systems for managing facility power and cooling
US20110203785A1 (en) * 2009-08-21 2011-08-25 Federspiel Corporation Method and apparatus for efficiently coordinating data center cooling units

Also Published As

Publication number Publication date
JP6469199B2 (en) 2019-02-13
JP2018022525A (en) 2018-02-08
EP3042259B1 (en) 2020-02-05
US9851726B2 (en) 2017-12-26
EP3042259A1 (en) 2016-07-13
US20150066219A1 (en) 2015-03-05
JP2016532220A (en) 2016-10-13
JP6235149B2 (en) 2017-11-22
WO2015034859A1 (en) 2015-03-12

Similar Documents

Publication Title
US20180088608A1 (en) Thermal capacity management
US11503744B2 (en) Methods and systems for managing facility power and cooling
US8732706B2 (en) Generating governing metrics for resource provisioning
EP3146817B1 (en) Virtual data center environmental monitoring system
US20110213735A1 (en) Selecting an installation rack for a device in a data center
US8201028B2 (en) Systems and methods for computer equipment management
US8209056B2 (en) System and method for assessing and managing data center airflow and energy usage
US8639482B2 (en) Methods and systems for managing facility power and cooling
US8315841B2 (en) Methods and systems for managing facility power and cooling
US20140089692A1 (en) Storage battery monitoring method, storage battery monitoring system, and storage battery system
KR102376355B1 (en) A method for optimizing the life span between filter replacement cycles and monitoring system for ventilation systems
US20140146845A1 (en) Thermally determining flow and/or heat load distribution in parallel paths
US20170082986A1 (en) Building management device, wide area management system, data acquiring method, and program
JP2018132230A (en) Temperature prediction system and temperature prediction method
KR101505405B1 (en) Intelligent system for integrated operation of data center and method for operation thereof
US8249841B1 (en) Computerized tool for assessing conditions in a room
US9507344B2 (en) Index generation and embedded fusion for controller performance monitoring
Zhang et al. Real time thermal management controller for data center
US20160061668A1 (en) Temperature distribution prediction method and air conditioning management system
Deodhar et al. Coordinated real-time management of return-air-temperature-controlled cooling units in data centers
US9565789B2 (en) Determining regions of influence of fluid moving devices

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION