WO2015106039A1 - A system and method for intelligent data center power management and energy market disaster recovery - Google Patents

A system and method for intelligent data center power management and energy market disaster recovery Download PDF

Info

Publication number
WO2015106039A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
data center
loads
application
power
Prior art date
Application number
PCT/US2015/010704
Other languages
French (fr)
Inventor
Daniel Kawaa KEKAI
Arnold Castillo MAGCALE
Original Assignee
Nautilus Data Technologies, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US14/542,011 external-priority patent/US10437636B2/en
Application filed by Nautilus Data Technologies, Inc. filed Critical Nautilus Data Technologies, Inc.
Publication of WO2015106039A1 publication Critical patent/WO2015106039A1/en

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26 Power supply means, e.g. regulation thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5094 Allocation of resources, e.g. of the central processing unit [CPU] where the allocation takes into account power or heat criteria
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06 Energy or water supply
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • the present invention relates to intelligent power management and data recovery facilities.
  • a data center is a facility designed to house, maintain, and power a plurality of computer systems.
  • the computer systems within the data center are generally rack-mounted where a number of electronics units are stacked within a support frame.
  • a conventional Tier 4 data center is designed with 2N+1 redundancy for all power distribution paths. This means that each power distribution component is redundant (2 of each component) plus there is another component added for another layer of redundancy. Essentially, if N is the number of components required for functionality, then 2N would mean you have twice the number of components required. The +1 means not only do you have full redundancy (2N) but you also have a spare, i.e. you can take any component offline and still have full redundancy. With this design you can lose one of the three components but still retain full redundancy in case of failover. Building a Tier 4 data center is cost prohibitive due to the additional power distribution components that must be purchased to provide 2N+1 redundancy for all power distribution paths.
  • Tier 2 data center is designed with a single power distribution path with redundant power distribution components. Tier 2 data centers can be built with lower capital expenses but do not offer the same level of redundancy that many businesses running critical systems and applications require.
  • the described system and method for intelligent data center power management and energy market disaster recovery may employ continuous collection, monitoring and analysis of data from application services, power distribution components, virtual machines, data center facility infrastructure and utility energy markets to enable dynamic data center operation actions for migrating application loads and power loads from one data center to another without the need for manual intervention.
  • the system and method may enable data center and application disaster recovery from utility energy market outages by quickly migrating application loads from one data center location to another data center location.
  • a computer automated system for intelligent power management comprising a processing unit coupled to a memory element, and having instructions encoded thereon, which instructions cause the system to, via a collection layer, collect infrastructure data, application data, power data, and machine element data from a plurality of corresponding infrastructure elements, application elements, power elements, and virtual machine elements, respectively, and further cause the system to analyze the collected data by a single or plurality of analytic engines; and trigger, based on the analyzed collected data, a single or plurality of operational state changes.
  • a method comprising, via a collection layer, collecting infrastructure data, application data, power data, and machine element data from a plurality of corresponding infrastructure elements, application elements, power elements, and virtual machine elements, respectively, analyzing the collected data by a single or plurality of analytic engines; and further comprising triggering, based on the analyzed collected data, a single or plurality of operational state changes.
  • FIG. 1 illustrates a logical view of intelligent data center power management.
  • FIG. 2 illustrates a logical view of an embodiment employed in a data center.
  • a data center is a facility designed to house, maintain, and power a plurality of computer systems.
  • the computer systems within the data center are generally rack-mounted where a number of electronics units are stacked within a support frame.
  • a conventional Tier 4 data center is designed with 2N+1 redundancy (where N is the number of power distribution components) for all power distribution paths, meaning each power distribution component is redundant (2 of each component) plus there is another component added for another layer of redundancy. With this design you can lose one of the three components but still retain full redundancy in case of failover. Building a Tier 4 data center is cost prohibitive due to the additional power distribution components that must be purchased to provide 2N+1 redundancy for all power distribution paths.
  • a conventional Tier 2 data center is designed with a single power distribution path with redundant power distribution components.
  • Tier 2 data centers can be built with lower capital expenses but do not offer the same level of redundancy that many businesses running critical systems and applications require. Embodiments of the invention disclosed below solve this problem.
  • the system and method described may be employed to provide Tier 4 type levels of data center power redundancy in data centers built to Tier 2 standards. This drastically cuts capital expenses while providing the benefits of Tier 4 type levels of data center power redundancy.
  • Embodiments disclosed include an improved and superior system and method.
  • the disclosed embodiments may be employed to provide Tier 4 type levels of power distribution redundancy in data centers built to Tier 2 standards.
  • the systems and methods described include means to continuously monitor and analyze utility energy market status and enable intelligent application and data center load balancing that may provide financial benefits for moving applications and power loads from one data center location using power during peak energy hours to another data center location using power during off-peak hours.
  • the described systems and methods may quickly move applications and power loads from one data center to another enabling disaster recovery from utility energy market outages.
  • Embodiments disclosed include improved and superior systems and methods. The claimed invention differs from what currently exists.
  • the disclosed systems and methods may be employed to provide Tier 4 type levels of power distribution redundancy in data centers built to Tier 2 standards. Furthermore, in preferred embodiments, the systems and methods described may continuously monitor and analyze utility energy market status and enable intelligent application and data center load balancing that may provide financial benefits for moving applications and power loads from one data center location using power during peak energy hours to another data center location using power during off-peak hours. The described systems and methods may quickly move applications and power loads from one data center to another enabling disaster recovery from utility energy market outages.
  • Tier 2 data centers are not designed to provide Tier 4 type levels of redundancy and may not have the ability to easily migrate applications or power loads from data center to data center. This may prohibit intelligent power management across data centers and the ability for disaster recovery from utility energy market outages.
  • Embodiments disclosed include systems and methods for intelligent data center power management and energy market disaster recovery, and may employ continuous collection, monitoring and analysis of data from application services, power distribution components, virtual machines, data center facility infrastructure and utility energy markets to enable dynamic data center operation actions for migrating application loads and power loads from one data center to another without the need for manual intervention.
  • the system and method may enable data center and application disaster recovery from utility energy market outages by quickly migrating application loads from one data center location to another data center location.
  • Fig. 1 illustrates a logical view of intelligent data center power management.
  • the system comprises a data collection layer 100, a single or plurality of infrastructure elements 102, a single or plurality of application elements 104, a single or plurality of power elements 106, a single or plurality of virtual machine elements 108, an analytics, automation, and actions layer 110 that comprises an analytics engine 112, an automation engine 114, and an action engine 116, an energy market analysis layer 118, and intelligent market elements 120.
  • the data collection layer is caused to collect infrastructure data from a single or plurality of infrastructure elements 102, application data from a single or plurality of application elements 104, power data from a single or plurality of power elements 106, and virtual machine data from a single or plurality of virtual machine elements 108.
  • a preferred embodiment also includes an analytics, automation, and actions layer 110, which comprises a single or plurality of analytics engines 112, a single or plurality of automation software engines 114, and a single or plurality of actions software engines 116.
  • the embodiment further includes an energy market analysis engine 118, and a network connection to a single or plurality of energy markets 120.
  • Fig. 1 logical view
  • Fig. 2 logical data center view
  • Fig. 1 shows a logical view entailed in an embodiment.
  • An embodiment comprises a collection layer 100, infrastructure elements 102, application elements 104, power elements 106, virtual machine elements 108, analytics/automation/actions layer 110, analytics engine 112, automation software 114, actions software 116, energy markets analysis layer 118 and intelligent energy market 120 elements.
  • Fig. 2 shows a logical view of an embodiment employed in a data center.
  • the illustrated embodiment includes systems and methods comprising a plurality of Tier 2 data centers 200, 202, 204 that may all be running applications, virtual machines, and the described systems and methods, global energy markets 206 and an IP network 208.
  • data collection layer 100 continuously collects data from a plurality of infrastructure elements 102, application elements 104, power elements 106 and virtual machine elements 108.
  • the data collected is then analyzed by a plurality of analytic engines 112 with the resulting data analysis triggering the automation software 114 and enabling the actions software 116 to make data center operational state changes for application load balancing or power load balancing across multiple data centers 200, 202, 204.
  • the data centers 200, 202, 204 are connected to one another by IP network 208 which may also connect to a plurality of energy markets.
  • the energy market analysis layer 118 will use data collected from energy market 206 elements to automatically manage data center and application disaster recovery from utility energy market 206 outages.
  • data collected is used to measure or quantify parameters, and if these parameters fall within defined acceptable ranges, the logic causes the system to go to the next parameter. If the next parameter falls outside of the predefined acceptable ranges, defined actions will be executed to bring the said parameter within the acceptable range. For example, if the power load is greater than the power supply, the load is reduced or the supply is increased, to conform to a predefined range. After execution of the defined action, (in this case the power load and supply), the data for the same parameter will be collected again, the parameter will be checked again, and if the parameter now falls within the acceptable range, then the logic causes the system to move to the next parameter.
  • the system and method includes means for intelligent management of data center power distribution loads, application loads and virtual machine loads, across multiple data centers.
  • An embodiment includes a computer automated system comprising a processing unit coupled with a memory element, and having instructions encoded thereon, which instructions cause the system to automatically handle automated data center operation state changes, and to dynamically balance power loads and application loads across multiple data centers.
  • the system further includes an analysis engine which comprises instructions that cause the system to collect and analyze data from a plurality of energy markets, and to enable automatic data center operation state changes, thereby enabling data center and application disaster recovery from utility energy market outages.
  • An additional, alternate embodiment includes a predictive analytics engine comprising instructions that cause the system to model and to enable scenario modeling for and of designated applications, virtual machines, and power loads.
  • Preferred embodiments can thus predict outages caused by energy market failures, application loads, virtual machine loads or power loads in a data center.
  • Yet another embodiment includes a system and method for automatically managing virtual machine instances, enabling the killing of virtual servers or banks of physical computer systems during low application loads and turning up virtual machines or banks of physical computer systems prior to expected peak loads.
  • the method and system may be deployed in a single central location to manage multiple data centers locations. Modifications and variations of the above are possible, and in some instances desirable, as would be apparent to a person having ordinary skill in the art.
  • Preferred embodiments disclosed can be employed to enable Tier 4 type level redundancy to existing Tier 2 data centers. Preferred embodiments can enable load balancing of applications and power loads across multiple existing data centers.
  • the described systems and methods may be employed to enable disaster recovery across multiple data centers for utility energy market outages.
  • systems and methods may be used for dynamic problem resolutions for applications, virtual machines, physical computer systems, network connectivity.
  • the systems and methods may also be employed to analyze data center operation state before and after scheduled maintenance changes and may uncover unknown interdependencies or unanticipated changes in behavior.
  • the power management and energy market disaster recovery system and method is highly reconfigurable, and can be adapted for use in office buildings, residential homes, schools, government buildings, cruise ships, naval vessels, mobile homes, temporary work sites, remote work sites, hospitals, apartment buildings, etc.
  • Other variations, modifications, and applications are possible, as would be apparent to a person having ordinary skill in the art.
  • the power management and energy market disaster recovery system and method is highly reconfigurable, and can be used in a variety of situations/applications, including but not limited to buildings or dwellings, in an energy-efficient and cost-effective manner.
  • Embodiments disclosed allow intelligent data center power management and energy market disaster recovery, employing continuous collection, monitoring and analysis of data from application services, power distribution components, virtual machines, data center facility infrastructure and utility energy markets to enable dynamic data center operation actions for migrating application loads and power loads from one data center to another without the need for manual intervention.
  • Embodiments disclosed further enable data center and application disaster recovery from utility energy market outages by quickly migrating application loads from one data center location to another data center location.
  • the steps executed to implement the embodiments of the invention may be part of an automated or manual embodiment, and programmable to follow a sequence of desirable instructions.

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Economics (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Health & Medical Sciences (AREA)
  • Marketing (AREA)
  • Tourism & Hospitality (AREA)
  • Software Systems (AREA)
  • General Business, Economics & Management (AREA)
  • General Engineering & Computer Science (AREA)
  • Educational Administration (AREA)
  • Development Economics (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Game Theory and Decision Science (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Water Supply & Treatment (AREA)
  • General Health & Medical Sciences (AREA)
  • Supply And Distribution Of Alternating Current (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

Systems and methods for intelligent data center power management and energy market disaster recovery, comprising a data collection layer, infrastructure elements, application elements, power elements, virtual machine elements, an analytics/automation/actions layer, an analytics or predictive analytics engine, automation software, actions software, an energy markets analysis layer and software, and intelligent energy market analysis elements or software. A plurality of data centers, comprising Tier 2 data centers that may be running applications, virtual machines and physical computer systems, employ the systems and methods to enable data center and application disaster recovery from utility energy market outages. The systems and methods may be employed to enable application load balancing and data center power load balancing across a plurality of data centers, which may lead to financial benefits when moving application and power loads from one data center location using power during peak energy hours to another data center location using power during off-peak hours.

Description

A System And Method For Intelligent Data Center Power Management And Energy
Market Disaster Recovery
CROSS REFERENCE TO RELATED APPLICATIONS
[0001] This application claims reference to Provisional Patent application number 61/925,540 filed on January 8, 2014, entitled "A system and method for intelligent data center power management and energy market disaster recovery", and U.S. Patent Application Serial number 14/542,011, filed November 14, 2014.
FIELD
[0001] The present invention relates to intelligent power management and data recovery facilities.
BACKGROUND OF THE INVENTION
[0002] A data center is a facility designed to house, maintain, and power a plurality of computer systems. The computer systems within the data center are generally rack-mounted where a number of electronics units are stacked within a support frame.
[0003] A conventional Tier 4 data center is designed with 2N+1 redundancy for all power distribution paths. This means that each power distribution component is redundant (2 of each component) plus there is another component added for another layer of redundancy. Essentially, if N is the number of components required for functionality, then 2N would mean you have twice the number of components required. The +1 means not only do you have full redundancy (2N) but you also have a spare, i.e. you can take any component offline and still have full redundancy. With this design you can lose one of the three components but still retain full redundancy in case of failover. Building a Tier 4 data center is cost prohibitive due to the additional power distribution components that must be purchased to provide 2N+1 redundancy for all power distribution paths.
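As an arithmetic illustration of the redundancy levels discussed above (an editor's sketch, not part of the disclosure; the N+1 figure used for the single-path comparison is an assumption):

def components_2n_plus_1(n_required: int) -> int:
    """Tier 4 style provisioning: two full sets of the N required
    power distribution components, plus one spare."""
    return 2 * n_required + 1

def components_n_plus_1(n_required: int) -> int:
    """Assumed Tier 2 style provisioning: a single distribution path
    with one redundant component."""
    return n_required + 1

# Example: a distribution path that needs 3 components for basic functionality.
n = 3
print(components_2n_plus_1(n))  # 7 components purchased for 2N+1
print(components_n_plus_1(n))   # 4 components purchased for N+1

The gap between those two counts is the extra capital expense the paragraph above attributes to Tier 4 construction.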
[0004] A conventional Tier 2 data center is designed with a single power distribution path with redundant power distribution components. Tier 2 data centers can be built with lower capital expenses but do not offer the same level of redundancy that many businesses running critical systems and applications require.
[0005] The described system and method for intelligent data center power management and energy market disaster recovery may employ continuous collection, monitoring and analysis of data from application services, power distribution components, virtual machines, data center facility infrastructure and utility energy markets to enable dynamic data center operation actions for migrating application loads and power loads from one data center to another without the need for manual intervention. The system and method may enable data center and application disaster recovery from utility energy market outages by quickly migrating application loads from one data center location to another data center location.
SUMMARY
[0006] A computer automated system for intelligent power management, comprising a processing unit coupled to a memory element, and having instructions encoded thereon, which instructions cause the system to, via a collection layer, collect infrastructure data, application data, power data, and machine element data from a plurality of corresponding infrastructure elements, application elements, power elements, and virtual machine elements, respectively, and further cause the system to analyze the collected data by a single or plurality of analytic engines; and trigger, based on the analyzed collected data, a single or plurality of operational state changes.
[0007] In a computer automated system for intelligent power management and comprising a processing unit coupled to a memory element having instructions encoded thereon, a method comprising, via a collection layer, collecting infrastructure data, application data, power data, and machine element data from a plurality of corresponding infrastructure elements, application elements, power elements, and virtual machine elements, respectively, analyzing the collected data by a single or plurality of analytic engines; and further comprising triggering, based on the analyzed collected data, a single or plurality of operational state changes.
BRIEF DESCRIPTION OF THE DRAWINGS
[0008] Fig. 1 illustrates a logical view of intelligent data center power management.
[0009] Fig. 2 illustrates a logical view of an embodiment employed in a data center.
DETAILED DESCRIPTION OF THE INVENTION
[0010] As stated above, a data center is a facility designed to house, maintain, and power a plurality of computer systems. The computer systems within the data center are generally rack-mounted where a number of electronics units are stacked within a support frame.
[0011] A conventional Tier 4 data center is designed with 2N+1 redundancy (where N is the number of power distribution components) for all power distribution paths, meaning each power distribution component is redundant (2 of each component) plus there is another component added for another layer of redundancy. With this design you can lose one of the three components but still retain full redundancy in case of failover. Building a Tier 4 data center is cost prohibitive due to the additional power distribution components that must be purchased to provide 2N+1 redundancy for all power distribution paths.
[0012] A conventional Tier 2 data center is designed with a single power distribution path with redundant power distribution components. Tier 2 data centers can be built with lower capital expenses but do not offer the same level of redundancy that many businesses running critical systems and applications require. Embodiments of the invention disclosed below solve this problem.
[0013] The system and method described may be employed to provide Tier 4 type levels of data center power redundancy in data centers built to Tier 2 standards. This drastically cuts capital expenses while providing the benefits of Tier 4 type levels of data center power redundancy.
[0014] The claimed invention differs from what currently exists. Embodiments disclosed include an improved and superior system and method. The disclosed embodiments may be employed to provide Tier 4 type levels of power distribution redundancy in data centers built to Tier 2 standards. Furthermore, the systems and methods described include means to continuously monitor and analyze utility energy market status and enable intelligent application and data center load balancing that may provide financial benefits for moving applications and power loads from one data center location using power during peak energy hours to another data center location using power during off-peak hours. The described systems and methods may quickly move applications and power loads from one data center to another, enabling disaster recovery from utility energy market outages.

[0015] Embodiments disclosed include improved and superior systems and methods. The claimed invention differs from what currently exists. The disclosed systems and methods may be employed to provide Tier 4 type levels of power distribution redundancy in data centers built to Tier 2 standards. Furthermore, in preferred embodiments, the systems and methods described may continuously monitor and analyze utility energy market status and enable intelligent application and data center load balancing that may provide financial benefits for moving applications and power loads from one data center location using power during peak energy hours to another data center location using power during off-peak hours. The described systems and methods may quickly move applications and power loads from one data center to another, enabling disaster recovery from utility energy market outages.
[0016] Tier 2 data centers are not designed to provide Tier 4 type levels of redundancy and may not have the ability to easily migrate applications or power loads from data center to data center. This may prohibit intelligent power management across data centers and the ability for disaster recovery from utility energy market outages.
[0017] Embodiments disclosed include systems and methods for intelligent data center power management and energy market disaster recovery, and may employ continuous collection, monitoring and analysis of data from application services, power distribution components, virtual machines, data center facility infrastructure and utility energy markets to enable dynamic data center operation actions for migrating application loads and power loads from one data center to another without the need for manual intervention. The system and method may enable data center and application disaster recovery from utility energy market outages by quickly migrating application loads from one data center location to another data center location.

[0018] Fig. 1 illustrates a logical view of intelligent data center power management. The system comprises a data collection layer 100, a single or plurality of infrastructure elements 102, a single or plurality of application elements 104, a single or plurality of power elements 106, a single or plurality of virtual machine elements 108, an analytics, automation, and actions layer 110 that comprises an analytics engine 112, an automation engine 114, and an action engine 116, an energy market analysis layer 118, and intelligent market elements 120. In the system, the data collection layer is caused to collect infrastructure data from a single or plurality of infrastructure elements 102, application data from a single or plurality of application elements 104, power data from a single or plurality of power elements 106, and virtual machine data from a single or plurality of virtual machine elements 108. A preferred embodiment also includes an analytics, automation, and actions layer 110, which comprises a single or plurality of analytics engines 112, a single or plurality of automation software engines 114, and a single or plurality of actions software engines 116. The embodiment further includes an energy market analysis engine 118, and a network connection to a single or plurality of energy markets 120.
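For orientation only, the layered structure of Fig. 1 just described can be sketched as plain data containers keyed to the reference numerals above. This is an editor's illustration, not part of the disclosure; the class and field names are assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class DataCollectionLayer:                                             # element 100
    infrastructure_elements: List[str] = field(default_factory=list)   # elements 102
    application_elements: List[str] = field(default_factory=list)      # elements 104
    power_elements: List[str] = field(default_factory=list)            # elements 106
    virtual_machine_elements: List[str] = field(default_factory=list)  # elements 108

@dataclass
class AnalyticsAutomationActionsLayer:                                  # element 110
    analytics_engines: List[str] = field(default_factory=list)          # elements 112
    automation_engines: List[str] = field(default_factory=list)         # elements 114
    action_engines: List[str] = field(default_factory=list)             # elements 116

@dataclass
class EnergyMarketAnalysisLayer:                                        # element 118
    energy_markets: List[str] = field(default_factory=list)             # elements 120

@dataclass
class IntelligentPowerManagementSystem:
    collection: DataCollectionLayer
    analytics: AnalyticsAutomationActionsLayer
    market_analysis: EnergyMarketAnalysisLayer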
[0019] One embodiment of the described system and method is shown in Fig.l (logical view) and Fig. 2 (logical data center view).
[0020] Fig. 1 shows a logical view entailed in an embodiment. An embodiment comprises a collection layer 100, infrastructure elements 102, application elements 104, power elements 106, virtual machine elements 108, analytics/automation/actions layer 110, analytics engine 112, automation software 114, actions software 116, energy markets analysis layer 118 and intelligent energy market 120 elements.

[0021] Fig. 2 shows a logical view of an embodiment employed in a data center. The illustrated embodiment includes systems and methods comprising a plurality of Tier 2 data centers 200, 202, 204 that may all be running applications, virtual machines, and the described systems and methods, global energy markets 206 and an IP network 208.
[0022] According to an embodiment, data collection layer 100 continuously collects data from a plurality of infrastructure elements 102, application elements 104, power elements 106 and virtual machine elements 108. The data collected is then analyzed by a plurality of analytic engines 112 with the resulting data analysis triggering the automation software 114 and enabling the actions software 116 to make data center operational state changes for application load balancing or power load balancing across multiple data centers 200, 202, 204. Preferably, the data centers 200, 202, 204 are connected to one another by IP network 208 which may also connect to a plurality of energy markets. The energy market analysis layer 118 will use data collected from energy market 206 elements to automatically manage data center and application disaster recovery from utility energy market 206 outages.
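A minimal control-loop sketch of the data flow just described, with hypothetical collector, analyzer, and migration callables standing in for elements 100, 112, and 116; the target-selection policy is an editor's assumption, not something prescribed by the disclosure.

from typing import Callable, Dict, List

def management_cycle(
    collect: Callable[[], Dict[str, float]],           # stands in for data collection layer 100
    analyze: Callable[[Dict[str, float]], List[str]],  # stands in for analytic engines 112
    migrate: Callable[[str, str], None],               # stands in for actions software 116
    data_centers: List[str],
) -> None:
    """One automated pass: collect metrics, let analytics pick overloaded
    data centers, then move loads toward the least-loaded peer."""
    metrics = collect()
    for source in analyze(metrics):                    # e.g. data centers above a load threshold
        peers = [dc for dc in data_centers if dc != source]
        if not peers:
            continue
        # Simplistic, editor-chosen policy: target the peer with the lowest reported power load.
        target = min(peers, key=lambda dc: metrics.get(f"{dc}.power_load_kw", 0.0))
        migrate(source, target)                        # shift application and power loads without manual steps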
[0023] According to an embodiment, data collected is used to measure or quantify parameters, and if these parameters fall within defined acceptable ranges, the logic causes the system to go to the next parameter. If the next parameter falls outside of the predefined acceptable ranges, defined actions will be executed to bring the said parameter within the acceptable range. For example, if the power load is greater than the power supply, the load is reduced or the supply is increased, to conform to a predefined range. After execution of the defined action, (in this case the power load and supply), the data for the same parameter will be collected again, the parameter will be checked again, and if the parameter now falls within the acceptable range, then the logic causes the system to move to the next parameter.
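The paragraph above describes a check-act-recheck loop over measured parameters. A compact sketch follows; the parameter names, ranges, and corrective-action hook are illustrative assumptions only.

def enforce_ranges(read_parameter, execute_action, acceptable_ranges):
    """Walk each parameter in turn; if it falls outside its acceptable range,
    run the defined corrective action, re-collect, and re-check before moving on."""
    for name, (low, high) in acceptable_ranges.items():
        value = read_parameter(name)
        while not (low <= value <= high):
            execute_action(name, value)   # e.g. shed power load or bring more supply online
            value = read_parameter(name)  # the same parameter is collected and checked again

# Hypothetical parameters and ranges, e.g. keep power load at or below available supply.
ranges = {"power_load_kw": (0.0, 1200.0), "cpu_utilization_pct": (0.0, 85.0)}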
[0024] According to an embodiment the system and method includes means for intelligent management of data center power distribution loads, application loads and virtual machine loads, across multiple data centers. An embodiment includes a computer automated system comprising a processing unit coupled with a memory element, and having instructions encoded thereon, which instructions cause the system to automatically handle automated data center operation state changes, and to dynamically balance power loads and application loads across multiple data centers. The system further includes an analysis engine which comprises instructions that cause the system to collect and analyze data from a plurality of energy markets, and to enable automatic data center operation state changes, thereby enabling data center and application disaster recovery from utility energy market outages.
[0025] All of the elements above are necessary.
[0026] An additional, alternate embodiment includes a predictive analytics engine comprising instructions that cause the system to model and to enable scenario modeling for and of designated applications, virtual machines, and power loads.
Preferred embodiments can thus predict outages caused by energy market failures, application loads, virtual machine loads or power loads in a data center.
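As one possible (editor-supplied) reading of scenario modeling, the sketch below extrapolates a recent power-load trend against a capacity ceiling to flag a projected overload step; the linear-trend method and the numbers are assumptions, not taken from the disclosure.

from typing import List, Optional

def predict_overload_step(load_history_kw: List[float],
                          capacity_kw: float,
                          horizon_steps: int = 12) -> Optional[int]:
    """Project the recent linear trend forward and return the first
    future step at which load would exceed capacity, else None."""
    if len(load_history_kw) < 2:
        return None
    slope = (load_history_kw[-1] - load_history_kw[0]) / (len(load_history_kw) - 1)
    for step in range(1, horizon_steps + 1):
        projected = load_history_kw[-1] + slope * step
        if projected > capacity_kw:
            return step
    return None

# Example scenario: rising load against a 1000 kW ceiling.
print(predict_overload_step([800, 840, 880, 920], capacity_kw=1000))  # -> 3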
[0027] Yet another embodiment includes a system and method for automatically managing virtual machine instances, enabling the killing of virtual servers or banks of physical computer systems during low application loads and turning up virtual machines or banks of physical computer systems prior to expected peak loads.
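A sketch of the virtual-machine scheduling idea above, using a generic, hypothetical hypervisor interface and editor-chosen thresholds; none of the method names come from the disclosure.

def manage_vm_fleet(hypervisor, current_load_pct: float,
                    expected_peak_soon: bool, low_water_pct: float = 20.0) -> None:
    """Retire idle virtual capacity during low application load and bring
    capacity back up ahead of an expected peak."""
    if expected_peak_soon:
        for vm in hypervisor.stopped_vms():   # hypothetical hypervisor API
            hypervisor.start(vm)              # turn up VMs before the peak arrives
    elif current_load_pct < low_water_pct:
        for vm in hypervisor.idle_vms():
            hypervisor.stop(vm)               # shut down virtual servers that are idle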
[0028] The method and system may be deployed in a single central location to manage multiple data center locations. Modifications and variations of the above are possible, and in some instances desirable, as would be apparent to a person having ordinary skill in the art.
[0029] Preferred embodiments disclosed can be employed to enable Tier 4 type level redundancy to existing Tier 2 data centers. Preferred embodiments can enable load balancing of applications and power loads across multiple existing data centers.
[0030] The described systems and methods may be employed to enable disaster recovery across multiple data centers for utility energy market outages.
[0031] Additionally, in another embodiment the systems and methods may be used for dynamic problem resolution for applications, virtual machines, physical computer systems, and network connectivity. The systems and methods may also be employed to analyze data center operation state before and after scheduled maintenance changes and may uncover unknown interdependencies or unanticipated changes in behavior.
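One way (an editor's assumption, not the patented method) to realize the before/after maintenance comparison mentioned above is a simple snapshot diff that surfaces metrics drifting beyond a tolerance.

def diff_operation_state(before: dict, after: dict, tolerance_pct: float = 10.0) -> dict:
    """Return metrics whose value changed by more than tolerance_pct
    between pre- and post-maintenance snapshots."""
    drifted = {}
    for metric, old in before.items():
        new = after.get(metric)
        if new is None or old == 0:
            continue
        change = abs(new - old) / abs(old) * 100.0
        if change > tolerance_pct:
            drifted[metric] = (old, new)
    return drifted

# Example: an unexpected jump in power draw after a scheduled change.
print(diff_operation_state({"power_load_kw": 900.0}, {"power_load_kw": 1050.0}))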
[0032] The power management and energy market disaster recovery system and method is highly reconfigurable, and can be adapted for use in office buildings, residential homes, schools, government buildings, cruise ships, naval vessels, mobile homes, temporary work sites, remote work sites, hospitals, apartment buildings, etc. Other variations, modifications, and applications are possible, as would be apparent to a person having ordinary skill in the art.
[0033] Additionally, partial or complete embodiments of the disclosed invention can be utilized in alternate applications without departing from the scope and spirit of the
disclosure. For example, the power management and energy market disaster recovery system and method is highly reconfigurable, and can be used in a variety of situations/applications, including but not limited to buildings or dwellings, in an energy-efficient and cost-effective manner.
[0034] Embodiments disclosed allow intelligent data center power management and energy market disaster recovery, employing continuous collection, monitoring and analysis of data from application services, power distribution components, virtual machines, data center facility infrastructure and utility energy markets to enable dynamic data center operation actions for migrating application loads and power loads from one data center to another without the need for manual intervention. Embodiments disclosed further enable data center and application disaster recovery from utility energy market outages by quickly migrating application loads from one data center location to another data center location.
[0035] Since various possible embodiments might be made of the above invention, and since various changes might be made in the embodiments above set forth, it is to be understood that all matter herein described or shown in the accompanying drawings is to be interpreted as illustrative and not to be considered in a limiting sense. Thus it will be understood by those skilled in the art that although the preferred and alternate embodiments have been shown and described in accordance with the Patent Statutes, the invention is not limited thereto or thereby.
[0036] The figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present invention. It should also be noted that, in some alternative implementations, the functions noted/illustrated may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.

[0037] The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or
"comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
[0038] In general, the steps executed to implement the embodiments of the invention may be part of an automated or manual embodiment, and programmable to follow a sequence of desirable instructions.
[0039] The present invention and some of its advantages have been described in detail for some embodiments. It should be understood that although some example embodiments of the power management and energy market disaster recovery system and method are described with reference to a waterborne data center, the system and method is highly reconfigurable, and embodiments include reconfigurable systems that may be dynamically adapted to be used in other contexts as well. It should also be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims. An embodiment of the invention may achieve multiple objectives, but not every embodiment falling within the scope of the attached claims will achieve every objective. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, and composition of matter, means, methods and steps described in the specification. A person having ordinary skill in the art will readily appreciate from the disclosure of the present invention that processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed are equivalent to, and fall within the scope of, what is claimed. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.

Claims

What is claimed is:
1. A computer automated system for intelligent power management, comprising a
processing unit coupled to a memory element, and having instructions encoded thereon, which instructions cause the system to:
via a collection layer, collect infrastructure data, application data, power data, and machine element data from a plurality of corresponding infrastructure elements, application elements, power elements, and virtual machine elements, respectively; analyze the collected data by a single or plurality of analytic engines; and trigger, based on the analyzed collected data, a single or plurality of operational state changes.
2. The computer automated system of claim 1 wherein the triggered operational state changes further comprise operational state changes for at least one of application load balancing and power load balancing across a plurality of data centers.
3. The system of claim 2 wherein the data centers are connected to one another and are further connected to a plurality of energy markets via a network.
4. The system of claim 3 wherein the system is further caused to: via an energy market analysis layer, automatically manage data center and application disaster recovery from utility energy market outages based on data collected from energy market elements.
5. The system of claim 2 wherein the system is further caused to:
based on the collected data, measure a plurality of parameters wherein if a first measured parameter falls within a predefined range, go to a second measurable parameter; and
wherein if the measured parameter falls outside of the predefined range, execute a predefined action to bring the said parameter within the predefined range.
6. The system of claim 2, further comprising means for intelligent management of data center power distribution loads, application loads and virtual machine loads, across multiple data centers.
7. The computer automated system of claim 2 wherein the encoded instructions further cause the system to:
automatically handle data center operation state changes, and dynamically balance power loads and application loads across multiple data centers.
8. The computer automated system of claim 1 wherein the single or plurality of analytic engines are further caused to:
collect data from a plurality of energy markets;
analyze the collected data from the said plurality of energy markets; and based on the collected analyzed data, enable automatic data center operation state changes, thereby allowing data center and application disaster recovery from utility energy market outages.
9. The computer automated system of claim 1 wherein the analytic engines comprise a predictive analytics engine, which comprises instructions that cause the system to:
model and enable scenario modeling for and of designated applications, virtual machines, and power loads; and
predict outages caused by energy market failures, application loads, virtual machine loads or power loads in a data center.
10. The computer automated system of claim 1 wherein the instructions further cause the system to:
automatically manage virtual machine instances, enabling the killing of virtual servers or banks of physical computer systems during low application loads and turning up virtual machines or banks of physical computer systems prior to expected peak loads.
11. In a computer automated system for intelligent power management and comprising a processing unit coupled to a memory element having instructions encoded thereon, a method comprising:
via a collection layer, collecting infrastructure data, application data, power data, and machine element data from a plurality of corresponding infrastructure elements, application elements, power elements, and virtual machine elements, respectively; analyzing the collected data by a single or plurality of analytic engines; and triggering, based on the analyzed collected data, a single or plurality of operational state changes.
12. The method of claim 11 wherein the triggering of operational state changes further comprises triggering operational state changes for at least one of application load balancing and power load balancing across a plurality of data centers.
13. The method of claim 11 wherein each data center is connected to each other data center, and to a single or plurality of energy markets via a network.
14. The method of claim 13 further comprising: via an energy market analysis layer, automatically managing data center and application disaster recovery from utility energy market outages based on data collected from energy market elements.
15. The method of claim 11 further comprising: based on the collected data, measuring a plurality of parameters wherein if a first measured parameter falls within a predefined range, going to a second measurable parameter; and wherein if the measured parameter falls outside of the predefined range, executing a predefined action to bring the said parameter within the predefined range.
16. The method of claim 11, further comprising intelligently managing data center power distribution loads, application loads and virtual machine loads, across multiple data centers.
17. The method of claim 11 further comprising:
automatically handling data center operation state changes, and dynamically balancing power loads and application loads across multiple data centers.
18. The method of claim 11 further comprising, via the single or plurality of analytic engines:
collecting data from a plurality of energy markets;
analyzing the collected data from the said plurality of energy markets; and based on the collected analyzed data, enabling automatic data center operation state changes, thereby allowing data center and application disaster recovery from utility energy market outages.
19. The method of claim 11 wherein the said analyzing by the analytic engines comprises, via a predictive analytics engine:
modeling and enabling scenario modeling for and of designated applications, virtual machines, and power loads; and
predicting outages caused by energy market failures, application loads, virtual machine loads or power loads in a data center.
20. The method of claim 11 further comprising:
automatically managing virtual machine instances, enabling the killing of virtual servers or banks of physical computer systems during low application loads and turning up virtual machines or banks of physical computer systems prior to expected peak loads.
PCT/US2015/010704 2014-01-09 2015-01-08 A system and method for intelligent data center power management and energy market disaster recovery WO2015106039A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201461925530P 2014-01-09 2014-01-09
US61/925,530 2014-01-09
US14/542,011 2014-11-14
US14/542,011 US10437636B2 (en) 2014-01-09 2014-11-14 System and method for intelligent data center power management and energy market disaster recovery

Publications (1)

Publication Number Publication Date
WO2015106039A1 (en)

Family

ID=53524346

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/010704 WO2015106039A1 (en) 2014-01-09 2015-01-08 A system and method for intelligent data center power management and energy market disaster recovery

Country Status (1)

Country Link
WO (1) WO2015106039A1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9791908B2 (en) 2013-11-07 2017-10-17 Schneider Electric It Corporation Systems and methods for protecting virtualized assets
US9933843B2 (en) 2011-12-22 2018-04-03 Schneider Electric It Corporation Systems and methods for reducing energy storage requirements in a data center
EP3347817A4 (en) * 2015-09-30 2019-01-09 Huawei Technologies Co., Ltd. An approach for end-to-end power efficiency modeling for data centers
US11749988B2 (en) 2014-01-09 2023-09-05 Nautilus True, Llc System and method for intelligent data center power management and energy market disaster recovery

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6990395B2 (en) * 1994-12-30 2006-01-24 Power Measurement Ltd. Energy management device and architecture with multiple security levels
WO2009055368A2 (en) * 2007-10-21 2009-04-30 Citrix Systems, Inc. Systems and methods to adaptively load balance user sessions to reduce energy consumption
US20110072293A1 (en) * 2009-09-24 2011-03-24 Richard James Mazzaferri Systems and Methods for Attributing An Amount of Power Consumption To A Workload

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6990395B2 (en) * 1994-12-30 2006-01-24 Power Measurement Ltd. Energy management device and architecture with multiple security levels
WO2009055368A2 (en) * 2007-10-21 2009-04-30 Citrix Systems, Inc. Systems and methods to adaptively load balance user sessions to reduce energy consumption
US20110072293A1 (en) * 2009-09-24 2011-03-24 Richard James Mazzaferri Systems and Methods for Attributing An Amount of Power Consumption To A Workload

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9933843B2 (en) 2011-12-22 2018-04-03 Schneider Electric It Corporation Systems and methods for reducing energy storage requirements in a data center
US9791908B2 (en) 2013-11-07 2017-10-17 Schneider Electric It Corporation Systems and methods for protecting virtualized assets
US11749988B2 (en) 2014-01-09 2023-09-05 Nautilus True, Llc System and method for intelligent data center power management and energy market disaster recovery
EP3347817A4 (en) * 2015-09-30 2019-01-09 Huawei Technologies Co., Ltd. An approach for end-to-end power efficiency modeling for data centers
US10401933B2 (en) 2015-09-30 2019-09-03 Futurewei Technologies, Inc. Approach for end-to-end power efficiency modeling for data centers

Similar Documents

Publication Publication Date Title
US11182201B1 (en) System and method for intelligent data center power management and energy market disaster recovery
US11749988B2 (en) System and method for intelligent data center power management and energy market disaster recovery
Xue et al. Practise: Robust prediction of data center time series
CN106250306B (en) A kind of performance prediction method suitable for enterprise-level O&M automation platform
Miclea et al. About dependability in cyber-physical systems
WO2015106039A1 (en) A system and method for intelligent data center power management and energy market disaster recovery
CN105247379B (en) The system and method analyzed for uninterruptible power supply battery detection and data
US20100211956A1 (en) Method and system for continuous optimization of data centers by combining server and storage virtualization
CN102916831B (en) Method and system for acquiring health degree of business system
CN103955510A (en) Massive electricity marketing data integration method uploaded by ETL cloud platform
US9009533B2 (en) Home/building fault analysis system using resource connection map log and method thereof
CN112347548A (en) Method and system for realizing tunnel digital display based on BIM and 3DGIS technical system
CN103390933A (en) Central detection method of distributed data acquisition mode of dispatching automation system
Kjølle et al. Vulnerability analysis related to extraordinary events in power systems
Bhuiyan et al. Towards cyber-physical systems design for structural health monitoring: Hurdles and opportunities
CN109525036B (en) Method, device and system for monitoring mains supply state of communication equipment
CN105975524A (en) Data integration method and system used for geology monitoring
CN110213087B (en) Complex system fault positioning method based on dynamic multilayer coupling network
KR20210046165A (en) LOAD TRANSFER SYSTEM and METHOD for POWER TRANSMISSION and POWER SUPPLY
CN101782942A (en) Multi-node protection efficiency evaluation system with multiple protection capabilities
CN108802764A (en) The construction method and structure system of the self-checking system of satellite ground strengthening system
CN105471986A (en) Data center construction scale assessment method and apparatus
CN108646140A (en) A kind of method and apparatus of determining faulty equipment
CN114445162A (en) Method for reversely tracing enterprise invoice system configuration
CN109753383B (en) Score calculation method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15735382

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15735382

Country of ref document: EP

Kind code of ref document: A1