WO2011076486A1 - Method and system for dynamic workload allocation in a computing center that optimizes the overall energy consumption


Info

Publication number
WO2011076486A1
WO2011076486A1 (PCT/EP2010/067598)
Authority
WO
WIPO (PCT)
Prior art keywords
computer
computers
instantaneous
group
task
Prior art date
Application number
PCT/EP2010/067598
Other languages
English (en)
Inventor
Arcangelo Di Balsamo
Alex Donatelli
Gerd Breiter
Fabio Benedetti
Hans-Dieter Wehle
Original Assignee
International Business Machines Corporation
Compagnie Ibm France
Priority date
Filing date
Publication date
Application filed by International Business Machines Corporation, Compagnie Ibm France filed Critical International Business Machines Corporation
Publication of WO2011076486A1 publication Critical patent/WO2011076486A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16Constructional details or arrangements
    • G06F1/20Cooling means
    • G06F1/206Cooling means comprising thermal management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26Power supply means, e.g. regulation thereof
    • G06F1/32Means for saving power
    • G06F1/3203Power management, i.e. event-based initiation of a power-saving mode
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/26Power supply means, e.g. regulation thereof
    • G06F1/32Means for saving power
    • G06F1/3203Power management, i.e. event-based initiation of a power-saving mode
    • G06F1/3234Power saving characterised by the action undertaken
    • G06F1/329Power saving characterised by the action undertaken by task scheduling
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5094Allocation of resources, e.g. of the central processing unit [CPU] where the allocation takes into account power or heat criteria
    • HELECTRICITY
    • H05ELECTRIC TECHNIQUES NOT OTHERWISE PROVIDED FOR
    • H05KPRINTED CIRCUITS; CASINGS OR CONSTRUCTIONAL DETAILS OF ELECTRIC APPARATUS; MANUFACTURE OF ASSEMBLAGES OF ELECTRICAL COMPONENTS
    • H05K7/00Constructional details common to different types of electric apparatus
    • H05K7/20Modifications to facilitate cooling, ventilating, or heating
    • H05K7/20709Modifications to facilitate cooling, ventilating, or heating for server racks or cabinets; for data centers, e.g. 19-inch computer racks
    • H05K7/20836Thermal management, e.g. server temperature control
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • The present invention generally relates to green computing and, more particularly, aims at providing dynamic workload allocation in a computing center that optimizes the overall energy consumption.
  • Data-center activity moves more and more from processing workloads of scheduled batch jobs to dynamic 'on demand' job execution following the business requirements.
  • This dynamic workload is less predictable than the scheduled workload; there is thus a problem in sizing the back-end infrastructure of the data-center to guarantee predefined performance targets.
  • The performance targets are defined, for instance, in a Service Level Agreement (SLA) with a customer for whom the job execution service is provided. If the back-end is oversized, it may lead to a waste of computational power and thus a waste of energy. If the back-end is too small, the SLA may be violated, which implies a loss of business opportunities.
  • SLA Service Level Agreement
  • Energy consumption for job execution in a data center may depend on factors such as the duration the machines are used, the type of machines used and the air-cooling demand. The more the machines are loaded, the more the energy consumption increases on a given company infrastructure.
  • Paragraph 2.3, "Power", of the article by Albert Greenberg, James Hamilton, David A. Maltz and Parveen Patel from Microsoft, published in January 2009 under the title "The Cost of a Cloud: Research Problems in Data Center Networks" in Volume 39, Number 1 of the ACM SIGCOMM Computer Communication Review, is available at the following Web address:
  • This prior-art article contains an analysis of the costs in a typical data-center and provides some hints about how to reduce them using a cloud architecture.
  • The article classifies the costs: amortized costs are spread into 45% for servers (CPU, memory, storage systems), 25% for infrastructure (power distribution and cooling), 15% for power draw (electrical utility costs) and 15% for network (links, transit equipment). To optimize the use of an existing company data-center infrastructure, whether physical or virtual, the costs for power draw must be reduced, as they represent a significant part of the total, even if not the biggest one.
  • Paragraph 2.3 of the article recommends that power costs be tracked using the Power Usage Efficiency (PUE) metric defined by the Green Grid Association at the following Web address:
  • PUE Power Usage Efficiency
  • The object is reached, according to claim 1, with a method of assigning a processing task to a computer in a group of computers, said method comprising the steps of:
  • The step of assigning said task applies to whichever of said computer subsets has the highest efficiency index and, within said subset, to whichever computer has the lowest temperature.
  • The object is also reached, according to claim 3, with the method of claim 1 or 2, further comprising, if all the instantaneous temperatures pass the given temperature threshold, requesting that the task be processed by a processing service external to the location of the group of computers.
  • The object is also reached, according to claim 4, with the method of any one of claims 1 to 3, wherein, if the highest efficiency index calculated is lower than a given efficiency threshold, the task is requested to be processed by a processing service external to the location of the group of computers.
  • The object is also reached, according to claim 5, with the method of any one of claims 1 to 4, wherein the efficiency index GI_i is computed for each computer i as
  • GI_i = CP_i / (((CP_i / CP_Tot) * (FP_Tot - ITP_Tot)) + PS_i), where:
  • CP_i is the processing throughput of computer i;
  • CP_Tot is the total processing throughput of the group of computers;
  • FP_Tot is the total instantaneous energy consumption of the location of the group of computers;
  • ITP_Tot is the instantaneous energy consumption of the group of computers;
  • PS_i is the instantaneous energy consumption of computer i.
  • SI_i = GI_i * (1 / Ec), where GI_i is the efficiency index of computer i and Ec is the total energy cost in the location where the system is located.
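As a minimal sketch (not part of the patent text; the function names and sample figures are hypothetical), the efficiency index GI_i and the saving index SI_i defined above can be computed as:

```python
# Illustrative sketch of the GI_i and SI_i formulas above.
# All names and sample values are hypothetical.

def green_index(cp_i, cp_tot, fp_tot, itp_tot, ps_i):
    """GI_i = CP_i / (((CP_i / CP_Tot) * (FP_Tot - ITP_Tot)) + PS_i)

    cp_i    -- processing throughput of computer i (e.g. Teraflops)
    cp_tot  -- total processing throughput of the group of computers
    fp_tot  -- instantaneous energy consumption of the whole location (kW)
    itp_tot -- instantaneous energy consumption of the group of computers (kW)
    ps_i    -- instantaneous energy consumption of computer i (kW)
    """
    # The computer's share of the non-IT facility power (cooling, etc.)
    facility_share = (cp_i / cp_tot) * (fp_tot - itp_tot)
    return cp_i / (facility_share + ps_i)

def saving_index(gi_i, energy_cost):
    """SI_i = GI_i * (1 / Ec), with Ec in $/kWh."""
    return gi_i * (1.0 / energy_cost)

gi = green_index(cp_i=2.0, cp_tot=10.0, fp_tot=120.0, itp_tot=80.0, ps_i=4.0)
# facility_share = (2/10) * 40 = 8 kW, so GI = 2 / (8 + 4)
si = saving_index(gi, energy_cost=0.25)
```

The division of (FP_Tot - ITP_Tot) by each computer's throughput share mirrors the proportional attribution of facility overhead described later for the Fig. 4 formula.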
  • The object is also reached, according to claim 9, with a system for assigning a processing task to a computer in a group of computers, with communication means for assigning tasks, said system comprising:
  • an Allocation Manager adapted to receive from a scheduler a pre-requisite execution environment of the task to be assigned
  • the Allocation Manager being adapted to select the computers in the group having the task pre-requisite execution environment and for which the temperature does not pass a given temperature threshold;
  • the Allocation Manager being adapted to calculate an efficiency index representing the efficiency of each computer taking into account the computer instantaneous energy consumption and processing throughput determined for that computer;
  • the Allocation Manager being adapted to use communication means to assign said task to whichever of said selected computers is determined to presently have the highest instantaneous efficiency index.
  • The object is also reached, according to claim 10, with the system of claim 9, comprising means for requesting through communication means that the task be processed by a processing service external to the location of the group of computers if the temperature of all computers passes a given temperature threshold or if the highest calculated efficiency index is lower than a given efficiency threshold.
  • The solution is a dynamic workload allocation system which optimizes the allocation of the workload within the company's infrastructure.
  • The optimization minimizes the overall energy demands by allocating workload on machines with lower energy consumption first, and then by keeping the temperature uniform within the data-center, because avoiding hot spots decreases the air-cooling demands.
  • If the energy consumption or the temperature of the processing servers becomes too high (which means that the computation power limit is being approached), the further workload is automatically routed to an external computing provider using a network connection outside of the company's boundary.
  • The billing of such computation services is usually "pay per use", so if the local data-center is able to keep up with the workload, there will be no additional costs.
  • The solution provides a mechanism to use mostly the most efficient computers and to use the less efficient ones only for handling peak conditions. The solution may also consider powering off or hibernating the computers which have the worst GI, powering them on only when the most efficient computers are about to be overloaded.
  • PUE is calculated as (Total Facility Power)/ (IT Equipment Power) .
  • Total Facility Power is calculated for the whole data-center, not for a single computer.
  • The present solution takes into consideration the computational capacity of the computer because a data-center may have a good PUE (meaning that most of the facility power is used to power the computers), yet its computers may not offer a high computational power. Conversely, another data-center may have a worse PUE, but some of its computers may provide a higher computational power. Using the "green index", the latter data-center could turn out to be better than the former.
  • Providing the thresholds on both GI and maximum computer temperature as parameters allows them to vary over time. For instance, users can decide to use thresholds dynamically adapted to the power they are able to generate with their own solar power generation plant in a given time frame.
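The PUE-versus-GI argument above can be illustrated with a small worked example; the figures are hypothetical and chosen only to show that a data-center with the worse PUE can still host the computer with the better Green Index:

```python
# Hypothetical figures: data-center A has the better PUE, but a computer in
# data-center B has the better Green Index because it delivers more
# computational power for the same power draw.

def pue(total_facility_power, it_equipment_power):
    return total_facility_power / it_equipment_power

def green_index(cp_i, cp_tot, fp_tot, itp_tot, ps_i):
    return cp_i / (((cp_i / cp_tot) * (fp_tot - itp_tot)) + ps_i)

# Data-center A: good PUE, but its computer yields only 1 Teraflop at 5 kW.
pue_a = pue(100.0, 80.0)
gi_a = green_index(1.0, 8.0, 100.0, 80.0, 5.0)

# Data-center B: worse PUE, but its computer yields 4 Teraflops at 5 kW.
pue_b = pue(120.0, 80.0)
gi_b = green_index(4.0, 8.0, 120.0, 80.0, 5.0)

assert pue_a < pue_b and gi_b > gi_a  # B is "greener" despite the worse PUE
```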
  • Fig. 1 shows the system environment of the preferred embodiment of the invention;
  • Fig. 2 is the general flowchart of the invention for scheduling workflows according to the preferred embodiment;
  • Fig. 3 is the detailed flowchart of the invention for selecting the target computer on the basis of temperature and power savings according to the preferred embodiment;
  • Fig. 4 shows the formula of the Green Index computed by the Allocation Manager according to the preferred embodiment.
  • Fig. 1 shows the system environment of the preferred embodiment of the invention.
  • The Service Requestors (130) submit their jobs.
  • The same solution of the invention may apply to workloads of other types of jobs, such as interactive tasks or any job whose execution can be scheduled.
  • A remote access mechanism, such as Telnet or Remote Desktop, is needed to allow the Service Requestor to interact with the computer on which the job runs.
  • A front-end server (115) interfaces with the Service Requestors, but it is up to the scheduler, installed on a central server responsible for system management functions such as distributing software on the distributed systems or collecting monitoring information, to schedule jobs on the pool of computers (125) for their execution.
  • System management is performed by a client-server application: the server application runs on the central server and a client application (the agent, 120) runs on each computer of the pool. For the various system management functions, the server and the agents communicate with each other.
  • The scheduler uses the communication service of the system management server to control job execution on the distributed computers.
  • SCM Service Center Manager
  • The Dynamic Workload Execution Service Allocation Manager (110)
  • This additional component takes into account data coming from new independent computing equipment on which a component, the Heat/Power Manager (140), monitors heat and power, calculates variables upon request of the Allocation Manager and sends these results back to the Allocation Manager.
  • The Heat/Power Manager runs on independent equipment which requires computing resources and specific hardware, including cabling and probes.
  • The Allocation Manager communicates with the Heat/Power Manager to obtain the heat/power data collected by it, and selects a computer among all the computers according to an algorithm oriented towards saving power and reducing the heat imbalance caused by overheating computers in the data-center. If there is no computer in the pool that the Allocation Manager can select, the Allocation Manager requests a new machine from an external computing service provider (150). After the request is processed and the new machine is provisioned with the required software, the Allocation Manager submits the job on the new machine.
  • an external computing service provider 150
  • The job submitters may be either human Service Requestors (130) or other computing systems not represented in Fig. 1.
  • A Workload Environment Administrator (105), who with the prior-art solutions is the administrator in charge of providing job workload dispatching policies and information to the Scheduler, provides in the preferred embodiment of the invention environment-related policies and information to the Scheduler, which stores them in a Scheduling Data Repository shared with the Allocation Manager (110).
  • The Heat/Power Manager stores historical data, as well as information from the Administrator (105), in the Heat/Power Data Repository (145).
  • The same Workload Environment Administrator directly and separately controls the Scheduler/Allocation Manager and the equipment for the Heat/Power Management services.
  • The Scheduler/Allocation Manager and the equipment for the Heat/Power Management services can be controlled through the SCM.
  • The Scheduler, Allocation Manager and Heat/Power Manager are preferably implemented as computer programs in the preferred embodiment rather than in hardware, as hardware micro-coded logic, for instance.
  • The configuration parameters (thresholds) and inputs (job definitions describing the pre-requisites) are provided to these programs as known in the art.
  • Fig. 2 is the general flowchart of the invention for scheduling workflows according to the preferred embodiment.
  • The Scheduler receives the request to submit a given job.
  • The Scheduler delegates the selection of a computer to the new independent component, the Allocation Manager.
  • The process starts when the Scheduler invokes the Allocation Manager (200) to select a computer for executing a job having a certain job definition.
  • The job definition comprises a job identifier and the pre-requisites for its execution.
  • The pre-requisites include the operating system, its patch level and the other software which need to be installed beforehand for the job execution.
  • The scheduler provides the job definition as input to the Allocation Manager.
  • The Allocation Manager first performs a filtering (210) of the existing computers (either physical or virtual) by selecting a subset of the local computers in the data-center which correspond to the requirements of the job definition and which thus have the suitable pre-requisites installed for the job execution.
  • The computer installation can be checked by communicating with the agents installed on each computer.
  • The Scheduler of the prior art applies a policy of selecting, among the computers suitable for the job execution, the "fastest" computer or the "most efficient" computer, based on the contract made with the user. For instance, a customer submitting jobs to this data-center may pay more to have the execution speed maximized.
  • This policy is first applied by the Allocation Manager, which identifies the fastest computer among the computers suitable for job execution, the green optimization being performed by the Allocation Manager after this selection. If the fastest computer selected is overheating or drawing too much power because it is overloaded, the job is not executed on this computer.
  • The Heat/Power Manager equipment collects (205) the heat and power information, globally for the data-center and for each computer of the data-center. This information is sent (215) to the Allocation Manager on a periodical basis and/or on request.
  • The Allocation Manager always uses the latest heat and power information received from the Heat/Power Manager.
  • The Allocation Manager verifies, for each computer suitable for execution of the job, that the temperature does not exceed a threshold defined by the Administrator for that machine (or for the machine type to which it belongs).
  • The temperature threshold is a configuration parameter for the Allocation Manager component, which is a program in the preferred embodiment.
  • The Allocation Manager only selects (220) the computers for which the temperature is under the threshold. The job is then allocated to the machine which has an acceptable temperature and is the most green-efficient.
  • The job is sent for execution on the selected computer (240) by the Allocation Manager.
  • The system manages the provisioning of the resources required on the external systems and also tracks the billing of the delegated jobs.
  • Fig. 3 is the detailed flowchart of the invention for selecting (step 225) the target computer on the basis of temperature and power savings according to the preferred embodiment.
  • The Allocation Manager reads the latest temperature and power information collected by the Heat/Power Manager. In the preferred embodiment, the last information transmitted by the Heat/Power Manager is used. In other embodiments, the information is requested on the fly from the Heat/Power Manager by the Allocation Manager. It is noted that the temperature measured on the computer increases with the power and thus with the computer workload.
  • The temperature monitoring is used to spread the workload uniformly inside the center in order to avoid hot spots: by having all computers at almost the same temperature, the temperature is balanced inside the data center and the cooling system is not overused. Moreover, if the temperature threshold of a computer is exceeded, the workload is not sent to that computer (220).
  • The Allocation Manager computes the Green Index (GI) for each of the already selected computers.
  • The computer which is to be selected must be green enough to execute the job. This means that the computer must not be overloaded and must not consume too much power in the data-center.
  • The GI measures the behavior of a computer in terms of power consumption, taking into account the power consumed by the computer and by the data-center as a whole, as well as the computing capacities of the computer and of the data-center in total.
  • Using the power information collected by the Heat/Power Manager, the Allocation Manager computes the GI of each computer; if the highest GI is lower than a certain threshold (answer Yes to test 310), no computer of the data-center can be selected for job execution (350) and the process returns to step 230 in the general flowchart to request a computer from an outside cloud computing service.
  • The computers are classified (320) by groups of GI ranges, for instance group A for GI from 1 to 3 and group B for GI from 3 to 7.
  • The group of computers with the highest GI is selected (330), and within that group the computer with the lowest temperature is selected (340) as the target computer.
  • The process then returns to step 230 of the general flowchart.
  • The job is eventually executed on the target computer selected in step 340.
  • The heat threshold and the GI ranges are preferably defined by an administrator (105); they are given as configuration parameters to the Allocation Manager program in the preferred embodiment.
  • Another, simpler embodiment would replace steps 320, 330 and 340 for the selection of the target computer on the basis of its index value by simply selecting the computer having the highest Green Index.
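The selection steps above (filtering 210/220, threshold test 310, grouping 320 and picks 330/340) can be sketched as follows; the `Computer` record, the thresholds and the GI ranges are hypothetical stand-ins for the data the Allocation Manager actually receives from the Heat/Power Manager:

```python
# Illustrative sketch of the Allocation Manager's target-computer selection.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Computer:
    name: str
    prerequisites: set    # installed OS/software for the job definition
    temperature: float    # last temperature reading (step 205/215 data)
    green_index: float    # GI computed as in Fig. 4

def select_target(computers: List[Computer],
                  job_prereqs: set,
                  temp_threshold: float,
                  gi_threshold: float,
                  gi_ranges: List[Tuple[float, float]]) -> Optional[Computer]:
    # Steps 210/220: keep computers with the pre-requisites installed and an
    # acceptable temperature.
    candidates = [c for c in computers
                  if job_prereqs <= c.prerequisites
                  and c.temperature < temp_threshold]
    # Steps 310/350: if even the best GI is below the threshold, the caller
    # requests an external computing service (step 230).
    if not candidates or max(c.green_index for c in candidates) < gi_threshold:
        return None
    # Step 320: classify candidates by GI range, e.g. group A = [1, 3),
    # group B = [3, 7); higher rank means a greener group.
    def gi_group(c: Computer) -> int:
        for rank, (lo, hi) in enumerate(gi_ranges):
            if lo <= c.green_index < hi:
                return rank
        return -1
    # Steps 330/340: highest-GI group first, then lowest temperature in it.
    best_group = max(gi_group(c) for c in candidates)
    in_group = [c for c in candidates if gi_group(c) == best_group]
    return min(in_group, key=lambda c: c.temperature)
```

Returning `None` models the branch where the job is delegated outside the data-center; picking the lowest temperature inside the greenest group reproduces the hot-spot avoidance described above.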
  • Fig. 4 shows the formula of the Green Index computed by the Allocation Manager according to the preferred embodiment.
  • The Green Index of a computer defines "how green" this computer is. The bigger the index is for a computer, the cheaper a job execution is on this computer. If the Green Index computed for a computer is under a threshold value, the computer is not selected for the next job execution.
  • The threshold value of the Green Index is given as a configuration parameter to the Allocation Manager component.
  • The Green Index (GI) of a computer is computed taking into account its power consumption and its computational performance; the GI reflects how efficient the computer is in the data-center.
  • PUE = (Total Facility Power) / (IT Equipment Power)
  • The Green Index, focusing on the electrical utility costs (power draw), is a function of the power and computational-capacity variables.
  • The computer Green Index (GI) formula (400) uses as variables the power of each computer, the total power of all the computers and the power of all the facilities of the data-center.
  • The GI also takes into account the computational capacity of the computer and the computational capacity of all the computers.
  • The computer power is, for instance, measured in kilowatt-hours (kWh) and the computational power is, for instance, measured in Teraflops in the preferred embodiment.
  • The computational power of a computer does not vary if the computer configuration is not changed: FLOPS (Floating-point Operations Per Second) is the number of floating-point operations performed by the computer in one second. This unit takes into account the hardware and operating-system capacities of a computer.
  • The computer power is periodically measured; it varies with the workload of the computer. The more jobs the computer is executing, the more the computer power increases.
  • The GI (400) is a ratio which increases with the computational power CP_i of the computer. Also, if the power PS_i of the computer decreases, the index gets bigger.
  • The Green Index takes into account the impact of the computational capacity of the computer on the power used for the data-center facilities other than the computers (FP_Tot - ITP_Tot), for instance the power for the cooling system. The impact is proportional to the ratio of the computational capacity of the computer to the total capacity of the computers (CP_i / CP_Tot).
  • The system periodically calculates the GI of each of the computers belonging to the resource pool.
  • The GI may change over time: for instance, during the night, FP_Tot is lower than during the day because the temperature is lower, which requires less energy for the cooling system.
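With the hypothetical figures below (reusing the GI formula of Fig. 4), a drop in FP_Tot at night raises the GI of the same computer, as the observation above describes:

```python
# Hypothetical day/night figures: at night the facility power FP_Tot drops
# (less cooling needed), so the same computer's Green Index rises.

def green_index(cp_i, cp_tot, fp_tot, itp_tot, ps_i):
    return cp_i / (((cp_i / cp_tot) * (fp_tot - itp_tot)) + ps_i)

gi_day = green_index(2.0, 10.0, 130.0, 80.0, 4.0)    # hotter: more cooling power
gi_night = green_index(2.0, 10.0, 110.0, 80.0, 4.0)  # cooler: less cooling power
assert gi_night > gi_day
```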
  • SI_i = GI_i * (1 / Ec), where:
  • SI_i is the Saving Index of system i (Teraflops/$);
  • GI_i is the Green Index of system i (Teraflops/kWh);
  • Ec is the energy cost in the location where the system is located ($/kWh).
  • The SI_i index also varies over time because the energy cost usually changes over time.
  • The method and system of the preferred embodiment, and of any other possible embodiment, manage workloads of batch jobs or any other type of processing task, such as interactive tasks, whose execution can be scheduled.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Thermal Sciences (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Power Sources (AREA)

Abstract

The present invention relates to a method, a computer program and a system for assigning a processing task to a computer in a group of computers. The method consists of first receiving the instantaneous temperature and instantaneous energy consumption data collected in the data center, then selecting the computers which have the prerequisite execution environment and for which the temperature does not exceed a given threshold. An instantaneous efficiency index representing the efficiency of each computer is calculated: it takes into account the instantaneous energy consumption and the processing throughput determined for each computer. One embodiment consists of selecting the computer with the highest efficiency index. If no computer has an instantaneous temperature below the threshold, or if no computer has an efficiency index above a given reasonable value, the task is sent for execution to a processing service external to the site of the group of computers.
PCT/EP2010/067598 2009-12-23 2010-11-16 Procédé et système pour allocation dynamique d'une charge de travail dans un centre de calcul qui optimise la consommation d'énergie globale WO2011076486A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP09180558 2009-12-23
EP09180558.0 2009-12-23

Publications (1)

Publication Number Publication Date
WO2011076486A1 true WO2011076486A1 (fr) 2011-06-30

Family

ID=43640135

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2010/067598 WO2011076486A1 (fr) 2009-12-23 2010-11-16 Procédé et système pour allocation dynamique d'une charge de travail dans un centre de calcul qui optimise la consommation d'énergie globale

Country Status (2)

Country Link
TW (1) TW201137776A (fr)
WO (1) WO2011076486A1 (fr)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2620837A1 (fr) * 2012-01-24 2013-07-31 Hitachi Ltd. Procédé de gestion de fonctionnement de système de traitement d'informations
CN104317654A (zh) * 2014-10-09 2015-01-28 南京大学镇江高新技术研究院 基于动态温度预测模型的数据中心任务调度方法
US8954984B2 (en) 2012-04-19 2015-02-10 International Business Machines Corporation Environmentally aware load-balancing
TWI476694B (zh) * 2012-06-08 2015-03-11 Chunghwa Telecom Co Ltd Virtual Operation Resource Water Level Early Warning and Energy Saving Control System and Method
US10037501B2 (en) 2013-12-18 2018-07-31 International Business Machines Corporation Energy management costs for a data center
CN109087158A (zh) * 2018-06-12 2018-12-25 佛山欧神诺陶瓷有限公司 一种自动汇总并分配多渠道商机的方法及其系统
US10198295B2 (en) 2014-05-21 2019-02-05 University Of Leeds Mechanism for controlled server overallocation in a datacenter
EP3683677A1 (fr) * 2019-01-15 2020-07-22 Siemens Aktiengesellschaft Procédé et dispositif d'attribution de ressources de calcul parmi une pluralité de dispositifs disposés dans un réseau
CN115329179A (zh) * 2022-10-14 2022-11-11 卡奥斯工业智能研究院(青岛)有限公司 数据采集资源量控制方法、装置、设备及存储介质

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI470567B (zh) * 2012-02-03 2015-01-21 Univ Nat Chiao Tung 考慮耗時與耗電的卸載運算量的決策方法與運算系統

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050228618A1 (en) * 2004-04-09 2005-10-13 Patel Chandrakant D Workload placement among data centers based on thermal efficiency
US20060047808A1 (en) * 2004-08-31 2006-03-02 Sharma Ratnesh K Workload placement based on thermal considerations
US20090228726A1 (en) * 2008-03-07 2009-09-10 Malik Naim R Environmentally Cognizant Power Management

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050228618A1 (en) * 2004-04-09 2005-10-13 Patel Chandrakant D Workload placement among data centers based on thermal efficiency
US20060047808A1 (en) * 2004-08-31 2006-03-02 Sharma Ratnesh K Workload placement based on thermal considerations
US20090228726A1 (en) * 2008-03-07 2009-09-10 Malik Naim R Environmentally Cognizant Power Management

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
ALBERT GREENBERG; JAMES HAMILTON; DAVID A. MALTZ; PARVEEN PATEL: "The Cost of a Cloud: Research Problems in Data Center Networks", ACM STGCOMM COMPUTER COMMUNICATION REVIEW, vol. 39, no. 1, January 2009 (2009-01-01)
JIAN-JIA CHEN ET AL: "Task partitioning and platform synthesis for energy efficiency", 15TH IEEE INTERNATIONAL CONFERENCE ON EMBEDDED AND REAL-TIME COMPUTING SYSTEMS AND APPLICATIONS (RTCSA 2009), 2009, IEEE PISCATAWAY, NJ, USA, pages 393 - 402, XP002628365, ISBN: 978-0-7695-3787-0, DOI: 10.1109/RTCSA.2009.48 *
MUKHERJEE T ET AL: "Spatio-temporal thermal-aware job scheduling to minimize energy consumption in virtualized heterogeneous data centers", COMPUTER NETWORKS, vol. 53, no. 17, 3 December 2009 (2009-12-03), ELSEVIER SCIENCE PUBLISHERS B.V., AMSTERDAM, NL, pages 2888 - 2904, XP026765084, ISSN: 1389-1286, [retrieved on 20090630], DOI: 10.1016/J.COMNET.2009.06.008 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2620837A1 (fr) * 2012-01-24 2013-07-31 Hitachi Ltd. Operation management method for an information processing system
US8954984B2 (en) 2012-04-19 2015-02-10 International Business Machines Corporation Environmentally aware load-balancing
TWI476694B (zh) * 2012-06-08 2015-03-11 Chunghwa Telecom Co Ltd Virtual Operation Resource Water Level Early Warning and Energy Saving Control System and Method
US10037501B2 (en) 2013-12-18 2018-07-31 International Business Machines Corporation Energy management costs for a data center
US10198295B2 (en) 2014-05-21 2019-02-05 University Of Leeds Mechanism for controlled server overallocation in a datacenter
CN104317654A (zh) * 2014-10-09 2015-01-28 南京大学镇江高新技术研究院 Data center task scheduling method based on a dynamic temperature prediction model
CN109087158A (zh) * 2018-06-12 2018-12-25 佛山欧神诺陶瓷有限公司 Method and system for automatically aggregating and distributing multi-channel business opportunities
EP3683677A1 (fr) * 2019-01-15 2020-07-22 Siemens Aktiengesellschaft Method and device for allocating computing resources among a plurality of devices arranged in a network
CN115329179A (zh) * 2022-10-14 2022-11-11 卡奥斯工业智能研究院(青岛)有限公司 Data acquisition resource amount control method, apparatus, device, and storage medium
CN115329179B (zh) * 2022-10-14 2023-04-28 卡奥斯工业智能研究院(青岛)有限公司 Data acquisition resource amount control method, apparatus, device, and storage medium

Also Published As

Publication number Publication date
TW201137776A (en) 2011-11-01

Similar Documents

Publication Publication Date Title
WO2011076486A1 (fr) Method and system for dynamic allocation of a workload in a computing center that optimizes overall energy consumption
JP4954089B2 (ja) Method, system, and computer program for facilitating comprehensive grid environment management through monitoring and dispatching of grid activity
US8037329B2 (en) Systems and methods for determining power consumption profiles for resource users and using the profiles for resource allocation
US8346909B2 (en) Method for supporting transaction and parallel application workloads across multiple domains based on service level agreements
Femal et al. Boosting data center performance through non-uniform power allocation
CA2801473C (fr) Performance interference model for managing consolidated workloads in quality-of-service-aware clouds
US8782655B2 (en) Controlling computing resource consumption
US8621476B2 (en) Method and apparatus for resource management in grid computing systems
JP4965578B2 (ja) Computer-implemented method for modifying an allocation policy in a host grid to support a local grid, and data processing system and computer program therefor
Wang et al. Power optimization with performance assurance for multi-tier applications in virtualized data centers
KR20110007205A (ko) 컴퓨트 환경에서 에너지 소비를 관리하기 위한 시스템 및 방법
JP2009514117A5 (fr)
Braiki et al. Fuzzy-logic-based multi-objective best-fit-decreasing virtual machine reallocation
Goudarzi et al. Hierarchical SLA-driven resource management for peak power-aware and energy-efficient operation of a cloud datacenter
Chen et al. Improving resource utilization via virtual machine placement in data center networks
Chaabouni et al. Energy management strategy in cloud computing: a perspective study
Venkataraman Threshold based multi-objective memetic optimized round robin scheduling for resource efficient load balancing in cloud
Werner et al. Simulator improvements to validate the green cloud computing approach
Selvi et al. Resource allocation issues and challenges in cloud computing
Zhang et al. An energy and SLA-aware resource management strategy in cloud data centers
Gohil et al. A comparative analysis of virtual machine placement techniques in the cloud environment
Björkqvist et al. Cost-driven service provisioning in hybrid clouds
Li et al. CSL-driven and energy-efficient resource scheduling in cloud data center
US9515905B1 (en) Management of multiple scale out workloads
Rezai et al. Energy aware resource management of cloud data centers

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 10779311

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: PCT application non-entry in European phase

Ref document number: 10779311

Country of ref document: EP

Kind code of ref document: A1