GB2475087A - Misrepresenting resource availability in reports to a central resource manager allocating tasks to managed entities. - Google Patents


Info

Publication number
GB2475087A
GB2475087A (application GB0919428A)
Authority
GB
United Kingdom
Prior art keywords
network
report
tasks
availability
resource
Prior art date
Legal status
Granted
Application number
GB0919428A
Other versions
GB2475087B (en)
GB0919428D0 (en)
Inventor
Sven Van Den Berghe
Current Assignee
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Priority date
Filing date
Publication date
Application filed by Fujitsu Ltd filed Critical Fujitsu Ltd
Priority to GB0919428.3A
Publication of GB0919428D0
Publication of GB2475087A
Application granted
Publication of GB2475087B
Status: Expired - Fee Related

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources to service a request
    • G06F9/5011 Allocation of resources to service a request, the resources being hardware resources other than CPUs, servers and terminals
    • G06F9/5027 Allocation of resources to service a request, the resource being a machine, e.g. CPUs, servers, terminals
    • G06F9/5044 Allocation of resources considering hardware capabilities
    • G06F9/505 Allocation of resources considering the load
    • G06F9/5055 Allocation of resources considering software capabilities, i.e. software resources associated or available to the machine
    • G06F9/5094 Allocation of resources where the allocation takes into account power or heat criteria
    • G06F2209/00 Indexing scheme relating to G06F9/00
    • G06F2209/50 Indexing scheme relating to G06F9/50
    • G06F2209/503 Resource availability
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT]
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)
  • Computer And Data Communications (AREA)

Abstract

In a computer network, entities (28) report resource availability to a central resource manager (32) that is responsible for allocating work or tasks (34) to the entities. In a particular embodiment of the invention, the report deliberately misrepresents the availability of the resource in order to influence the allocation of tasks. The misrepresentation may be achieved by deliberately adapting a value in the report, or by representing an artificial resource as an actual resource. The report may include an energy efficiency value as an artificial resource, or the artificial resource may be a software licence. One advantage offered by the method is to allow managed entities to obtain work so as to run at an optimal capacity with regard to energy efficiency.

Description

Method, Program and Apparatus for Influencing Task Allocation in a Computer Network

The present invention relates to a method, apparatus, computer program, and computer network in which availability of network computing resources, such as server resources, is reported to a manager.
Collections of servers or other computing units that are used to deliver a particular service are usually managed collectively by a resource manager. The role of a resource manager is to allocate tasks amongst a collection of servers so that an appropriate quality of service measure is reached, for example, a group of tasks is completed in the shortest time, or a particular task finishes by a certain time. Resource managers use sophisticated algorithms and heuristics that rely on knowledge of the availability of resources on the server collection. Resources can include processor time, server memory, network bandwidth, or even the availability of a software licence.
Computer servers use significant amounts of energy, whether they are gathered into large data centres or small computer rooms, or operate as desktop servers. For most of the computer server models in operation today, the largest part of the energy consumed is a base load that is independent of the computing load of the server. That is, an unused server will draw almost as much energy as a fully loaded server. One way to reduce energy consumption is to operate the servers at their most efficient load point, where the amount of effective work done per unit of energy consumed is at a maximum.
When servers are placed in dedicated facilities such as server rooms and data centres they become more directly coupled to energy use through the infrastructure required to support the facility. This infrastructure includes cooling equipment, networking and power supply and distribution. Infrastructure typically uses as much energy as the direct server use. Although there is a fixed use of energy by the infrastructure as well, the infrastructure energy is ultimately driven by the energy used by the IT equipment so that reducing IT energy use reduces the infrastructure energy use (for example if there is less energy used, there is less heat dissipated and so less need to cool).
Data centres and server rooms have many demands placed on them, and although the minimisation of energy use is becoming a significant demand, there is also a need to maintain the quality and reliability of the service provided. In addition, data centres and server rooms are a large capital investment. These considerations mean that it is difficult to make changes to the operation of data centres and server rooms that will increase energy efficiency, as the changes will be perceived to cost too much or to pose an unacceptable risk to the delivery of the service, and their implementation requires changes to otherwise stable operational procedures.
An additional obstacle to extending resource managers' objectives is that the protocols between the manager and the managed entities (defining what information is transmitted and in what form) may not be capable of extension to carry new energy efficiency metrics, so the objectives cannot be added because the manager does not know the relevant states.
It may be possible to use resource managers to increase efficiency by operating the IT equipment at the "sweet point" at which effective work done per unit of energy consumed is at a maximum. This requires the ability to move software about, for example, through virtualisation. It also requires the ability to make globally optimal decisions about task placement. However, in order to make these decisions, resource managers will have to be changed significantly to become aware of the performance characteristics of the IT equipment and the algorithms required to manage them most efficiently. As most operating resource managers are specialised and tuned pieces of software, these are significant changes and will be perceived as a significant risk, to the extent that they may not be implemented.
Figure 1 shows the contemplated relationships between managers 10 and their managed entities 11. In the case of data centres (although these principles apply more generally) the managed entities 11 will usually be servers but may also include other types of computing entity, storage devices and networking. A manager 10 will allocate tasks to the managed entities 11 according to some policy. For data centres these tasks may be some form of computation, and the policies can include maximising the amount of work done, ensuring responsiveness etc. To implement the policies the manager 10 needs to have information about the state of the managed entities, for example, how loaded they are, amount of free memory etc. The state is related to some resource that impacts the ability to implement the policies and is intended to give a true representation, even if it is not always accurate.
When a resource manager receives a task for a server, the task has a list of resource requests (e.g. that it wants X megabytes of memory and needs a fast network connection to a disk drive). Resource managers can assign tasks to servers that they know have a certain set of resources available. Resource managers can also manage the state of the resources on a server, e.g. not allocating tasks to a server when some of its resources are overloaded.
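The matching of resource requests against reported availability can be sketched as follows. This is a minimal, hypothetical illustration; the class and function names (`Server`, `Task`, `allocate`) are not from the patent.

```python
# Hypothetical sketch of a central resource manager matching a task's
# resource requests against the resources it believes each server has.

class Server:
    def __init__(self, name, resources):
        self.name = name
        self.resources = dict(resources)   # e.g. {"memory_mb": 4096}

class Task:
    def __init__(self, name, requests):
        self.name = name
        self.requests = dict(requests)     # resources the task asks for

def allocate(task, servers):
    """Assign the task to the first server satisfying every request."""
    for server in servers:
        if all(server.resources.get(r, 0) >= amount
               for r, amount in task.requests.items()):
            for r, amount in task.requests.items():
                server.resources[r] -= amount   # reserve the resources
            return server.name
    return None   # no server can currently satisfy the request

servers = [Server("s1", {"memory_mb": 512}),
           Server("s2", {"memory_mb": 8192})]
print(allocate(Task("t1", {"memory_mb": 1024}), servers))  # s2
```

Note that the manager decides purely on the reported numbers, which is what makes the misrepresentation described below effective.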
According to an embodiment of the present invention, a method is provided to be performed in a computer network including a central resource manager allocating tasks to a plurality of managed entities having computing resources used for performing the tasks, the method comprising producing a report of availability of computing resources in a reported part of the network, the reported part of the network having one or more managed entities, and transmitting the report to the central resource manager, wherein the report deliberately misrepresents the availability of computing resources in the reported part of the network in order to influence the allocation of tasks.
Advantageously, the method can allow the allocation of tasks to resources to be modified according to a desired behaviour without having to change the policy by which the central resource manager allocates tasks. For example, the central resource manager may be configured to allocate tasks to the server with the most spare operating capacity, but by misrepresenting the availability of resources in reported parts of the network the tasks can be allocated so that they are performed in the most energy efficient manner. Changes made at a local level, that is to the managed entities such as servers and other members of the network, can influence the central allocation policy in a way that is transparent to the central resource manager, so no update of the central resource manager's allocation policy is required.
The central resource manager may have a policy for allocating tasks in order to reach an appropriate quality of service measure. That quality of service measure may be, for example, processing tasks within a given time limit, or spreading task load evenly amongst servers or other entities performing tasks. It is often the case that the information required to make decisions about how tasks should be allocated to resources in order to achieve a certain goal is locally available. Therefore, assuming the managed entity or process responsible for producing the report has an adequate knowledge of the central resource allocation policy, local-level changes to reports in the form of misrepresentation of the availability of computing resources in a part of the network can be used to influence the central resource manager to allocate tasks in accordance with an alternative policy. By modifying the central resource manager's perception of the availability of computing resources, changes in task allocation are achieved. Modifying the central resource manager's perception can be achieved using local information from network entities.
"A part of the network" should be understood to be one or more managed entities each of which may or may not have the ability to perform tasks allocated thereto, and could be a combination of different types of managed entities. A part of the network is not limited physically or geographically, so could correspond, for example, by a process running across a plurality of physical resources. Commonly, a reported part of the network is a single managed entity such as a server.
The term "report" should be interpreted broadly enough to include data transmitted for the purpose of communicating information about the state of a resource or part of the network, or data that is interpreted or analysed to provide such information by the receiving entity.
The report may be provided in response to a request from the central resource manager, so that the method includes receiving, from the central resource manager, a request for information about the availability of resources in a part of the network.
Producing a report may include gathering original information and preparing it for transmission, or it may simply be modifying an existing report.
The report may be produced by the reported part of the network itself, or it may be produced by a separate entity.
The report may represent the availability of resources in a part of the network by means of a value representing a property or attribute of one or more resources from a managed entity in that part of the network. The report may contain data or information about a single attribute or property of a particular resource in a single managed entity, or it may contain a value representing the availability of a certain type of resource in each of a plurality of managed entities. Alternatively, it may contain information or data about the availability of more than one resource in a single managed entity, or each of multiple managed entities. The report may be in response to a request received from the central resource manager or some other resource, which may specify certain attributes, properties, resources, or managed entities which should be reported.
Preferably, the network in which the method is performed is a data centre and the managed entities to which tasks are allocated are servers. A data centre can be viewed as a collection of servers used to deliver a particular service, their central resource manager, and the infrastructure required to support the functions of the central resource manager and servers. High energy consumption at data centres means that changes to the way tasks are allocated to servers in a data centre can result in significant gains in energy efficiency.
Examples of computing resources are processor time, memory, storage space, data throughput capacity (potentially including limitations due to network bandwidth between network entities), licences, and access to specialist equipment, e.g. printers.
The availability of these computing resources at a managed entity impacts the managed entity's ability to perform a task. The availability of a computing resource could be represented in a number of ways by a report, depending on the particular protocol and network in which the method is being performed. An indication of current availability can be provided, along with a maximum capacity. Representing the availability of a computing resource may thus include reporting how much of that resource exists at the managed entity or part of the network in question, and how much is being used.
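As one possible illustration of such a report (the format below is an assumption for the sketch, not defined by the patent), each resource can be represented by how much exists and how much is in use:

```python
# Illustrative report format: each resource is represented by a
# (used, capacity) pair, from which availability is derived.

def make_report(entity_id, resources):
    """resources maps a resource name to a (used, capacity) pair."""
    return {
        "entity": entity_id,
        "resources": {
            name: {"used": used, "capacity": cap, "available": cap - used}
            for name, (used, cap) in resources.items()
        },
    }

report = make_report("server-07", {"memory_mb": (1024, 4096),
                                   "cpu_percent": (35, 100)})
print(report["resources"]["memory_mb"]["available"])  # 3072
```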
Optionally, the report deliberately misrepresents the availability of computing resources by representing an artificial resource as a computing resource. A report misrepresents the artificial resource as a computing resource by reporting it as though it were a computing resource.
Protocols governing the reporting and allocation processes may not allow additional factors or constraints to be reported and considered by the central resource manager in allocating tasks. Advantageously, the creation of an artificial resource, which purports to be a resource affecting a managed entity's ability to perform tasks, but which is actually an indication of some other status of a part of the network, provides a way of having a new factor considered.
The artificial resource could be, for example, energy efficiency. Energy efficiency is not a computing resource such as memory or disk space, and hence reporting it as a computing resource misrepresents the reported part of the network. In this case, a value or score representing the energy efficiency of the reported part of the network or some other part of the network is reported as a computing resource of the reporting part of the network. Depending on the central resource manager's policy for dealing with the artificial resource, the reported value can be used to influence the allocation of tasks by attracting tasks to, or repelling tasks from, the reported part of the network.
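A minimal sketch of this idea follows. The sweet-spot scoring model and all names are assumptions for illustration; the patent does not prescribe a particular formula.

```python
# Hypothetical: an energy-efficiency score reported as though it were an
# available computing resource, so an unmodified manager weighs it like
# any other resource.

def efficiency_score(load_fraction, sweet_spot=0.8):
    """Peaks (at 1.0) when the entity runs at its most efficient load."""
    return max(0.0, 1.0 - abs(load_fraction - sweet_spot))

def add_artificial_resource(report, load_fraction):
    # The "efficiency" entry purports to be a computing resource; a
    # policy favouring entities with more of this "resource" then steers
    # tasks toward entities nearest their efficient operating point.
    report = dict(report)
    report["efficiency"] = round(efficiency_score(load_fraction), 2)
    return report

print(add_artificial_resource({"entity": "s1"}, 0.75)["efficiency"])  # 0.95
```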
The central resource manager's policy for dealing with the artificial resource would need to be established according to protocols existing in the network.
Software licences are an example of a computing resource directly affecting a managed entity's ability to perform tasks. An artificial software licence is an example of an artificial resource. The artificial software licence does not actually represent whether or not the managed entity holding it has a software licence for a particular application; it is merely used as a tool for attracting or repelling tasks to or from certain parts of the network. Advantageously, the creation and use of an artificial software licence which purports to be an actual computing resource enables a local action, the creation of the licence, to affect the central allocation of tasks. Furthermore, this can be achieved without manipulating existing reports.
It may be that the network includes a central licence manager as an entity or process for granting and relinquishing licences to and from managed entities in the network. In a network also comprising a central licence manager, the method embodying the present invention may further comprise the central licence manager granting a licence to a part of the network in a defined state, wherein the central resource manager allocates no tasks to any part of the network which has a granted licence, and optionally, the central licence manager revoking the licence when the part of the network to which the licence was granted leaves the defined state.
In order to grant an artificial software licence to the part of the network in the defined state the central licence manager would need to be aware that the part of the network in question was in the defined state. This could be via an established reporting procedure, or it could be that the managed entities in the network have been configured at a local level to request the artificial software licence from the central licence manager when they reach the defined state. The defined state may be a state in which energy efficiency is above a certain threshold.
Similarly, revocation of the licence may be upon request from the part of the network holding the licence, or the central licence manager may become aware that the part of the network holding the licence is no longer in the defined state via some reporting means.
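The grant-and-revoke mechanism can be sketched as below. All names are hypothetical, and the "defined state" is left abstract; the key point is that the resource manager simply skips licence holders.

```python
# Sketch of the licence-based mechanism: a central licence manager grants
# an artificial licence to a part of the network in a defined state, the
# resource manager allocates no tasks to licence holders, and the licence
# is revoked when the part of the network leaves the defined state.

class LicenceManager:
    def __init__(self):
        self.holders = set()

    def grant(self, entity_id):      # entity enters the defined state
        self.holders.add(entity_id)

    def revoke(self, entity_id):     # entity leaves the defined state
        self.holders.discard(entity_id)

def allocatable(entities, licence_manager):
    """The resource manager considers only entities without a licence."""
    return [e for e in entities if e not in licence_manager.holders]

lm = LicenceManager()
lm.grant("s2")
print(allocatable(["s1", "s2"], lm))   # ['s1']
lm.revoke("s2")
print(allocatable(["s1", "s2"], lm))   # ['s1', 's2']
```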
As an alternative to or in addition to misrepresenting the availability of computing resources by reporting artificial resources as computing resources, producing the report may include deliberately misrepresenting availability of computing resources in the reported part of the network by adapting a value representing the availability of a computing resource in the reported part of the network, and including the adapted value in the report.
The managed entities may themselves at a local level be able to establish that in order to achieve a certain quality of service measure, such as energy efficiency, faster performance of tasks, or some other performance-related target, a change in the way tasks are allocated to managed entities is required. Rather than change a central resource manager's policy for allocating tasks to resources, it may be preferable to make changes at a local level, such as managed entity level.
Misrepresenting the availability of resources in part of the network by adapting a value representing the availability of a computing resource in the reported part of the network can influence the allocation of tasks to managed entities by the central resource manager. For example, if the central resource manager's policy is to add tasks to parts of the network having a light load, a value representing the load on the resource can be misrepresented as artificially light until the part of the network in question has the desired load.
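For a manager whose policy is "allocate to the lightest-loaded entity", this adaptation might look like the following sketch (the all-or-nothing values are an illustrative assumption; a real adaptation could be more gradual):

```python
# Hypothetical value adaptation: report an artificially light load until
# the entity reaches its desired load, then an artificially heavy one so
# no further tasks arrive.

def adapt_load(actual_load, desired_load):
    if actual_load < desired_load:
        return 0.0   # appear idle, attracting tasks
    return 1.0       # appear saturated, repelling tasks

print(adapt_load(0.4, 0.8))  # 0.0
print(adapt_load(0.9, 0.8))  # 1.0
```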
Adapting the value may be performed by the part of the network being reported on, or there may be a transformer hosted by another entity or part of the network that is capable of adapting reported values.
In order to adapt values effectively, it is advantageous that the entity or process responsible for the adaptation, be it a transformer or otherwise, is aware of the central resource manager's policy for allocating tasks to resources.
Adapting the value may include reporting that there is less of the computing resource available than there actually is, and this may be by reporting a higher usage than is actually being experienced, or by reporting that less of the resource exists in the part of the network in question than is actually the case. Similarly, adapting a value may be reporting that there is more of the computing resource available than is actually the case, and this may be either by reporting that usage is lower than is actually being experienced, or by reporting that more of the resource exists in the part of the network in question than is actually the case.
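The four adaptations just described can be sketched as simple transforms on a (used, capacity) pair; the scaling factors below are arbitrary assumptions chosen only to illustrate the direction of each adaptation.

```python
# Making a resource appear scarcer or more plentiful than it is, either
# by adapting the reported usage or the reported capacity.

def appear_scarcer(used, capacity):
    by_usage = (min(capacity, used * 2), capacity)   # report higher usage
    by_capacity = (used, max(used, capacity // 2))   # report less exists
    return by_usage, by_capacity

def appear_more_plentiful(used, capacity):
    by_usage = (used // 2, capacity)                 # report lower usage
    by_capacity = (used, capacity * 2)               # report more exists
    return by_usage, by_capacity

print(appear_scarcer(100, 400))          # ((200, 400), (100, 200))
print(appear_more_plentiful(100, 400))   # ((50, 400), (100, 800))
```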
In some cases, a desired change in the allocation of tasks to entities may be achieved by entities making decisions at a local level about the load of tasks they themselves require, and hence how the availability of resources in the reported part of the network should be misrepresented in order to gain the desired load. Alternatively, there may be some performance goals or quality of service measures, such as an increase in energy efficiency, that are best achieved by more than one resource cooperating and establishing desired loads for each of the one or more resources cooperatively.
Preferably, producing a report for a part of the network includes misrepresenting the availability of resources in the reported part of the network based on a report of a state of at least one other part of the network.
An entity in the network may host a transformer process operable to receive reports from a plurality of other entities or parts of the network and adapt the values reported therein before sending the reports to the central resource manager. In this way, a behaviour such as energy efficient behaviour can be achieved in a plurality of entities or parts of the network, for example by misrepresenting values of a state of a certain part of the network in order that it is not allocated tasks by the central resource manager and a sleep mode can be entered. It may be desirable to allocate tasks such that a portion of the plurality of entities are operating at full capacity, and others are in a sleep mode, whereas the central resource manager may have a policy of spreading load evenly among managed entities.
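Such an aggregating transformer might be sketched as follows (a hypothetical illustration; the choice of which entities to keep awake is left to some external policy):

```python
# Hypothetical aggregating transformer: it intercepts true reports from
# several entities and rewrites them so that the manager, even with an
# even-spreading policy, concentrates tasks on a chosen subset while the
# remaining entities are starved of work and can enter a sleep mode.

def transform(reports, keep_awake):
    """reports maps entity -> units of resource available; entities not
    in keep_awake are reported as having nothing available."""
    return {entity: (avail if entity in keep_awake else 0)
            for entity, avail in reports.items()}

true_reports = {"s1": 10, "s2": 7, "s3": 9}
print(transform(true_reports, keep_awake={"s1"}))
# {'s1': 10, 's2': 0, 's3': 0}
```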
Where it is desirable to consider the state of a number of managed entities in determining a desired allocation of tasks to resources, a method according to an embodiment of the present invention may further include gathering information about a state of a subject part of the network having a plurality of managed entities, determining a desired allocation of tasks to the reported part of the network based on the gathered information, and misrepresenting the availability of resources in the reported part of the network in order to encourage the central resource manager to allocate tasks according to the desired allocation.
The subject part of the network and the reported part of the network may be the same, overlapping, or separate parts of the network. Furthermore, the report may be intended to influence the allocation of tasks to a part of the network that is the same, overlapping, or different from the reported part of the network.
Gathering information may include intercepting reports intended for the central resource manager, or it may be that a particular managed entity or other network component is responsible for gathering information about a state of a part of the network for reporting purposes.
The desired allocation may be in pursuance of a different quality of service measure to that which the central resource manager is aiming to achieve. The central resource manager does not need to be aware of the desired allocation of tasks, but the entity or process responsible for producing reports should be, in order that the availability of resources can be misrepresented in a way that will influence the central resource manager to allocate tasks according to the desired allocation.
The state of a subject part of the network could include operating characteristics such as operating load, system temperatures, or time since last entered sleep mode.
Preferably, the subject part of the network includes a component of a building management system. Building management systems are responsible for the management of infrastructure around the computer network, particularly cooling.
Usually these systems are managed independently of task allocation and do not link directly with the central resource manager. Modern building management systems are based on networked computers, which means that a building management system could be linked with resource management. Advantageously, using information from the building management system to obtain a desired allocation of tasks to resources, and misrepresenting the availability of resources in managed entities in order to influence the allocation of tasks, enables the state of operation of the building management system to be considered in allocating tasks in a way that is transparent to the central resource manager.
The invention may be embodied by a computer program which, when executed by a computer, causes the computer to perform a method embodying the present invention.
The computer program could be run on the reported part of the network itself, so that the reported part of the network produces a report about itself. Alternatively, the computer program could be run on a separate entity which, in terms of reports about the availability of resources in a reported part of the network, sits between the reported part of the network and the central resource manager and transforms or adapts reports.
According to an embodiment of the present invention, an apparatus is provided for use in a computer network including a central resource manager for allocating tasks to a plurality of managed entities having computing resources used for performing the tasks, the apparatus comprising a report producer operable to produce a report of availability of computing resources in a reported part of the network, the reported part of the network having one or more managed entities, a transmitter operable to transmit the report to the central resource manager, wherein the report deliberately misrepresents the availability of computing resources in the reported part of the network in order to influence the allocation of tasks. The apparatus may be included in the reported part of the network. Alternatively, the apparatus may be a separate entity which receives reports from the reported part of the network and transforms or adapts them to produce new reports for transmission to the central resource manager.
Preferred features of the present invention will now be described, purely by way of example, with reference to the accompanying drawings, in which:-Figure 1 shows a relationship between a manager and managed entities; Figure 2 is a flow chart of a method embodying the present invention; Figure 3 is a block diagram of an apparatus embodying the present invention; Figure 4 is a block diagram of a computer network embodying the present invention; Figure 5 is a block diagram of an embodiment of the present invention in which a managed entity hosts a transformer; Figure 6 is a block diagram of an embodiment of the present invention in which a transformer is independent of a managed entity; Figure 7 is a flow chart of an embodiment of the present invention in which an artificial software licence is an artificial resource; Figure 8 is a block diagram of an embodiment of the present invention in which a transformer aggregates reports from multiple entities.
Figure 2 is a flow chart illustrating steps in a method embodying the present invention.
At step S1, produce report, a report of the availability of computing resources in a reported part of a computing network is produced. In invention embodiments, the report deliberately misrepresents the availability of computing resources in a part of the network. At step S2, transmit report, the report is transmitted to a central resource manager responsible for allocating tasks to managed entities. There must be a data link, whether it be wired, wireless, or a combination of the two, between the reported part of the network, the entity responsible for producing the report, and the central resource manager.
Figure 3 is a block diagram of an apparatus 20 embodying the present invention. The report producer 22 produces a report 26 which deliberately misrepresents the availability of computing resources in a part of the network. The report 26 is passed to a transmitter 24 for transmission to a central resource manager, responsible for allocating tasks to managed entities.
Figure 4 is a block diagram of a computer network 1 embodying the present invention.
In the computer network 1 illustrated there are only two managed entities 28, but it should be appreciated that computer networks 1 embodying the present invention may contain many more managed entities than two. Each managed entity 28 has computing resources 30 such as processor speed, storage space, data throughput capacity, software licences, memory, and processor time. Each managed entity 28 has a report producer 22 and a transmitter 24. The computer network 1 also has a central resource manager 32.
The report producer 22 is operable to produce a report of the availability of computing resources in a part of the network. The part of the network being reported on, the reported part of the network, may be the managed entity 28 on which the report producer is hosted. The report 26 may be of the availability of the computing resources in the reported part of the network. In embodiments in which artificial resources are reported as a computing resource, the report 26 purports to be about the computing resources 30 in the reported part of the network, but may actually contain values reflecting other properties of different entities altogether. Alternatively, the report 26 may misrepresent the availability of computing resources in the reported part of the network by adapting a value in the report from the true value. The report 26 is passed to the transmitter 24, from where it is sent to the central resource manager 32. On the basis of the contents of the report 26, and in accordance with an allocation policy, the central resource manager 32 allocates a task or tasks 34 to the reported part of the network. Since the report 26 misrepresented the availability of computing resources 30 in the reported part of the network, the allocation of tasks 34 will have been influenced.
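The adaptation of a report value described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the function name, the report layout, and the bias values are assumptions, not taken from the patent.

```python
# Hypothetical sketch of a report producer that adapts the true
# availability of a resource by a bias before transmission, so that the
# report deliberately misrepresents availability to influence allocation.

def produce_report(true_availability, bias):
    """Build a report whose availability value is shifted by `bias`,
    clamped to the range [0, 1]."""
    reported = min(1.0, max(0.0, true_availability + bias))
    return {"resource": "cpu", "availability": reported}

# A positive bias overstates availability to attract tasks;
# a negative bias understates it to repel tasks.
attract = produce_report(0.4, +0.3)
repel = produce_report(0.4, -0.3)
```

A positive bias makes the reported part of the network appear more available than it is, encouraging the central resource manager to allocate tasks to it; a negative bias has the opposite effect.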
Figure 5 is a block diagram of an embodiment in which a managed entity 28 hosts a transformer 36. The transformer 36 effectively combines the roles of the report producer 22 and transmitter 24. The box surrounding the transformer 36 and managed entity 28 indicates that although they are different processes and effectively independent, they are hosted by the same physical piece of apparatus.
In the embodiment depicted, a report, referred to hereinafter as an unmodified report 27 for clarity, is produced by the managed entity 28. The unmodified report 27 is passed to the transformer 36. This passing may be intentional or unintentional, that is, the managed entity may intend the information in the unmodified report to be passed directly to the central resource manager 32, but it is effectively intercepted by the transformer 36. Alternatively, the managed entity may knowingly pass information in the form of an unmodified report 27 to the transformer 36. The unmodified report 27 is a truthful representation of the availability of computing resources 30 in the managed entity 28. Whilst the unmodified report 27 is truthful, it may have some inaccuracies such as those due to quantisation or aliasing effects, inaccuracies in measurement techniques, or inaccuracies resulting from measurements being out of date. None of these are examples of deliberate misrepresentations.
The transformer 36 receives not only the unmodified report 27, but also other information 37 regarding the condition or state of at least one other part of the network, be it a managed entity or some other part of the infrastructure. The transformer, having been effectively programmed to aim to achieve a particular quality of service measure in the network, can then decide how best to modify the unmodified report 27 to misrepresent the availability of computing resources in the reported part of the network.
In the example in Figure 5 the reported part of the network is the managed entity 28.
The other conditions 37 may be, for example, additional information regarding the managed entity 28 not considered in creating the unmodified report 27 because the unmodified report 27 was created according to an existing protocol.
The transformer 36 may aim to modify the central resource manager's perception of the current or future availability of computing resources in the reported part of the network to produce behaviour from the central resource manager 32 (influence task allocation) that satisfies some new set of objectives without the central resource manager 32 being aware of the new policy, metrics, or ways of achieving the policy objectives.
A first process for modifying reports to misrepresent the availability of computing resources is now described.
Using an example in which the new objective which the transformer 36 is programmed to achieve is energy efficient performance of tasks, one way of misrepresenting the availability of computing resources 30 in the reported part of the network to achieve this objective is by creating a new artificial resource that is reported as a computing resource, but is actually merely a value representing energy efficiency. The artificial resource is derived from the energy efficiency of the reported part of the network with values that will influence the allocation of tasks by the central resource manager 32 in the desired way. For example, a central resource manager 32 could be constrained to allocate tasks to the managed entity with the lowest 'amount' of the artificial resource, in this case energy efficiency.
Energy efficiency in a particular managed entity 28 or group of managed entities 28 can be achieved by attracting tasks until a particular managed entity 28 is running at its most efficient, and repelling tasks thereafter, or by repelling tasks until the particular managed entity 28 is unloaded and can sleep.
A managed entity 28 or transformer 36 could know its current operating efficiency from a configured table plotting load against efficiency. However, other measures could be included in the calculation of the reported value of the artificial resource, for example, many servers are fitted with temperature sensors and so the reported value of the artificial resource could take into account an energy efficient temperature range.
The precise procedures necessary to create a new resource type will depend on the type of resource management software installed at the central resource manager, but in an exemplary embodiment will require: 1) Configuration at the central resource manager 32 of the new artificial resource: definition of the type, range and an allocation strategy. The artificial resource is established at the central resource manager 32 as though it were a new computing resource in the network. The central resource manager 32 must be told how to interpret reports of availability of the artificial resource, and how to allocate tasks in response to such reports. For the energy efficiency artificial resource in this embodiment the range is nought to one. The allocation strategy could be simply to maximise the value by allocating tasks to the managed entity having the lowest value.
Advantageously, no model of how efficiently the resource or managed entities themselves are running is required at the central resource manager. The central resource manager simply receives a value between 0 and 1 representing the energy efficiency of the managed entity. This allows the management of a number of different types of resource or managed entities using the same policies and the same code at the central resource manager. If a managed entity in question is loaded beyond the most energy efficient point, it may be that it is given an energy efficiency artificial resource value that will repel further tasks, such as 1. Alternatively, the energy efficient artificial resource may be proportional to operating load of the managed entity, having a value between 0 and 1, but with a pre-defined optimum value between 0 and 1 at which the entity is most energy efficient. The central resource manager 32 could then only allocate tasks to the managed entity if it reported a value below the optimum.
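The allocation strategy described above can be sketched as follows. This is an illustrative assumption of how a central resource manager might act on the artificial efficiency value: the `OPTIMUM` threshold and function names are invented for the example, not specified by the patent.

```python
# Sketch: the central resource manager treats the reported artificial
# "energy efficiency" value in [0, 1] as an ordinary computing resource,
# and only allocates a task to an entity reporting below a pre-defined
# optimum, picking the entity with the lowest reported value.

OPTIMUM = 0.8  # assumed most energy-efficient operating point

def allocate_task(reports):
    """Return the eligible entity with the lowest reported value,
    or None if every entity is at or past its optimum."""
    candidates = [name for name, value in reports.items() if value < OPTIMUM]
    if not candidates:
        return None  # all entities repel further tasks
    return min(candidates, key=lambda name: reports[name])
```

Entities at or above the optimum are skipped entirely, so an entity can repel work simply by reporting a high value, without the manager needing any model of the entity's actual efficiency.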
2) Creation of the artificial resources on the reporting side 22, 36. The central resource manager 32 may make requests for information about availability of computing resources, to which the report producer 22 or managed entity 28 via transformer 36 will respond. The entity producing the report 26, or modifying an existing report, will be coded for reporting the energy efficiency artificial resource.
The reporting entity gathers efficiency related information in order that it can determine the actual energy efficiency of the managed entity or computing resource in question.
For example, the actual operating load of a managed entity could be compared to an internal model of its efficiency against operating load to determine what the appropriate change to load is in order to improve energy efficiency (reduce load, increase load, stay same).
This internal model can include potential efficiency improvements from CPU voltage variations or shedding all load to enable the resource to enter an idle state or sleep mode. This efficiency value is reported to the central resource manager 32 as an actual computing resource of the reported part of the network.
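The internal model mentioned above can be sketched as a configured table of operating load against energy efficiency, consulted to decide the appropriate change in load. The table values and function names are invented for illustration; a real deployment would use measured figures for the hardware in question.

```python
# Sketch of the internal efficiency model: a configured table plotting
# operating load against energy efficiency, used to decide whether the
# entity should attract load, shed load, or stay the same.

EFFICIENCY_BY_LOAD = {0.0: 0.10, 0.25: 0.50, 0.50: 0.90, 0.75: 0.70, 1.0: 0.40}

def appropriate_change(actual_load):
    """Return the load change that moves the entity towards its peak
    efficiency point, as read from the configured table."""
    best_load = max(EFFICIENCY_BY_LOAD, key=EFFICIENCY_BY_LOAD.get)
    if actual_load < best_load:
        return "increase load"
    if actual_load > best_load:
        return "reduce load"
    return "stay same"
```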
A second process for modifying reports to misrepresent the availability of computing resources is now described.
The structure remains the same as that outlined in Figure 5. However, in this process the misrepresentation of the availability of computing resources in the reported part of the network is achieved by modifying values in the unmodified report 27 to produce the report 26. The reported part of the network is the managed entity 28, and the computing resources whose availability is reported are the computing resources 30 belonging to the managed entity 28.
This embodiment uses the assumption that the policy of the central resource manager 32 is to add load to managed entities that have a light CPU load.
The steps involved in this embodiment will be described in more detail below. As an outline, the transformer 36 modifies the unmodified report 27 to artificially reduce the reported CPU load at the managed entity 28 to create a modified report 26. Artificially reducing the reported CPU load at the managed entity 28 will influence the allocation of tasks to resources by the central resource manager 32, since the central resource manager 32 will preferentially allocate tasks 34 to the managed entity 28 due to its low reported CPU load.
The transformer 36 can continue to artificially reduce the reported CPU load in this way until it detects or calculates that the managed entity 28 is operating at its most efficient, when the reported load could be artificially raised to discourage more tasks 34 being allocated to the managed entity 28.
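The attract-then-repel behaviour just described can be sketched with a single threshold. The threshold and the reported values are illustrative assumptions; in practice the transformer would derive them from its efficiency model.

```python
# Sketch: the reported CPU load is artificially lowered while the entity
# is below its most efficient operating point, and artificially raised
# once that point is reached, so the central resource manager first
# attracts and then repels tasks.

MOST_EFFICIENT_LOAD = 0.6  # assumed most efficient operating point

def reported_cpu_load(actual_load):
    if actual_load < MOST_EFFICIENT_LOAD:
        return 0.0  # under-report to attract further tasks
    return 1.0      # over-report to repel further tasks
```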
The other conditions 37 may be parameters that the transformer 36 can use to calculate a measure of the current energy efficiency of the managed entity 28. In this description a transformer 36 is discussed, but equally, the role of the transformer 36 could be performed by a report producer 22 with a transmitter 24 for transmitting the report 26 to the central resource manager 32.
Some of the steps taken in this embodiment are described here in more detail: Step 1: Request received by managed entity 28 from central resource manager 32 for information about the current load, or availability of computing resources 30 (for computing resources this load can be the CPU load, memory used, disk space used etc, or the corresponding measures of availability of computing resources 30).
Step 2: Reported part of the network 28 gathers efficiency related information for the transformer 36. For example, a computing resource 30 (such as a CPU) may gather its actual load or availability and an internal model of its efficiency against load to determine what the appropriate change is to operate at best efficiency (reduce load, increase load, stay same). This internal model can include changes of state such as CPU voltage variations or shedding all load to enable the computing resource to be idle. Alternatively, the determination of the appropriate change can be done externally, using an equivalent model. For example, the transformer 36 may gather information about the availability of computing resources 30 as an unmodified report 27 and use a model of energy efficiency against load for the particular computing resource 30 in question. The transformer 36 can then determine the appropriate change in load to operate at best efficiency.
Additional conditions 37 can be taken into account so that the reporting entity actually decides how to misrepresent the availability of resources in order to increase the energy efficiency of a different entity altogether, such as the building management system. For a building management system (BMS) these additional conditions 37 can be the power being drawn by a unit, the amount of heat transferred by an air conditioner, etc.

Step 3: The transformer 36 has either calculated the efficiency information itself, or the transformer 36 gathers the availability and efficiency information from the computing resource 30. The transformer 36 then computes a response to the central resource manager 32 that aims to achieve the desired result. If it has been determined that the computing resource 30 in question is operating at maximal efficiency, then the transformer 36 could modify the report 27 to produce a modified report 26 that deliberately misrepresents the availability of computing resources 30 by reporting that the computing resource 30 is operating at full load so that the central resource manager makes no change to its allocation. If the transformer 36 decides that the computing resource 30 needs more tasks to operate more efficiently, then it will modify the report to report a light load to the central resource manager 32 (possibly zero) to encourage the central resource manager 32 to place more work on it, though it must also be careful not to attract too much work so that the computing resource 30 becomes overloaded. If the computing resource 30 wants to shed load, or it is determined by the transformer that it should shed load (to become idle, enter a sleep mode, or to operate at a more efficient voltage), then the report can be modified to report a higher than actual load, or lower than actual availability.
Correspondingly, reported values can be modified depending on the additional conditions 37 and the desired load in order to increase the energy efficiency of, for example, components of the building management system.
A simple example of the logic the transformer 36 might follow in modifying reports is set out below. If LA is the actual load on the computing resource 30, LR is the reported load in the unmodified report 27, LM the maximum load that the management software aims for and LE the most efficient load, then one strategy is:

    if (LE < LR) {
        report LM + (a calculated additional number)
    } else if (LE > LR) {
        report 0
    } else {
        report LM
    }

The calculated report values are the figures for the modified report 26. More generally, the LE > LR case, which returns 0, could actually be a function of the difference between LE and LR, such as (LM - (LE - LR)) or (LR/LE)*(LR/LE). The idea of these functions is to cooperate with other computing resources at other managed entities 28 by requesting larger load changes when further away from an optimal load.
LR is approximately equal to LA, but some inaccuracy occurs as a result of the measurement itself, and due to quantisation and aliasing effects.
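The strategy above can be rendered as a runnable sketch. LR is the reported load from the unmodified report, LM the maximum load the management software aims for and LE the most efficient load; `ADJUSTMENT` stands in for the "calculated additional number" and its value is an illustrative assumption.

```python
# Sketch of the transformer's report-modification strategy: over-report
# past the efficient point, under-report below it, hold steady at it.

ADJUSTMENT = 0.1  # illustrative "calculated additional number"

def modified_report(lr, lm, le):
    if le < lr:
        return lm + ADJUSTMENT  # past the efficient point: repel load
    if le > lr:
        return 0.0              # below the efficient point: attract load
    return lm                   # at the efficient point: hold steady
```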
In some cases, a group of managed entities 28 each having an available amount of a particular computing resource 30 can reach a more efficient operating point by attracting load to some of the managed entities 28, and repelling load from others. If a group of managed entities 28 all have an equal available amount of a particular computing resource 30, then all resources choosing to misrepresent their available resources in the same way and to the same extent may not lead to optimal efficiency (the central resource manager 32 only has a fixed amount of tasks to allocate). For the computing resource 30 in each of the group of managed entities 28, the report producers 22 or transformers 36 could therefore randomly or selectively decide where and how to misrepresent the availability of the computing resource 30 in such a way that tasks are attracted or repelled. In this example the misrepresentation process is not completely random, but could be weighted to favour the most efficient final state.
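The weighted random decision described above can be sketched as follows. The weighting value and the use of a seeded generator are illustrative assumptions; the patent specifies only that the choice is not completely random but weighted towards the most efficient final state.

```python
# Sketch: each entity in a group is independently assigned to attract or
# repel tasks, with a weighting favouring the choice expected to give
# the more efficient final state.

import random

def choose_roles(entities, attract_weight=0.7, seed=0):
    """Randomly assign each entity a role, weighted by attract_weight."""
    rng = random.Random(seed)  # seeded for reproducibility in this sketch
    return {entity: ("attract" if rng.random() < attract_weight else "repel")
            for entity in entities}
```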
Software implementing the misrepresentation processes can be installed and implemented as part of the client software for the resource management system.
Alternatively, if the resource management system uses direct calls to function on the resource's middleware, then the resource's middleware can be modified to behave as above, the transformer 36 would then be effectively hosted by the computing resource itself.
In either of the two processes described above, the misrepresenting of the actual availability of computing resources 30 in managed entities 28 can take into account the state of one or more components in a building management system. The state of components of the building management system could be, for example, the other condition 37 in Figure 5. Taking into account the operation of the building management system is particularly preferable in embodiments in which the computer network 1 is a data centre and the managed entities 28 are servers.
A large part of the energy consumed by a data centre is used in the management of the infrastructure around the IT equipment, particularly in cooling. Usually this infrastructure is managed independently of the resource management by dedicated building management systems that do not communicate directly with the computing resource management. Modern building management systems are based on networked PCs that are moving towards open systems (see the activities in the OPC Foundation's Unified Architecture http://opcfoundation.org for an example), which means that a building management system could conceivably be linked with the resource management. The processes described in this document for misrepresenting the availability of computing resources provide a technique to bridge the models so that conditions of a building management system can influence a resource manager's task allocation to affect operating characteristics of the data centre as a whole.
For example, consider an air conditioning system that cools a group of servers. The report producer 22 or transformer 36 for a computing resource 30 on one or each of the servers could communicate with the building management system regarding the air conditioning unit and, if the unit is running non-optimally, adjust a report of availability of a computing resource at a managed entity to get the appropriate response from the central resource manager 32. For example, the central resource manager 32 could increase the number of tasks allocated to be performed in the vicinity of a particular air conditioning system so that more heat is dissipated and the air conditioning system reaches its optimal operational load.
Figure 6 is a block diagram of a communications system in which the transformer 36 is independent of the managed entity 28. The managed entity produces unmodified reports for the transformer 36. The transformer modifies the unmodified report 27, possibly in light of information received as other conditions 37, and transmits the modified report 26 to the central resource manager 32. The modified report 26 misrepresents the availability of computing resources 30 in the reported part of the network, in this case the managed entity 28. On the basis of the report 26 the central resource manager allocates tasks 34 to be performed by the computing resources 30 of the managed entity 28.
The processes for modifying the report so that it misrepresents the availability of computing resources 30 in the reported part of the network 28 are the same processes as described for the embodiment in Figure 5 in which the transformer 36 is part of the same physical entity as the managed entity 28.
Figure 7 is a flow chart illustrating an embodiment in which an artificial software licence is created as an artificial computing resource.
Licences for software are a computing resource that is also managed. Often sites, data centres, networks, or collections of servers are allowed a restricted number of simultaneous invocations of an application by the terms of the licence with the application's developer. This policy may be enforced in a number of ways, but usually it is through a central licence manager consulted during application start up for the availability of a licence to proceed. The licence manager is an application itself and can be configured to manage the licences of many applications. Resource managers such as a central resource manager 32 in invention embodiments are aware that licence availability is a constraint on policy of allocation of tasks 34 to managed entities 28.
According to an embodiment, an energy efficient licence is created as an artificial resource. This embodiment may, for example, require the computer network to have a licence manager configured to allocate a maximum of one instance of an energy efficient software licence per computing resource 30 per server 28. Accordingly, the central resource manager 32 must be configured to manage licences on a per-resource basis, so it does not allocate new work requiring a particular computing resource to a server already holding a licence for that resource. In the example outlined below, the overall efficiency of the server is considered rather than an individual resource (e.g. storage space).
A central licence manager is configured with a policy that allows one energy-efficient licence per managed entity 28, and the central resource manager is made aware of this. Both these changes are configurable on available systems. A server monitors its energy efficiency in step S11. If it is running at its most efficient, or above some predetermined threshold, the server determines whether it already has an energy efficient licence in step S12. If in step S12 it is determined that no, it does not already hold an energy efficient licence, then it requests an energy-efficient licence from the licence manager in step S13. This is granted and, as the policy says that not more than one licence is available per server and the licence has now been granted to the server, the central resource manager allocates work elsewhere. That is, the central resource manager 32 will only allocate tasks to managed entities for which a licence is held by the licence manager. If the licence for a particular managed entity 28 has been requested by, and granted to, the entity, then the central resource manager 32 will allocate tasks elsewhere. When the server load drops below peak efficiency, or below the predetermined threshold, it is determined at step S11 that the server is not running efficiently. The flow proceeds to step S14 and, if the server does already hold an energy efficient licence, the server relinquishes the licence at step S15, thus making itself available to the central resource manager 32 for further task allocation.
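The flow of Figure 7 can be sketched as a small state machine. The `LicenceManager` class and the efficiency threshold are illustrative assumptions standing in for a real licence management system; only the step structure (monitor, check, request, relinquish) follows the figure.

```python
# Sketch of Figure 7: a server monitors its energy efficiency and
# requests or relinquishes an artificial "energy efficient" licence,
# which the central resource manager treats as a signal to allocate
# work elsewhere while the licence is held.

EFFICIENCY_THRESHOLD = 0.8  # assumed "running at its most efficient" level

class LicenceManager:
    """Grants at most one energy-efficient licence per server."""
    def __init__(self):
        self.holders = set()

    def request(self, server):     # grant the licence (step S13)
        self.holders.add(server)

    def relinquish(self, server):  # release the licence (step S15)
        self.holders.discard(server)

def monitor(server, efficiency, manager):
    held = server in manager.holders          # steps S12 / S14
    if efficiency >= EFFICIENCY_THRESHOLD:    # step S11: running efficiently
        if not held:
            manager.request(server)           # step S13: repel further work
    elif held:
        manager.relinquish(server)            # step S15: available again
```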
Figure 8 illustrates a computer network embodying the present invention in which the transformer 36 is operable to aggregate unmodified reports from more than one managed entity. In this way, the transformer 36 can modify the reports to misrepresent the availability of computing resources 30 at the managed entities in order that the allocation of tasks to the individual managed entities is influenced in such a way that the overall behaviour of the managed entities as a group is changed.
In the two report modification processes described above, the managed entities 28 can behave individually, even with respect to each of their computing resources 30, without regard for the overall behaviour of the system. However, the optimal state of a system of managed entities 28 having computing resources 30 may not be achieved by each computing resource 30 within those managed entities 28 operating at its most efficient point, and in this case overall efficiency can be improved by cooperation between the managed entities 28 and their resources 30. One way of achieving cooperation is by having an aggregator node as a transformer 36 that modifies unmodified reports 27 from individual managed entities 28 using the responses from a group of managed entities e.g. if it detects that a group of servers are reporting light load in their unmodified reports 27, it may modify the reports so that in the modified reports only a few report light load (guaranteeing that they get the load they want) and others shed load (allowing idle entities or resources and freeing load for the ones requesting it).
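The aggregation behaviour described above can be sketched as follows. The 0.5 light-load threshold, the `HEAVY` replacement value and the `keep_light` count are illustrative assumptions; a real aggregator would derive them from the group's efficiency models.

```python
# Sketch of the aggregator node: when several entities report light load,
# only a few keep their light reports (so they receive the tasks they
# request), and the rest are over-reported so the manager sheds their load.

HEAVY = 1.0  # assumed over-reported load value

def aggregate(unmodified_reports, keep_light=1):
    """Return modified reports: the keep_light lightest-loaded entities
    keep their reports; the other light entities are reported as heavy."""
    light = sorted((e for e, load in unmodified_reports.items() if load < 0.5),
                   key=lambda e: unmodified_reports[e])
    modified = dict(unmodified_reports)
    for entity in light[keep_light:]:
        modified[entity] = HEAVY  # repel tasks from the remaining entities
    return modified
```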
This strategy is effective when individual resources 30 and entities 28 are operating inefficiently but can increase their efficiency by either adding load or shedding load and is an alternative to the random strategy discussed above.
In the above description, load and operating load is made up of tasks, so that repelling or attracting load can be thought of as repelling or attracting tasks.
A standard measure of energy efficiency can be used, such as effective work done (for example, in terms of processing) per unit of energy consumed. However, any appropriate measure of energy efficiency can be employed.
In any of the above aspects, the various features may be implemented in hardware, or as software modules running on one or more processors. Features of one aspect may be applied to any of the other aspects.
The invention also provides a computer program or a computer program product for carrying out any of the methods described herein, and a computer readable medium having stored thereon a program for carrying out any of the methods described herein.
A computer program embodying the invention may be stored on a computer-readable medium, or it could, for example, be in the form of a signal such as a downloadable data signal provided from an Internet website, or it could be in any other form.

Claims (15)

  1. Claims 1. A method to be performed in a computer network including a central resource manager allocating tasks to a plurality of managed entities having computing resources used for performing the tasks, the method comprising: producing a report of availability of computing resources in a reported part of the network, the reported part of the network having one or more managed entities; and transmitting the report to the central resource manager; wherein the report deliberately misrepresents the availability of computing resources in the reported part of the network in order to influence the allocation of tasks.
  2. 2. The method according to claim 1, wherein the computer network is a data centre and the managed entities to which tasks are allocated are servers.
  3. 3. The method according to claim 1 or 2, wherein the computing resources are any of processor time, memory, storage space, data throughput capacity, licences, and access to other hardware.
  4. The method according to any of the preceding claims, wherein the report deliberately misrepresents the availability of the computing resources by representing an artificial resource as a computing resource.
  5. The method according to claim 4, wherein the report includes a value representing the energy efficiency of a part of the network as an artificial resource.
  6. The method according to claim 4, wherein: the artificial resource is an artificial software licence.
  7. The method according to claim 6, to be performed in a network also comprising a central licence manager, the method further comprising: the central licence manager granting an artificial licence to a part of the network in a defined state; wherein the central resource manager allocates no tasks to any part of the network which has a granted artificial licence; and optionally the central licence manager revoking the artificial licence when the part of the network to which the artificial licence was granted leaves the defined state.
  8. The method according to any of the preceding claims, wherein producing the report includes deliberately misrepresenting availability of computing resources in the reported part of the network by adapting a value representing the availability of a computing resource in the reported part of the network; and including the adapted value in the report.
  9. The method according to any of the preceding claims, wherein producing a report for a part of the network includes misrepresenting the availability of resources in the reported part of the network based on a report of a state of at least one other part of the network.
  10. The method according to any of the preceding claims, wherein producing the report includes: gathering information about a state of a subject part of the network having a plurality of managed entities; determining a desired allocation of tasks to the reported part of the network based on the gathered information; and misrepresenting the availability of resources in the reported part of the network in order to encourage the central resource manager to allocate tasks according to the desired allocation.
  11. The method according to any of the previous claims, wherein the subject part of the network includes a component of a building management system.
  12. A computer program which, when executed by a computer, causes the computer to perform the method as claimed in any of claims 1 to 11.
  13. An apparatus for use in a computer network including a central resource manager for allocating tasks to a plurality of managed entities having computing resources used for performing the tasks, the apparatus comprising: a report producer operable to produce a report of availability of computing resources in a reported part of the network, the reported part of the network having one or more managed entities; a transmitter operable to transmit the report to the central resource manager; wherein the report deliberately misrepresents the availability of computing resources in the reported part of the network in order to influence the allocation of tasks.
  14. A managed entity forming part of a computer network in which tasks are allocated to the managed entity by a central resource manager, the managed entity having an apparatus according to claim 13.
  15. A computer network having: a central resource manager for allocating tasks to a plurality of managed entities having computing resources used for performing the tasks; a plurality of managed entities each having computing resources for performing tasks allocated thereto; a report producer operable to produce a report of availability of computing resources in a reported part of the network, the reported part of the network having one or more managed entities; and a transmitter operable to transmit the report to the central resource manager; wherein the report deliberately misrepresents the availability of computing resources in the reported part of the network in order to influence the allocation of tasks.
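The reporting mechanism of claims 8, 10 and 13 can be illustrated with a minimal sketch: a managed entity gathers its true state, decides whether it wants further tasks (for example because it intends to power down), and adapts the availability value it sends to the central resource manager accordingly. The patent does not prescribe an implementation; all class, field and parameter names below (`ReportProducer`, `Report`, `available_cpus`, `want_new_tasks`) are illustrative assumptions.

```python
# Hypothetical sketch of the report producer of claims 8, 10 and 13.
# The entity's true capacity is known locally; the value placed in the
# report is deliberately adapted to influence the central resource
# manager's task allocation. Names are illustrative, not from the patent.
from dataclasses import dataclass


@dataclass
class Report:
    entity_id: str
    available_cpus: int  # the value the central resource manager acts on


class ReportProducer:
    def __init__(self, entity_id: str, true_cpus: int):
        self.entity_id = entity_id
        self.true_cpus = true_cpus

    def produce_report(self, want_new_tasks: bool) -> Report:
        if want_new_tasks:
            # Report true availability to attract tasks.
            reported = self.true_cpus
        else:
            # Deliberately misrepresent availability (claim 8): report
            # zero free CPUs so the manager allocates tasks elsewhere,
            # e.g. allowing this part of the network to be powered down.
            reported = 0
        return Report(self.entity_id, reported)


producer = ReportProducer("rack-7", true_cpus=16)
print(producer.produce_report(want_new_tasks=False).available_cpus)  # 0
print(producer.produce_report(want_new_tasks=True).available_cpus)   # 16
```

The central resource manager needs no modification: it simply allocates tasks according to the reported figures, which is what makes this approach attractive for retrofitting local policies (such as energy saving) onto an existing scheduler.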
GB0919428.3A 2009-11-05 2009-11-05 Method, program and apparatus for influencing task allocation in a computer network Expired - Fee Related GB2475087B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0919428.3A GB2475087B (en) 2009-11-05 2009-11-05 Method, program and apparatus for influencing task allocation in a computer network


Publications (3)

Publication Number Publication Date
GB0919428D0 GB0919428D0 (en) 2009-12-23
GB2475087A true GB2475087A (en) 2011-05-11
GB2475087B GB2475087B (en) 2015-12-23

Family

ID=41501971

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0919428.3A Expired - Fee Related GB2475087B (en) 2009-11-05 2009-11-05 Method, program and apparatus for influencing task allocation in a computer network

Country Status (1)

Country Link
GB (1) GB2475087B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112766781B (en) * 2021-01-27 2024-08-09 重庆航凌电路板有限公司 Method and system for allocating production tasks of equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050228618A1 (en) * 2004-04-09 2005-10-13 Patel Chandrakant D Workload placement among data centers based on thermal efficiency
WO2006029656A1 (en) * 2004-09-13 2006-03-23 Fujitsu Siemens Computers, Inc. Computer arrangement for providing services for clients through a network
US20090031312A1 (en) * 2007-07-24 2009-01-29 Jeffry Richard Mausolf Method and Apparatus for Scheduling Grid Jobs Using a Dynamic Grid Scheduling Policy



Similar Documents

Publication Publication Date Title
US8191068B2 (en) Resource management system, resource information providing method and program
US7054943B1 (en) Method and apparatus for dynamically adjusting resources assigned to plurality of customers, for meeting service level agreements (slas) with minimal resources, and allowing common pools of resources to be used across plural customers on a demand basis
US7676578B1 (en) Resource entitlement control system controlling resource entitlement based on automatic determination of a target utilization and controller gain
JP5529114B2 (en) System and method for managing energy consumption in a computing environment
US7644161B1 (en) Topology for a hierarchy of control plug-ins used in a control system
US7908605B1 (en) Hierarchal control system for controlling the allocation of computer resources
US7146511B2 (en) Rack equipment application performance modification system and method
JP4168027B2 (en) Equipment rack load regulation system and method
GB2443136A (en) Information processing system
JP4984169B2 (en) Load distribution program, load distribution method, load distribution apparatus, and system including the same
KR20160131093A (en) Coordinated admission control for network-accessible block storage
US20140207425A1 (en) System, Method and Apparatus for Adaptive Virtualization
KR20130083032A (en) Management method of service level agreement for guarantee of quality of service in cloud environment
KR102287566B1 (en) Method for executing an application on a distributed system architecture
CN112882827B (en) Method, electronic device and computer program product for load balancing
US8032636B2 (en) Dynamically provisioning clusters of middleware appliances
Ghezzi et al. Performance‐driven dynamic service selection
JP2015011365A (en) Provisioning apparatus, system, provisioning method, and provisioning program
JP2005092862A (en) Load distribution method and client-server system
Tychalas et al. SaMW: a probabilistic meta-heuristic algorithm for job scheduling in heterogeneous distributed systems powered by microservices
JP2005031736A (en) Server load distribution device and method, and client/server system
GB2475087A (en) Misrepresenting resource availability in reports to a central resource manger allocating tasks to managed entities.
Ethilu et al. An efficient switch migration scheme for load balancing in software defined networking
US11842213B2 (en) Cooling-power-utilization-based workload allocation system
Pakhrudin et al. Cloud service analysis using round-robin algorithm for quality-of-service aware task placement for internet of things services

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20181105