WO2014082094A1 - Transparently routing job submissions between disparate environments - Google Patents
Transparently routing job submissions between disparate environments
- Publication number
- WO2014082094A1 (PCT/US2013/072094, US2013072094W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- workload
- routing
- computing cluster
- batch type
- processor
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5083—Techniques for rebalancing the load in a distributed system
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
- G06F9/4887—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues involving deadlines, e.g. rate based, periodic
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/06—Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
Definitions
- the present invention relates to high performance computing or big data processing systems, and to a method and system for transparently routing job submissions between disparate computing environments.
- Job scheduling environments enable the distribution of heterogeneous compute workloads across large compute environments.
- Compute environments within large enterprises tend to have the following characteristics:
- Exemplary embodiments of the present invention provide a system and method for any developer of high performance compute or "BigData" or "map-reduce" applications to make use of compute resources across an internal enterprise and/or multiple infrastructure-as-a-service (IaaS) cloud environments, seamlessly. This is done by treating each individual cluster of computers, internal or external to the closed
- This system transfers the data and migrates the workload to remote clusters as if it existed and was submitted locally.
- the system moves the data and runs the separable partitions of the job in different computing environments, transferring the results back upon completion.
- the map portions of a map-reduce type submitted workload are equivalent.
- the decision making and execution of this workflow is implemented as a completely transparent process to the developer and application.
- the complexities of the data and job migration are not exposed to the developer or application. The developer need only make their application function in a single region and the invention automatically handles the complexities of migrating it to other regions.
- the incumbent approach places geographically separated compute resources in the same scheduling environment and treats local and remote environments as equivalent.
- Two factors in the incumbent approach combine to make the invention a superior solution to the problem.
- Factor one: operations across questionable WAN links that execute under the assumption of low latency and high bandwidth will consistently fail.
- Factor two: performance characteristics of global shared storage devices are typically so slow that they result in the perception of failure due to lack of rapid progress on any job in the workload.
- Exemplary embodiments of the invention continuously gather detailed performance and use data from the clusters, and use this data to make decisions related to job routing based on parameters such as:
- the matchmaking algorithm used to determine the eventual compute job routing is configurable to account for a variety of dynamic properties.
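- As an illustration of how such a configurable matchmaking step might be expressed (a hypothetical sketch, not the patented implementation; the metric names, weights, and ClusterStats fields are assumptions), a weighted score over the continuously gathered cluster metrics could drive the routing choice:

```python
from dataclasses import dataclass

@dataclass
class ClusterStats:
    """Hypothetical snapshot of continuously gathered cluster metrics."""
    name: str
    free_slots: int            # idle execution slots right now
    queue_depth: int           # jobs already waiting in this cluster
    transfer_mbps: float       # measured bandwidth to this cluster
    cost_per_slot_hour: float  # 0.0 for internal capacity

def score(cluster: ClusterStats, weights: dict) -> float:
    """Configurable matchmaking: a larger score means a better routing target."""
    return (weights["capacity"] * cluster.free_slots
            - weights["backlog"] * cluster.queue_depth
            + weights["bandwidth"] * cluster.transfer_mbps
            - weights["cost"] * cluster.cost_per_slot_hour)

def pick_cluster(clusters, weights):
    return max(clusters, key=lambda c: score(c, weights))

if __name__ == "__main__":
    clusters = [
        ClusterStats("internal", 40, 120, 900.0, 0.0),
        ClusterStats("iaas-east", 500, 5, 80.0, 0.12),
    ]
    weights = {"capacity": 1.0, "backlog": 0.5, "bandwidth": 0.05, "cost": 10.0}
    print(pick_cluster(clusters, weights).name)  # "iaas-east" with these numbers
```

- Tuning the weights, or swapping in a different score function, would change the routing policy without touching the rest of the pipeline, which is the kind of configurability the passage above describes.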
- Exemplary embodiments of the invention perform meta-scheduling for workloads by applying all the available knowledge about the jobs being submitted and the potential clusters that could run the jobs, and routing jobs to the appropriate regions automatically, without application, developer or end user intervention.
- the job meta-scheduling decision happens at submit time, or periodically thereafter, and upon consideration immediately routes the jobs out to schedulers that then have the
- Exemplary embodiments of the invention allow for clusters, and the jobs scheduled into them, to run completely independently of each other, imparting much greater stability as constant, low-latency communication is not required to maintain a functional
- Exemplary embodiments of the invention also allow for these clusters to function entirely outside of the scope of this architecture, providing for a mix of completely local workloads and jobs that flow in and out from other clusters via the Invention's meta-scheduling algorithm. This allows for legacy interoperability and flexibility when it comes to security: in cases where it is not desirable for jobs to be scheduled to run remote to their point of submission by the Invention, the end user can simply submit the jobs as they normally would to a local region.
- The Invention also promotes high use rates among widely distributed pools of computational resources, with more workloads submitted through its meta-scheduling algorithm resulting in greater overall utilization.
- Figure 1 is a block diagram which shows a process of job submission by an end user or a job submission portal such as CycleServer.
- Figure 2 is a block diagram which shows the routing engine of SubmitOnce and the variables that can drive the decision.
- Figure 3 is a block diagram which shows the workflow of a remote job submission including data transfer and scheduler interaction.
- Figure 4 is a block diagram that shows the process of backfilling work onto a partially idle internal cluster when submitting to a remote cluster.
- Figure 5 is a flowchart showing SubmitOnce workload routing.
- Figure 6 is a flowchart showing the SubmitOnce application workload routing architecture.
- Exemplary embodiments provide a system for submitting workload within the cloud that precisely mimics the behavior of scheduler- based job submission. Using the knowledge of the operation of the job scheduler, the system pulls as much metadata as possible about the workload being submitted.
- Another exemplary embodiment provides for a job routing mechanism coupled with a scheduler monitoring solution that can account for a flexible number of environment parameters to make an intelligent decision about job routing.
- exemplary embodiments allow the framework to use automated remote access to perform seamless data transfer, remote command execution, and job monitoring once the job routing decision is made.
- Exemplary embodiments provide an architecture by which a set of jobs can run on multiple heterogeneous environments, using different schedulers or map-reduce or "BigData" frameworks in different environments, and transparently deposit the results in a consolidated area when complete.
- Exemplary embodiments also include a system for submitting workload within a cloud computing environment, wherein the system precisely mimics the behavior of a scheduler-based job submission by using the knowledge of the operation of the job scheduler, wherein the system pulls at least a portion of the available metadata corresponding to the submitted workload.
- Exemplary embodiments of this invention provide for a job routing mechanism coupled with a scheduler monitoring solution that can account for a flexible number of environment parameters to make a real time decision about job routing, including the use of periodic evaluation of the placement of some or all submissions, in multiple cluster
- Yet another exemplary embodiment includes the
- a further exemplary embodiment includes the architecture by which a set of jobs can run on multiple heterogeneous environments and transparently deposit the results in a consolidated area upon job completion.
- An exemplary embodiment includes a method for directing a workload between distributed computing environments. The method includes continuously obtaining performance and use data from each of a plurality of computer clusters, a first subset of the plurality of computer clusters being in a first region and a second subset of the plurality of computer clusters being in a second region, each region having known performance characteristics, zone of performance and zone of reliability.
- the method further includes receiving a job for routing to a distributed computing environment.
- the method further includes routing the job to a given computer cluster in response to the obtained
- a further exemplary embodiment includes a method for directing a workload between distributed computing environments.
- the method includes identifying a finish by deadline and a batch/non-batch type associated with an electronically submitted workload.
- the method further includes processing the submitted workload by at least one of (i) routing the submitted workload to a local computer cluster in response to the local computer cluster having sufficient capacity to complete the submitted workload by the finish by deadline; (ii) routing a first portion of a batch type submitted workload, or equivalently the portions of map parts of a map-reduce type submitted workload, to an available capacity of the local computer cluster, and routing a second portion of the batch type
- 'map-reduce' workloads are a batch type workload as are a map-reduce workload's constituent parts: the map portions and individual reduce jobs.
- so-called 'embarrassingly' or 'pleasantly' parallel workloads, i.e., any workload composed of many independent calculations (even if each individual calculation is itself parallel), are batch type workloads.
- Another exemplary embodiment includes a method for directing a workload between distributed computing environments.
- the method includes receiving a workload submission at an application workload router.
- the method further includes routing the workload submission by at least one of the steps of (i) routing a first portion of the workload submission, or equivalently the portions of map parts of a map-reduce type submitted workload, to a local computer cluster, the first portion of the workload submission, or equivalently the second portions of map parts of a map-reduce type submitted workload, being within available completion parameters of the local computer cluster and routing a second portion of the workload submission to a remote (non-local) computer cluster and (ii) routing the workload submission to the remote computer cluster in the absence of the local computer cluster.
- This exemplary method can further include automatically modifying workflow steps to include outgoing and then incoming data transfer for data affiliated with a workload submission routed to one or more remote (non-local) computer clusters.
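- A minimal sketch of that workflow rewriting, assuming hypothetical step names, is to wrap the original steps with an outgoing staging step and an incoming retrieval step whenever the routing decision is remote:

```python
def with_remote_transfers(steps, remote_target=None):
    """Illustrative only: add data-staging steps around a workflow when the
    routing decision is remote. The step names are hypothetical placeholders."""
    if remote_target is None:                 # stayed local: nothing to add
        return list(steps)
    return ([f"stage input data to {remote_target}"]
            + list(steps)
            + ["retrieve results to submit site"])

# with_remote_transfers(["run task array"], "iaas-east")
# -> ['stage input data to iaas-east', 'run task array',
#     'retrieve results to submit site']
```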
- the job submission command-lines/API/webpage/webservice gathers environment information at block 106, derived variables pulled from a dry run of the routing/submission at block 108, and user input metadata at block 104.
- the job submission executable will always execute the submission locally at block 110. This way, job submission always occurs within a predefined time interval.
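- One possible reading of that bounded-time guarantee at block 110 is a timeout-based fallback: gather the metadata concurrently and default to a plain local submission if the gathering or dry run takes too long. The helper callables, the timeout value, and this interpretation are assumptions, not the patented logic:

```python
import concurrent.futures

def gather_submission_metadata(gather_env, dry_run_route, user_metadata,
                               timeout_s=5.0):
    """Collect environment info (block 106), dry-run derived variables
    (block 108) and user input (block 104) concurrently, falling back to a
    plain local submission if gathering exceeds the time budget."""
    meta = {"user": user_metadata}
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=2)
    env_future = pool.submit(gather_env)
    dry_future = pool.submit(dry_run_route)
    try:
        meta["environment"] = env_future.result(timeout=timeout_s)
        meta["derived"] = dry_future.result(timeout=timeout_s)
        meta["route"] = "defer to routing engine"
    except concurrent.futures.TimeoutError:
        meta["route"] = "local"    # failsafe: submit locally, within the time budget
    finally:
        pool.shutdown(wait=False)  # do not block the submission on stragglers
    return meta
```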
- Figure 2 shows another exemplary embodiment of the present invention. Shown in Figure 2 is a block diagram depicting the routing engine of SubmitOnce and the variables that can drive the decision procedure.
- GUI dashboards within a server architecture can be used to configure, manage, and monitor the job routing and submission behaviors received at block 202. It should also include default submission
- Block 204 can incorporate metadata that can be defined for the scheduling environments such as, but not limited to, available shared storage space, advertised applications, current capacity, oversubscription thresholds and dynamic execute node capabilities. This metadata can be input during
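- A hypothetical per-environment metadata record of the kind block 204 describes might look as follows (every field name and value here is an illustrative assumption):

```python
# Hypothetical metadata record for one scheduling environment (block 204);
# the field names and values are illustrative assumptions.
environment_metadata = {
    "cluster-east": {
        "shared_storage_free_gb": 2048,
        "advertised_applications": ["blast", "gromacs", "custom-mapreduce"],
        "current_capacity_slots": 512,
        "oversubscription_threshold": 1.25,   # allow up to 25% oversubscription
        "dynamic_execute_nodes": True,        # can spin nodes up on demand
    },
}
```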
- Figure 3 presents another exemplary embodiment of the invention herein.
- Figure 3 illustrates a block diagram depicting the workflow of a remote job submission including data transfer and scheduler interaction.
- the processes and components in this embodiment in Figure 3 include a hub-and-spoke design for a central server to communicate with one or more cluster units located in block 302.
- the key decision during the submission is whether or not the routing is local or remote, because this dictates the requirement to move data to either block 304 or block 308.
- cluster units can represent both internal clusters of machines and external clusters of machines, with statically allocated or dynamically allocated lists of computational resources.
- Figure 3 requires a ticket-based data transfer mechanism that can provide both internal-initiated and external-initiated data transfers on either a scheduled or on-demand basis as used in blocks (310, 316) and (312, 314).
- This process also needs secure, reliable communication between the central server and the remote nodes for command execution for the steps located at blocks (306, 318). There should also be proper error handling of any potential failure before, during, or after a committed job submission. If any errors are encountered, the system should submit locally as a failsafe as in block 306.
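- The try-remote-then-fall-back-to-local error handling described above could be sketched as follows; the callables for ticket-based transfer, remote execution, and local submission (and the job dictionary keys) are stand-ins for whatever mechanisms an implementation actually uses:

```python
def submit_with_failsafe(job, route, transfer, remote_exec, local_submit):
    """Try the remote path (ticket-based data transfer, then secure remote
    command execution); on any failure before, during, or after the attempt,
    fall back to submitting locally. All callables are hypothetical stand-ins."""
    if route == "local":
        return local_submit(job)
    try:
        ticket = transfer(job["input_data"], route)        # ticket-based staging
        return remote_exec(route, job["command"], ticket)  # secure remote execution
    except Exception as err:
        print(f"remote submission to {route} failed ({err}); submitting locally")
        return local_submit(job)                           # failsafe: local submission
```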
- FIG. 4 presents yet another exemplary embodiment of the present invention.
- Figure 4 depicts a block diagram of the process of backfilling work onto a partially idle internal cluster when submitting to a remote cluster.
- the processes and components in this exemplary embodiment begin when a job submission is committed to a particular cluster unit; at that point there is an opportunity for further load balancing. Although the bulk of the workload is designated for the remote cluster unit, a subset of the workload may be carved off to run on local resources that are immediately available, decreasing the overall runtime as in block 402. This branch of behavior is only taken if the following are true: (1) the submission is not a tightly coupled parallel job, (2) the submission is a job array, and (3) the ability to split task arrays is enabled within the system.
- the system counts the number of available execution slots, counts the running jobs, and calculates the available slots at block 404.
- the job array is split such that the local cluster is filled first at block 406 and the remainder of jobs is submitted to selected remote cluster(s) at block 408. These two or more submissions created for block 406 and block 408 are processed, and the workflow proceeds as described above.
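- A back-of-the-envelope sketch of that split (blocks 404-408), under the assumption that the task array is freely divisible, is shown below; the parameter names are illustrative:

```python
def split_task_array(num_tasks, local_slots_total, local_jobs_running,
                     is_job_array=True, tightly_coupled=False,
                     splitting_enabled=True):
    """Fill idle local slots first (block 406) and send the remainder to the
    selected remote cluster (block 408). Returns (local_tasks, remote_tasks)."""
    if tightly_coupled or not is_job_array or not splitting_enabled:
        return 0, num_tasks                    # conditions for splitting not met
    idle_slots = max(local_slots_total - local_jobs_running, 0)   # block 404
    local_tasks = min(idle_slots, num_tasks)
    return local_tasks, num_tasks - local_tasks

# split_task_array(1000, local_slots_total=256, local_jobs_running=200)
# -> (56, 944): 56 tasks backfill the idle local slots, 944 go remote.
```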
- Figure 5 presents another exemplary embodiment of the present invention.
- Figure 5 presents a SubmitOnce Workflow Routing flowchart.
- the process begins with the user/application submitting a job at block 502.
- the process continues with determining whether the job has a "finish by" deadline at block 504. If there is no "finish by" deadline, then the process determines whether there is enough cluster space at block 506. If yes, then the process routes to a local cluster at block 510. If no, then it is determined whether this is a batch job at block 512. If yes, then the process fills local slots and then routes externally at block 514. If no, then the process simply routes externally at block 516. However, if there is a "finish by" deadline, then the process determines whether there is enough time to go local at block 508. If yes, then the job is routed to a local cluster at block 510. If no, then it is determined whether this is a batch job at block 512 and the process continues from there as before.
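- Read as code, the Figure 5 decision tree might look like the following sketch; the boolean inputs stand in for the capacity, deadline, and batch-type checks the system would actually perform:

```python
def route_workload(has_finish_by_deadline, enough_local_space,
                   enough_time_to_go_local, is_batch_job):
    """The Figure 5 decisions (blocks 504-516) expressed as plain conditionals."""
    if has_finish_by_deadline:
        go_local = enough_time_to_go_local     # block 508
    else:
        go_local = enough_local_space          # block 506
    if go_local:
        return "route to local cluster"                   # block 510
    if is_batch_job:                                      # block 512
        return "fill local slots, then route externally"  # block 514
    return "route externally"                             # block 516
```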
- Figure 6 presents an exemplary embodiment of the present invention.
- Figure 6 illustrates a SubmitOnce application workload routing architecture.
- the routing architecture begins at block 602 with a user or application submitting a job.
- the job is received by an application workload router. If there are no local environment clusters, then the job is routed to an internal/external cloud at block 608. If it is possible to route locally, then the job is sent to block 606. However, if needed, the job can expand from the local clusters to the cloud at block 608.
- An exemplary method for practicing the teachings of this invention includes a method for directing workload. The method
- the method further includes identifying a finish by deadline and a batch/non-batch type associated with the workload received.
- the exemplary method further includes wherein the routing comprises routing to the local computing cluster in response to the continuously obtained real time performance and use data of the local computing cluster having sufficient capacity to complete the workload by the finish by deadline.
- the method also includes wherein the routing comprises routing a first portion of a batch type submitted workload to the local computing cluster, or equivalently the map portions of a map-reduce type submitted workload, and routing a second portion of the batch type workload, or equivalently the second portions of the maps in a map-reduce type submitted workload, to at least one external computing cluster.
- the exemplary method can also include wherein the routing further comprises a non-batch type workload with a finish by deadline longer than a completion capacity of the local computing cluster and routing the non-batch type workload to the external computing cluster.
- the method can further comprise modifying, by the processor, where the first and the second portion of the batch type submitted workload is routed.
- This exemplary method may also be a result of execution of a computer program stored in a non-transitory computer-readable medium, such as non-transitory computer-readable memory, and a specific manner in which components of an electronic device are configured to cause that electronic device to operate.
- the exemplary method may also be performed by an apparatus including one memory with a computer program and one processor, where the memory with the computer program is configured with the processor to cause the apparatus to perform the exemplary method.
- various exemplary embodiments of this invention can be performed by various electronic devices which include a processor and memory such as a computer.
- a solution of the present disclosure includes providing the end-users with an interface that allows for typical job submissions while automating job flow to local and external clusters. This increases the capabilities of the end-user without burdening them with complicated configuration and processes.
- the automation within the application workload router hides excess complexity from the end-user while augmenting capabilities that would typically be constrained to a single independent cluster.
- the end-user needs a way to fully describe his or her workload for proper routing. Most of this description is achieved using the scheduling layer. Other than reliable execution, the most important parameter to an end-user is the elapsed time for an entire workload.
- the disclosure provides the solution of a job routing environment that allows the end-user to provide two additional important pieces of information. One is the average runtime of an individual task. Another is the overall desired runtime of the workload. This information is considered along with parameters already known, such as the number of tasks, dynamic VM node spin-up time, data transfer time, and whether or not this is a purely batch workload. The end result is that jobs can be split across multiple clusters, maximizing internal cluster usage while still fulfilling the request.
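- As a worked illustration of that arithmetic (the formula and parameter names are assumptions, not the disclosed algorithm), the split could be estimated from the user-supplied average task runtime and desired overall runtime together with the known task count, VM spin-up time, and data-transfer time:

```python
import math

def plan_split(num_tasks, avg_task_minutes, desired_total_minutes,
               local_slots, remote_spinup_minutes, transfer_minutes):
    """Estimate how many tasks the local cluster can finish inside the desired
    window, and how many remote slots the remainder would need once VM spin-up
    and data-transfer time are subtracted from that window."""
    rounds_locally = desired_total_minutes // avg_task_minutes
    local_tasks = min(num_tasks, int(local_slots * rounds_locally))
    remote_tasks = num_tasks - local_tasks
    remote_window = desired_total_minutes - remote_spinup_minutes - transfer_minutes
    if remote_tasks and remote_window > 0:
        remote_slots = math.ceil(remote_tasks * avg_task_minutes / remote_window)
    else:
        remote_slots = 0
    return {"local_tasks": local_tasks,
            "remote_tasks": remote_tasks,
            "remote_slots_needed": remote_slots}

# plan_split(10000, avg_task_minutes=10, desired_total_minutes=120,
#            local_slots=200, remote_spinup_minutes=15, transfer_minutes=10)
# -> {'local_tasks': 2400, 'remote_tasks': 7600, 'remote_slots_needed': 800}
```

- Under these assumed inputs, the local cluster is kept fully busy for the whole window and the remote side is sized only for what cannot finish locally, which matches the stated goal of maximizing internal cluster usage while still fulfilling the request.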
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Business, Economics & Management (AREA)
- General Engineering & Computer Science (AREA)
- Entrepreneurship & Innovation (AREA)
- Human Resources & Organizations (AREA)
- Economics (AREA)
- Strategic Management (AREA)
- Marketing (AREA)
- Game Theory and Decision Science (AREA)
- Development Economics (AREA)
- Educational Administration (AREA)
- Operations Research (AREA)
- Quality & Reliability (AREA)
- Tourism & Hospitality (AREA)
- General Business, Economics & Management (AREA)
- Debugging And Monitoring (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Priority Applications (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US14/441,860 US20150286508A1 (en) | 2012-11-26 | 2013-11-26 | Transparently routing job submissions between disparate environments |
| HK16103677.6A HK1215747A1 (zh) | 2012-11-26 | 2013-11-26 | 在不同的环境之间透明地路由作业提交 |
| JP2015544196A JP6326062B2 (ja) | 2012-11-26 | 2013-11-26 | 異なる環境どうし間でのジョブ実行依頼のトランスペアレントなルーティング |
| EP13857421.5A EP2923320A4 (en) | 2012-11-26 | 2013-11-26 | TRANSPARENT DELIVERY OF WORK SUBMISSIONS BETWEEN DISPARATE ENVIRONMENTS |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201261729930P | 2012-11-26 | 2012-11-26 | |
| US61/729,930 | 2012-11-26 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2014082094A1 true WO2014082094A1 (en) | 2014-05-30 |
Family
ID=50776607
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2013/072094 Ceased WO2014082094A1 (en) | 2012-11-26 | 2013-11-26 | Transparently routing job submissions between disparate environments |
Country Status (5)
| Country | Link |
|---|---|
| US (1) | US20150286508A1 (en) |
| EP (1) | EP2923320A4 (en) |
| JP (1) | JP6326062B2 (en) |
| HK (1) | HK1215747A1 (en) |
| WO (1) | WO2014082094A1 (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2019164667A1 (en) * | 2018-02-20 | 2019-08-29 | Microsoft Technology Licensing, Llc | Dynamic processor power management |
Families Citing this family (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11429609B2 (en) * | 2015-04-15 | 2022-08-30 | Microsoft Technology Licensing, Llc | Geo-scale analytics with bandwidth and regulatory constraints |
| US10802753B2 (en) * | 2018-02-15 | 2020-10-13 | Seagate Technology Llc | Distributed compute array in a storage system |
| CN109471707A (zh) * | 2018-10-12 | 2019-03-15 | 传化智联股份有限公司 | 调度任务的部署方法及装置 |
| US20230376344A1 (en) * | 2020-09-11 | 2023-11-23 | Intel Corporation | An edge-to-datacenter approach to workload migration |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6353844B1 (en) * | 1996-12-23 | 2002-03-05 | Silicon Graphics, Inc. | Guaranteeing completion times for batch jobs without static partitioning |
| US20080104609A1 (en) * | 2006-10-26 | 2008-05-01 | D Amora Bruce D | System and method for load balancing distributed simulations in virtual environments |
| US20100058350A1 (en) * | 2008-09-03 | 2010-03-04 | International Business Machines Corporation | Framework for distribution of computer workloads based on real-time energy costs |
| US20100235539A1 (en) * | 2009-03-13 | 2010-09-16 | Novell, Inc. | System and method for reduced cloud ip address utilization |
| US20110125949A1 (en) * | 2009-11-22 | 2011-05-26 | Jayaram Mudigonda | Routing packet from first virtual machine to second virtual machine of a computing device |
Family Cites Families (13)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2002259353A (ja) * | 2001-03-01 | 2002-09-13 | Nippon Telegr & Teleph Corp <Ntt> | 広域クラスタ通信の設定方法、クラスタノードマネージャ装置、クラスタ装置および広域クラスタネットワーク |
| JP2002342098A (ja) * | 2001-05-16 | 2002-11-29 | Mitsubishi Electric Corp | 管理装置、データ処理システム、管理方法及び管理方法をコンピュータに実行させるためのプログラム |
| US7331048B2 (en) * | 2003-04-04 | 2008-02-12 | International Business Machines Corporation | Backfill scheduling of applications based on data of the applications |
| US7853953B2 (en) * | 2005-05-27 | 2010-12-14 | International Business Machines Corporation | Methods and apparatus for selective workload off-loading across multiple data centers |
| US20080052712A1 (en) * | 2006-08-23 | 2008-02-28 | International Business Machines Corporation | Method and system for selecting optimal clusters for batch job submissions |
| US7838765B2 (en) * | 2007-04-25 | 2010-11-23 | Linden Photonics, Inc. | Electrical conducting wire having liquid crystal polymer insulation |
| US8239538B2 (en) * | 2008-11-21 | 2012-08-07 | Samsung Electronics Co., Ltd. | Execution allocation cost assessment for computing systems and environments including elastic computing systems and environments |
| US7970830B2 (en) * | 2009-04-01 | 2011-06-28 | Honeywell International Inc. | Cloud computing for an industrial automation and manufacturing system |
| US8560465B2 (en) * | 2009-07-02 | 2013-10-15 | Samsung Electronics Co., Ltd | Execution allocation cost assessment for computing systems and environments including elastic computing systems and environments |
| US8914805B2 (en) * | 2010-08-31 | 2014-12-16 | International Business Machines Corporation | Rescheduling workload in a hybrid computing environment |
| US8739171B2 (en) * | 2010-08-31 | 2014-05-27 | International Business Machines Corporation | High-throughput-computing in a hybrid computing environment |
| EP2439637A1 (en) * | 2010-10-07 | 2012-04-11 | Deutsche Telekom AG | Method and system of providing access to a virtual machine distributed in a hybrid cloud network |
| US9213709B2 (en) * | 2012-08-08 | 2015-12-15 | Amazon Technologies, Inc. | Archival data identification |
-
2013
- 2013-11-26 HK HK16103677.6A patent/HK1215747A1/zh unknown
- 2013-11-26 EP EP13857421.5A patent/EP2923320A4/en not_active Withdrawn
- 2013-11-26 US US14/441,860 patent/US20150286508A1/en not_active Abandoned
- 2013-11-26 JP JP2015544196A patent/JP6326062B2/ja active Active
- 2013-11-26 WO PCT/US2013/072094 patent/WO2014082094A1/en not_active Ceased
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6353844B1 (en) * | 1996-12-23 | 2002-03-05 | Silicon Graphics, Inc. | Guaranteeing completion times for batch jobs without static partitioning |
| US20080104609A1 (en) * | 2006-10-26 | 2008-05-01 | D Amora Bruce D | System and method for load balancing distributed simulations in virtual environments |
| US20100058350A1 (en) * | 2008-09-03 | 2010-03-04 | International Business Machines Corporation | Framework for distribution of computer workloads based on real-time energy costs |
| US20100235539A1 (en) * | 2009-03-13 | 2010-09-16 | Novell, Inc. | System and method for reduced cloud ip address utilization |
| US20110125949A1 (en) * | 2009-11-22 | 2011-05-26 | Jayaram Mudigonda | Routing packet from first virtual machine to second virtual machine of a computing device |
Non-Patent Citations (1)
| Title |
|---|
| See also references of EP2923320A4 * |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2019164667A1 (en) * | 2018-02-20 | 2019-08-29 | Microsoft Technology Licensing, Llc | Dynamic processor power management |
| US10853147B2 (en) | 2018-02-20 | 2020-12-01 | Microsoft Technology Licensing, Llc | Dynamic processor power management |
Also Published As
| Publication number | Publication date |
|---|---|
| HK1215747A1 (zh) | 2016-09-09 |
| JP2016506557A (ja) | 2016-03-03 |
| EP2923320A1 (en) | 2015-09-30 |
| US20150286508A1 (en) | 2015-10-08 |
| JP6326062B2 (ja) | 2018-05-16 |
| EP2923320A4 (en) | 2016-07-20 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| Stavrinides et al. | A hybrid approach to scheduling real-time IoT workflows in fog and cloud environments | |
| US10387179B1 (en) | Environment aware scheduling | |
| US10262390B1 (en) | Managing access to a resource pool of graphics processing units under fine grain control | |
| Ramezani et al. | Task-based system load balancing in cloud computing using particle swarm optimization | |
| Van den Bossche et al. | Online cost-efficient scheduling of deadline-constrained workloads on hybrid clouds | |
| US8862933B2 (en) | Apparatus, systems and methods for deployment and management of distributed computing systems and applications | |
| CN106933669B (zh) | 用于数据处理的装置和方法 | |
| Elzeki et al. | Overview of scheduling tasks in distributed computing systems | |
| US10360074B2 (en) | Allocating a global resource in a distributed grid environment | |
| CN108337109A (zh) | 一种资源分配方法及装置和资源分配系统 | |
| US20220229695A1 (en) | System and method for scheduling in a computing system | |
| Petrosyan et al. | Serverless high-performance computing over cloud | |
| Acharya et al. | Docker container orchestration management: A review | |
| CN105607950A (zh) | 一种虚拟机资源配置方法和装置 | |
| US20150286508A1 (en) | Transparently routing job submissions between disparate environments | |
| Selvi et al. | Resource allocation issues and challenges in cloud computing | |
| Salehi et al. | Contention management in federated virtualized distributed systems: Implementation and evaluation | |
| US20150242242A1 (en) | Routing job submissions between disparate compute environments | |
| Mandal et al. | Adapting scientific workflows on networked clouds using proactive introspection | |
| Ramezani et al. | Task based system load balancing approach in cloud environments | |
| US12489821B2 (en) | Distributed cloud system, and data processing method and storage medium of distributed cloud system | |
| Pawar et al. | A review on virtual machine scheduling in cloud computing | |
| Tamilarasi et al. | Task allocation and re-allocation for big data applications in cloud computing environments | |
| Ferdaus | Multi-objective virtual machine management in cloud data centers | |
| Mampage | Autonomous Resource Management for Serverless Computing |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13857421 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 2015544196 Country of ref document: JP Kind code of ref document: A |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 14441860 Country of ref document: US |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| WWE | Wipo information: entry into national phase |
Ref document number: 2013857421 Country of ref document: EP |