US20110173245A1 - Distribution of intermediate data in a multistage computer application - Google Patents
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
Definitions
- An example embodiment of the invention is a method for distributing data of a multistage computer application to a plurality of computers.
- the method includes determining a data usage demand of a generated intermediate data.
- the data usage demand is proportional to the number of consuming tasks in the multistage computer application configured to consume the generated intermediate data and, for each consuming task, is discounted by a distance between a current stage of the multistage computer application and a future stage of the multistage computer application executing the consuming task.
- the method further includes a calculating step to calculate a computer usage demand for each computer in the plurality of computers.
- the computer usage demand is a sum of all data usage demand of each intermediate data stored in local memory at the computer.
- a storing operation stores the generated intermediate data at local memory of at least one target computer of the plurality of computers such that a variance of the computer usage demand across the plurality of computers is minimized.
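The storing rule above can be sketched as a greedy choice: placing each new intermediate data object on the computer whose accumulated usage demand is currently lowest keeps the per-computer totals as even as possible, which minimizes their variance over time. A minimal illustration (the function name and data layout are assumptions, not from the patent):

```python
# Hypothetical sketch of the placement rule: store new intermediate data on the
# computer whose accumulated usage demand is currently lowest, which greedily
# keeps the computer usage demand values even across the plurality of computers.

def place_intermediate_data(data_id, data_usage_demand, computer_demands):
    """computer_demands: dict mapping computer id -> current computer usage demand.
    Returns the chosen target computer and updates its demand in place."""
    target = min(computer_demands, key=computer_demands.get)
    computer_demands[target] += data_usage_demand
    return target
```

Successive placements then spread demand across machines rather than piling it onto one.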
- Another example embodiment of the invention is a system for distributing intermediate data of a multistage computer application to a plurality of computers.
- the system includes a data manager, a computer manager, and a scheduler.
- the data manager is configured to calculate, by at least one computer processor, a data usage demand of a generated intermediate data.
- the data usage demand is proportional to the number of consuming tasks in the multistage computer application configured to consume the generated intermediate data and, for each consuming task, is discounted by a distance between a current stage of the multistage computer application and a future stage of the multistage computer application executing the consuming task.
- the computer manager is configured to calculate a computer usage demand for each computer in the plurality of computers.
- the computer usage demand is a sum of all data usage demand of each intermediate data stored in local memory at the computer.
- the scheduler is configured to select at least one target computer of the plurality of computers for storage of the generated intermediate data at local memory such that a variance of the computer usage demand across the plurality of computers is minimized.
- Another embodiment of the invention is a computer program product for distributing intermediate data of a multistage computer application to a plurality of computers. The computer program product may include computer readable program code configured to calculate a data usage demand of a generated intermediate data, calculate a computer usage demand for each computer in the plurality of computers, and store the generated intermediate data at local memory of at least one target computer of the plurality of computers such that a variance of the computer usage demand across the plurality of computers is minimized.
- FIG. 1 shows an example system 102 employing the present invention.
- FIG. 2 shows another example system 202 employing the present invention.
- FIG. 3 shows an example communication pattern of generated intermediate data from a multistage computer application.
- FIG. 4 shows an example flowchart for distributing data of a multistage computer application to a plurality of computers.
- FIG. 5 shows additional operations included in the flowchart of FIG. 4 .
- FIG. 6 shows additional operations included in the flowchart of FIG. 4 .
- The present invention is described with reference to embodiments of the invention. Throughout the description of the invention, reference is made to FIGS. 1-9 .
- FIG. 1 illustrates an example system 102 incorporating an embodiment of the present invention. It is noted that the system 102 shown is just one example of various arrangements of the present invention and should not be interpreted as limiting the invention to any particular configuration.
- the system 102 includes a plurality of computers 104 executing a multistage computer application 106 in a computer network.
- the plurality of computers 104 may be part of a cloud computing structure.
- a multistage computer application refers to a computer application that executes a plurality of tasks in stages successively over time.
- the multistage computer application 106 generates intermediate data 108 .
- intermediate data 108 is data generated by one stage 110 of the multistage computer application and transferred to one or more following stages 112 .
- the system 102 also includes a placement service 132 for distributing the intermediate data 108 to the plurality of computers 104 .
- the placement service 132 may include a data manager 114 configured to calculate, by at least one computer processor, a data usage demand 116 of the generated intermediate data 108 .
- the data usage demand 116 is proportional to the number of consuming tasks in the multistage computer application 106 configured to consume the generated intermediate data 108 .
- the data usage demand 116 is discounted by a distance between a current stage 110 of the multistage computer application and a future stage 112 of the multistage computer application 106 executing the consuming task.
- the data usage demand 116 may be calculated as follows:
- v_i^t = \sum_{t_j \in C_i^t} \frac{w_j \cdot r_j}{w_0 + W_{p(i,j)}}
- where w_j is the work associated with task t_j
- r_j is the expected number of executions of task t_j
- W_{p(i,j)} is the total amount of work left in a longest path from the generated intermediate data to t_j
- C_i^t is the set of consumers of the generated intermediate data at time t
- w_0 is a constant.
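The formula above can be sketched directly in code. This is an illustrative rendering only; the tuple representation of consumers and the default value of w_0 are assumptions:

```python
def data_usage_value(consumers, w0=1.0):
    """Sketch of v_i^t = sum over t_j in C_i^t of (w_j * r_j) / (w0 + W_p(i,j)).

    consumers: iterable of (w_j, r_j, W_p) tuples, one per consuming task:
      w_j - work associated with task t_j
      r_j - expected number of executions of t_j
      W_p - total work left on the longest path from the data object to t_j
    w0: constant in the denominator, so near-term consumers (small W_p)
        contribute more to the value than far-future ones.
    """
    return sum(w * r / (w0 + wp) for w, r, wp in consumers)
```

Note that a consumer expected to run soon (small W_p) dominates the sum, matching the discounting described above.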
- the placement service 132 may further include a computer manager 118 configured to calculate a computer usage demand 120 for each computer in the plurality of computers 104 .
- the computer usage demand 120 is a sum of all data usage demand 116 of each intermediate data stored in local memory 122 of the computer.
- the placement service 132 may further include a scheduler 124 .
- the scheduler 124 is configured to select at least one target computer 126 of the plurality of computers 104 for storage of the generated intermediate data 108 at local memory 122 such that a variance of the computer usage demand 120 across the plurality of computers 104 is minimized.
- the scheduler 124 is configured to select the target computer 126 of the plurality of computers 104 having the lowest computer usage demand 120 .
- the system 102 may include an application profile 128 configured to provide the data manager 114 a communication pattern 130 of the generated intermediate data.
- the communication pattern 130 specifies usage of the generated intermediate data 108 by task and runtime.
- the communication pattern 130 is a directed acyclic graph (DAG).
- the application profile 128 may also be communicated to the scheduler 124 .
- the scheduler 124 may select one or more of the plurality of computers 104 having a plurality of processing cores if the generated intermediate data 108 is consumed simultaneously by a plurality of tasks.
- the scheduler is further configured to select one or more of the plurality of computers 104 to store the generated intermediate data 108 and other intermediate data together if the generated intermediate data 108 is consumed simultaneously with the other intermediate data by one single task.
- the computer manager 118 may be configured to normalize the computer usage demand 120 based on computing resources available at each computer in the plurality of computers 104 .
- the computing resources include, for example, the memory size and the number of processing cores at a computer.
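The patent does not give an explicit normalization formula, so the sketch below assumes one plausible scheme: scale each computer's demand by its capacity relative to a reference machine, taking the tighter of the memory and core ratios as the capacity score. The reference values are assumptions:

```python
def normalized_usage_demand(raw_demand, memory_bytes, cores,
                            ref_memory=8 * 2**30, ref_cores=4):
    """Hypothetical normalization: scale a computer's usage demand by its
    capacity relative to a reference machine, so bigger machines (more memory,
    more cores) appear less loaded for the same raw demand. The reference
    values and the min() capacity score are assumptions, not from the patent."""
    capacity = min(memory_bytes / ref_memory, cores / ref_cores)
    return raw_demand / capacity
```

Under this scheme a machine with twice the memory and cores carries the same raw demand at half the normalized value, so the scheduler naturally steers more data toward larger machines.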
- the example system 102 beneficially drives the state of resources in a distributed computing system so as to enable and promote data-locality when placing computation (tasks) for dataflows. This is achieved by making data placement decisions such that, when placing a ready task, the machine hosting its corresponding intermediate data is likely to have enough resources available to be selected by the scheduler 124 . This reduces transfer cost, since intermediate data objects do not need to be transferred to remote machines for processing.
- the data usage demand 116 (also referred to herein as dataUsageValue for brevity) metric represents the discounted expected amount of work of a given intermediate data object at a given point in time. Note that for a given intermediate data object this metric has a relative nature in that it captures its future demand or importance when compared to other intermediate data objects stored in the system at a given point in time. Since an intermediate data object can be accessed by multiple tasks at different times throughout the execution of a dataflow, data usage demand 116 varies over time. In fact, for a given intermediate data object, its data usage demand 116 is recomputed every time a task consumes it, and its value is updated to reflect how soon or far in the future it is needed by another task in the dataflow.
- This metric leads to the introduction of two additional related metrics, namely: computer usage demand 120 (per machine) (also referred to herein as machineUsageValue for brevity) and system usage demand (also referred to herein as systemUsageValue for brevity) which represent an aggregate of the data objects stored in each machine and in the whole system, respectively.
- when the system usage demand is evenly distributed across machines in the system, i.e., there is minimum variance among computer usage demand 120 values, machines storing intermediate data needed by ready tasks are likely to have enough resources to host the tasks.
- the data usage demand 116 should reflect the reliability of the system (e.g., failure probability).
- the system 102 aims at maximizing data locality in the presence of system failures.
- v_i^t = \sum_{t_j \in C_i^t} \frac{w_j \cdot r_j}{w_0 + W_{p(i,j)}}
- Terminology:
- m_k : machine with compute and memory resources
- Dataflow (j): dataflow job consisting of multiple dependent tasks represented as a DAG
- Task (t_i): task belonging to dataflow j
- T_i : set of tasks that depend on, i.e., consume, d_i
- D_k : set of intermediate data objects stored in m_k
- EST_t(T_i): sorted set of the earliest starting time of the earliest dependency task in T_i at time t
- Set F_i^t of dependent tasks: subset of T_i containing all the tasks that have
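The notation above can be mirrored as simple Python types; the field names and defaults below are assumptions made for illustration:

```python
from dataclasses import dataclass, field
from typing import List

# A minimal Python rendering of the terminology table; field names and types
# are assumptions made for illustration, not part of the patent.

@dataclass
class Task:
    name: str                 # t_i, a task belonging to dataflow j
    work: float               # w_i, work associated with the task
    expected_runs: int = 1    # r_i, expected number of executions

@dataclass
class IntermediateData:
    name: str                                            # d_i
    consumers: List[Task] = field(default_factory=list)  # T_i, tasks consuming d_i

@dataclass
class Machine:
    name: str                                                     # m_k
    stored: List[IntermediateData] = field(default_factory=list)  # D_k
```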
- an example system 202 is shown consisting of a data management layer 204 that sits inside the resource management layer 206 , between the scheduler 208 and the application profiler 210 on one side and the resources 212 on the other.
- the application profiler 210 receives as input the directed acyclic graph (DAG) 214 describing the job and run time information from the resource management layer.
- An element of the application profiler 210 is the replicate cost model (RCM) 216 , which decides when to replicate intermediate data. Copies of intermediate data objects are treated differently in that their dataUsageValues are computed differently.
- the data management layer 204 makes data placement decisions based on the information obtained from the application profiler 210 . For instance, it seeks to collocate intermediate data objects as specified by the communication pattern of the DAG 214 and creates as many replicas as determined by the RCM. As the data management layer 204 makes placement decisions, it leads the scheduler 208 to make better scheduling decisions, since it effectively improves the choices available to it when placing computation.
- the data management layer 204 consists of three main steps:
- V_k^t represents the share of the total systemUsageValue V stored in machine m_k and is an aggregate of the dataUsageValues v_i^t for all d_i ∈ D_k at time t.
- the dataUsageValue v i is assigned at the time d i is created and stored for the first time and it varies over time as d i is consumed/processed by the tasks that depend on it (T i ).
- the update mechanism for the dataUsageValue is described below. It is easy to observe that every intermediate data object d i in a dataflow j is processed by at least one task in j, i.e., immediate child task. Depending on the communication pattern of j, however, an intermediate data object may be processed by more than one task, e.g., higher fan-in.
- V_0^{t1} is updated to reflect the fact that t_1 completed and there is less work left associated with d_0 ; only t_2 is left for processing d_0 .
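The update step described here can be sketched as follows: when a consuming task completes, it is dropped from the data object's consumer set and the dataUsageValue is recomputed over the remaining consumers, so the value shrinks as the work associated with the object is used up. The tuple encoding and w_0 = 1 are assumptions:

```python
def update_after_completion(consumers, completed_task):
    """Sketch of the update mechanism: once a consuming task finishes, it is
    removed from the data object's consumer set C_i^t and the dataUsageValue
    is recomputed over the remaining consumers.
    consumers: list of (name, w_j, r_j, W_p) tuples; w0 = 1.0 is assumed."""
    remaining = [c for c in consumers if c[0] != completed_task]
    value = sum(w * r / (1.0 + wp) for _, w, r, wp in remaining)
    return remaining, value
```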
- a replica corresponding to an intermediate data object d i may be needed for re-execution if one of the resources hosting a task fails and the last checkpoint of the dataflow corresponds to d i .
- v_i^t is computed for d_i considering the probability of system failure and hence of the replica being needed for re-execution of a task.
- v i t is equally distributed across the replicas.
- v i t is in this case a function of the reliability of the dataflow, i.e., a function of several other factors such as the probability of failure of resources executing tasks belonging to the dataflow and number of replicas available for d i .
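A minimal sketch of this equal distribution, assuming the probability of the replica being needed is approximated by a single failure probability (an assumption; the patent describes it as a function of several reliability factors):

```python
def replica_values(total_value, n_replicas, failure_prob):
    """Hypothetical split of a data object's usage value across its replicas:
    the value is first scaled by the probability that a replica is ever needed
    for re-execution (approximated here by a single failure probability), then
    shared equally among replicas, per the equal-distribution statement above."""
    per_replica = total_value * failure_prob / n_replicas
    return [per_replica] * n_replicas
```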
- G t (d i ) is a function that computes v i t at time t and satisfies the definition stated earlier.
- This function is v_i^t = \sum_{t_j \in C_i^t} \frac{w_j \cdot r_j}{w_0 + W_{p(i,j)}}, where w_0 is a constant that can be changed depending on the state of the system and workloads.
- the denominator in the equation reflects how far in the future task t_j is expected to execute. The smaller the denominator, the larger the value of v_i . Thus, tasks that are expected to run soon have more weight in the placement decision made at time t. This follows intuition, since the compute resources needed to execute the task may be needed soon in the future (and therefore must be available). The numerator, in turn, reflects the amount of work associated with task t_j , including the number of times it is expected to run (r_j^t).
- a simple heuristic is used to ensure the even distribution of V k t across machines.
- the algorithm selects the machine with the smallest value of V k t .
- with this product, we aim at minimizing the likelihood that a placement decision made at time t_1 in m_k will be invalidated by changes in V_k^t for t_1 < t. In other words, that a machine will be overcommitted by the time d_i is needed.
- Fork dataflow: In the case where an intermediate data object has multiple immediate consumers, the data management layer has multiple options. It could, for example, create as many copies of the intermediate data object as there are consumers and place them individually, or it could keep one single copy and place it in a multi-core machine so as to achieve better parallelism and hence reduce makespan. The data management layer makes these decisions with the help of the application profiler, which dictates the requirements of the dataflow.
- Join dataflow: This refers to the case wherein multiple intermediate data objects generated by different tasks are consumed by one single task.
- the data management layer may treat all the intermediate data objects as one and therefore place them together.
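One way to sketch this join-dataflow rule is to fold co-consumed objects into a single placement unit before scheduling, so they necessarily land on the same machine; the grouping function and its inputs are assumptions for illustration:

```python
def group_for_join(objects, join_groups):
    """Sketch of the join-dataflow rule: intermediate data objects consumed
    together by a single task are treated as one placement unit so they end
    up on the same machine.
    objects: dict mapping object name -> size (or usage value);
    join_groups: lists of names consumed together by one task.
    Returns placement units as (names, combined size) tuples."""
    units = []
    grouped = set()
    for group in join_groups:
        units.append((tuple(group), sum(objects[n] for n in group)))
        grouped.update(group)
    # Objects not involved in a join remain individual placement units.
    for name, size in objects.items():
        if name not in grouped:
            units.append(((name,), size))
    return units
```

Each unit can then be handed to the placement heuristic as if it were a single intermediate data object.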
- Another embodiment of the invention is a method for distributing data of a multistage computer application to a plurality of computers, which is now described with reference to flowchart 402 of FIG. 4 .
- the method begins at Block 404 and includes determining a data usage demand of a generated intermediate data at Block 406 .
- the data usage demand is proportional to the number of consuming tasks in the multistage computer application configured to consume the generated intermediate data and, for each consuming task, is discounted by a distance between a current stage of the multistage computer application and a future stage of the multistage computer application executing the consuming task.
- the data usage demand is calculated as follows:
- v_i^t = \sum_{t_j \in C_i^t} \frac{w_j \cdot r_j}{w_0 + W_{p(i,j)}}
- where w_j is the work associated with task t_j
- r_j is the expected number of executions of task t_j
- W_{p(i,j)} is the total amount of work left in a longest path from the generated intermediate data to t_j
- C_i^t is the set of consumers of the generated intermediate data at time t
- w_0 is a constant.
- the method further includes calculating a computer usage demand for each computer in the plurality of computers at Block 408 .
- the computer usage demand is a sum of all data usage demand of each intermediate data stored in local memory at the computer.
- the generated intermediate data is stored at local memory of at least one target computer of the plurality of computers at Block 410 such that a variance of the computer usage demand across the plurality of computers is minimized.
- storing the generated intermediate data may include storing the generated intermediate data at the at least one target computer having the lowest computer usage demand.
- the method begins at Block 504 .
- the method includes receiving a communication pattern of the generated intermediate data.
- the communication pattern specifies usage of the generated intermediate data by task and runtime.
- the communication pattern is a directed acyclic graph.
- the method may additionally include the steps of FIG. 4 at Blocks 406 , 408 and 410 .
- the method ends at Block 506 .
- the method begins at Block 604 .
- the method may include the steps of FIG. 4 at Blocks 406 , 408 and 410 .
- the method may additionally include normalizing the computer usage demand based on computing resources available at each computer in the plurality of computers at Block 606 .
- the computing resources may include, for example, memory size and number of processing cores.
- the method ends at Block 608 .
- aspects of the invention may be embodied as a system, method or computer program product. Accordingly, aspects of the invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
- the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
- a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
- a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof.
- a computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
- Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
- the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
Abstract
A method, system and computer program product for distributing intermediate data of a multistage computer application to a plurality of computers. In one embodiment, a data manager calculates a data usage demand of generated intermediate data. A computer manager calculates a computer usage demand, which is the sum of all data usage demands of each stored intermediate data at the computer. A scheduler selects a target computer from the plurality of computers for storage of the generated intermediate data at local memory such that a variance of the computer usage demand across the plurality of computers is minimized.
Description
- In general, online scheduling algorithms for dataflows aim at minimizing makespan and make scheduling decisions based on the requirements of the dataflow and the state of resources in the system. A few such factors are: data dependency among tasks, deadline requirements, and storage and compute capacity of machines. Furthermore, since makespan can be dominated by the time incurred in transferring data, most scheduling algorithms aim at procuring data-locality, i.e., collocation of tasks and their corresponding input data. As a result, the quality of the schedules produced by the scheduler is greatly influenced by the state of the resources, and more specifically, by the placement of data.
- The subject matter which is regarded as the invention is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other objects, features, and advantages of the invention are apparent from the following detailed description taken in conjunction with the accompanying drawings in which:
-
FIG. 1 shows anexample system 102 employing the present invention. -
FIG. 2 shows anotherexample system 102 employing the present invention. -
FIG. 3 shows an example communication pattern of generated intermediate data from a multistage computer application. -
FIG. 4 shows an example flowchart for distributing data of a multistage computer application to a plurality of computers. -
FIG. 5 shows additional operations included in the flowchart ofFIG. 4 . -
FIG. 6 shows additional operations included in the flowchart ofFIG. 4 . - The present invention is described with reference to embodiments of the invention. Throughout the description of the invention reference is made to
FIGS. 1-9 . -
FIG. 1 illustrates anexample system 102 incorporating an embodiment of the present invention. It is noted that thesystem 102 shown is just one example of various arrangements of the present invention and should not be interpreted as limiting the invention to any particular configuration. - The
system 102 includes a plurality ofcomputers 104 executing amultistage computer application 106 in a computer network. For example, the plurality ofcomputers 104 may be part of a cloud computing structure. - A multistage computer application refers to a computer application that executes a plurality of tasks in stages successively over time. The
multistage computer application 106 generatesintermediate data 108. As used herein,intermediate data 108 is data generated by onestage 110 of the multistage computer application and transferred to one or more followingstages 112. - The
system 102 also includes aplacement service 132 for distributing theintermediate data 108 to the plurality ofcomputers 104. - The
placement service 132 may include adata manager 114 configured to calculate, by at least one computer processor, adata usage demand 116 of the generatedintermediate data 108. As described more fully below, in one embodiment thedata usage demand 116 is proportional to the number of consuming tasks in themultistage computer application 106 configured to consume the generatedintermediate data 108. For each consuming task, thedata usage demand 116 is discounted by a distance between acurrent stage 110 of the multistage computer application and afuture stage 112 of themultistage computer application 116 executing the consuming task. - For example, the
data usage demand 116 may be calculated as follows: -
- where wj is work associated with task tj, rj is an expected number of executions of task tj, Wp(i,j) is a total amount of work left in a longest path from the generated intermediate data to tj, ci t are consumers of the generated intermediate data at time t, and w0 is a constant.
- The
placement service 132 may further include acomputer manager 118 configured to calculate acomputer usage demand 120 for each computer in the plurality ofcomputers 104. As described more fully below, in one embodiment thecomputer usage demand 120 is a sum of alldata usage demand 116 of each intermediate data stored inlocal memory 120 of the computer. - The
placement service 132 may further include ascheduler 124. Thescheduler 124 is configured to select at least onetarget computer 126 of the plurality ofcomputers 104 for storage of the generatedintermediate data 108 atlocal memory 122 such that a variance of thecomputer usage demand 120 across the plurality ofcomputers 104 is minimized. In one embodiment, thescheduler 124 is configured to select thetarget computer 126 of the plurality ofcomputers 104 having the lowestcomputer usage demand 120. - The
system 102 may include anapplication profile 128 configured to provide the data manager 114 acommunication pattern 130 of the generated intermediate data. Thecommunication pattern 130 specifies usage of the generatedintermediate data 108 by task and runtime. In one embodiment, thecommunication pattern 130 is a directed acyclic graph (DAG). - The
application profile 128 may also be communicated to the scheduler 124. As discussed further below, the scheduler 124 may select one or more of the plurality of computers 104 having a plurality of processing cores if the generated intermediate data 108 is consumed simultaneously by a plurality of tasks. In another embodiment further described below, the scheduler is further configured to select one or more of the plurality of computers 104 to store the generated intermediate data 108 and other intermediate data together if the generated intermediate data 108 is consumed simultaneously with the other intermediate data by one single task. - The
computer manager 118 may be configured to normalize the computer usage demand 120 based on computing resources available at each computer in the plurality of computers 104. The computing resources include, for example, the memory size and the number of processing cores at a computer. - The
example system 102 beneficially drives the state of resources in a distributed computing system so as to enable and promote data locality when placing computation (tasks) for dataflows. This is achieved by making data placement decisions such that, when a ready task is placed, the machine hosting its corresponding intermediate data is likely to have enough resources available to be selected by the scheduler 124. This reduces transfer cost, since intermediate data objects do not need to be transferred to a remote machine for processing. - The data usage demand 116 (also referred to herein as dataUsageValue for brevity) metric represents the discounted expected amount of work of a given intermediate data object at a given point in time. Note that for a given intermediate data object this metric is relative in nature, in that it captures the object's future demand, or importance, compared to other intermediate data objects stored in the system at the same point in time. Since an intermediate data object can be accessed by multiple tasks at different times throughout the execution of a dataflow,
data usage demand 116 varies over time. In fact, for a given intermediate data object, its data usage demand 116 is recomputed every time a task consumes it, and its value is updated to reflect how soon or how far in the future it will be needed by another task in the dataflow. This metric leads to two additional related metrics: computer usage demand 120 (per machine, also referred to herein as machineUsageValue for brevity) and system usage demand (also referred to herein as systemUsageValue for brevity), which represent an aggregate over the data usage demands of the objects stored in each machine and in the whole system, respectively. - In one embodiment, as long as system usage demand is evenly distributed across machines in the system, i.e., minimum variance for
computer usage demand 120 values, machines storing intermediate data needed by ready tasks are likely to have enough resources to host those tasks. Similarly, in the case of placing multiple replicas to handle system failures, the data usage demand 116 should reflect the reliability of the system (e.g., the failure probability). Thus, in one embodiment, the system 102 aims at maximizing data locality in the presence of system failures. - Embodiments of the present invention are now discussed in more detail. The reader is referred to Table 1 for a list of terms and their corresponding meanings.
-
-
TABLE 1 Terminology

Term | Meaning
---|---
mk | machine with compute and memory resources
Data-flow (j) | dataflow job consisting of multiple dependent tasks represented as a DAG
Task (ti) | task belonging to dataflow j
Intermediate data set (di) | intermediate input data set for some task(s)
dataUsageValue (vi) | value associated with di representing the likelihood that di will be needed for processing by a task in the future
machineUsageValue (Vk) | value associated with mk representing the share of the systemUsageValue stored in mk
systemUsageValue (V̄) | aggregate of the machineUsageValues across the whole system
Set of dependent tasks (Ti) | set of tasks that depend on, i.e., consume, di
Dk | set of intermediate data objects stored in mk
ESTt (Ti) | sorted set of the earliest starting time of the earliest dependency task in Ti at time t
Set Fi^t of dependent tasks | subset of Ti containing all the tasks that have completed at time t
Set Ci^t of dependent tasks | subset of Ti containing all the tasks that have not run yet at time t (Ti − Fi^t)
wj | amount of work associated with task tj
rj^t | number of times task tj is expected to run (rj^t > 1)
Wp (i, j) | aggregate amount of work of the maximum path length between task ti and tj
Ri,k | kth replica of di

- Turning now to
FIG. 2 , an example system 202 is shown consisting of a data management layer 204 that sits inside the resource management layer 206, in between the scheduler 208 and the application profiler 210, and the resources 212. As shown, the application profiler 210 receives as input the directed acyclic graph (DAG) 214 describing the job and runtime information from the resource management layer. - An element of the
application profiler 210 is the replica cost model (RCM) 216, which decides when to replicate intermediate data. Copies of intermediate data objects are treated differently in that their dataUsageValues are computed differently. - The
data management layer 204 makes data placement decisions based on the information obtained from the application profiler 210. For instance, it seeks to collocate intermediate data objects as specified by the communication pattern of the DAG 214 and creates as many replicas as determined by the RCM. As the data management layer 204 makes placement decisions, it leads the scheduler 208 to make better scheduling decisions, since it effectively improves the choices available to it when placing computation. - In one embodiment, the
data management layer 204 consists of three main steps: - 1. Computing and Assigning dataUsageValue vi^t: assigning/tagging a dataUsageValue vi^t to every intermediate data object di stored in the system. Recall that vi^t represents the discounted expected amount of work associated with di, i.e., the amount of work left to be done by the tasks in Ti that have not run yet, compared to other intermediate data objects in the system at time t. Since an intermediate data object di may be needed multiple times throughout the lifetime of j, vi^t varies over time and becomes zero (vi = 0) when di is no longer needed. Intuitively, the higher the demand for di in the future, the higher the value.
- 2. Computing and Assigning machineUsageValue Vk^t: assigning/tagging a machineUsageValue Vk^t to each machine mk in the system. Vk^t represents the share of the total systemUsageValue V̄ stored in machine mk and is an aggregate of the dataUsageValues vi^t for all di ∈ Dk at time t. - 3. Data Placement: placing a new incoming di following a heuristic that seeks to even out the distribution of vi^t across machines in the system and achieve the goals stated earlier.
- Two cases are considered when computing vi^t:
- 1. Communication Pattern of the Dataflow:
- The dataUsageValue vi is assigned at the time di is created and stored for the first time, and it varies over time as di is consumed/processed by the tasks that depend on it (Ti). The update mechanism for the dataUsageValue is described below. It is easy to observe that every intermediate data object di in a dataflow j is processed by at least one task in j, i.e., its immediate child task. Depending on the communication pattern of j, however, an intermediate data object may be processed by more than one task, e.g., with higher fan-in.
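The assign-then-update cycle described above can be sketched as follows; the tuple encoding (work, expected runs, path work ahead) and the concrete numbers are hypothetical.

```python
def data_usage_value(pending_consumers, w0=1.0):
    """v_i^t over the consumers in C_i^t that have not run yet: each
    contributes its work w_j (times expected runs r_j) discounted by
    the work W_p still ahead of it on the longest path."""
    return sum(w * r / (w0 + wp) for (w, r, wp) in pending_consumers)

# A data object is needed by an immediate child task (no work ahead of
# it) and by a second task five units of work farther in the future.
v_before = data_usage_value([(2.0, 1.0, 0.0), (2.0, 1.0, 5.0)])
# After the immediate child completes, only the far task remains ...
v_after_t1 = data_usage_value([(2.0, 1.0, 5.0)])
# ... and once that task also completes, the object is no longer needed.
v_after_t2 = data_usage_value([])
```

Each consumption shrinks the pending set, so the value decays toward zero exactly as the update mechanism requires.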
- For instance, in
FIG. 3 , d0 is needed by two tasks, t1 (the immediate child of t0) and t2 (farther in the future). Thus, after t1 finishes at time t1, v0^t1 is updated to reflect the fact that t1 completed and there is less work left associated with d0; in this case only t2 is left to process d0. On the other hand, v1^t2 → 0 as soon as t2 finishes (at t = t2), to reflect the fact that d1 is no longer needed by the dataflow. - 2. Re-Execution of a Task in the Presence of System Failure:
- As described earlier, a replica corresponding to an intermediate data object di may be needed for re-execution if one of the resources hosting a task fails and the last checkpoint of the dataflow corresponds to di. Following the same principle stated for the previous case, vi^t is computed for di considering the probability of system failure and, hence, of the replica being needed for re-execution of a task. Furthermore, to recognize the aggregated importance of all the replicas, vi^t is equally distributed across the replicas. Note that vi^t is in this case a function of the reliability of the dataflow, i.e., a function of several other factors such as the probability of failure of the resources executing tasks belonging to the dataflow and the number of replicas available for di.
- It is noted that there are multiple ways to obtain vi^t. In one embodiment, Gt(di) is a function that computes vi^t at time t and satisfies the definition stated earlier. One example of this function is:
v_i^t = G_t(d_i) = \sum_{t_j \in C_i^t} \frac{w_j \cdot r_j^t}{w_0 + W_p(i,j)}
- where w0 is a constant that can be changed depending on the state of the system and the workloads. For the sake of easing the explanation, let us assume that a unit of work takes one unit of time to execute. Intuitively, the denominator in the equation reflects how far in the future task tj is expected to execute. The smaller the denominator, the larger the value of vi. Thus, tasks that are expected to run soon have more weight in the placement decision made at time t. This follows intuition, since the compute resources needed to execute the task may be needed soon in the future (and therefore must be available). The numerator, in turn, reflects the amount of work associated with task tj, including the number of times it is expected to run (rj^t).
- The introduction of the term rj^t is important since it allows us to capture the likelihood that the system fails and therefore that a data set may be needed again in the future. In one embodiment, to assign vi^t to N replicas of di, the value of vi^t is evenly distributed. That is,
v_{R_{i,k}}^t = \frac{v_i^t}{N}, \quad k = 1, \ldots, N
- Following these observations, the larger the numerator, the higher the value of vi, suggesting that the resources where di has been placed should be treated carefully to avoid overcommitting them.
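The even split of vi^t across the N replicas can be sketched directly; the function name and the values are illustrative.

```python
def replica_values(v_i, n_replicas):
    """Distribute v_i^t evenly over the N replicas of d_i, so that the
    replicas' aggregate value equals the value of the original object."""
    return [v_i / n_replicas] * n_replicas

vals = replica_values(6.0, 3)  # three replicas, each carrying a third of the value
```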
- The value of Vk^t can be determined in several ways. To illustrate, consider a very simple approach: Vk = Σj vj | dj ∈ Dk. Intuitively, the higher the value of Vk^t, the more likely that mk will be needed by tasks in j soon. In other words, placing a task in a machine mk with a high value of Vk^t comes at the cost of limited data locality, since incoming tasks that depend on data held in mk may have to be placed on a remote machine.
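The simple aggregate Vk = Σ vj above maps to a one-line function; the object names are hypothetical.

```python
def machine_usage_value(stored_objects):
    """V_k: sum of the dataUsageValues v_j over all objects d_j in D_k."""
    return sum(stored_objects.values())

Vk = machine_usage_value({"d0": 2.5, "d1": 1.0})  # 2.5 + 1.0
```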
- In one embodiment, a simple heuristic is used to ensure the even distribution of Vk^t across machines. When deciding where to place a new incoming di, the algorithm selects the machine with the smallest value of Vk^t. Intuitively, the smaller the value of Vk^t, the less likely that machine mk will be needed, and its value will be reduced. This aims at minimizing the likelihood that a placement decision made at time t1 in mk will be invalidated by changes in Vk^t, t1 < t; in other words, that a machine will be overcommitted by the time di is needed.
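A minimal sketch of this heuristic, assuming machine usage values are tracked in a dictionary; it also checks the claim that the greedy choice drives the distribution toward minimum variance.

```python
import statistics

def place(object_value, machine_values):
    """Greedy placement: put the new data object on the machine with the
    smallest usage value, keeping the values evenly distributed."""
    target = min(machine_values, key=machine_values.get)
    machine_values[target] += object_value
    return target

machine_values = {"m0": 6.0, "m1": 1.0, "m2": 2.0}
var_before = statistics.pvariance(machine_values.values())
chosen = place(2.0, machine_values)  # lands on m1, the least-loaded machine
var_after = statistics.pvariance(machine_values.values())
# var_after < var_before: the placement evened out the distribution
```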
- Recall that the data management layer can use a priori knowledge regarding the communication pattern of the DAG corresponding to j to identify opportunities for collocating data and hence maximizing data locality. Two special cases are worth mentioning:
- 1. Fork dataflow: In the case where an intermediate data object has multiple immediate consumers, the data management layer has multiple options. It could, for example, create as many copies of the intermediate data object as there are consumers and place them individually, or it could keep one single copy and place it on a multi-core machine so as to achieve better parallelism and hence reduce makespan. The data management layer makes these decisions with the help of the application profiler, which dictates the requirements of the dataflow.
- 2. Join dataflow: This refers to the case wherein multiple intermediate data objects generated by different tasks are consumed by one single task. For the purpose of data locality, the data management layer may treat all the intermediate data objects as one and therefore place them together.
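The two special cases might be sketched as follows; the machine records and the single-copy fork policy are assumptions for illustration (the layer could equally create one copy per consumer).

```python
def place_join_inputs(input_values, machine_values):
    """Join: intermediate data objects consumed together by one task are
    treated as a single unit and collocated on the least-loaded machine."""
    target = min(machine_values, key=machine_values.get)
    machine_values[target] += sum(input_values)
    return target

def pick_fork_host(machines):
    """Fork: keep a single copy and favor the machine with the most
    cores, so the multiple immediate consumers can run in parallel."""
    return max(machines, key=lambda m: machines[m]["cores"])

loads = {"m0": 4.0, "m1": 1.0}
where = place_join_inputs([1.5, 2.5], loads)  # both join inputs land together
host = pick_fork_host({"m0": {"cores": 2}, "m1": {"cores": 8}})
```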
- Another embodiment of the invention is a method for distributing data of a multistage computer application to a plurality of computers, which is now described with reference to
flowchart 402 of FIG. 4 . The method begins at Block 404 and includes determining a data usage demand of a generated intermediate data at Block 406. As discussed above, the data usage demand is proportional to the number of consuming tasks in the multistage computer application configured to consume the generated intermediate data and, for each consuming task, discounted by a distance between a current stage of the multistage computer application and a future stage of the multistage computer application executing the consuming task. - In one embodiment of the invention, the data usage demand is calculated as follows:
v_i^t = \sum_{t_j \in c_i^t} \frac{w_j \cdot r_j^t}{w_0 + W_p(i,j)}
- where wj is work associated with task tj, rj is an expected number of executions of task tj, Wp(i,j) is a total amount of work left in a longest path from the generated intermediate data to tj, ci t are consumers of the generated intermediate data at time t, and w0 is a constant.
- The method further includes calculating a computer usage demand for each computer in the plurality of computers at
Block 408. The computer usage demand is a sum of all data usage demand of each intermediate data stored in local memory at the computer. - Next, at
Block 410, the generated intermediate data is stored at local memory of at least one target computer of the plurality of computers such that a variance of the computer usage demand across the plurality of computers is minimized. As detailed above, storing the generated intermediate data may include storing the generated intermediate data at the at least one target computer having the lowest computer usage demand. The method ends at Block 412. - In another method embodiment, which is now described with reference to
flowchart 502 of FIG. 5 , the method begins at Block 504. The method includes receiving a communication pattern of the generated intermediate data. The communication pattern specifies usage of the generated intermediate data by task and runtime. In one embodiment, the communication pattern is a directed acyclic graph. The method may additionally include the steps of FIG. 4 at Blocks 406, 408 and 410. The method ends at Block 506. - In another method embodiment, which is now described with reference to
flowchart 602 of FIG. 6 , the method begins at Block 604. The method may include the steps of FIG. 4 at Blocks 406, 408 and 410, as well as normalizing the computer usage demand based on computing resources available at each computer at Block 606. The computing resources may include, for example, memory size and number of processing cores. The method ends at Block 608. - As will be appreciated by one skilled in the art, aspects of the invention may be embodied as a system, method or computer program product. Accordingly, aspects of the invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
- Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
- Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
- Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- Aspects of the invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
- The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
- While the preferred embodiments of the invention have been described, it will be understood that those skilled in the art, both now and in the future, may make various improvements and enhancements which fall within the scope of the claims which follow. These claims should be construed to maintain the proper protection for the invention first described.
Claims (20)
1. A method for distributing data of a multistage computer application to a plurality of computers, the method comprising:
determining a data usage demand of a generated intermediate data, the data usage demand being proportional to the number of consuming tasks in the multistage computer application configured to consume the generated intermediate data and, for each consuming task, discounted by a distance between a current stage of the multistage computer application and a future stage of the multistage computer application executing the consuming task;
calculating a computer usage demand for each computer in the plurality of computers, the computer usage demand being a sum of all data usage demand of each intermediate data stored in local memory at the computer; and
storing the generated intermediate data at local memory of at least one target computer of the plurality of computers such that a variance of the computer usage demand across the plurality of computers is minimized.
2. The method of claim 1 , wherein storing the generated intermediate data includes storing the generated intermediate data at the at least one target computer having the lowest computer usage demand.
3. The method of claim 1 , further comprising receiving a communication pattern of the generated intermediate data, the communication pattern specifying usage of the generated intermediate data by task and runtime.
4. The method of claim 3 , wherein the communication pattern is a directed acyclic graph.
5. The method of claim 1 , further comprising normalizing the computer usage demand based on computing resources available at each computer in the plurality of computers.
6. The method of claim 5 , wherein the computing resources include, at least one of, memory size and amount of processing cores.
7. The method of claim 1 , further comprising:
receiving a communication pattern of the generated intermediate data, the communication pattern specifying usage of the generated intermediate data by task and runtime; and
if the generated intermediate data is consumed simultaneously by a plurality of tasks, storing the generated intermediate data at one or more of the plurality of computers having a plurality of processing cores.
8. The method of claim 1 , further comprising:
receiving a communication pattern of the generated intermediate data, the communication pattern specifying usage of the generated intermediate data by task and runtime; and
if the generated intermediate data is consumed simultaneously with other intermediate data by one single task, storing the generated intermediate data and the other intermediate data together at the target computer.
9. The method of claim 1 , wherein the data usage demand is calculated as follows:
v_i^t = \sum_{t_j \in c_i^t} \frac{w_j \cdot r_j^t}{w_0 + W_p(i,j)}

where wj is work associated with task tj, rj is an expected number of executions of task tj, Wp(i,j) is a total amount of work left in a longest path from the generated intermediate data to tj, ci^t are consumers of the generated intermediate data at time t, and w0 is a constant.
10. A system for distributing intermediate data of a multistage computer application to a plurality of computers, the system comprising:
a data manager configured to calculate, by at least one computer processor, a data usage demand of a generated intermediate data, the data usage demand being proportional to the number of consuming tasks in the multistage computer application configured to consume the generated intermediate data and, for each consuming task, discounted by a distance between a current stage of the multistage computer application and a future stage of the multistage computer application executing the consuming task;
a computer manager configured to calculate a computer usage demand for each computer in the plurality of computers, the computer usage demand being a sum of all data usage demand of each intermediate data stored in local memory at the computer; and
a scheduler configured to select at least one target computer of the plurality of computers for storage of the generated intermediate data at local memory such that a variance of the computer usage demand across the plurality of computers is minimized.
11. The system of claim 10 , wherein the scheduler is configured to select the target computer of the plurality of computers having the lowest computer usage demand.
12. The system of claim 10 , further comprising an application profile configured to provide the data manager a communication pattern of the generated intermediate data, the communication pattern specifying usage of the generated intermediate data by task and runtime.
13. The system of claim 12 , wherein the communication pattern is a directed acyclic graph.
14. The system of claim 10 , wherein the computer manager is further configured to normalize the computer usage demand based on computing resources available at each computer in the plurality of computers.
15. The system of claim 14 , wherein the computing resources include, at least one of, memory size and amount of processing cores.
16. The system of claim 10 , further comprising:
an application profile configured to provide the scheduler a communication pattern of the generated intermediate data, the communication pattern specifying usage of the generated intermediate data by task and runtime; and
wherein the scheduler is configured to select one or more of the plurality of computers having a plurality of processing cores if the generated intermediate data is consumed simultaneously by a plurality of tasks.
17. The system of claim 10 , further comprising:
an application profile configured to provide the scheduler a communication pattern of the generated intermediate data, the communication pattern specifying usage of the generated intermediate data by task and runtime; and
wherein the scheduler is configured to select one or more of the plurality of computers to store the generated intermediate data and other intermediate data together if the generated intermediate data is consumed simultaneously with the other intermediate data by one single task.
18. A computer program product for distributing intermediate data of a multistage computer application to a plurality of computers, the computer program product comprising:
a computer readable storage medium having computer readable program code embodied therewith, the computer readable program code configured to:
calculate a data usage demand of a generated intermediate data, the data usage demand being proportional to the number of consuming tasks in the multistage computer application configured to consume the generated intermediate data and, for each consuming task, discounted by a distance between a current stage of the multistage computer application and a future stage of the multistage computer application executing the consuming task;
calculate a computer usage demand for each computer in the plurality of computers, the computer usage demand being a sum of all data usage demand of each intermediate data stored in local memory at the computer; and
store the generated intermediate data at local memory of at least one target computer of the plurality of computers such that a variance of the computer usage demand across the plurality of computers is minimized.
19. The computer program product of claim 18 , wherein the program code to store the generated intermediate data includes program code to store the generated intermediate data at the at least one target computer having the lowest computer usage demand.
20. The computer program product of claim 18 , further comprising program code to receive a communication pattern of the generated intermediate data, the communication pattern specifying usage of the generated intermediate data by task and runtime.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/684,273 US7970884B1 (en) | 2010-01-08 | 2010-01-08 | Distribution of intermediate data in a multistage computer application |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/684,273 US7970884B1 (en) | 2010-01-08 | 2010-01-08 | Distribution of intermediate data in a multistage computer application |
Publications (2)
Publication Number | Publication Date |
---|---|
US7970884B1 US7970884B1 (en) | 2011-06-28 |
US20110173245A1 true US20110173245A1 (en) | 2011-07-14 |
Family
ID=44169500
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/684,273 Expired - Fee Related US7970884B1 (en) | 2010-01-08 | 2010-01-08 | Distribution of intermediate data in a multistage computer application |
Country Status (1)
Country | Link |
---|---|
US (1) | US7970884B1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10552203B2 (en) * | 2015-05-22 | 2020-02-04 | Landmarks Graphics Corporation | Systems and methods for reordering sequential actions |
US10394600B2 (en) * | 2015-12-29 | 2019-08-27 | Capital One Services, Llc | Systems and methods for caching task execution |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5978588A (en) * | 1997-06-30 | 1999-11-02 | Sun Microsystems, Inc. | Method and apparatus for profile-based code placement using a minimum cut set of the control flow graph |
US6381735B1 (en) * | 1998-10-02 | 2002-04-30 | Microsoft Corporation | Dynamic classification of sections of software |
US20020194603A1 (en) * | 2001-06-15 | 2002-12-19 | Jay H. Connelly | Method and apparatus to distribute content using a multi-stage broadcast system |
US6591262B1 (en) * | 2000-08-01 | 2003-07-08 | International Business Machines Corporation | Collaborative workload management incorporating work unit attributes in resource allocation |
US20050166205A1 (en) * | 2004-01-22 | 2005-07-28 | University Of Washington | Wavescalar architecture having a wave order memory |
US20060212597A1 (en) * | 2005-02-18 | 2006-09-21 | Fujitsu Limited | Multi-stage load distributing apparatus and method, and program |
US7370328B2 (en) * | 2000-04-28 | 2008-05-06 | Honda Motor Co., Ltd. | Method for assigning job in parallel processing method and parallel processing method |
US20080120592A1 (en) * | 2006-10-31 | 2008-05-22 | Tanguay Donald O | Middleware framework |
US20080175270A1 (en) * | 2007-01-23 | 2008-07-24 | Deepak Kataria | Multi-Stage Scheduler with Processor Resource and Bandwidth Resource Allocation |
US7461236B1 (en) * | 2005-03-25 | 2008-12-02 | Tilera Corporation | Transferring data in a parallel processing environment |
US7490218B2 (en) * | 2004-01-22 | 2009-02-10 | University Of Washington | Building a wavecache |
US20090241117A1 (en) * | 2008-03-20 | 2009-09-24 | International Business Machines Corporation | Method for integrating flow orchestration and scheduling for a batch of workflows |
US20090285228A1 (en) * | 2008-05-19 | 2009-11-19 | Rohati Systems, Inc. | Multi-stage multi-core processing of network packets |
2010-01-08: US application US12/684,273 filed, granted as US7970884B1 (en); status: not active, Expired - Fee Related
Also Published As
Publication number | Publication date
---|---
US7970884B1 (en) | 2011-06-28
Similar Documents
Publication | Title
---|---
US9715408B2 (en) | Data-aware workload scheduling and execution in heterogeneous environments
US10691647B2 (en) | Distributed file system metering and hardware resource usage
Jalaparti et al. | Network-aware scheduling for data-parallel jobs: Plan when you can
US8595732B2 (en) | Reducing the response time of flexible highly data parallel task by assigning task sets using dynamic combined longest processing time scheme
US9244751B2 (en) | Estimating a performance parameter of a job having map and reduce tasks after a failure
US8200824B2 (en) | Optimized multi-component co-allocation scheduling with advanced reservations for data transfers and distributed jobs
Soualhia et al. | Task scheduling in big data platforms: a systematic literature review
Nghiem et al. | Towards efficient resource provisioning in MapReduce
US20150033237A1 (en) | Utility-optimized scheduling of time-sensitive tasks in a resource-constrained environment
US10642652B2 (en) | Best trade-off point on an elbow curve for optimal resource provisioning and performance efficiency
US20120042319A1 (en) | Scheduling Parallel Data Tasks
US20150248312A1 (en) | Performance-aware job scheduling under power constraints
US20130318538A1 (en) | Estimating a performance characteristic of a job using a performance model
US20110173410A1 (en) | Execution of dataflow jobs
US20220129316A1 (en) | Workload Equivalence Class Identification For Resource Usage Prediction
US11966775B2 (en) | Cloud native adaptive job scheduler framework for dynamic workloads
US11403095B2 (en) | Scalable code repository with green master
Shirzad et al. | Job failure prediction in Hadoop based on log file analysis
Rajan et al. | Designing self-tuning split-map-merge applications for high cost-efficiency in the cloud
US7970884B1 (en) | Distribution of intermediate data in a multistage computer application
US9577869B2 (en) | Collaborative method and system to balance workload distribution
Foroni et al. | Moira: A goal-oriented incremental machine learning approach to dynamic resource cost estimation in distributed stream processing systems
US20170235608A1 (en) | Automatic response to inefficient jobs in data processing clusters
US20230222012A1 (en) | Method for scaling up microservices based on api call tracing history
Banicescu et al. | Towards the robustness of dynamic loop scheduling on large-scale heterogeneous distributed systems
Legal Events
Date | Code | Title | Description
---|---|---|---
| AS | Assignment | Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Y; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CASTILLO, CLARIS;SPREITZER, MICHAEL J.;STEINDER, MALGORZATA;AND OTHERS;SIGNING DATES FROM 20100106 TO 20100108;REEL/FRAME:023755/0140
| REMI | Maintenance fee reminder mailed |
| LAPS | Lapse for failure to pay maintenance fees |
| STCH | Information on status: patent discontinuation | Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362
| FP | Lapsed due to failure to pay maintenance fee | Effective date: 20150628