US20050172083A1 - Selection of a resource in a distributed computer system - Google Patents

Selection of a resource in a distributed computer system

Info

Publication number
US20050172083A1
US20050172083A1 (application US11/096,766)
Authority
US
United States
Prior art keywords
adaptor
adaptors
receiving
queue
workload
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/096,766
Inventor
David Meiri
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US11/096,766
Publication of US20050172083A1
Legal status: Abandoned

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]

Definitions

  • This invention relates to distributed computer systems, and in particular, to the selection of system resources by a constituent processor of a distributed computer system.
  • A distributed computer system includes a large number of processors, each with its own local memory. These processors all share a common memory.
  • The common memory includes several queues in which instructions for various pending processing tasks are listed. When a processor becomes free, it selects one of these queues and carries out the processing task waiting at the front of the queue.
  • In selecting a queue, the processor attempts to minimize the waiting time of each processing task in each queue. Since waiting time depends, in part, on queue length, it is useful for the processor to know how many tasks are waiting in each queue before selecting one.
  • Several other processors, however, are constantly adding and deleting processing tasks from the queues, so the length of each queue changes unpredictably. To learn the length of a queue, a processor must take the time to poll it; if each processor, upon completing a task, were to poll every queue, the overhead associated with selecting a queue would become unacceptably high.
  • A distributed computer system occasionally communicates with other distributed computer systems. To do so, a sending processor on a source system sends a message to one of the constituent processors on a target system. A prerequisite to doing so is selecting a receiving processor from among the constituent processors of the target system.
  • Preferably, the sending processor selects, as the receiving processor, the processor on the target system that is the least busy.
  • In doing so, however, the sending processor faces a problem similar to the queue-selection problem described above: short of polling each processor in the target system, there is no simple and reliable mechanism for identifying the least busy processor.
  • The problems of selecting a receiving processor and of selecting a queue are examples of the more general problem of selecting a resource on the basis of a stochastic property of that resource.
  • Rather than attempt to determine with certainty the value of the stochastic property for each resource, the method of the invention selects resources probabilistically, using estimates of the current values of the stochastic property of each available resource.
  • One method for selecting a resource from a plurality of resources includes determining a score for that resource on the basis of a stochastic property of the resource and then defining an interval corresponding to the resource. The extent of that interval depends on the score for that resource. A random number is then generated, and the resource is selected if the random number falls within the interval defined for it. The random number can, but need not, be uniformly distributed over the set of all intervals associated with the plurality of resources.
  • The method thus has the quality of spinning a roulette wheel having as many slots as there are resources to select from, with the extent of each slot depending on the value of the stochastic property of the resource associated with that slot. This ensures that resources having desirable values of the stochastic property are more likely to be selected, yet every resource retains some probability of being selected.
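The roulette-wheel scheme just described can be sketched as follows. This is an illustrative implementation, not code from the patent; the function and variable names are assumptions.

```python
import random

def select_resource(scores):
    """Pick an index with probability proportional to its score,
    i.e. spin the 'roulette wheel' whose slot widths are the scores."""
    total = sum(scores)  # extent of the whole sampling interval
    # Draw a point uniformly over [0, total).
    point = random.uniform(0.0, total)
    cumulative = 0.0
    for i, score in enumerate(scores):
        cumulative += score  # right edge of this resource's slot
        if point < cumulative:
            return i
    return len(scores) - 1  # guard against floating-point round-off
```

Dividing each score by `total` gives the selection probability of the corresponding resource, which is the normalization discussed below.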
  • In a first practice of the invention, the resource is a queue and the stochastic property is the queue-length of that queue.
  • In a second practice, the resource is a processor and the stochastic property is the workload of that processor.
  • In both cases, the method includes determining a score for each resource in the plurality of resources available for selection. This includes estimating a present value of the stochastic property of that resource, typically on the basis of prior measurements of that property.
  • In one aspect of the invention, the prior measurement is the last-known value, i.e. the most recent measurement, of the stochastic property for the resource in question.
  • The extent of the interval associated with a particular resource depends on the score associated with that resource. In one practice of the invention, the extent depends on the normalized score for that resource.
  • The score determined for a resource can be normalized by evaluating the sum of the scores assigned to all resources in the plurality and dividing the resource's score by that sum.
  • The method also includes an optional step of periodically updating the measurements on which the estimate of the current value of a stochastic property is based.
  • In one practice of the invention, a resource that has been selected is also polled to determine the current value of its stochastic property. This current value then becomes the new last-known value for the stochastic property of that resource.
  • FIG. 1 shows a data storage system;
  • FIG. 2 shows the contents of the local cache memory and the global memory of the data storage system of FIG. 1;
  • FIG. 3 is a flow-chart illustrating a queue-selection method;
  • FIG. 4 is a sampling interval for the queue-selection method illustrated in FIG. 3;
  • FIG. 5 shows the data-storage system of FIG. 1 in communication with a remote data-storage system;
  • FIG. 6 is a flow-chart illustrating a method for selecting a remote adaptor with which to communicate; and
  • FIG. 7 is a sampling interval for the remote adaptor selection method illustrated in FIG. 6.
  • A data-storage system 10 that carries out a resource selection method, as shown in FIG. 1, includes several adaptors 12 that interface with external devices.
  • These external devices can be data storage devices 14, such as disk drives, in which case the adaptors are called “disk adaptors.”
  • The external devices can also be hosts 16, i.e. processing systems that are directly accessed by users of the data-storage system 10, in which case the adaptors are referred to as “host adaptors.”
  • The external devices can also be remote data-storage systems 18 for mirroring data in the data-storage system 10, in which case the adaptors are referred to as “remote adaptors.”
  • Each adaptor 12 includes its own processor 20 and a local memory 22 available to that processor 20.
  • The data-storage system 10 also includes a common memory 24 that is accessible to all the adaptors.
  • The common memory 24 functions as a staging area for temporary storage of data.
  • The use of a common memory 24 improves performance of the data-storage system 10 by reducing the latency associated with accessing mass storage devices.
  • The various adaptors 12 in the data-storage system 10 cooperate with each other to ensure an orderly flow of data between the common memory 24 and the mass storage devices 14, hosts 16, and mirror sites 18.
  • To cooperate effectively, the adaptors 12 must communicate with each other. This communication is implemented by maintaining one or more queues 26 in a queue portion 28 of the common memory 24, as shown in FIG. 2.
  • When an adaptor 32 requires that a particular task be executed by another adaptor, it leaves, on a queue 26 within the queue portion 28, a message 30 requesting that the task be carried out.
  • An adaptor 34 scanning the queue can then encounter the message 30 and execute that task.
  • The adaptor 32 leaving the message is referred to as the “request-adaptor”; the adaptor 34 that carries out the task specified in the message is referred to as the “execution-adaptor.” It is understood, however, that these are logical designations only. Disk adaptors, host adaptors, and remote adaptors can each take on the role of a request-adaptor 32 or an execution-adaptor 34 at various times during the operation of the data-storage system 10.
  • Certain tasks in the data-storage system 10 are urgent and must be carried out promptly; other tasks are less time-sensitive. To accommodate this, the data-storage system 10 assigns different priorities to the queues 26.
  • When a request-adaptor 32 has a task to be executed, it determines the priority of the task and places it in the queue 26 whose priority is appropriate to the urgency of the task.
  • Each queue 26 contains a varying number of messages 30. This number is referred to as the queue-length.
  • The queue-length has a lower bound of zero and an upper bound that depends on the configuration of the data-storage system 10.
  • In the course of normal operation, request-adaptors 32 add new messages to a queue 26 while execution-adaptors 34 carry out the requests specified in messages and delete those messages from the queue 26.
  • As a result, the queue-length is a time-varying random number.
  • When an execution-adaptor 34 becomes free to execute a processing task, it selects a queue 26 and executes the processing task specified by the topmost message 36 in that queue 26.
  • The execution-adaptor 34 selects the queue 26 so as to minimize the waiting time of all pending messages in all queues. In most cases, this requires that the execution-adaptor 34 select the queue 26 having the greatest queue-length.
  • Because the queue-length is a time-varying random number, the execution-adaptor 34 cannot know with certainty the length of each queue 26 at the moment it must select one. Even if the execution-adaptor 34 were to incur the overhead of polling each queue 26, other adaptors 12 could add or delete a message 30 from a queue 26 that has just been polled. This introduces error into the execution-adaptor's assessment of the queue-lengths.
  • To avoid having to poll each queue 26 whenever it becomes free to carry out a request, the execution-adaptor 34 caches, in its local memory 22, a queue-length table 38 listing the length of each queue 26 at the time the execution-adaptor 34 last carried out a request pending on that queue 26.
  • The table-entries in the queue-length table 38 are thus the last-known queue-lengths of the queues 26.
  • These last-known queue-lengths serve as estimates of the actual queue-lengths at the moment the execution-adaptor 34 selects a queue 26.
  • The execution-adaptor 34 updates a queue's entry in the queue-length table 38 whenever it accesses that queue 26 to carry out a request. Since the execution-adaptor 34 must access the queue 26 anyway to carry out a pending request, there is little additional overhead in polling the queue 26 to obtain its queue-length.
  • The execution-adaptor 34 also maintains a priority table 40 listing the priority value assigned to each queue 26.
  • A high-priority queue is characterized by a large integer in the priority table 40.
  • Lower-priority queues are characterized by smaller integers in the priority table 40.
  • The execution-adaptor 34 selects a queue 26 by first assigning 42 a score to each queue 26. It does so by weighting the estimated queue-length of each queue 26 by the priority assigned to that queue 26. The result is referred to as the “effective queue-length” of that queue 26.
  • The execution-adaptor 34 then sums 44 the effective queue-lengths of all queues 26 and defines 46 a sampling interval 48 whose extent equals that sum, as shown in FIG. 4.
  • The execution-adaptor 34 then divides 50 the sampling interval 48 into as many queue-intervals 52 as there are queues 26.
  • Each queue-interval 52 has an extent that corresponds to the effective queue-length of the queue 26 with which it is associated.
  • In one practice, the extent of each queue-interval 52 is the effective queue-length normalized by the extent of the sampling interval 48.
  • Each queue-interval 52 is disjoint from all the others. As a result, each point on the sampling interval 48 is associated with one, and only one, queue 26.
  • The execution-adaptor 34 next executes 54 a random number process 56 (see FIG. 2) that generates a random number uniformly distributed over the sampling interval 48.
  • The random number thus falls within one of the queue-intervals 52 that together form the sampling interval 48.
  • The probability that the random number falls within any particular queue-interval 52 depends on the last-known effective queue-length of the corresponding queue 26 relative to the last-known effective queue-lengths of all the other queues.
  • The execution-adaptor 34 then accesses 58 the queue 26 corresponding to the queue-interval 52 that contains the random number and carries out 60 the task specified by the topmost message 36 in that queue 26. Once the task is completed, the execution-adaptor 34 deletes 62 the topmost message 36 from the selected queue 26 and polls 64 the selected queue 26 to obtain its queue-length. The execution-adaptor 34 then updates 66 the corresponding entry in its queue-length table 38.
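The queue-selection steps above (score, sum, divide, sample) can be sketched as follows. All names are illustrative stand-ins for the queue-length table 38 and priority table 40, not code from the patent.

```python
import random

def select_queue(last_known_lengths, priorities):
    """Pick a queue index with probability proportional to its
    'effective queue-length': last-known length weighted by priority."""
    effective = [n * p for n, p in zip(last_known_lengths, priorities)]
    total = sum(effective)              # extent of the sampling interval
    if total == 0:
        # All queues look empty; fall back to a uniform choice.
        return random.randrange(len(effective))
    point = random.uniform(0.0, total)  # uniform over the sampling interval
    cumulative = 0.0
    for i, e in enumerate(effective):
        cumulative += e                 # right edge of this queue-interval
        if point < cumulative:
            return i
    return len(effective) - 1           # floating-point round-off guard
```

After servicing the topmost message of the selected queue, the execution-adaptor would poll that queue and write the fresh length back into `last_known_lengths`, refreshing the estimate at little extra cost.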
  • The queue-selection method described above avoids polling each queue 26 to obtain its current queue-length.
  • The method can thus rapidly select a queue 26 that, while not guaranteed to have the longest effective queue-length, most likely does.
  • The method also avoids neglecting any queue 26. This ensures that tasks waiting on queues with a low effective queue-length are nevertheless performed within a reasonable period. It also ensures that such queues are occasionally polled to see whether their effective queue-lengths have changed.
  • A data-storage system 10 can be configured to maintain several queues 26, all of which have the same priority.
  • Such a data-storage system 10 offers more flexibility in load balancing than one having only a single queue, because several adaptors can carry out pending requests simultaneously.
  • The foregoing method can also be carried out in a data-storage system 10 in which all the queues 26 have the same priority.
  • In that case, the effective queue-length can simply be set equal to the queue-length, making the priority table 40 unnecessary.
  • Alternatively, the entries in the priority table 40 can all be set equal to one another.
  • The method described above can be adapted to select any resource on the basis of a stochastic property of that resource.
  • In the first case above, the resource is a queue 26 and the stochastic property providing the basis for selection is the length of that queue.
  • In the second case, the resource is a remote adaptor on a remote mirroring site 18 and the stochastic property providing the basis for selection is the processing workload of that remote adaptor.
  • A distinction between the two cases is that in the first it is preferable to select the resource having a high value of the stochastic property, while in the second it is preferable to select the resource having a low value. This distinction is readily accommodated in the second case by working with the inverse of the stochastic property rather than with the stochastic property directly.
  • A first data-storage system 68 sometimes communicates with a second data-storage system 70.
  • This occurs, for example, when a host adaptor 72 associated with the first data-storage system 68 writes to a device 74 that is mirrored on a mirror device 76 controlled by a disk adaptor 78 associated with the second data-storage system 70.
  • To communicate, a remote adaptor 80 on the first data-storage system 68 establishes communication with a selected remote adaptor 82 on the second data-storage system 70.
  • The remote adaptor on the first data-storage system 68 will be referred to as the “sending adaptor” 80, and the remote adaptors on the second data-storage system 70 will be referred to as the “receiving adaptors” 82.
  • Each remote adaptor 80, 82 has its own processor 82 and local memory 84.
  • The designations “receiving adaptor” and “sending adaptor” are logical designations only.
  • The second data-storage system 70 may itself have devices that are mirrored on the first data-storage system 68, in which case a remote adaptor 82 of the second data-storage system 70 can function as a sending adaptor and a remote adaptor 80 of the first data-storage system 68 can function as a receiving adaptor.
  • To communicate with the second data-storage system 70, the sending adaptor 80 selects one of the available receiving adaptors 82.
  • Preferably, the sending adaptor 80 selects the receiving adaptor 82 that is the least busy.
  • However, the sending adaptor 80 cannot know with certainty whether the information it relies upon in selecting a receiving adaptor 82 is accurate. For example, in the brief interval between being polled by a sending adaptor 80 and being asked to carry out a task by that sending adaptor 80, a receiving adaptor 82 may have taken on requests sent by other sending adaptors 80.
  • To address this, the sending adaptor 80 maintains, in its local memory 84, a workload table 86 containing information indicative of the workload carried by each receiving adaptor 82 at the time the sending adaptor 80 last engaged in an I/O transaction with that receiving adaptor 82.
  • The workload associated with a particular receiving adaptor 82 is thus the last-known workload of that receiving adaptor 82.
  • The receiving-adaptor selection method uses the last-known workloads in the workload table 86 to estimate how busy each receiving adaptor 82 is at the time the sending adaptor 80 selects one.
  • The sending adaptor 80 updates the workload table 86 entry for a receiving adaptor 82 whenever it engages in an I/O transaction with that receiving adaptor 82. Since the sending adaptor 80 must establish communication with the receiving adaptor 82 anyway to carry out the I/O transaction, there is little additional overhead in polling the receiving adaptor 82 to obtain a measure of how busy it currently is. In response to such polling, the receiving adaptor 82 provides an integer indicating the number of tasks it is handling concurrently.
  • Selection of a receiving adaptor 82 with which to communicate begins with the sending adaptor 80 assigning 88 a score to each receiving adaptor 82.
  • The sending adaptor 80 does so by weighting the reciprocal of the workload-table entry for each receiving adaptor 82 by an integer large enough to avoid time-consuming floating-point operations in the steps that follow.
  • The resulting score is referred to as the “inverse workload” of that receiving adaptor 82.
  • The sending adaptor 80 then sums 90 the inverse workloads of all receiving adaptors 82 and defines 92 a sampling interval 94 whose length equals that sum, as shown in FIG. 7.
  • The sampling interval 94 is then subdivided 96 into as many sub-intervals 98 as there are receiving adaptors 82.
  • Each sub-interval 98 has a length that corresponds to the inverse workload of the receiving adaptor 82 with which it is associated.
  • Each sub-interval 98 is disjoint from all the others. As a result, each point on the sampling interval 94 is associated with one, and only one, receiving adaptor 82.
  • The sending adaptor 80 next executes 100 a random number process 102 that generates a random number uniformly distributed over the sampling interval 94.
  • The random number thus falls within the sub-interval 98 corresponding to one of the receiving adaptors 82.
  • The probability that the random number falls within the sub-interval 98 of a particular receiving adaptor 82 depends on the inverse workload of that receiving adaptor 82 relative to the inverse workloads of all the others.
  • The sending adaptor 80 then establishes 104 communication with, and sends 106 a message to, the receiving adaptor 82 corresponding to the sub-interval 98 that contains the random number.
  • The sending adaptor 80 then polls 108 the receiving adaptor 82 to obtain a new estimate of its workload and updates 110 the corresponding entry in its workload table 86.
  • In this way, the sending adaptor 80 can rapidly select a receiving adaptor 82 that, although not guaranteed to have the smallest workload, most likely does. Because each receiving adaptor 82 has some probability of being selected, the process avoids neglecting any receiving adaptor 82. This helps balance the load among the receiving adaptors 82 and ensures that adaptors once found to be busy are occasionally polled to see whether they have since become relatively idle.
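The receiving-adaptor selection just described can be sketched as follows. Scaling the reciprocal workloads by a large integer keeps the interval arithmetic in integers, as the text suggests; the scale factor, the names, and the treatment of a zero (idle) workload as 1 are assumptions for illustration, not details from the patent.

```python
import random

SCALE = 1_000_000  # large integer so reciprocal workloads stay integral

def select_receiving_adaptor(last_known_workloads):
    """Pick an adaptor index with probability proportional to its scaled
    inverse workload, so lightly loaded adaptors get larger sub-intervals."""
    # Treat a reported workload of 0 (idle) as 1 to avoid division by zero.
    inverse = [SCALE // max(w, 1) for w in last_known_workloads]
    total = sum(inverse)                # length of the sampling interval
    point = random.randrange(total)     # integer draw, no floating point
    cumulative = 0
    for i, inv in enumerate(inverse):
        cumulative += inv               # right edge of this sub-interval
        if point < cumulative:
            return i
    return len(inverse) - 1
```

After messaging the selected adaptor, the sender would poll it and store the reported task count back into `last_known_workloads`, mirroring the table update in step 110.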


Abstract

A method for selecting a resource from a plurality of resources includes determining a score for that resource on the basis of a stochastic property of that resource. An interval corresponding to that resource is then defined to have an extent that depends on the score. A particular resource is then selected by generating a random number and selecting that resource when the random number falls within the interval.

Description

  • This invention relates to distributed computer systems, and in particular, to the selection of system resources by a constituent processor of a distributed computer system.
  • BACKGROUND
  • A distributed computer system includes a large number of processors, each with its own local memory. These processors all share a common memory. The common memory includes several queues in which are listed instructions for various processing tasks waiting to be performed. When a processor becomes free, it selects one of these queues and carries out the processing task waiting at the front of the queue.
  • In selecting a queue, the processor attempts to minimize the waiting time of each processing task in each queue. Since waiting time depends, in part, on queue length, it is useful for the processor to know how many tasks are waiting in each queue before selecting a queue.
  • In a distributed computer system, several other processors are constantly adding and deleting processing tasks from the queues. This causes the length of each queue to change unpredictably. As a result, in order for a processor to know the length of a queue, it must take the time to poll the queue. However, if each processor, upon completing a processing task, were to poll each queue, the overhead associated with selecting a queue becomes unacceptably high.
  • A distributed computer system occasionally communicates with other distributed computer systems. To do so, a sending processor from a source distributed computer system sends a message to one of the constituent processors on a target distributed computer system. A prerequisite to doing so is the selection of a receiving processor from among the constituent processors of the target system.
  • Preferably, a sending processor selects, as the receiving processor, that processor on the target system that is the least busy. However, in doing so, the sending processor faces a problem similar to that described above in the context of selecting a queue. Short of polling each processor in the target system, there is no simple and reliable mechanism for identifying the processor that is the least busy.
  • SUMMARY
  • The problem of selecting a receiving processor and selecting a queue are examples of the more general problem of selecting a resource on the basis of a stochastic property of that resource. Rather than attempt to determine with certainty the value of the stochastic property for each resource, the method of the invention selects resources probabilistically, using estimates of the current, or present values of the stochastic property for each of the available resources.
  • One method for selecting a resource from a plurality of resources, includes determining a score for that resource on the basis of a stochastic property of the resource and then defining an interval corresponding to the resource. The extent of that interval is selected to depend on the score for that resource. A random number, is then generated and that resource is selected if the random number falls within the interval defined for that resource. The random number can, but need not be, uniformly distributed over the set of all intervals associated with the plurality of resources.
  • The method thus has the quality of spinning a roulette wheel having as many slots as there are resources to select from, with the extent of each slot being dependent on the value of the stochastic property of the resource associated with that slot. This ensures that resources having desirable values of that stochastic property are more likely to be selected but that all resources have some probability of being selected.
  • In a first practice of the invention, the resource is selected to be a queue and the stochastic property of the resource is the queue-length of the queue. In a second practice of the invention, the resource is a processor and the stochastic property is the workload of that processor.
  • In both cases, the method includes determining a score for each resource from the plurality of resources available for selection. This includes estimating a present value of the stochastic property of that resource, typically on the basis of prior measurements of that stochastic property. In one aspect of the invention, the prior measurement is the last-known value, or most recent measurement of that stochastic property for the resource in question.
  • The extent of the interval associated with a particular resource depends on the score associated with that resource. In one practice of the invention, the extend depends on the normalized score for that resource. The score determined for a resource can be normalized by evaluating a sum of scores assigned to each resource in the plurality of resources and normalizing the score assigned to the resource by the sum of scores.
  • The method also includes an optional step of periodically updating the measurements upon which an estimate of a current value of a stochastic property are based. In one practice of the invention, a resource that has been selected is also polled to determine the current value of the stochastic property for that resource. This current value then becomes the new last-known value for the stochastic property of that resource.
  • These and other features of the invention will be apparent from the following detailed description and the accompanying figures in which:
  • BRIEF DESCRIPTION OF THE FIGURES
  • FIG. 1 shows a data storage system;
  • FIG. 2 shows the contents of the local cache memory and the global memory of the data storage system of FIG. 1;
  • FIG. 3 is a flow-chart illustrating a queue-selection method;
  • FIG. 4 is a sampling interval for the queue-selection method illustrated in FIG. 3;
  • FIG. 5 shows the data-storage system of FIG. 1 in communication with a remote data-storage system;
  • FIG. 6 is a flow-chart illustrating a method for selecting a remote adaptor with which to communicate; and
  • FIG. 7 is a sampling interval for the remote adaptor selection method illustrated in FIG. 6.
  • DETAILED DESCRIPTION
  • A data-storage system 10 for that carries out a resource selection method, as shown in FIG. 1, includes several adaptors 12 that interface with external devices. These external devices can be data storage devices 14, such as disk drives, in which case the adaptors are called “disk adaptors.” The external devices can also be hosts 16, or processing systems that are directly accessed by users of the data-storage system 10, in which case they are referred to as “host adaptors.” The external devices can also be remote data-storage systems 18 for mirroring data in the data-storage system 10, in which case the adaptors are referred to as “remote adaptors.” Each adaptor 12 includes its own processor 20 and a local memory 22 available to the processor 20.
  • The data-storage system 10 also includes a common memory 24 that is accessible to all the adaptors. The common memory 24 functions as a staging area for temporary storage of data. The use of a common memory-24 improves performance of the data-storage system 10 by reducing the latency associated with accessing mass storage devices.
  • The various adaptors 12 in the data-storage system 10 cooperate with each other to assure an orderly flow of data from the common memory 24 to or from the mass storage devices 14, hosts 16, and mirror sites 18. To cooperate effectively, the adaptors 12 must communicate with each other. This communication is implemented by maintaining one or more queues 26 in a queue portion 28 of the common memory 24, as shown in FIG. 2. When an adaptor 32 requires that a particular task be executed by another adaptor, it leaves, on a queue 26 within the queue portion 28, a message 30 requesting that the task be carried out. An adaptor 34 scanning the queue can then encounter the message 30 and execute that task.
  • Throughout the remainder of this specification, the adaptor 32 leaving the message is referred to as the “request-adaptor;” the adaptor 34 that carries out the task specified in the message is referred to as the “execution-adaptor.” It is understood, however, that these are logical designations only. Disk adaptors, host adaptors, and remote adaptors can each take on the role of a request-adaptor 32 or an execution-adaptor 34 at various times during the operation of the data-storage system 10.
  • Certain tasks in the data-storage system 10 are urgent and must be carried out promptly. Other tasks are less time-sensitive. To accommodate this, the data-storage system 10 assigns different priorities to the queues 26. When a request-adaptor 32 has a task to be executed, it determines the priority of the task and places it in the queue 26 whose priority is appropriate to the urgency of the task.
  • Each queue 26 contains a varying number of messages 30. This number is referred to as the queue-length. The queue-length has a lower bound of zero and an upper bound that depends on the configuration of the disk-storage system 10. In the course of normal operation, request-adaptors 32 add new messages to the queue 26 and execution-adaptors 34 carry out requests specified in messages and delete those messages from the queue 26. As a result, the queue-length is a time-varying random number.
  • When an execution-adaptor 34 becomes free to execute a processing task, it selects a queue 26 and executes the processing task specified by a topmost message 36 in that queue 26. The execution-adaptor 34 selects the queue 26 so as to minimize the waiting time for all pending messages in all queues. In most cases, this requires that the execution-adaptor 34 select the queue 26 having the greatest queue-length.
  • Because the queue-length is a time-varying random number, the execution-adaptor 34 cannot know with certainty the length of each queue 26 at the moment when it is necessary to select a queue 26. Even if the execution-adaptor 34 were to incur the overhead associated with polling each queue 26, it would be possible for other adaptors 12 to add or delete a message 30 from a queue 26 that has just been polled by the execution-adaptor 34. This introduces error into the execution-adaptor's assessment of the queue-lengths.
  • To avoid having to poll each queue 26 whenever it becomes free to carry out a request from one of the queues, the execution-adaptor 34 caches, in its local memory 22, a queue-length table 38 listing the length of each queue 26 at the time that the execution-adaptor 34 last carried out a request pending on that queue 26. The table-entries in the queue-length table 38 are thus the last-known queue-lengths for each queue 26. These last-known queue-lengths function as estimates of the queue-lengths at the moment when the execution adaptor 34 selects a queue 26.
  • The execution-adaptor 34 updates a queue's entry in the queue-length table 38 whenever it accesses that queue 26 to carry out a request. Since the execution-adaptor 34 already has to access the queue 26 in order to carry out a request pending on that queue 26, there is little additional overhead associated with polling the queue 26 to obtain its queue-length.
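The update step described in the preceding paragraph can be sketched as follows. This is a minimal illustration, not the patented implementation; the names (`queues`, `queue_length_table`, `Msg.execute`) are hypothetical:

```python
def carry_out_topmost(queues, queue_length_table, selected):
    """Hypothetical sketch: execute and delete the topmost message of the
    selected queue, then refresh that queue's cached queue-length. Because
    the queue is already being accessed, re-polling its length is nearly free."""
    message = queues[selected].pop(0)                     # topmost message
    message.execute()                                     # carry out the task
    queue_length_table[selected] = len(queues[selected])  # refresh cached length
```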
  • The execution-adaptor 34 also maintains a priority table 40 listing the priority values assigned to each queue 26. A high-priority queue is characterized by a large integer in the priority table 40; a lower-priority queue is characterized by a smaller integer in the priority table 40.
  • Referring now to FIG. 3, the execution-adaptor 34 selects a queue 26 by first assigning 42 a score to each queue 26. It does so by weighting the estimate of the queue-length for each queue 26 with the priority assigned to that queue 26. The result is referred to as the “effective queue-length” for that queue 26. The execution-adaptor 34 then sums 44 the effective queue-lengths for all queues 26 and defines 46 a sampling interval 48 having an extent equal to that sum, as shown in FIG. 4.
  • The execution adaptor 34 then divides 50 the sampling interval 48 into as many queue-intervals 52 as there are queues 26. Each queue-interval 52 has an extent that corresponds to the effective queue-length of the queue 26 with which it is associated. In the illustrated embodiment, the extent of each queue-interval 52 is the effective queue-length normalized by the extent of the sampling interval 48. In addition, each queue-interval 52 is disjoint from all other queue-intervals. As a result, each point on the sampling interval is associated with one, and only one, queue 26.
  • Once the queue-intervals 52 are defined, the execution-adaptor 34 executes 54 a random number process 56 (see FIG. 2) that generates a random number having a value that is uniformly distributed over the sampling interval 48. The random number will thus have a value that places it in one of the queue-intervals 52 that together form the sampling interval 48. The probability that the random number will be in any particular queue-interval 52 depends on the last-known effective queue-length of the queue 26 corresponding to that queue-interval relative to the last-known effective queue-lengths of all other queues.
  • The execution-adaptor 34 then accesses 58 the queue 26 corresponding to the queue-interval 52 that contains the random number and carries out 60 the task specified by the topmost message 36 in that selected queue 26. Once the task is completed, the execution-adaptor 34 deletes 62 the topmost message 36 from the selected queue 26 and polls 64 the selected queue 26 to obtain its queue-length. The execution-adaptor 34 then updates 66 the entry in its queue-length table 38 that corresponds to the selected queue 26.
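The selection steps of FIG. 3 and FIG. 4 can be illustrated with a short Python sketch. This is an illustration under stated assumptions, not the patented implementation itself; the function name and table representations are hypothetical:

```python
import random

def select_queue(queue_length_table, priority_table):
    """Pick a queue index with probability proportional to its effective
    queue-length (last-known length weighted by the queue's priority)."""
    # Step 42: score each queue with its effective queue-length.
    effective = [length * priority
                 for length, priority in zip(queue_length_table, priority_table)]
    # Steps 44/46: the sampling interval's extent is the sum of the scores.
    total = sum(effective)
    if total == 0:
        return random.randrange(len(effective))   # all queues look empty
    # Step 56: draw a uniformly distributed point on the sampling interval,
    # then find the disjoint queue-interval that contains it.
    point = random.uniform(0, total)
    cumulative = 0.0
    for index, extent in enumerate(effective):
        cumulative += extent
        if point < cumulative:
            return index
    # Floating-point edge case: fall back to the last non-empty queue.
    return max(i for i, e in enumerate(effective) if e > 0)
```

After executing and deleting the topmost message on the selected queue, the adaptor would re-poll that queue and overwrite its entry in `queue_length_table`, as in steps 62 through 66.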
  • By using a locally-cached last-known queue-length to formulate an estimate of a current effective queue-length, the queue-selection method described above avoids polling each queue 26 to obtain its current queue-length. The foregoing queue-selection method can thus rapidly select a queue 26 that, while not guaranteed to have the longest effective queue-length, most likely does. Because each queue 26 has some probability of being selected, the queue-selection method described above also avoids neglecting any queue 26. This ensures that tasks waiting on queues having a low effective queue-length are nevertheless performed within a reasonable waiting period. This also ensures that queues having a low effective queue-length are occasionally polled to see if their effective queue-lengths have changed.
  • A data-storage system 10 can be configured to maintain several queues 26, all of which have the same priority. Such a data-storage system 10 offers more flexibility in load balancing than a data-storage system having only a single queue because several adaptors can carry out pending requests simultaneously.
  • The foregoing method can also be carried out in a data-storage system 10 in which all the queues 26 have the same priority. In such a data-storage system 10, the effective queue-length can be set equal to the queue-length, in which case the priority table 40 is unnecessary. Alternatively, the entries in the priority table 40 can be set equal to each other.
  • The method described above can be adapted to select any resource on the basis of a stochastic property of that resource. In the application described above, the resource is a queue 26 and the stochastic property that provides the basis for selection is the length of that queue. In the application that follows, the resource is a remote adaptor on a remote mirroring site 18 and the stochastic property that provides the basis for selection is the processing workload associated with the remote adaptor.
  • A distinction between the two cases is that in the first case, it is preferable to select the resource having a high value of the stochastic property and in the second case, it is preferable to select the resource having a low value of the stochastic property. This distinction is readily accommodated in the second case by working with the inverse of the stochastic property rather than with the stochastic property directly.
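The inversion described above can be illustrated with a small helper. This is a minimal sketch; the function name and the scale factor are assumptions, the latter standing in for an integer large enough to avoid floating-point arithmetic:

```python
def selection_weights(values, prefer_high, scale=1_000_000):
    """Map observed stochastic-property values to sampling weights.
    When high values are preferred (queue-lengths), use them directly;
    when low values are preferred (workloads), weight by the inverse,
    scaled up to an integer to avoid floating-point arithmetic."""
    if prefer_high:
        return list(values)
    return [scale // max(value, 1) for value in values]   # guard against zero
```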
  • Referring now to FIG. 5, a first data-storage system 68 sometimes communicates with a second data-storage system 70. For example, when a host adaptor 72 associated with the first data-storage system 68 writes to a device 74 that is mirrored on a mirror device 76 controlled by a disk adaptor 78 associated with the second data-storage system 70, a remote adaptor 80 on the first data-storage system 68 establishes communication with a selected remote adaptor 82 on the second data-storage system 70. The remote adaptor on the first data-storage system 68 will be referred to as the “sending adaptor” 80 and the remote adaptors on the second data-storage system 70 will be referred to as the “receiving adaptors” 82. Each remote adaptor 80, 82 has its own processor 82 and local memory 84.
  • It is understood that the designations “receiving adaptor” and “sending adaptor” are logical designations only. For example, the second data-storage system 70 may have devices that are mirrored on the first data-storage system 68, in which case a remote adaptor 82 of the second data storage system 70 can function as a sending adaptor and a remote adaptor 80 on the first data storage system 68 can function as a receiving adaptor.
  • In establishing communication, the sending adaptor 80 selects one of the available receiving adaptors 82. Preferably, the sending adaptor 80 selects the receiving adaptor 82 that is the least busy. However, because of the overhead associated with communicating with the receiving adaptors 82, it is impractical for the sending adaptor 80 to poll each of the receiving adaptors 82 to determine which of the receiving adaptors 82 is the least busy.
  • In addition, the sending adaptor 80 cannot know with certainty whether the information it relies upon in selecting a receiving adaptor 82 is accurate. For example, it is possible that, in the brief interval between being polled by a sending adaptor 80 and being asked to carry out a task by the sending adaptor 80, a receiving adaptor 82 may have taken on requests sent by other sending adaptors 80.
  • To avoid having to poll each receiving adaptor 82, the sending adaptor 80 maintains, in its local memory 84, a workload table 86 having information indicative of the workload carried by each receiving adaptor 82 at the time that the sending adaptor 80 last engaged in an I/O transaction with that receiving adaptor 82. The workload associated with a particular receiving adaptor 82 is thus the last-known workload for that receiving adaptor 82. The receiving-adaptor selection method uses the last-known workloads of the receiving adaptors in the workload table 86 to estimate how busy each receiving adaptor 82 is at the time that the sending adaptor 80 selects a receiving adaptor 82.
  • The sending adaptor 80 updates the entry in the workload table 86 that corresponds to a receiving adaptor 82 whenever it engages in an I/O transaction with that receiving adaptor 82. Since the sending adaptor 80 already has to establish communication with the receiving adaptor 82 in order to engage in an I/O transaction with that adaptor 82, there is little additional overhead associated with polling the receiving adaptor 82 to obtain a measure of how busy that receiving adaptor 82 currently is. In response to polling by the sending adaptor 80, the receiving adaptor 82 provides an integer indicative of the number of tasks it is handling concurrently.
  • Referring to FIG. 6, selection of a receiving adaptor 82 with which to communicate begins with the sending adaptor 80 assigning 88 a score to each receiving adaptor 82. The sending adaptor 80 does so by weighting the reciprocal of the table entry associated with each receiving adaptor 82 by an integer large enough to avoid time-consuming floating point operations in the steps that follow. The resulting score is referred to as the “inverse workload” for that receiving adaptor 82. The sending adaptor 80 then sums 90 the inverse workloads for all receiving adaptors 82 and defines 92 a sampling interval 94 having a length equal to that sum, as shown in FIG. 7.
  • The sampling interval 94 is then subdivided 96 into as many sub-intervals 98 as there are receiving adaptors 82. Each sub-interval 98 has a length that corresponds to the inverse workload of the receiving adaptor 82 with which it is associated. In addition, each sub-interval 98 is disjoint from all other sub-intervals. As a result, each point on the sampling interval 94 is associated with one, and only one, receiving adaptor 82.
  • Once the sub-intervals are defined, the sending adaptor 80 executes 100 a random number process 102 that generates a random number having a value that is uniformly distributed over the sampling interval 94. The random number will thus have a value that places it in a sub-interval 98 corresponding to one of the receiving adaptors 82. The probability that the random number will be in a sub-interval 98 corresponding to a particular receiving adaptor 82 depends on the inverse workload of that receiving adaptor 82 relative to the inverse workloads of all other receiving adaptors.
  • The sending adaptor 80 then establishes 104 communication with and sends 106 a message to the selected receiving adaptor 82 corresponding to the sub-interval 98 associated with the value of the random number. The sending adaptor 80 then polls 108 the receiving adaptor 82 to obtain a new estimate of its workload and updates 110 the entry in its workload table 86 that corresponds to that receiving adaptor 82.
  • By using a locally-cached last-known workload rather than polling each receiving adaptor 82 to obtain a current workload, the sending adaptor 80 can rapidly select a receiving adaptor 82 that, although not guaranteed to have the smallest workload, most likely does. Because each receiving adaptor 82 has some probability of being selected, the probabilistic selection process described above avoids neglecting any receiving adaptor 82. This ensures load balancing among the receiving adaptors 82. This also ensures that receiving adaptors 82 that were once found to be busy are occasionally polled to see if they have since become relatively idle.
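The receiving-adaptor selection of FIG. 6 and FIG. 7 can be sketched in the same way. Again the names are hypothetical, and the scale factor is an assumption standing in for the large integer mentioned above:

```python
import random

def select_receiving_adaptor(workload_table, scale=1_000_000):
    """Choose a receiving adaptor with probability proportional to the
    inverse of its last-known workload (number of concurrent tasks)."""
    # Step 88: score each adaptor with its scaled inverse workload; the
    # floor of 1 keeps every adaptor selectable, so none is neglected.
    inverse = [max(scale // max(workload, 1), 1) for workload in workload_table]
    # Steps 90/92: the sampling interval's length is the sum of the scores.
    total = sum(inverse)
    # Step 102: draw a uniform integer on the sampling interval and find
    # the disjoint sub-interval that contains it.
    point = random.randrange(total)
    cumulative = 0
    for index, weight in enumerate(inverse):
        cumulative += weight
        if point < cumulative:
            return index
    return len(inverse) - 1   # unreachable with positive weights; defensive
```

After the I/O transaction completes, the sending adaptor would poll the chosen receiving adaptor and write its reported task count back into `workload_table`, as in steps 108 and 110.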

Claims (21)

1.-26. (canceled)
27. A method for selecting a receiving adaptor in connection with communication between storage devices, comprising:
assigning a score to each of a plurality of possible receiving adaptors;
defining an interval for each of said possible receiving adaptors, wherein each interval varies according to a corresponding score of each of said plurality of possible receiving adaptors;
generating a random number; and
selecting a particular receiving adaptor from said plurality of possible receiving adaptors, wherein said particular receiving adaptor has an interval corresponding to said random number.
28. A method, according to claim 27, further comprising:
establishing communication with said particular receiving adaptor.
29. A method, according to claim 27, wherein generating a random number includes generating a uniformly distributed random number.
30. A method, according to claim 27, wherein each of said scores varies according to an inverse workload value of each of said possible receiving adaptors.
31. A method, according to claim 30, further comprising:
summing each of the inverse workload values to obtain a normalizing factor.
32. A method, according to claim 31, wherein each of said intervals for each of said possible receiving adaptors is a portion of said normalizing factor having a size that varies according to a corresponding one of each of said inverse workload values.
33. A method, according to claim 32, wherein each of said inverse workload values is obtained from a workload table.
34. A method, according to claim 33, further comprising:
locally caching the workload table.
35. A method, according to claim 33, wherein said workload table is updated each time data is sent from one of the storage devices to another one of the storage devices.
36. Computer software, provided in a computer-readable medium, for selecting a receiving adaptor in connection with communication between storage devices, comprising:
executable code that assigns a score to each of a plurality of possible receiving adaptors;
executable code that defines an interval for each of said possible receiving adaptors, wherein each interval varies according to a corresponding score of each of said plurality of possible receiving adaptors;
executable code that generates a random number; and
executable code that selects a particular receiving adaptor from said plurality of possible receiving adaptors, wherein said particular receiving adaptor has an interval corresponding to said random number.
37. Computer software, according to claim 36, further comprising:
executable code that establishes communication with said particular receiving adaptor.
38. Computer software, according to claim 36, wherein executable code that generates a random number generates a uniformly distributed random number.
39. Computer software, according to claim 36, wherein each of said scores varies according to an inverse workload value of each of said possible receiving adaptors.
40. Computer software, according to claim 39, further comprising:
executable code that sums each of the inverse workload values to obtain a normalizing factor.
41. Computer software, according to claim 40, wherein each of said intervals for each of said possible receiving adaptors is a portion of said normalizing factor having a size that varies according to a corresponding one of each of said inverse workload values.
42. Computer software, according to claim 41, wherein each of said inverse workload values is obtained from a workload table.
43. Computer software, according to claim 42, further comprising:
executable code that locally caches the workload table.
44. Computer software, according to claim 42, further comprising:
executable code that updates said workload table each time data is sent from one of the storage devices to another one of the storage devices.
45. A data storage system, comprising:
a common memory;
a host adaptor, coupled with the common memory, that has an interface for a host;
a mass storage device coupled with the common memory;
a remote adaptor coupled with the common memory, wherein the remote adaptor causes data written to the mass storage device to be transmitted to another storage device having a plurality of possible receiving adaptors that receive the data, and
computer software, provided in a computer-readable medium, that selects a particular one of the possible receiving adaptors, said computer software including executable code that assigns a score to each of the plurality of possible receiving adaptors, executable code that defines an interval for each of said possible receiving adaptors, wherein each interval varies according to a corresponding score of each of said plurality of possible receiving adaptors, executable code that generates a random number, and executable code that selects the particular receiving adaptor from said plurality of possible receiving adaptors, wherein said particular receiving adaptor has an interval corresponding to said random number.
46. A data storage system, according to claim 45, wherein each of said scores varies according to an inverse workload value of each of said possible receiving adaptors.
US11/096,766 2001-05-08 2005-04-01 Selection of a resource in a distributed computer system Abandoned US20050172083A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/096,766 US20050172083A1 (en) 2001-05-08 2005-04-01 Selection of a resource in a distributed computer system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/851,039 US6886164B2 (en) 2001-05-08 2001-05-08 Selection of a resource in a distributed computer system
US11/096,766 US20050172083A1 (en) 2001-05-08 2005-04-01 Selection of a resource in a distributed computer system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/851,039 Continuation US6886164B2 (en) 2001-05-08 2001-05-08 Selection of a resource in a distributed computer system

Publications (1)

Publication Number Publication Date
US20050172083A1 true US20050172083A1 (en) 2005-08-04

Family

ID=25309800

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/851,039 Expired - Lifetime US6886164B2 (en) 2001-05-08 2001-05-08 Selection of a resource in a distributed computer system
US11/096,766 Abandoned US20050172083A1 (en) 2001-05-08 2005-04-01 Selection of a resource in a distributed computer system

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/851,039 Expired - Lifetime US6886164B2 (en) 2001-05-08 2001-05-08 Selection of a resource in a distributed computer system

Country Status (6)

Country Link
US (2) US6886164B2 (en)
JP (1) JP2004520655A (en)
CN (1) CN1259628C (en)
DE (1) DE10296791T5 (en)
GB (1) GB2384083B (en)
WO (1) WO2002091217A1 (en)

Families Citing this family (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7382782B1 (en) * 2002-04-12 2008-06-03 Juniper Networks, Inc. Packet spraying for load balancing across multiple packet processors
US8261029B1 (en) 2007-03-28 2012-09-04 Emc Corporation Dynamic balancing of writes between multiple storage devices
US8589283B2 (en) * 2007-08-30 2013-11-19 Ccip Corp. Method and system for loan application non-acceptance follow-up
US20090060165A1 (en) * 2007-08-30 2009-03-05 Pradeep Kumar Dani Method and System for Customer Transaction Request Routing
US9152995B2 (en) 2007-08-30 2015-10-06 Cc Serve Corporation Method and system for loan application non-acceptance follow-up
US9128771B1 (en) * 2009-12-08 2015-09-08 Broadcom Corporation System, method, and computer program product to distribute workload
US8819386B1 (en) 2011-01-25 2014-08-26 Emc Corporation Memory efficient use of dynamic data structures used to manage sparsely accessed data
US8990826B2 (en) * 2012-06-06 2015-03-24 General Electric Company System and method for receiving analysis requests and configuring analytics systems
US9330048B1 (en) 2013-01-28 2016-05-03 Emc Corporation Balancing response times for synchronous I/O requests having different priorities
US9239991B2 (en) 2013-09-05 2016-01-19 General Electric Company Services support system and method
US9606870B1 (en) 2014-03-31 2017-03-28 EMC IP Holding Company LLC Data reduction techniques in a flash-based key/value cluster storage
US10083412B2 (en) 2015-05-14 2018-09-25 Atlassian Pty Ltd Systems and methods for scheduling work items
US10853746B2 (en) 2015-05-14 2020-12-01 Atlassian Pty Ltd. Systems and methods for scheduling work items
ITUA20161426A1 (en) * 2016-03-07 2017-09-07 Ibm Dispatch of jobs for parallel execution of multiple processors
US10324635B1 (en) 2016-03-22 2019-06-18 EMC IP Holding Company LLC Adaptive compression for data replication in a storage system
US10565058B1 (en) 2016-03-30 2020-02-18 EMC IP Holding Company LLC Adaptive hash-based data replication in a storage system
US20170295113A1 (en) * 2016-04-06 2017-10-12 Alcatel-Lucent Usa Inc. Longest queue identification
US10409520B1 (en) 2017-04-27 2019-09-10 EMC IP Holding Company LLC Replication of content-based storage using address space slices
US10503609B1 (en) 2017-04-27 2019-12-10 EMC IP Holding Company LLC Replication link smoothing using historical data
US10860239B2 (en) 2018-05-04 2020-12-08 EMC IP Holding Company LLC Fan-out asynchronous replication caching
US11360688B2 (en) 2018-05-04 2022-06-14 EMC IP Holding Company LLC Cascading snapshot creation in a native replication 3-site configuration
US10853221B2 (en) 2018-05-04 2020-12-01 EMC IP Holding Company LLC Performance evaluation and comparison of storage systems
US10705753B2 (en) 2018-05-04 2020-07-07 EMC IP Holding Company LLC Fan-out asynchronous replication logical level caching
US11048722B2 (en) 2018-07-31 2021-06-29 EMC IP Holding Company LLC Performance optimization for data persistency in asynchronous replication setups
US10613793B1 (en) 2018-11-01 2020-04-07 EMC IP Holding Company LLC Method to support hash based xcopy synchronous replication
US10719249B1 (en) 2019-01-31 2020-07-21 EMC IP Holding Company LLC Extent lock resolution in active/active replication
US10853200B2 (en) 2019-02-01 2020-12-01 EMC IP Holding Company LLC Consistent input/output (IO) recovery for active/active cluster replication
US11194666B2 (en) 2019-04-26 2021-12-07 EMC IP Holding Company LLC Time addressable storage in a content addressable storage system
US10719257B1 (en) 2019-04-29 2020-07-21 EMC IP Holding Company LLC Time-to-live (TTL) license management in an active/active replication session
US11216388B2 (en) 2019-04-30 2022-01-04 EMC IP Holding Company LLC Tiering between storage media in a content aware storage system
US11301138B2 (en) 2019-07-19 2022-04-12 EMC IP Holding Company LLC Dynamic balancing of input/output (IO) operations for a storage system
US11238063B2 (en) 2019-07-25 2022-02-01 EMC IP Holding Company LLC Provenance-based replication in a storage system
US10908828B1 (en) 2019-07-25 2021-02-02 EMC IP Holding Company LLC Enhanced quality of service (QoS) for multiple simultaneous replication sessions in a replication setup
US11429493B2 (en) 2020-01-20 2022-08-30 EMC IP Holding Company LLC Remote rollback of snapshots for asynchronous replication
US11593396B2 (en) 2020-09-23 2023-02-28 EMC IP Holding Company LLC Smart data offload sync replication
US11281407B1 (en) 2020-09-23 2022-03-22 EMC IP Holding Company LLC Verified write command in active-active replication

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5905889A (en) * 1997-03-20 1999-05-18 International Business Machines Corporation Resource management system using next available integer from an integer pool and returning the integer thereto as the next available integer upon completion of use
US6085216A (en) * 1997-12-31 2000-07-04 Xerox Corporation Method and system for efficiently allocating resources for solving computationally hard problems
US6219728B1 (en) * 1996-04-22 2001-04-17 Nortel Networks Limited Method and apparatus for allocating shared memory resources among a plurality of queues each having a threshold value therefor
US6219649B1 (en) * 1999-01-21 2001-04-17 Joel Jameson Methods and apparatus for allocating resources in the presence of uncertainty
US6259705B1 (en) * 1997-09-22 2001-07-10 Fujitsu Limited Network service server load balancing device, network service server load balancing method and computer-readable storage medium recorded with network service server load balancing program
US6574587B2 (en) * 1998-02-27 2003-06-03 Mci Communications Corporation System and method for extracting and forecasting computing resource data such as CPU consumption using autoregressive methodology
US6658473B1 (en) * 2000-02-25 2003-12-02 Sun Microsystems, Inc. Method and apparatus for distributing load in a computer environment
US7523454B2 (en) * 2001-03-06 2009-04-21 Hewlett-Packard Development Company, L.P. Apparatus and method for routing a transaction to a partitioned server

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5093794A (en) * 1989-08-22 1992-03-03 United Technologies Corporation Job scheduling system
JPH0713817B2 (en) * 1990-03-13 1995-02-15 工業技術院長 Dynamic load balancing method for loosely coupled parallel computers
JPH05216844A (en) * 1991-07-17 1993-08-27 Internatl Business Mach Corp <Ibm> Method and apparatus for improved task distribution in multiprocessor data processing system
US5369570A (en) * 1991-11-14 1994-11-29 Parad; Harvey A. Method and system for continuous integrated resource management
JPH06243112A (en) * 1993-02-19 1994-09-02 Seiko Epson Corp Multiprocessor device
JP3374480B2 (en) * 1993-12-01 2003-02-04 富士通株式会社 Data processing device
US5854754A (en) * 1996-02-12 1998-12-29 International Business Machines Corporation Scheduling computerized backup services
JPH09282184A (en) * 1996-02-14 1997-10-31 Matsushita Electric Ind Co Ltd Task management device capable of absorbing fluctuation of execution probability accompanying rapid increase of same priority task
EP1022658A1 (en) * 1999-01-21 2000-07-26 Siemens Aktiengesellschaft Multiprocessor system and load balancing method in a multiprocessor system
JP2000322365A (en) * 1999-05-12 2000-11-24 Hitachi Ltd Acceptance limiting method for server computer
JP2001101149A (en) * 1999-09-30 2001-04-13 Nec Corp Distributed parallel data processor, recording medium recording distributed parallel data processing program and distributed parallel data processing system

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050267824A1 (en) * 2004-05-28 2005-12-01 Hurewitz Barry S Matching resources of a securities research department to accounts of the department
US20060041456A1 (en) * 2004-05-28 2006-02-23 Hurewitz Barry S Systems and method for determining the cost of a securities research department to service a client of the department
US7689490B2 (en) * 2004-05-28 2010-03-30 Morgan Stanley Matching resources of a securities research department to accounts of the department
US7734517B2 (en) 2004-05-28 2010-06-08 Morgan Stanley Systems and method for determining the cost of a securities research department to service a client of the department
US20100145757A1 (en) * 2004-05-28 2010-06-10 Morgan Stanley Matching resources of a securities research department to accounts of the department
US7769654B1 (en) 2004-05-28 2010-08-03 Morgan Stanley Systems and methods for determining fair value prices for equity research
US8209253B2 (en) 2004-05-28 2012-06-26 Morgan Stanley Matching resources of a securities research department to accounts of the department
US20060059075A1 (en) * 2004-09-10 2006-03-16 Hurewitz Barry S Systems and methods for auctioning access to securities research resources
US7752103B2 (en) 2004-09-10 2010-07-06 Morgan Stanley Systems and methods for auctioning access to securities research resources
US7904364B2 (en) 2004-09-10 2011-03-08 Morgan Stanley Systems and methods for auctioning access to securities research resources
US7953652B1 (en) 2006-06-12 2011-05-31 Morgan Stanley Profit model for non-execution services
US8370237B1 (en) 2006-06-12 2013-02-05 Morgan Stanley Profit model for non-execution services

Also Published As

Publication number Publication date
US6886164B2 (en) 2005-04-26
GB2384083A (en) 2003-07-16
US20020169816A1 (en) 2002-11-14
GB2384083B (en) 2005-03-30
GB0306158D0 (en) 2003-04-23
JP2004520655A (en) 2004-07-08
CN1507594A (en) 2004-06-23
CN1259628C (en) 2006-06-14
DE10296791T5 (en) 2004-04-22
WO2002091217A1 (en) 2002-11-14

Similar Documents

Publication Publication Date Title
US6886164B2 (en) Selection of a resource in a distributed computer system
US11068301B1 (en) Application hosting in a distributed application execution system
US7127507B1 (en) Method and apparatus for network-level monitoring of queue-based messaging systems
JP4612710B2 (en) Transaction parallel control method, database management system, and program
US7784053B2 (en) Management of virtual machines to utilize shared resources
US6665740B1 (en) Logical volume selection in a probability-based job scheduler
US7200695B2 (en) Method, system, and program for processing packets utilizing descriptors
JP3980675B2 (en) Network independent file shadowing
US7219121B2 (en) Symmetrical multiprocessing in multiprocessor systems
US5974462A (en) Method and apparatus for controlling the number of servers in a client/server system
US7840720B2 (en) Using priority to determine whether to queue an input/output (I/O) request directed to storage
US7159071B2 (en) Storage system and disk load balance control method thereof
JP3301648B2 (en) Communication control system that distributes connections to service access points
US8190743B2 (en) Most eligible server in a common work queue environment
EP0747832A2 (en) Customer information control system and method in a loosely coupled parallel processing environment
US6839804B2 (en) Disk array storage device with means for enhancing host application performance using task priorities
US7155727B2 (en) Efficient data buffering in a multithreaded environment
US7299269B2 (en) Dynamically allocating data buffers to a data structure based on buffer fullness frequency
US8140478B2 (en) Commit rate management with decoupled commit operations
JP2008544371A (en) How to handle lock-related inconsistencies
US6111591A (en) Image processing system and information processing system
CN113961323B (en) Hybrid cloud-oriented security perception task scheduling method and system
US20170063976A1 (en) Dynamic record-level sharing (rls) provisioning inside a data-sharing subsystem
CN113368494A (en) Cloud equipment distribution method and device, electronic equipment and storage medium
JP5641300B2 (en) Storage system and memory cache area control method for storage system

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION