US20050097256A1 - Arbitration technique based on processor task priority - Google Patents

Arbitration technique based on processor task priority

Info

Publication number
US20050097256A1
Authority
US
United States
Prior art keywords
switch
messages
priority
route
cpu
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/014,493
Inventor
Phillip Jones
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hewlett Packard Development Co LP
Original Assignee
Hewlett Packard Development Co LP
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hewlett Packard Development Co LP filed Critical Hewlett Packard Development Co LP
Priority to US11/014,493
Publication of US20050097256A1
Legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/14 Handling requests for interconnection or transfer
    • G06F13/36 Handling requests for interconnection or transfer for access to common bus or bus system

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multi Processors (AREA)

Abstract

A system comprises a switch having storage, wherein the switch is adapted to receive messages from various nodes. One or more of the messages includes a priority value that is stored in the switch's storage. The switch routes the messages based, at least in part, on the priority values.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This is a divisional of copending application Ser. No. 09/998,514, filed Nov. 30, 2001, which is hereby incorporated by reference herein.
  • STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
  • Not applicable.
  • BACKGROUND
  • The present invention generally relates to a technique for arbitration (or message routing) among multiple pending requests for system resources in a computer system. Still more particularly, the invention relates to an arbitration technique based on processor task priority.
  • Modern computer systems generally include a plurality of devices interconnected through a system of buses which are linked by way of one or more bridge logic units. For example, a conventional computer system typically contains one or more central processing units (“CPUs”) coupled through a host bridge to a main memory unit. A CPU bus usually couples the CPU(s) to the host bridge, and a memory bus connects the bridge to the main memory. The bridge logic typically incorporates a memory controller which receives memory access requests (such as from the CPUs) and responds by generating standard control signals necessary to access the main memory. The bridge logic may also include an interface to another bus, such as the Peripheral Component Interconnect (“PCI”) bus. Examples of devices which link to such a bus include network interface cards, video accelerators, audio cards, SCSI adapters, and telephony cards, to name a few.
  • Because a conventional computer system includes multiple interconnected devices that function independently of each other, they often attempt to concurrently access common resources. For example, in a system having multiple CPUs, more than one CPU may need to access main memory at a given time. By way of additional example, a device coupled to the PCI bus may need to extract data from main memory at the same time that the CPU is requesting instructions stored in the main memory. Since main memory generally can respond to only a single memory request at a time, it is generally the function of the memory controller to choose which device to service first. Such conflicts necessitate “arbitration,” in which the various pending memory requests are ranked, with the highest ranking requests generally being serviced first.
  • There are many well-known arbitration techniques. For instance, according to a fixed priority scheme, each type of cycle request (e.g., CPU to memory write, PCI to memory write, CPU read from memory, etc.) is assigned a predetermined ranking. Although some cycle requests may have the same ranking, in general some cycle requests will have rankings that are higher than those of other types of requests. Using such a fixed priority scheme, a memory controller, if faced with multiple pending memory access requests, simply grants memory access to the device with the highest ranking. Although simple to implement, this type of arbitration scheme has the deficiency that a low ranking pending request may not be permitted to complete because numerous higher ranking requests are pending. This condition is called “starvation.”
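  • As an illustration only (this code does not appear in the patent), the following C sketch models a fixed priority arbiter of the kind just described; the request types and their rankings are assumed for the example, and the comment in main() notes how the lowest ranking request can be starved while higher ranking requests keep arriving.
```c
/* Behavioral sketch (not from the patent) of a fixed priority arbiter.
 * Request types and their rankings are illustrative assumptions. */
#include <stdio.h>

enum req_type { CPU_WRITE = 0, PCI_WRITE = 1, CPU_READ = 2, NUM_TYPES = 3 };

/* Higher number = higher fixed ranking (assumed ordering). */
static const int ranking[NUM_TYPES] = { 3, 2, 1 };

/* pending[t] != 0 means a request of type t is waiting. */
static int pick_fixed_priority(const int pending[NUM_TYPES])
{
    int best = -1;
    for (int t = 0; t < NUM_TYPES; t++)
        if (pending[t] && (best < 0 || ranking[t] > ranking[best]))
            best = t;
    return best; /* -1 when nothing is pending */
}

int main(void)
{
    int pending[NUM_TYPES] = { 1, 1, 1 };
    /* CPU_READ (the lowest ranking type) is starved for as long as the
     * higher ranking requests keep arriving. */
    printf("granted request type %d\n", pick_fixed_priority(pending));
    return 0;
}
```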
  • Another arbitration technique is the Least-Recently-Used (“LRU”) algorithm. In the LRU algorithm a memory arbiter grants the request which has least recently been granted (i.e., the “oldest” request). This type of arbitration technique ensures that no one device or cycle request is starved from completing in favor of higher ranking requests. The downside of this technique is that it essentially equalizes, or fixes, the priority of all devices in the computer system, since the arbitration scheme does not take into account the urgency associated with memory transactions from certain devices. That is, the newest request may be far more critical and time sensitive than the older requests, but will not be permitted to run until all older requests have run. Further, the devices which use memory infrequently actually tend to experience shorter waits for memory access, since these devices are less likely to have recently accessed memory than are devices which access memory more frequently. As a consequence, real-time applications and devices, which need frequent and quick access to memory, may consistently lose memory arbitration to other devices under an LRU scheme. Hence, an LRU scheme, while more equitable than a fixed scheme, lacks the flexibility to allow the computer system designer to directly set the memory request priorities.
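  • The following C sketch, likewise added for illustration and not part of the patent, models an LRU arbiter: among the pending requesters it grants the one whose last grant is oldest, so no requester is starved, but a newly arrived urgent request must wait behind every older one.
```c
/* Behavioral sketch (not from the patent) of an LRU arbiter:
 * among the pending requesters, grant the one granted least recently. */
#include <stdio.h>

#define NUM_REQ 4

/* last_grant[i] holds the "time" requester i was last granted. */
static unsigned long last_grant[NUM_REQ];
static unsigned long now = 1;

static int pick_lru(const int pending[NUM_REQ])
{
    int best = -1;
    for (int i = 0; i < NUM_REQ; i++)
        if (pending[i] && (best < 0 || last_grant[i] < last_grant[best]))
            best = i;
    if (best >= 0)
        last_grant[best] = now++;
    return best;
}

int main(void)
{
    int pending[NUM_REQ] = { 1, 1, 0, 1 };
    for (int k = 0; k < 3; k++)
        printf("grant -> requester %d\n", pick_lru(pending));
    return 0;
}
```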
  • It would thus be advantageous to have an arbitration scheme that addresses the problems noted above. Despite the apparent advantages that such a system would provide, to date no such system has been developed that addresses the foregoing problems.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • For a detailed description of exemplary embodiments of the invention, reference will now be made to the accompanying drawings in which:
  • FIG. 1 shows a computer system embodying the preferred embodiment of the invention in which processor task priorities are used as part of the arbitration scheme; and
  • FIG. 2 shows an embodiment of the invention in which multiple nodes of computer systems are coupled together via a switch and the switch uses the processor task priorities in making its decision as to which cycles are permitted to run.
  • NOTATION AND NOMENCLATURE
  • Certain terms are used throughout the following description and claims to refer to particular system components. As one skilled in the art will appreciate, computer companies may refer to a component and sub-components by different names. This document does not intend to distinguish between components that differ in name but not function. In the following discussion and in the claims, the terms “including” and “comprising” are used in an open-ended fashion, and thus should be interpreted to mean “including, but not limited to . . . ” Also, the term “couple” or “couples” is intended to mean either a direct or indirect electrical connection. Thus, if a first device couples to a second device, that connection may be through a direct electrical connection, or through an indirect electrical connection via other devices and connections. In addition, no distinction is made between a “processor,” “microprocessor,” “microcontroller,” or “central processing unit” (“CPU”) for purposes of this disclosure. Also, the terms “transaction” and “cycle” are generally considered synonymous. The terms “task priority,” “priority level,” and “priority value” are all synonymous. To the extent that any term is not specially defined in this specification, the intent is that the term is to be given its plain and ordinary meaning.
  • DETAILED DESCRIPTION
  • Referring now to FIG. 1, system 100 is shown constructed in accordance with a preferred embodiment of the invention. As shown, system 100 includes one or more central processor units (“CPUs”) 102 (also labeled as CPU0-CPU3), a host bridge 110, main memory 120, one or more peripheral devices 130, a south bridge 140 and various input and output devices 142, 144. The host bridge 110 couples to the CPUs 102, memory 120 and the various peripheral devices 130, as well as to the south bridge 140. Other architectures are possible also; the architecture in FIG. 1 is merely exemplary of one suitable embodiment.
  • The peripheral devices 130 may be whatever devices are appropriate for a given computer 100, such as a modem, network interface card (“NIC”), etc. The peripheral devices 130 couple to the host bridge 110 via any suitable interconnect bus 132. Of course, devices 130 are compliant with whatever interconnect bus 132 is used.
  • In general, one or more of the CPUs 102 can read data or instructions from and write data to memory 120 via the host bridge 110. Similarly, the peripheral devices 130 also can read/write memory 120. Further still, the CPUs 102 can run cycles through the host bridge 110 that target one or more of the peripheral devices 130. Additionally, signals to/from the input and output devices 142, 144 may propagate through the south bridge 140 and the host bridge 110 as desired. The input device 142 may be any type of device such as a mouse or keyboard and the output device may be a disk drive or other type of output device. These input/output devices may connect directly to the south bridge 140 or couple to the bridge 140 via other intervening logic (not shown) or couple to the system 100 via other architectures.
  • Referring still to FIG. 1, in accordance with the preferred embodiment of the invention, each CPU 102 can be assigned a “task priority.” The task priority is assigned to each CPU or groups of CPUs preferably by software and may be changed as desired. The task priority may take many forms. One suitable form is a value in the range of 0 to 15. As such, there are 16 different task priorities with task priority 15 being the highest priority and 0 the lowest, or vice versa. Task priorities 0-15 represent various gradations of priority between highest and lowest. The task for which a task priority is assigned to a CPU 102 may be a program, part of a program, or low level functions such as a single memory read/write. Software can program a CPU 102 to a certain task priority by writing a task priority value to a register 106 within each CPU 102. Each CPU 102 is thus independently configurable to a desired task priority. Multiple CPUs 102 can be assigned to the same task priority if desired.
  • Each CPU 102 can run a cycle on bus 104 to the host bridge 110 by which the CPU informs the bridge of that CPU's current task priority. The cycle through which a task priority is transmitted to the host bridge 110 may be a cycle separate from a CPU request for a system resource such as a memory access. Alternatively, a CPU may transmit a task priority in the same cycle as the CPU request for a system resource. As shown, the host bridge 110 includes a task priority table 112, or other form of storage, in which the bridge stores the task priorities received from the CPUs 102. The task priority table 112 may include an entry 114 for each CPU 102. In the example of FIG. 1, the system 100 includes four CPUs (CPU0-CPU3) and accordingly, task priority table 112 in the host bridge 110 includes four entries, one entry corresponding to each of the four CPUs. Each entry 114 includes the ability to store the task priority for the corresponding CPU. Thus, the first entry 114 stores the CPU 0 task priority as shown, the second entry stores the CPU 1 task priority, and so on.
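  • As a rough behavioral model (an editorial illustration, not the patent's implementation), the C sketch below represents the per-CPU task priority register 106 and the host bridge's task priority table 112 with one entry 114 per CPU; the reporting cycle is reduced to a simple function call, and the data layout is an assumption.
```c
/* Sketch of the per-CPU task priority register and the host bridge's
 * task priority table described above. The 0..15 range (15 highest) and
 * the per-CPU entries follow the text; everything else is assumed. */
#include <stdio.h>

#define NUM_CPUS 4

struct cpu {
    int id;
    unsigned task_priority; /* software-written register 106, 0..15 */
};

struct host_bridge {
    unsigned task_priority_table[NUM_CPUS]; /* one entry 114 per CPU */
};

/* The cycle by which a CPU informs the bridge of its current priority. */
static void report_task_priority(const struct cpu *c, struct host_bridge *hb)
{
    hb->task_priority_table[c->id] = c->task_priority;
}

int main(void)
{
    struct host_bridge hb = { { 0 } };
    struct cpu cpus[NUM_CPUS] = { {0, 3}, {1, 15}, {2, 7}, {3, 7} };

    for (int i = 0; i < NUM_CPUS; i++)
        report_task_priority(&cpus[i], &hb);

    for (int i = 0; i < NUM_CPUS; i++)
        printf("CPU%d task priority = %u\n", i, hb.task_priority_table[i]);
    return 0;
}
```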
  • At any time during power on self-test (“POST”) or during run-time, the CPUs 102 may inform the host bridge of their task priorities and may update the task priorities at subsequent times. In accordance with the preferred embodiment, the host bridge 110 uses the CPU task priorities to make decisions about granting individual CPUs access to memory or other resources within the computer system. This technique preferably selects only between competing CPUs for system resources, and not for non-CPU related cycles such as peripheral device 130 writes to memory. However, the concept explained herein can easily be extended to devices other than CPUs and the claims which follow should be interpreted broadly enough, unless otherwise indicated by the language of the claim itself, not to be limited to just CPU-based cycles.
  • A non-exhaustive list of the use of CPU task priorities in making the arbitration decision with respect to CPU cycles includes:
      • 1. Use task priority as the sole arbitration criterion
      • 2. Use task priority as the primary arbitration criterion coupled with an anti-starvation algorithm
      • 3. Use task priority as the primary arbitration criterion coupled with a tie-breaking algorithm
      • 4. Use criteria unrelated to task priorities as the primary criteria, but use the CPU task priorities as a way to break a tie between two or more pending CPU cycles
      • 5. Use task priority coupled with other arbitration criteria that must be met
  • The first algorithm is self-explanatory and states that, among multiple CPU cycles pending at the host bridge 110, the host bridge preferably selects the cycle to run that corresponds to the CPU having the highest task priority. This algorithm may have limited use, however, in that starvation may occur and that two or more CPUs may have pending cycles to run that are at an equal task priority.
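  • A minimal sketch of this first algorithm, under the assumption that the bridge tracks one pending cycle per CPU, might look like the following (illustrative C, not from the patent):
```c
/* Sketch of the first algorithm: among CPUs with a cycle pending at the
 * bridge, select the one whose table entry shows the highest task
 * priority (15 = highest, per the text). Data layout is assumed. */
#include <stdio.h>

#define NUM_CPUS 4

static int select_by_task_priority(const int pending[NUM_CPUS],
                                   const unsigned task_priority[NUM_CPUS])
{
    int winner = -1;
    for (int i = 0; i < NUM_CPUS; i++)
        if (pending[i] &&
            (winner < 0 || task_priority[i] > task_priority[winner]))
            winner = i;
    return winner; /* -1 when no CPU cycle is pending */
}

int main(void)
{
    int pending[NUM_CPUS] = { 1, 0, 1, 1 };
    unsigned task_priority[NUM_CPUS] = { 3, 15, 7, 7 };
    printf("run cycle from CPU%d\n",
           select_by_task_priority(pending, task_priority));
    return 0;
}
```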
  • The second algorithm listed above states that an anti-starvation technique is used in conjunction with the first algorithm. Any suitable anti-starvation algorithm can be used such as that described in U.S. Pat. No. 6,286,083, incorporated herein by reference. Accordingly, with this approach the CPU having the highest task priority is always selected, but other CPUs may be selected to avoid starvation should the condition so warrant.
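  • The C sketch below illustrates the shape of the second algorithm; the wait-count threshold used here as the anti-starvation rule is only an assumed stand-in and is not the scheme of U.S. Pat. No. 6,286,083.
```c
/* Sketch of the second algorithm. The anti-starvation rule (grant any
 * requester whose wait count exceeds a threshold, otherwise grant the
 * highest task priority) is an assumed stand-in for the scheme of the
 * patent cited above. */
#include <stdio.h>

#define NUM_CPUS 4
#define STARVATION_LIMIT 8

static unsigned wait_count[NUM_CPUS];

static int select_with_anti_starvation(const int pending[NUM_CPUS],
                                       const unsigned prio[NUM_CPUS])
{
    int winner = -1;

    /* First service anyone who has waited too long. */
    for (int i = 0; i < NUM_CPUS; i++)
        if (pending[i] && wait_count[i] >= STARVATION_LIMIT)
            winner = i;

    /* Otherwise fall back to highest task priority. */
    if (winner < 0)
        for (int i = 0; i < NUM_CPUS; i++)
            if (pending[i] && (winner < 0 || prio[i] > prio[winner]))
                winner = i;

    for (int i = 0; i < NUM_CPUS; i++)
        if (pending[i])
            wait_count[i] = (i == winner) ? 0 : wait_count[i] + 1;
    return winner;
}

int main(void)
{
    int pending[NUM_CPUS] = { 1, 1, 0, 0 };
    unsigned prio[NUM_CPUS] = { 2, 14, 0, 0 };
    /* CPU1 usually wins on priority; CPU0 is eventually granted anyway
     * once its wait count reaches the threshold. */
    for (int k = 0; k < 10; k++)
        printf("grant CPU%d\n", select_with_anti_starvation(pending, prio));
    return 0;
}
```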
  • The third algorithm says that the CPU with the highest task priority is selected by the host bridge, and that a tie breaking algorithm is used in the event that two or more CPUs have the highest, yet equal, task priority. A suitable tie breaking algorithm could be a fixed priority technique, such as one in which CPU writes to memory are always selected over CPU reads.
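  • A sketch of the third algorithm, using the write-over-read example as the assumed tie breaker (illustrative C, not from the patent):
```c
/* Sketch of the third algorithm: highest task priority wins, and a fixed
 * rule (writes beat reads) breaks a tie between CPUs at the same,
 * highest priority. Data layout is assumed. */
#include <stdio.h>

#define NUM_CPUS 4

enum cycle_type { CPU_READ = 0, CPU_WRITE = 1 };

struct cpu_cycle {
    int pending;
    unsigned task_priority;   /* 0..15, 15 highest */
    enum cycle_type type;
};

static int select_with_tiebreak(const struct cpu_cycle c[NUM_CPUS])
{
    int winner = -1;
    for (int i = 0; i < NUM_CPUS; i++) {
        if (!c[i].pending)
            continue;
        if (winner < 0 ||
            c[i].task_priority > c[winner].task_priority ||
            (c[i].task_priority == c[winner].task_priority &&
             c[i].type == CPU_WRITE && c[winner].type == CPU_READ))
            winner = i;
    }
    return winner;
}

int main(void)
{
    struct cpu_cycle c[NUM_CPUS] = {
        { 1, 9, CPU_READ }, { 1, 9, CPU_WRITE },
        { 0, 0, CPU_READ }, { 1, 4, CPU_WRITE }
    };
    printf("grant CPU%d\n", select_with_tiebreak(c)); /* CPU1: same priority, but a write */
    return 0;
}
```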
  • In accordance with the fourth algorithm, other conventional arbitration techniques that are not based on CPU task priorities are used as the primary arbitration decision making technique. The host bridge 110 uses the CPU task priorities as a mechanism to break a deadlock in the event such conventional arbitration techniques are unable to select between two or more CPU cycles. That is, this technique embodies the algorithm that, all else being equal, the CPU with the highest task priority gets selected.
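  • The fourth algorithm might be sketched as follows, with request age standing in (purely as an assumption) for the conventional primary arbitration criterion and the task priority consulted only when that criterion cannot separate the candidates:
```c
/* Sketch of the fourth algorithm: a conventional criterion (here simply
 * request age, an assumption) decides first; the CPU task priority is
 * used only to break a deadlock between otherwise-equal candidates. */
#include <stdio.h>

#define NUM_CPUS 4

static int select_conventional_then_priority(const int pending[NUM_CPUS],
                                             const unsigned age[NUM_CPUS],
                                             const unsigned prio[NUM_CPUS])
{
    int winner = -1;
    for (int i = 0; i < NUM_CPUS; i++) {
        if (!pending[i])
            continue;
        if (winner < 0 ||
            age[i] > age[winner] ||                            /* primary: oldest first */
            (age[i] == age[winner] && prio[i] > prio[winner]))  /* deadlock breaker */
            winner = i;
    }
    return winner;
}

int main(void)
{
    int pending[NUM_CPUS]   = { 1, 1, 1, 0 };
    unsigned age[NUM_CPUS]  = { 5, 5, 2, 0 };
    unsigned prio[NUM_CPUS] = { 6, 11, 15, 0 };
    printf("grant CPU%d\n",
           select_conventional_then_priority(pending, age, prio)); /* CPU1 */
    return 0;
}
```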
  • The fifth algorithm listed above is that CPU task priorities are used in conjunction with other arbitration criteria that must also be met. For example, the system designer may want certain cycles to always happen in a predetermined order regardless of task priority. An example of this is when one CPU 102 having a relatively low task priority wants to write a block of data to memory 120 and another, higher task priority CPU wants to read that same data. Although the reading CPU has a higher task priority, in this situation it is generally desirable that the higher task priority reading CPU not be allowed to read the data until all of the data is written to memory by the lower priority writing CPU. The fifth algorithm thus takes into account that CPU cycles corresponding to higher task priorities should generally be permitted to run before lower task priority cycles, but certain types of predetermined activities should be permitted to occur in a different order.
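  • The sketch below illustrates the fifth algorithm using the write-before-read example above; the block-address dependency check is a deliberately simplified assumption rather than anything specified in the patent:
```c
/* Sketch of the fifth algorithm: a pending read that targets a memory
 * block still being written is held back even if the reading CPU has
 * the higher task priority. The dependency check is a simplification. */
#include <stdio.h>

#define NUM_CPUS 4

struct cpu_cycle {
    int pending;
    unsigned task_priority;   /* 0..15, 15 highest */
    int is_read;              /* 0 = write, 1 = read */
    unsigned block;           /* memory block the cycle targets */
};

/* A read is blocked while any CPU still has a pending write to the same block. */
static int is_blocked(const struct cpu_cycle c[NUM_CPUS], int i)
{
    if (!c[i].is_read)
        return 0;
    for (int j = 0; j < NUM_CPUS; j++)
        if (j != i && c[j].pending && !c[j].is_read && c[j].block == c[i].block)
            return 1;
    return 0;
}

static int select_with_ordering(const struct cpu_cycle c[NUM_CPUS])
{
    int winner = -1;
    for (int i = 0; i < NUM_CPUS; i++)
        if (c[i].pending && !is_blocked(c, i) &&
            (winner < 0 || c[i].task_priority > c[winner].task_priority))
            winner = i;
    return winner;
}

int main(void)
{
    struct cpu_cycle c[NUM_CPUS] = {
        { 1, 2, 0, 0x40 },   /* low-priority writer to block 0x40 */
        { 1, 14, 1, 0x40 },  /* high-priority reader of the same block */
        { 0, 0, 0, 0 }, { 0, 0, 0, 0 }
    };
    printf("grant CPU%d\n", select_with_ordering(c)); /* the writer runs first */
    return 0;
}
```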
  • The computer system 100 shown in FIG. 1 can represent a “node” in a system in which multiple such nodes are coupled together. Further, the aforementioned use of CPU task priorities can be extended to the embodiment shown in FIG. 2 in which four nodes N0-N3 are coupled together via a switch 150. Each node represents a collection of electronic components such as the combination shown in FIG. 1, although different components can be implemented for each node N0-N3. The switch 150 is conceptually similar to the host bridge 110 in FIG. 1 in that the switch permits each node 100 to communicate with any of the other nodes.
  • The switch 150 can use task priorities from each node when deciding how to route the messages between nodes. Accordingly, each node creates a message to send to another node (e.g., a write message, a read message, or a control message) and includes with the message a task priority. This type of task priority may pertain to the message itself or may pertain to a particular CPU within the node that sends the message. The switch 150 stores the messages from each node N0-N3 along with their task priorities and makes decisions on which messages to route through the switch based on the associated task priorities. The same or similar decision algorithms as explained above with regard to FIG. 1 can be implemented in switch 150.
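  • As an editorial illustration of this switch-based embodiment (message fields, queue size and routing details are assumptions), the following C sketch queues messages with their task priorities in the switch's storage and routes the highest priority message first:
```c
/* Sketch of switch 150: messages arriving from nodes N0-N3 are queued
 * with their task priorities, and the highest-priority queued message is
 * routed first. Field names and queue size are assumptions. */
#include <stdio.h>

#define MAX_MSGS 16

struct message {
    int src_node, dst_node;
    unsigned task_priority;   /* carried with the message, 0..15 */
    const char *kind;         /* e.g. "read", "write", "control" */
};

struct sw {
    struct message q[MAX_MSGS]; /* the switch's storage */
    int count;
};

static void receive(struct sw *s, struct message m)
{
    if (s->count < MAX_MSGS)
        s->q[s->count++] = m;
}

/* Route (and remove) the queued message with the highest task priority. */
static int route_next(struct sw *s, struct message *out)
{
    if (s->count == 0)
        return 0;
    int best = 0;
    for (int i = 1; i < s->count; i++)
        if (s->q[i].task_priority > s->q[best].task_priority)
            best = i;
    *out = s->q[best];
    s->q[best] = s->q[--s->count];
    return 1;
}

int main(void)
{
    struct sw s = { .count = 0 };
    receive(&s, (struct message){ 0, 2, 5,  "write"   });
    receive(&s, (struct message){ 3, 1, 12, "read"    });
    receive(&s, (struct message){ 1, 0, 9,  "control" });

    struct message m;
    while (route_next(&s, &m))
        printf("route %s from N%d to N%d (priority %u)\n",
               m.kind, m.src_node, m.dst_node, m.task_priority);
    return 0;
}
```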
  • Thus, the preferred embodiments make use of task priorities when deciding when to route a message through a system. Broadly stated, a plurality of CPUs (or nodes) couple through logic (e.g., a host bridge, a switch, etc.) to one or more system resources (e.g., memory, NICs, modems, other CPUs or nodes, etc.) and the logic uses information associated with each CPU that indicates the priority level of each CPU or software executing on each CPU. Allocation of system resources is accordingly weighted in favor of the most critical or important activities. Because the priority associated with a given transaction is taken into account, overall system performance should be improved as the more critical activities are given heightened priority when competing for common system resources.
  • The above discussion is meant to be illustrative of the principles and various embodiments of the present invention. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.

Claims (15)

1. A computer system, comprising:
a switch; and
a plurality of nodes coupled to said switch;
wherein said switch receives messages from said nodes, one or more of said messages including a priority value, and said switch routes the messages based on said priority values.
2. The computer system of claim 1 wherein said switch uses said priority values as the sole criterion for deciding how to route said messages.
3. The computer system of claim 1 wherein said switch decides how to route said messages based on said priority values and based on an anti-starvation algorithm.
4. The computer system of claim 1 wherein said switch decides how to route said messages based on said priority values and based on a tie breaking algorithm that is used when messages from two or more nodes have the highest, yet equal, priority value.
5. The computer system of claim 1 wherein said switch decides how to route said messages based on an algorithm that does not involve said priority values, but uses said priority values to decide how to route said messages when the non priority value-based algorithm is unable to decide between competing node messages.
6. The computer system of claim 1 wherein said switch decides how to route said messages based on said priority values and based on other criteria.
7. A switch adapted to couple to a plurality of nodes, said switch comprising:
storage for priority values;
wherein said switch is adapted to receive messages from said nodes, one or more of said messages including a priority value that is stored in said storage; and
wherein said switch routes the messages based, at least in part, on said priority values.
8. The switch of claim 7 wherein said switch routes a first message through the switch before a second message if the first message has a priority value that indicates higher priority than a priority value in the second message.
9. The switch of claim 7 wherein said switch uses said priority values as the sole criterion for deciding how to route said messages.
10. The switch of claim 7 wherein said switch decides how to route said messages based on said priority values and based on an anti-starvation algorithm.
11. The switch of claim 7 wherein said switch decides how to route said messages based on said priority values and based on a tie breaking algorithm that is used when messages from two or more nodes have the highest, yet equal, priority value.
12. The switch of claim 7 wherein said switch decides how to route said messages based on an algorithm that does not involve said priority values, but uses said priority values to decide how to route said messages when the non-priority value-based algorithm is unable to decide between competing node messages.
13. The switch of claim 7 wherein said switch decides how to route said messages based on said priority values and based on other criteria.
14. An apparatus, comprising:
means for receiving messages and task priorities associated with said messages; and
means for routing said messages based on said task priorities.
15. The apparatus of claim 14 further comprising means for storing said task priorities.
US11/014,493 2001-11-30 2004-12-16 Arbitration technique based on processor task priority Abandoned US20050097256A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/014,493 US20050097256A1 (en) 2001-11-30 2004-12-16 Arbitration technique based on processor task priority

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/998,514 US6848015B2 (en) 2001-11-30 2001-11-30 Arbitration technique based on processor task priority
US11/014,493 US20050097256A1 (en) 2001-11-30 2004-12-16 Arbitration technique based on processor task priority

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/998,514 Division US6848015B2 (en) 2001-11-30 2001-11-30 Arbitration technique based on processor task priority

Publications (1)

Publication Number Publication Date
US20050097256A1 true US20050097256A1 (en) 2005-05-05

Family

ID=25545316

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/998,514 Expired - Fee Related US6848015B2 (en) 2001-11-30 2001-11-30 Arbitration technique based on processor task priority
US11/014,493 Abandoned US20050097256A1 (en) 2001-11-30 2004-12-16 Arbitration technique based on processor task priority

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/998,514 Expired - Fee Related US6848015B2 (en) 2001-11-30 2001-11-30 Arbitration technique based on processor task priority

Country Status (1)

Country Link
US (2) US6848015B2 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080209430A1 (en) * 2007-02-28 2008-08-28 International Business Machines Corporation System, apparatus, and method for facilitating provisioning in a mixed environment of locales
US20080320270A1 (en) * 2006-02-24 2008-12-25 Fujitsu Limited Data read-and-write controlling device
US20110138098A1 (en) * 2009-02-13 2011-06-09 The Regents Of The University Of Michigan Crossbar circuitry for applying an adaptive priority scheme and method of operation of such crossbar circuitry
US20110246688A1 (en) * 2010-04-01 2011-10-06 Irwin Vaz Memory arbitration to ensure low latency for high priority memory requests
US8868817B2 (en) 2009-02-13 2014-10-21 The Regents Of The University Of Michigan Crossbar circuitry for applying an adaptive priority scheme and method of operation of such crossbar circuitry
US8914406B1 (en) * 2012-02-01 2014-12-16 Vorstack, Inc. Scalable network security with fast response protocol
US9514074B2 (en) 2009-02-13 2016-12-06 The Regents Of The University Of Michigan Single cycle arbitration within an interconnect
US9680846B2 (en) 2012-02-01 2017-06-13 Servicenow, Inc. Techniques for sharing network security event information
US9710644B2 (en) 2012-02-01 2017-07-18 Servicenow, Inc. Techniques for sharing network security event information
US10333960B2 (en) 2017-05-03 2019-06-25 Servicenow, Inc. Aggregating network security data for export
US10686805B2 (en) 2015-12-11 2020-06-16 Servicenow, Inc. Computer network threat assessment
US11570176B2 (en) 2021-01-28 2023-01-31 Bank Of America Corporation System and method for prioritization of text requests in a queue based on contextual and temporal vector analysis
US11575703B2 (en) 2017-05-05 2023-02-07 Servicenow, Inc. Network security threat intelligence sharing

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6848015B2 (en) * 2001-11-30 2005-01-25 Hewlett-Packard Development Company, L.P. Arbitration technique based on processor task priority
JP2005092780A (en) * 2003-09-19 2005-04-07 Matsushita Electric Ind Co Ltd Real time processor system and control method
US20050144401A1 (en) * 2003-12-30 2005-06-30 Pantalone Brett A. Multiprocessor mobile terminal with shared memory arbitration
JP2007072598A (en) * 2005-09-05 2007-03-22 Fujifilm Corp Bus arbitration method and bus arbitration program
US8468283B2 (en) * 2006-06-01 2013-06-18 Telefonaktiebolaget Lm Ericsson (Publ) Arbiter diagnostic apparatus and method
US20090138683A1 (en) * 2007-11-28 2009-05-28 Capps Jr Louis B Dynamic instruction execution using distributed transaction priority registers
US8886918B2 (en) * 2007-11-28 2014-11-11 International Business Machines Corporation Dynamic instruction execution based on transaction priority tagging
US9086696B2 (en) * 2008-09-30 2015-07-21 Rockwell Automation Technologies, Inc. Self-arbitrated resources for industrial control systems
US9436739B2 (en) * 2013-12-13 2016-09-06 Vmware, Inc. Dynamic priority-based query scheduling
US10114672B2 (en) 2013-12-31 2018-10-30 Thomson Licensing User-centered task scheduling for multi-screen viewing in cloud computing environment
US9886396B2 (en) * 2014-12-23 2018-02-06 Intel Corporation Scalable event handling in multi-threaded processor cores
US9606942B2 (en) 2015-03-30 2017-03-28 Cavium, Inc. Packet processing system, method and device utilizing a port client chain

Citations (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5564060A (en) * 1994-05-31 1996-10-08 Advanced Micro Devices Interrupt handling mechanism to prevent spurious interrupts in a symmetrical multiprocessing system
US5634073A (en) * 1994-10-14 1997-05-27 Compaq Computer Corporation System having a plurality of posting queues associated with different types of write operations for selectively checking one queue based upon type of read operation
US5675736A (en) * 1995-05-24 1997-10-07 International Business Machines Corporation Multi-node network with internode switching performed within processor nodes, each node separately processing data and control messages
US5805840A (en) * 1996-03-26 1998-09-08 Advanced Micro Devices, Inc. Bus arbiter employing a transaction grading mechanism to dynamically vary arbitration priority
US5809278A (en) * 1993-12-28 1998-09-15 Kabushiki Kaisha Toshiba Circuit for controlling access to a common memory based on priority
US5862355A (en) * 1996-09-12 1999-01-19 Telxon Corporation Method and apparatus for overriding bus prioritization scheme
US5918057A (en) * 1997-03-20 1999-06-29 Industrial Technology Research Institute Method and apparatus for dispatching multiple interrupt requests simultaneously
US5956516A (en) * 1997-12-23 1999-09-21 Intel Corporation Mechanisms for converting interrupt request signals on address and data lines to interrupt message signals
US5956493A (en) * 1996-03-08 1999-09-21 Advanced Micro Devices, Inc. Bus arbiter including programmable request latency counters for varying arbitration priority
US5978852A (en) * 1998-01-06 1999-11-02 3Com Corporation LAN switch interface for providing arbitration between different simultaneous memory access requests
US6000001A (en) * 1997-09-05 1999-12-07 Micron Electronics, Inc. Multiple priority accelerated graphics port (AGP) request queue
US6006303A (en) * 1997-08-28 1999-12-21 Oki Electric Industry Co., Inc. Priority encoding and decoding for memory architecture
US6016528A (en) * 1997-10-29 2000-01-18 Vlsi Technology, Inc. Priority arbitration system providing low latency and guaranteed access for devices
US6026461A (en) * 1995-08-14 2000-02-15 Data General Corporation Bus arbitration system for multiprocessor architecture
US6092137A (en) * 1997-11-26 2000-07-18 Industrial Technology Research Institute Fair data bus arbitration system which assigns adjustable priority values to competing sources
US6119196A (en) * 1997-06-30 2000-09-12 Sun Microsystems, Inc. System having multiple arbitrating levels for arbitrating access to a shared memory by network ports operating at different data rates
US6160562A (en) * 1998-08-18 2000-12-12 Compaq Computer Corporation System and method for aligning an initial cache line of data read from local memory by an input/output device
US6199118B1 (en) * 1998-08-18 2001-03-06 Compaq Computer Corporation System and method for aligning an initial cache line of data read from an input/output device by a central processing unit
US6219763B1 (en) * 1991-07-08 2001-04-17 Seiko Epson Corporation System and method for adjusting priorities associated with multiple devices seeking access to a memory array unit
US6233661B1 (en) * 1998-04-28 2001-05-15 Compaq Computer Corporation Computer system with memory controller that hides the next cycle during the current cycle
US6247102B1 (en) * 1998-03-25 2001-06-12 Compaq Computer Corporation Computer system employing memory controller and bridge interface permitting concurrent operation
US6249847B1 (en) * 1998-08-14 2001-06-19 Compaq Computer Corporation Computer system with synchronous memory arbiter that permits asynchronous memory requests
US6249830B1 (en) * 1996-08-20 2001-06-19 Compaq Computer Corp. Method and apparatus for distributing interrupts in a scalable symmetric multiprocessor system without changing the bus width or bus protocol
US6269433B1 (en) * 1998-04-29 2001-07-31 Compaq Computer Corporation Memory controller using queue look-ahead to reduce memory latency
US6286083B1 (en) * 1998-07-08 2001-09-04 Compaq Computer Corporation Computer system with adaptive memory arbitration scheme
US6553443B1 (en) * 1999-09-28 2003-04-22 Legerity, Inc. Method and apparatus for prioritizing interrupts in a communication system
US6684280B2 (en) * 2000-08-21 2004-01-27 Texas Instruments Incorporated Task based priority arbitration
US6848015B2 (en) * 2001-11-30 2005-01-25 Hewlett-Packard Development Company, L.P. Arbitration technique based on processor task priority

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6418496B2 (en) * 1997-12-10 2002-07-09 Intel Corporation System and apparatus including lowest priority logic to select a processor to receive an interrupt message

Patent Citations (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6219763B1 (en) * 1991-07-08 2001-04-17 Seiko Epson Corporation System and method for adjusting priorities associated with multiple devices seeking access to a memory array unit
US5809278A (en) * 1993-12-28 1998-09-15 Kabushiki Kaisha Toshiba Circuit for controlling access to a common memory based on priority
US5564060A (en) * 1994-05-31 1996-10-08 Advanced Micro Devices Interrupt handling mechanism to prevent spurious interrupts in a symmetrical multiprocessing system
US5634073A (en) * 1994-10-14 1997-05-27 Compaq Computer Corporation System having a plurality of posting queues associated with different types of write operations for selectively checking one queue based upon type of read operation
US5675736A (en) * 1995-05-24 1997-10-07 International Business Machines Corporation Multi-node network with internode switching performed within processor nodes, each node separately processing data and control messages
US6026461A (en) * 1995-08-14 2000-02-15 Data General Corporation Bus arbitration system for multiprocessor architecture
US5956493A (en) * 1996-03-08 1999-09-21 Advanced Micro Devices, Inc. Bus arbiter including programmable request latency counters for varying arbitration priority
US5805840A (en) * 1996-03-26 1998-09-08 Advanced Micro Devices, Inc. Bus arbiter employing a transaction grading mechanism to dynamically vary arbitration priority
US6249830B1 (en) * 1996-08-20 2001-06-19 Compaq Computer Corp. Method and apparatus for distributing interrupts in a scalable symmetric multiprocessor system without changing the bus width or bus protocol
US5862355A (en) * 1996-09-12 1999-01-19 Telxon Corporation Method and apparatus for overriding bus prioritization scheme
US5918057A (en) * 1997-03-20 1999-06-29 Industrial Technology Research Institute Method and apparatus for dispatching multiple interrupt requests simultaneously
US6119196A (en) * 1997-06-30 2000-09-12 Sun Microsystems, Inc. System having multiple arbitrating levels for arbitrating access to a shared memory by network ports operating at different data rates
US6006303A (en) * 1997-08-28 1999-12-21 Oki Electric Industry Co., Inc. Priority encoding and decoding for memory architecture
US6000001A (en) * 1997-09-05 1999-12-07 Micron Electronics, Inc. Multiple priority accelerated graphics port (AGP) request queue
US6016528A (en) * 1997-10-29 2000-01-18 Vlsi Technology, Inc. Priority arbitration system providing low latency and guaranteed access for devices
US6092137A (en) * 1997-11-26 2000-07-18 Industrial Technology Research Institute Fair data bus arbitration system which assigns adjustable priority values to competing sources
US5956516A (en) * 1997-12-23 1999-09-21 Intel Corporation Mechanisms for converting interrupt request signals on address and data lines to interrupt message signals
US20010032286A1 (en) * 1997-12-23 2001-10-18 Pawlowski Stephen S. Mechanisms for converting interrupt request signals on address and data lines to interrupt message signals
US5978852A (en) * 1998-01-06 1999-11-02 3Com Corporation LAN switch interface for providing arbitration between different simultaneous memory access requests
US6247102B1 (en) * 1998-03-25 2001-06-12 Compaq Computer Corporation Computer system employing memory controller and bridge interface permitting concurrent operation
US6233661B1 (en) * 1998-04-28 2001-05-15 Compaq Computer Corporation Computer system with memory controller that hides the next cycle during the current cycle
US6269433B1 (en) * 1998-04-29 2001-07-31 Compaq Computer Corporation Memory controller using queue look-ahead to reduce memory latency
US6286083B1 (en) * 1998-07-08 2001-09-04 Compaq Computer Corporation Computer system with adaptive memory arbitration scheme
US6505260B2 (en) * 1998-07-08 2003-01-07 Compaq Information Technologies Group, L.P. Computer system with adaptive memory arbitration scheme
US6249847B1 (en) * 1998-08-14 2001-06-19 Compaq Computer Corporation Computer system with synchronous memory arbiter that permits asynchronous memory requests
US6199118B1 (en) * 1998-08-18 2001-03-06 Compaq Computer Corporation System and method for aligning an initial cache line of data read from an input/output device by a central processing unit
US6160562A (en) * 1998-08-18 2000-12-12 Compaq Computer Corporation System and method for aligning an initial cache line of data read from local memory by an input/output device
US6553443B1 (en) * 1999-09-28 2003-04-22 Legerity, Inc. Method and apparatus for prioritizing interrupts in a communication system
US6684280B2 (en) * 2000-08-21 2004-01-27 Texas Instruments Incorporated Task based priority arbitration
US6848015B2 (en) * 2001-11-30 2005-01-25 Hewlett-Packard Development Company, L.P. Arbitration technique based on processor task priority

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080320270A1 (en) * 2006-02-24 2008-12-25 Fujitsu Limited Data read-and-write controlling device
US8499135B2 (en) * 2006-02-24 2013-07-30 Fujitsu Limited Memory controller for reading data stored in memory after written thereto using write information table
US20080209430A1 (en) * 2007-02-28 2008-08-28 International Business Machines Corporation System, apparatus, and method for facilitating provisioning in a mixed environment of locales
US10817820B2 (en) 2007-02-28 2020-10-27 International Business Machines Corporation Facilitating provisioning in a mixed environment of locales
US10600014B2 (en) 2007-02-28 2020-03-24 International Business Machines Corporation Facilitating provisioning in a mixed environment of locales
US9317828B2 (en) 2007-02-28 2016-04-19 International Business Machines Corporation Facilitating provisioning in a mixed environment of locales
US20110138098A1 (en) * 2009-02-13 2011-06-09 The Regents Of The University Of Michigan Crossbar circuitry for applying an adaptive priority scheme and method of operation of such crossbar circuitry
US8549207B2 (en) * 2009-02-13 2013-10-01 The Regents Of The University Of Michigan Crossbar circuitry for applying an adaptive priority scheme and method of operation of such crossbar circuitry
US8868817B2 (en) 2009-02-13 2014-10-21 The Regents Of The University Of Michigan Crossbar circuitry for applying an adaptive priority scheme and method of operation of such crossbar circuitry
US10037295B2 (en) 2009-02-13 2018-07-31 The Regents Of The University Of Michigan Apparatus and methods for generating a selection signal to perform an arbitration in a single cycle between multiple signal inputs having respective data to send
US9514074B2 (en) 2009-02-13 2016-12-06 The Regents Of The University Of Michigan Single cycle arbitration within an interconnect
US20110246688A1 (en) * 2010-04-01 2011-10-06 Irwin Vaz Memory arbitration to ensure low latency for high priority memory requests
US9756082B1 (en) 2012-02-01 2017-09-05 Servicenow, Inc. Scalable network security with fast response protocol
US10628582B2 (en) 2012-02-01 2020-04-21 Servicenow, Inc. Techniques for sharing network security event information
US9680846B2 (en) 2012-02-01 2017-06-13 Servicenow, Inc. Techniques for sharing network security event information
US10032020B2 (en) 2012-02-01 2018-07-24 Servicenow, Inc. Techniques for sharing network security event information
US9167001B1 (en) * 2012-02-01 2015-10-20 Brightpoint Security, Inc. Scalable network security with fast response protocol
US10225288B2 (en) 2012-02-01 2019-03-05 Servicenow, Inc. Scalable network security detection and prevention platform
US11388200B2 (en) 2012-02-01 2022-07-12 Servicenow, Inc. Scalable network security detection and prevention platform
US10412103B2 (en) 2012-02-01 2019-09-10 Servicenow, Inc. Techniques for sharing network security event information
US9038183B1 (en) * 2012-02-01 2015-05-19 Vorstack, Inc. Scalable network security with fast response protocol
US9710644B2 (en) 2012-02-01 2017-07-18 Servicenow, Inc. Techniques for sharing network security event information
US11222111B2 (en) 2012-02-01 2022-01-11 Servicenow, Inc. Techniques for sharing network security event information
US8914406B1 (en) * 2012-02-01 2014-12-16 Vorstack, Inc. Scalable network security with fast response protocol
US10686805B2 (en) 2015-12-11 2020-06-16 Servicenow, Inc. Computer network threat assessment
US11539720B2 (en) 2015-12-11 2022-12-27 Servicenow, Inc. Computer network threat assessment
US11223640B2 (en) 2017-05-03 2022-01-11 Servicenow, Inc. Aggregating network security data for export
US10333960B2 (en) 2017-05-03 2019-06-25 Servicenow, Inc. Aggregating network security data for export
US11743278B2 (en) 2017-05-03 2023-08-29 Servicenow, Inc. Aggregating network security data for export
US11575703B2 (en) 2017-05-05 2023-02-07 Servicenow, Inc. Network security threat intelligence sharing
US11570176B2 (en) 2021-01-28 2023-01-31 Bank Of America Corporation System and method for prioritization of text requests in a queue based on contextual and temporal vector analysis

Also Published As

Publication number Publication date
US6848015B2 (en) 2005-01-25
US20030105911A1 (en) 2003-06-05

Similar Documents

Publication Publication Date Title
US6848015B2 (en) Arbitration technique based on processor task priority
US6321285B1 (en) Bus arrangements for interconnection of discrete and/or integrated modules in a digital system and associated method
US4706190A (en) Retry mechanism for releasing control of a communications path in digital computer system
US5375223A (en) Single register arbiter circuit
US6070209A (en) Delivering transactions between data buses in a computer system
KR910001790B1 (en) Apparatus and its method for arbitrating assigning control of a communications path digital computer system
RU2380743C1 (en) Method and device for clearing semaphore reservation
US6330647B1 (en) Memory bandwidth allocation based on access count priority scheme
US6389526B1 (en) Circuit and method for selectively stalling interrupt requests initiated by devices coupled to a multiprocessor system
US5857083A (en) Bus interfacing device for interfacing a secondary peripheral bus with a system having a host CPU and a primary peripheral bus
KR910001789B1 (en) Cache invalidation apparatus for multiprocessor system of digital computer system
EP0139563B1 (en) Control mechanism for multiprocessor system
US6519666B1 (en) Arbitration scheme for optimal performance
EP0140751A2 (en) Cache invalidation mechanism for multiprocessor systems
US20160188529A1 (en) Guaranteed quality of service in system-on-a-chip uncore fabric
EP0138676B1 (en) Retry mechanism for releasing control of a communications path in a digital computer system
US10133670B2 (en) Low overhead hierarchical connectivity of cache coherent agents to a coherent fabric
US9213545B2 (en) Storing data in any of a plurality of buffers in a memory controller
JP2007094649A (en) Access arbitration circuit
US7234012B2 (en) Peripheral component interconnect arbiter implementation with dynamic priority scheme
EP0139568B1 (en) Message oriented interrupt mechanism for multiprocessor systems
KR100757791B1 (en) Shared resource arbitration method and apparatus
US6480923B1 (en) Information routing for transfer buffers
US5815674A (en) Method and system for interfacing a plurality of bus requesters with a computer bus
JP2813182B2 (en) Multiprocessor computer multifunction device

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION