US5931924A - Method and system for controlling access to a shared resource that each requestor is concurrently assigned at least two pseudo-random priority weights - Google Patents

Method and system for controlling access to a shared resource that each requestor is concurrently assigned at least two pseudo-random priority weights

Info

Publication number
US5931924A
US5931924A
Authority
US
United States
Prior art keywords
requesters
request
priority
shared resource
requestors
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
US08/839,437
Inventor
Ravi Kumar Arimilli
John Steven Dodson
Jerry Don Lewis
Derek Edward Williams
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US08/839,437 priority Critical patent/US5931924A/en
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ARIMILLI, RAVI K., DODSON, JOHN S., LEWIS, JERRY D., WILLIAMS, DEREK E.
Priority to JP10097774A priority patent/JPH10301908A/en
Application granted granted Critical
Publication of US5931924A publication Critical patent/US5931924A/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00: Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/14: Handling requests for interconnection or transfer
    • G06F 13/36: Handling requests for interconnection or transfer for access to common bus or bus system
    • G06F 13/362: Handling requests for interconnection or transfer for access to common bus or bus system with centralised access control
    • G06F 13/364: Handling requests for interconnection or transfer for access to common bus or bus system with centralised access control using independent requests or grants, e.g. using separated request and grant lines

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Bus Control (AREA)
  • Multi Processors (AREA)

Abstract

A method and system for controlling access to a shared resource in a data processing system are described. According to the method, a number of requests for access to the resource are generated by a number of requesters that share the resource. Each of the requesters is associated with a priority weight that indicates a probability that the associated requester will be assigned a highest current priority. Each requester is then assigned a current priority that is determined substantially randomly with respect to previous priorities of the requesters. In response to the current priorities of the requesters, a request for access to the resource is granted. In one embodiment, a requester corresponding to a granted request is signaled that its request has been granted, and a requester corresponding to a rejected request is signaled that its request was not granted.

Description

BACKGROUND OF THE INVENTION
1. Technical Field
The present invention relates in general to a method and system for data processing and in particular to a method and system for controlling access to a shared resource in a data processing system. Still more particularly, the present invention relates to a method and system for controlling access to a shared resource by a plurality of requesters in a data processing system, wherein at least a highest priority is assigned to one of the plurality of requesters substantially randomly with respect to previous priorities of the plurality of requesters.
2. Description of the Related Art
In data processing systems, it is common for a single resource to be shared by multiple devices that request service from or access to the resource. In these instances, some method of controlling access to the resource must be implemented to ensure that the devices obtain access to the shared resource in a manner that promotes efficient operation of the data processing system.
In conventional data processing systems, regulating access to a shared resource is often accomplished by assigning a fixed priority to each of the devices that may request service from or access to the shared resource. Such fixed priority systems, while providing a simple and efficient method of determining which access requests to grant, often fail to allocate adequate access to lower priority devices and operations, particularly when high priority devices generate frequent access requests. In order to address this problem, some conventional data processing systems implement a "round robin" resource allocation scheme in which access to the shared resource is provided to each of the multiple devices sequentially and cyclically. However, the implementation of a round robin resource allocation mechanism has its own concomitant problems. First, high priority devices and operations can be delayed, producing latency in critical timing paths and diminishing overall system performance. Second, and more subtly, empirical data indicates that deterministic methods of allocating resource access, such as round robin, can lead to "live locks" or situations in which request timings permit only a few processes or devices to gain access to a shared resource, while effectively preventing access to the shared resource by other processes or devices.
As should thus be apparent, it would be desirable to provide an improved method and system for controlling access to a shared resource within a data processing system. In particular, it would be desirable to provide a method and system for controlling access to a shared resource that permit both high priority and low priority devices adequate access to a shared resource. In addition, it would be desirable to minimize the request to grant latency. Furthermore, in order to minimize live locks, it would be desirable for the method and system for controlling access to the shared resource to operate in a substantially non-deterministic manner.
SUMMARY OF THE INVENTION
It is therefore one object of the present invention to provide an improved method and system for data processing.
It is another object of the present invention to provide an improved method and system for controlling access to a shared resource in a data processing system.
It is yet another object of the present invention to provide a method and system for controlling access to a shared resource by a plurality of requestors in a data processing system, wherein at least a highest priority is assigned to one of the plurality of requestors substantially randomly with respect to previous priorities of the plurality of requestors.
The foregoing objects are achieved as is now described. A method and system for controlling access to a shared resource in a data processing system are provided. According to the method, a number of requests for access to the resource are generated by a number of requestors that share the resource. Each of the requestors is associated with a priority weight that indicates a probability that the associated requestor will be assigned a highest current priority. Each requestor is then assigned a current priority that is determined substantially randomly with respect to previous priorities of the requestors. In response to the current priorities of the requestors, a request for access to the resource is granted. In one embodiment, a requestor corresponding to a granted request is signaled that its request has been granted, and a requestor corresponding to a rejected request is signaled that its request was not granted.
The above as well as additional objects, features, and advantages of the present invention will become apparent in the following detailed written description.
BRIEF DESCRIPTION OF THE DRAWINGS
The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as a preferred mode of use, further objects, and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings, wherein:
FIG. 1 depicts a first illustrative embodiment of a data processing system or portion of a data processing system with which the present invention may advantageously be utilized;
FIG. 2 illustrates a more detailed block diagram representation of the pseudo-random generator and resource controller illustrated in FIG. 1;
FIG. 3 depicts a request queue, which, in an illustrative embodiment of the data processing system illustrated in FIG. 1, comprises a requester;
FIG. 4 is a timing diagram illustrating the relative timings of a request for access to the shared resource by a requester and a response to the request by the resource controller in the data processing system depicted in FIG. 1;
FIG. 5 depicts a second illustrative embodiment of a data processing system in accordance with the present invention in which the requesters are bus masters, the resource controller is a bus arbiter, and the shared resource is a shared system bus; and
FIG. 6 illustrates a more detailed block diagram of the bus arbiter shown in FIG. 5.
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
With reference now to the figures and in particular with reference to FIG. 1, there is depicted a data processing system in accordance with a first illustrative embodiment of the present invention. As illustrated, data processing system 10, which may comprise, for example, a computer system, a processor, or a memory subsystem, includes requesters 12-18 and optional additional requesters indicated by ellipsis notation. Each of requesters 12-18 is coupled to resource controller 20 by a request output signal and by acknowledge and grant/retry input signals. Resource controller 20 controls access by requesters 12-18 to shared resource 22, which can comprise any resource of data processing system 10, including a shared system bus, an L2 cache directory, or a processor. Data processing system 10 further includes performance monitor 54, which monitors and counts selected events within data processing system 10, including requests generated by requesters 12-18.
As illustrated in FIG. 1, the number of requests for access to (or service from) shared resource 22 that can be concurrently granted is less than the number of possible requests that may be generated by requesters 12-18. Accordingly, when resource controller 20 receives more requests for access to shared resource 22 than can be concurrently granted, resource controller 20 grants the requests of only selected ones of requesters 12-18 according to a priority order. In contrast to prior art data processing systems that allocate access to a shared resource in a deterministic manner, for example, according to fixed or round robin priority order, resource controller 20 utilizes an input from pseudo-random generator 24 to assign at least a highest priority to one of requesters 12-18 in a substantially non-deterministic manner.
Referring now to FIG. 2, there is illustrated a more detailed block diagram representation of pseudo-random generator 24 and resource controller 20 within data processing system 10. Pseudo-random generator 24 comprises a 40-bit shift register 30 that is utilized to generate a 40-bit pseudo-random bit pattern during each clock cycle of data processing system 10. As depicted, bits 1, 18, 20, and 39 of shift register 30 serve as inputs of XNOR gate 32, which produces a 1-bit output that is connected to one input of AND gate 34. AND gate 34 has a second input coupled to clock signal 36, which supplies the clock pulses utilized to synchronize the operation of the components of data processing system 10, and an output connected to bit 0 of shift register 30. During power-on reset (POR), shift register 30 is initialized to all zeroes. Thereafter, XNOR gate 32 performs an exclusive-NOR operation on the values of bits 1, 18, 20, and 39. The resulting 1-bit output of XNOR gate 32 is shifted into bit 0 of shift register 30 when clock signal 36 goes high, and the values of bits 0-38 are shifted to the right (bit 39 is discarded), thereby generating a different bit pattern each cycle. While FIG. 2 depicts an embodiment of pseudo-random generator 24 that utilizes specific bits of shift register 30 in order to generate a pseudo-random bit pattern having a good distribution of values, it should be understood that other combinations and numbers of bits may alternatively be employed. Furthermore, in order to make the bit patterns produced by shift register 30 truly non-deterministic over the operating life of data processing system 10, a larger shift register containing, for example, 80 bits could be utilized. Alternatively, a non-deterministic bit pattern could be obtained by sampling shift register 30 asynchronously with respect to clock signal 36.
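The behavior of pseudo-random generator 24 can be modeled in a few lines of software. The sketch below (Python, with illustrative class and method names that do not appear in the patent) reproduces the 40-bit register, the XNOR feedback taken from bits 1, 18, 20, and 39, and the right shift performed on each clock edge.
```python
class PseudoRandomGenerator:
    """Behavioral sketch of the 40-bit XNOR-feedback shift register of FIG. 2."""

    WIDTH = 40
    TAPS = (1, 18, 20, 39)  # bit positions fed to the XNOR gate, per the description

    def __init__(self):
        # Power-on reset: the register is initialized to all zeroes.
        self.bits = [0] * self.WIDTH

    def clock(self):
        """Advance one cycle: shift bits 0-38 right and feed the XNOR result into bit 0."""
        # XNOR of the tapped bits is 1 exactly when their XOR is 0.
        xor = self.bits[self.TAPS[0]] ^ self.bits[self.TAPS[1]] \
            ^ self.bits[self.TAPS[2]] ^ self.bits[self.TAPS[3]]
        feedback = 1 - xor
        # Bit 39 is discarded; every other bit moves one position to the right.
        self.bits = [feedback] + self.bits[:-1]

    def sample(self, positions):
        """Return the values of selected bit positions (e.g., bits 3, 7, 27, and 34)."""
        return [self.bits[p] for p in positions]
```
Because the feedback is an exclusive-NOR rather than an exclusive-OR, the all-zero reset state is not a lock-up state: the first feedback bit computed from four zero taps is 1, so the register immediately leaves the all-zero pattern.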
During each clock cycle, the values of predetermined bits within shift register 30 are sampled by resource controller 20, which utilizes the state indicated by the predetermined bits to assign a highest priority to one of requesters 12-18. It is important to note that the number of bit values supplied to resource controller 20 is implementation-specific and depends upon the desired priority weight granularity. In the exemplary embodiment depicted in FIG. 2, the values of four predetermined bits, for example, bits 3, 7, 27, and 34, are utilized in order to provide an indication of one of sixteen possible states to resource controller 20 during each clock cycle. The state indicated by the 4-bit input is decoded by decoder 40 to produce a 16-bit value that indicates the decoded state by the bit position of a single bit set to 1.
Resource controller 20 further includes a software-accessible control register 42, which includes four 16-bit fields that each correspond to one of requesters 12-18. Control register 42 is written by software executing within data processing system 10 to assign each of the 16 possible states to one of requesters 12-18 by setting corresponding bits within each of the bit fields of control register 42. In order to avoid resource contention, a given bit position can be set to 1 in only a single field of control register 42.
Each of requesters 12-18 can be assigned equal priority weight by setting an equal number of bits within each of fields A-D of control register 42. Alternatively, each of requesters 12-18 can selectively be given greater or lesser priority weight by setting a greater or smaller number of bits in the corresponding field of control register 42. Thus, to give requester A 12 relatively greater priority weight, a large number of states can be allocated to requester A 12 by setting a correspondingly large number of bits within field A of control register 42. It should be understood that although a particular one of requesters 12-18 may be accorded a greater priority weight by assigning that requester a large number of states, any of requesters 12-18 can be assigned the highest priority in any given cycle based upon the pseudo-random bit pattern generated by pseudo-random generator 24. Thus, being allocated a large number of states does not guarantee that a requester will be assigned the highest priority in any given cycle, but only improves the probability that the requester will be assigned the highest priority.
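The mapping from a sampled state to the highest-priority requester can be made concrete with a short sketch. Below, the four 16-bit fields of control register 42 are modeled as bit masks; the particular field values and the ordering used to assemble the 4-bit state are assumptions chosen for illustration, not values taken from the patent.
```python
# Illustrative control-register contents: each field is a 16-bit mask assigning
# some of the sixteen possible states (bit positions 0-15) to one requester.
# A given state may be set in at most one field.  Here requester A holds 8 of
# the 16 states, B holds 4, and C and D hold 2 each -- example weights only.
CONTROL_REGISTER = {
    "A": 0b0000_0000_1111_1111,  # states 0-7
    "B": 0b0000_1111_0000_0000,  # states 8-11
    "C": 0b0011_0000_0000_0000,  # states 12-13
    "D": 0b1100_0000_0000_0000,  # states 14-15
}

def highest_priority_requester(sampled_bits):
    """Map four sampled shift-register bits to the requester that holds that state."""
    # Assemble the 4-bit state (bit ordering assumed) and decode it to a one-hot
    # 16-bit value, as decoder 40 does in FIG. 2.
    state = (sampled_bits[0] << 3) | (sampled_bits[1] << 2) \
        | (sampled_bits[2] << 1) | sampled_bits[3]
    one_hot = 1 << state
    for requester, field in CONTROL_REGISTER.items():
        if field & one_hot:
            return requester
    return None  # unreachable when every state is assigned to some requester
```
With these example masks, requester A is four times as likely as requester C or D, and twice as likely as requester B, to be assigned the highest priority on any given cycle, yet any of the four requesters can win on any cycle.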
Resource controller 20 also includes a control register 52 having the same arrangement and function as control register 42. In contrast to control register 42, which is programmable by software, control register 52 can be written by performance monitor 54 in response to events occurring within data processing system 10. Performance monitor 54 monitors and counts predefined events within data processing system 10 such as queue entries, clock cycles, misses in cache, instructions dispatched to a particular execution unit, and mispredicted branches. Based upon such monitoring, performance monitor 54 can dynamically update control register 52 in order to assign a greater or lesser priority weight to each of requesters 12-18.
Referring now to FIG. 3, there is depicted a queue 100, which, in an exemplary embodiment of data processing system 10, comprises one of requesters 12-18. Queue 100 may comprise a castout queue within an L2 cache or a queue for storing directory writes, for example. As illustrated, queue 100 has an associated high water mark 102 that specifies a threshold which, if exceeded, causes performance monitor 54 to raise the priority weight of queue 100. By monitoring selected events within data processing system 10, such as the number of requests by a particular requester that remain outstanding, and by adjusting priority weights of requesters in response to the monitored events, performance monitor 54 provides real-time optimal resource management within data processing system 10.
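A minimal sketch of this high-water-mark policy follows; the function name, the idea of toggling between exactly two weights, and the example field values in the docstring are simplifying assumptions, since performance monitor 54 may adjust weights in response to many kinds of monitored events.
```python
def dynamic_weight_field(queue_depth, high_water_mark, base_field, boosted_field):
    """Sketch of the FIG. 3 policy: when the monitored queue exceeds its high water
    mark, return a field with more states set (a heavier priority weight) to be
    written into control register 52; otherwise return the base field.
    Example arguments (assumed): base_field=0b0000_0000_0000_0011,
    boosted_field=0b0000_0011_1111_1111."""
    if queue_depth > high_water_mark:
        return boosted_field
    return base_field
```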
Referring again to FIG. 2, each pair of corresponding bit fields of control registers 42 and 52 is input into one of multiplexers 60-66. If performance monitor dynamic control of access to shared resource 22 is enabled, for example, by setting a control register, performance monitor 54 asserts select signal 56, which selects the values of the fields of control register 52 as the outputs of multiplexers 60-66. Alternatively, if performance monitor dynamic control of access to shared resource 22 is not enabled, select signal 56 is not asserted, and multiplexers 60-66 select the values of the fields of control register 42 as outputs. As illustrated, the 16-bit outputs of multiplexers 60-66 are each individually ANDed with the 16-bit output of decoder 40 utilizing AND gates 70-76. The 16 bits output by each of AND gates 70-76 are then ORed utilizing one of OR gates 80-86 to produce four 1-bit signals that indicate which of requesters 12-18 was granted the highest priority during the current cycle. As depicted in FIG. 2, each of the 1-bit signals is then ANDed with a corresponding request signal utilizing one of AND gates 90-96. The output of each of AND gates 90-96 is then transmitted to the corresponding one of requesters 12-18 as a grant (1) or retry (0) signal.
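In behavioral terms, the gating network just described reduces to an AND of each requester's selected field with the decoder output, an OR-reduction of that result, and a final AND with the requester's request line. The sketch below models that reduction; representing the signals as Python dictionaries is an editorial convenience, not part of the patent.
```python
def grant_signals(requests, one_hot_state, fields):
    """Model of the AND/OR network of FIG. 2.

    requests      -- maps requester name to its request line value (0 or 1)
    one_hot_state -- 16-bit output of the decoder (exactly one bit set)
    fields        -- maps requester name to its selected 16-bit control-register field
    """
    grants = {}
    for name, request in requests.items():
        # OR-reduction of the per-state AND: 1 iff this requester holds the decoded state.
        has_highest_priority = 1 if (fields[name] & one_hot_state) else 0
        # A requester receives a grant (1) only if it holds the highest priority and is
        # actually requesting; a pending request that loses arbitration sees 0 (retry).
        grants[name] = has_highest_priority & request
    return grants
```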
As will be appreciated by those skilled in the art, the requester that is awarded highest priority in a given cycle may not generate (or have queued) a request for access to shared resource 22 during that cycle. Accordingly, resource controller 20 includes priority assignment logic 98, which assigns at least one subsidiary priority to one of requesters 12-18 in response to NOR gate 97 indicating that the assignment of the highest priority did not generate a grant signal. Priority assignment logic 98 can assign the subsidiary priority utilizing any one or a combination of methodologies. For example, priority assignment logic 98 may employ one or more of pseudo-random, fixed, round robin, fairness, or other priority assignment methodologies. Regardless of which method is utilized to determine the subsidiary priority, priority assignment logic 98 transmits a grant signal to the requester granted access to shared resource 22 in the current cycle.
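One possible realization of priority assignment logic 98 is sketched below, using a round-robin subsidiary policy as the fallback; round robin is only one of the methodologies the text permits, and the rotation-list representation is an assumption.
```python
def arbitrate(requests, grants, rotation):
    """Apply a subsidiary round-robin priority when the pseudo-random highest
    priority produced no grant (i.e., the selected requester was idle).

    requests -- maps requester name to request line value (0 or 1)
    grants   -- output of the pseudo-random stage (at most one grant set)
    rotation -- mutable list of requester names in current round-robin order
    """
    if any(grants.values()):
        return grants  # the pseudo-random assignment already produced a grant
    for name in list(rotation):
        if requests.get(name):
            grants = {n: 0 for n in requests}
            grants[name] = 1
            # Move the winner to the back of the rotation for the next fallback.
            rotation.remove(name)
            rotation.append(name)
            break
    return grants
```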
Referring now to FIG. 4, there is illustrated a timing diagram showing the timing relationship between generation of a request by one of requesters 12-18 and receipt of a grant signal from resource controller 20. As illustrated, one of requesters 12-18 asserts a request signal at time T0. Subsequently, at time T1, resource controller 20 responds to the request by asserting an acknowledgement signal. In addition, if the request for access to shared resource 22 was granted, resource controller 20 also asserts a grant signal at time T1. If no grant signal is received by the requester at time T1, the requester interprets the acknowledgement signal as a retry. Thereafter, at time T2, the requester can remove the request in response to the grant of access to shared resource 22 or in response to the request no longer being valid. Alternatively, in response to receipt of a retry signal, the requester can continue to assert the request.
Requesters 12-18 can be implemented to support a number of behaviors in response to continued receipt of a retry signal. First, requesters 12-18 can be configured to retry a particular request only a programmable (i.e., software-specified) number of times. Alternatively, a requester can access selected bits of shift register 30 to ascertain a pseudo-random number of times to retry a request. For example, in an embodiment in which a maximum of 31 retries are permitted, 5 bits of shift register 30 can be read by a requester in response to receipt of a first retry signal in order to determine a number of times to retry a particular request. As a second alternative, if optimal control of access to shared resource 22 is desired, performance monitor 54 can dynamically set the number of retries permitted for a particular request based, for example, upon the number of outstanding requests by other requesters.
Data processing system 10 can similarly implement a number of different methods of handling requests once the maximum number of retries for a request has been reached. For example, requesters 12-18 can remove a request for a programmable period of time once the maximum number of retries has been reached. Alternatively, a request can be removed for a pseudo-randomly selected period of time determined by accessing predetermined bits within shift register 30. For example, in an exemplary embodiment in which 5 bits within shift register 30 are utilized, a request can be removed for a number of clock cycles specified by the 5 bits. Again, if optimal control of access to shared resource 22 is desired, performance monitor 54 can alternatively be configured to dynamically set the number of cycles for which requesters 12-18 must remove requests that have been retried the maximum number of times.
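The two pseudo-random choices described above, a retry limit and a back-off interval each taken from 5 bits of shift register 30, can be modeled as follows; the specific bit positions sampled are illustrative assumptions.
```python
def pick_retry_limit(register_bits, positions=(0, 5, 11, 22, 33)):
    """Requester-side sketch: after the first retry, read five bits of the shift
    register to choose how many more times (0-31) to retry the request."""
    value = 0
    for p in positions:
        value = (value << 1) | register_bits[p]
    return value

def pick_backoff_cycles(register_bits, positions=(2, 9, 17, 28, 36)):
    """Once the retry limit is exhausted, read five bits to choose how many clock
    cycles (0-31) the request is removed before being presented again."""
    value = 0
    for p in positions:
        value = (value << 1) | register_bits[p]
    return value
```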
With reference now to FIG. 5, there is depicted a second illustrative embodiment of a data processing system in accordance with the present invention. As illustrated, data processing system 200 includes requesters 212-218, which are each coupled to shared system bus 222 and which can each master (i.e., initiate bus transactions on) system bus 222. Requests for ownership of system bus 222 generated by requesters 212-218 are arbitrated by bus arbiter 220 in response to priority signals 230-236 generated by pseudo-random generator 224. Priority signals 230-236, which each correspond to a respective one of requesters 212-218, each contain pseudo-random duration pulses spaced by pseudo-randomly determined intervals. Importantly, at most one of priority signals 230-236 can be logic low at a time.
Referring now to FIG. 6, there is illustrated a more detailed view of bus arbiter 220 of FIG. 5. As depicted, bus arbiter 220 includes transistors 250-256, which each correspond to one of requesters 212-218. Each of transistors 250-256 has a source connected to the request line of its associated requester, a gate connected to the corresponding one of priority signals 230-236, and a drain connected to the associated requester as a grant line. In operation, priority signals 230-236 are utilized to gate requests received from requesters 212-218 so that a requester making a request and having a logic low priority signal is granted ownership of system bus 222.
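Behaviorally, the transistor network of FIG. 6 simply gates each request line with its active-low priority signal, as the short sketch below illustrates; the dictionary representation of the signals is an assumption made for clarity.
```python
def bus_grants(requests, priority_signals):
    """Sketch of the FIG. 6 arbiter: a requester is granted the bus when it is
    requesting and its priority signal is currently logic low.  Because at most
    one priority signal is low at a time, at most one grant is asserted."""
    return {name: int(requests[name] and priority_signals[name] == 0)
            for name in requests}
```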
As has been described, the present invention provides an improved method and system for controlling access to a shared resource in a data processing system. The method and system of the present invention assign at least a highest priority to one of a plurality of requesters non-deterministically in order to avoid live locks and to prevent a single requester from monopolizing the shared resource. While providing all requesters an opportunity to obtain access to the shared resource, the present invention also permits requesters to be given diverse priority weights, under the control of either software or hardware, in order to ensure that high priority requesters have a greater probability of obtaining access to the shared resource.
While the invention has been particularly shown and described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention.

Claims (22)

What is claimed is:
1. A method of controlling access to a shared resource in a data processing system having a plurality of requestors, said method comprising:
associating each of a plurality of priority weights with a respective one of said plurality of requesters, each priority weight determining a probability that the associated requestor will be assigned a highest current priority, wherein said associating step includes assigning one or more of a plurality of possible substantially random bit patterns to each of said plurality of requesters, wherein at least one of said plurality of requesters is concurrently assigned at least two of said plurality of possible substantially random bit patterns;
assigning each of a plurality of current priorities to a respective one of a plurality of requestors, at least a highest current priority among said plurality of current priorities being assigned substantially randomly with respect to previous priorities of said plurality of requesters and independently of which of said plurality of priority weights is associated with each of said plurality of requestors, wherein assigning said plurality of current priorities includes:
generating a substantially random bit pattern; and
allocating said highest current priority to one of said plurality of requesters associated with a corresponding possible substantially random bit pattern; and
in response to receipt of one or more requests for access to said shared resource from said plurality of requesters, granting a selected request among said one or more requests for access to said shared resource in response to said current priorities of said plurality of requestors.
2. The method of claim 1, said substantially random bit pattern including a plurality of bits each having a value of 1 or 0, wherein said allocating step comprises the step of allocating said highest current priority to one of said plurality of requesters in response to fewer than all of said plurality of bits.
3. The method of claim 1, and further comprising signaling a requester among said plurality of requesters that generated said selected request that said selected request has been granted.
4. The method of claim 1, and further comprising:
rejecting a request; and
signaling a particular requester among said plurality of requestors that generated said rejected request that said rejected request was not granted.
5. The method of claim 4, wherein said signaling step comprises transmitting a retry signal to said particular requester.
6. The method of claim 5, and further comprising:
in response to receipt of said retry signal by said particular requestor, again generating a request for said shared resource by said particular requestor.
7. The method of claim 6, wherein said step of again generating a request for said shared resource is performed by said particular requester only a substantially randomly selected number of times in response to continued receipt of retry signals.
8. The method of claim 7, and further comprising thereafter removing said request for said shared resource by said particular requester for a substantially randomly selected period of time.
9. The method of claim 1, said assigning step comprising assigning a current priority other than said highest current priority to one of said plurality of requesters according to a predetermined rule.
10. The method of claim 1, wherein at least two of said plurality of priority weights are equal.
11. A system, comprising:
a resource;
a plurality of requestors that share said resource, each of said plurality of requesters including means for generating a request for access to said resource;
means for associating each of a plurality of priority weights with a respective one of said plurality of requesters, each priority weight determining a probability that the associated requestor will be assigned a highest current priority, wherein said means for associating includes means for assigning one or more of a plurality of possible substantially random bit patterns to each of said plurality of requesters, wherein at least one of said plurality of requestors is concurrently assigned at least two of said plurality of possible substantially random bit patterns;
means for assigning each of a plurality of current priorities to a respective one of said plurality of requesters, at least a highest priority among said plurality of current priorities being assigned substantially randomly with respect to previous priorities of said plurality of requestors and independently of which of said plurality of priority weights is associated with each of said plurality of requesters, wherein said means for assigning said plurality of current priorities includes:
means for generating a substantially random bit pattern; and
means for allocating said highest current priority to one of said plurality of requestors associated with a corresponding possible substantially random bit pattern; and
means, responsive to receipt of one or more requests for access to said shared resource from said plurality of requestors, for granting a selected request among said one or more requests for access to said shared resource in response to said current priorities of said plurality of requesters.
12. The system of claim 11, said substantially random bit pattern including a plurality of bits each having a value of 1 or 0, wherein said means for allocating comprises means for allocating said highest current priority to one of said plurality of requestors in response to fewer than all of said plurality of bits.
13. The system of claim 11, and further comprising means for signaling a requester among said plurality of requesters that generated said selected request that said selected request has been granted.
14. The system of claim 13, said means for granting a selected request comprising means for rejecting a request, wherein said means for signaling further comprises means for signaling a particular requester among said plurality of requesters that generated said rejected request that said rejected request was not granted.
15. The system of claim 14, wherein said means for signaling a particular requester among said plurality of requesters that generated said rejected request that said rejected request was not granted comprises means for transmitting a retry signal to said particular requester.
16. The system of claim 15, each of said plurality of requesters further comprising:
means, responsive to receipt of a retry signal, for again generating a request for said shared resource.
17. The system of claim 16, wherein said means for again generating a request for said shared resource comprises means for again generating a request for said shared resource only a substantially randomly selected number of times in response to continued receipt of retry signals.
18. The system of claim 17, and further comprising means for thereafter removing said request for said shared resource by said particular requestor for a substantially randomly selected period of time.
19. The system of claim 11, wherein said means for assigning and said means for granting access together form a resource controller.
20. The system of claim 11, wherein a number of said plurality of requesters is greater than a number of requests that can be granted concurrently.
21. The system of claim 11, said means for assigning comprising means for assigning a current priority other than said highest current priority to one of said plurality of requesters according to a predetermined rule.
22. The system of claim 11, wherein at least two of said plurality of priority weights are equal.

Priority Applications (2)

Application Number Priority Date Filing Date Title
US08/839,437 US5931924A (en) 1997-04-14 1997-04-14 Method and system for controlling access to a shared resource that each requestor is concurrently assigned at least two pseudo-random priority weights
JP10097774A JPH10301908A (en) 1997-04-14 1998-04-09 Method and system for controlling access to shared resource

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US08/839,437 US5931924A (en) 1997-04-14 1997-04-14 Method and system for controlling access to a shared resource that each requestor is concurrently assigned at least two pseudo-random priority weights

Publications (1)

Publication Number Publication Date
US5931924A (en) 1999-08-03

Family

ID=25279727

Family Applications (1)

Application Number Title Priority Date Filing Date
US08/839,437 Expired - Fee Related US5931924A (en) 1997-04-14 1997-04-14 Method and system for controlling access to a shared resource that each requestor is concurrently assigned at least two pseudo-random priority weights

Country Status (2)

Country Link
US (1) US5931924A (en)
JP (1) JPH10301908A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4470112A (en) * 1982-01-07 1984-09-04 Bell Telephone Laboratories, Incorporated Circuitry for allocating access to a demand-shared bus
US4752872A (en) * 1984-06-28 1988-06-21 International Business Machines Corporation Arbitration device for latching only the highest priority request when the common resource is busy
US5586331A (en) * 1990-09-13 1996-12-17 International Business Machines Corporation Duplicated logic and interconnection system for arbitration among multiple information processors
US5265215A (en) * 1991-04-22 1993-11-23 International Business Machines Corporation Multiprocessor system and interrupt arbiter thereof
US5754800A (en) * 1991-07-08 1998-05-19 Seiko Epson Corporation Multi processor system having dynamic priority based on row match of previously serviced address, number of times denied service and number of times serviced without interruption
US5265223A (en) * 1991-08-07 1993-11-23 Hewlett-Packard Company Preservation of priority in computer bus arbitration
US5247677A (en) * 1992-05-22 1993-09-21 Apple Computer, Inc. Stochastic priority-based task scheduler
US5379434A (en) * 1992-12-18 1995-01-03 International Business Machines Corporation Apparatus and method for managing interrupts in a multiprocessor system
US5515516A (en) * 1994-03-01 1996-05-07 Intel Corporation Initialization mechanism for symmetric arbitration agents
US5706446A (en) * 1995-05-18 1998-01-06 Unisys Corporation Arbitration system for bus requestors with deadlock prevention
US5717872A (en) * 1996-01-11 1998-02-10 Unisys Corporation Flexible, soft, random-like counter system for bus protocol waiting periods

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6662216B1 (en) * 1997-04-14 2003-12-09 International Business Machines Corporation Fixed bus tags for SMP buses
US6724766B2 (en) * 1997-06-20 2004-04-20 Alcatel Method and arrangement for prioritized data transmission of packets
US6167478A (en) * 1998-10-05 2000-12-26 Infineon Technologies North America Corp. Pipelined arbitration system and method
US6493354B1 (en) * 1998-11-11 2002-12-10 Qualcomm, Incorporated Resource allocator
US20040267992A1 (en) * 2002-02-28 2004-12-30 Stuber Russell B Look ahead split release for a data bus
US7174401B2 (en) 2002-02-28 2007-02-06 Lsi Logic Corporation Look ahead split release for a data bus
US20030204663A1 (en) * 2002-04-30 2003-10-30 Stuber Russell B. Apparatus for arbitrating non-queued split master devices on a data bus
US6948019B2 (en) * 2002-04-30 2005-09-20 Lsi Logic Corporation Apparatus for arbitrating non-queued split master devices on a data bus
EP1403773A3 (en) * 2002-09-30 2005-12-07 Matsushita Electric Industrial Co., Ltd. Resource management device
EP1403773A2 (en) * 2002-09-30 2004-03-31 Matsushita Electric Industrial Co., Ltd. Resource management device
US20040073730A1 (en) * 2002-09-30 2004-04-15 Matsushita Electric Industrial Co., Ltd. Resource management device
US7032046B2 (en) 2002-09-30 2006-04-18 Matsushita Electric Industrial Co., Ltd. Resource management device for managing access from bus masters to shared resources
US20040215858A1 (en) * 2003-04-24 2004-10-28 International Business Machines Corporation Concurrent access of shared resources
US7047337B2 (en) * 2003-04-24 2006-05-16 International Business Machines Corporation Concurrent access of shared resources utilizing tracking of request reception and completion order
US7308510B2 (en) * 2003-05-07 2007-12-11 Intel Corporation Method and apparatus for avoiding live-lock in a multinode system
US20040225755A1 (en) * 2003-05-07 2004-11-11 Quach Tuan M. Method and apparatus for avoiding live-lock in a multinode system
US7284080B2 (en) * 2003-07-07 2007-10-16 Sigmatel, Inc. Memory bus assignment for functional devices in an audio/video signal processing system
US20050005050A1 (en) * 2003-07-07 2005-01-06 Protocom Technology Corporation Memory bus assignment for functional devices in an audio/video signal processing system
US7047322B1 (en) * 2003-09-30 2006-05-16 Unisys Corporation System and method for performing conflict resolution and flow control in a multiprocessor system
US7603672B1 (en) * 2003-12-23 2009-10-13 Unisys Corporation Programmable request handling system and method
US20090319709A1 (en) * 2006-08-25 2009-12-24 Tero Vallius Circuit, method and arrangement for implementing simple and reliable distributed arbitration on a bus
US8190802B2 (en) * 2006-08-25 2012-05-29 Atomia Oy Circuit, method and arrangement for implementing simple and reliable distributed arbitration on a bus
US20080091866A1 (en) * 2006-10-12 2008-04-17 International Business Machines Corporation Maintaining forward progress in a shared L2 by detecting and breaking up requestor starvation
US20150339164A1 (en) * 2009-12-23 2015-11-26 Citrix Systems, Inc. Systems and methods for managing spillover limits in a multi-core system
US10846136B2 (en) * 2009-12-23 2020-11-24 Citrix Systems, Inc. Systems and methods for managing spillover limits in a multi-core system
US10091132B2 (en) * 2016-01-29 2018-10-02 Raytheon Company Systems and methods for resource contention resolution

Also Published As

Publication number Publication date
JPH10301908A (en) 1998-11-13

Similar Documents

Publication Publication Date Title
US5896539A (en) Method and system for controlling access to a shared resource in a data processing system utilizing dynamically-determined weighted pseudo-random priorities
US5935234A (en) Method and system for controlling access to a shared resource in a data processing system utilizing pseudo-random priorities
US5931924A (en) Method and system for controlling access to a shared resource that each requestor is concurrently assigned at least two pseudo-random priority weights
US6006303A (en) Priority encoding and decoding for memory architecture
EP0439987B1 (en) Arbitration system limiting high priority successive grants
EP0426413B1 (en) Multiprocessor arbitration in single processor arbitration schemes
US5996037A (en) System and method for arbitrating multi-function access to a system bus
US5528767A (en) Programmable multi-level bus arbitration apparatus in a data processing system
US5740380A (en) Method and system for apportioning computer bus bandwidth
EP0524682A1 (en) A centralized backplane bus arbiter for multiprocessor systems
US4682282A (en) Minimum latency tie-breaking arbitration logic circuitry
JPH0863429A (en) Multibus dynamic arbiter
US5528766A (en) Multiple arbitration scheme
US20040059879A1 (en) Access priority protocol for computer system
US7617344B2 (en) Methods and apparatus for controlling access to resources in an information processing system
US5761446A (en) Livelock avoidance
US6212589B1 (en) System resource arbitration mechanism for a host bridge
US6467032B1 (en) Controlled reissue delay of memory requests to reduce shared memory address contention
JP3614281B2 (en) Arbitration circuit
US6279066B1 (en) System for negotiating access to a shared resource by arbitration logic in a shared resource negotiator
US20020129210A1 (en) Multiprocessor system snoop scheduling mechanism for limited bandwidth snoopers that uses dynamic hardware/software controls
US6442632B1 (en) System resource arbitration mechanism for a host bridge
Wu et al. Predictable sharing of last-level cache partitions for multi-core safety-critical systems
US7099974B2 (en) Method, apparatus, and system for reducing resource contention in multiprocessor systems
US5758104A (en) Random delay subsystems

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ARIMILLI, RAVI K.;DODSON, JOHN S.;LEWIS, JERRY D.;AND OTHERS;REEL/FRAME:008510/0864

Effective date: 19970411

CC Certificate of correction
FPAY Fee payment

Year of fee payment: 4

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20070803