US6766360B1 - Caching mechanism for remote read-only data in a cache coherent non-uniform memory access (CCNUMA) architecture


Info

Publication number
US6766360B1
Authority
US
United States
Legal status
Expired - Fee Related
Application number
US09/616,583
Inventor
Patrick N. Conway
Yukihiro Nakagawa
Jung Rung Jiang
Current Assignee
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Application filed by Fujitsu Ltd filed Critical Fujitsu Ltd
Priority to US09/616,583 priority Critical patent/US6766360B1/en
Assigned to FUJITSU LIMITED reassignment FUJITSU LIMITED ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CONWAY, PATRICK N., JIANG, JUNG RUNG, NAKAGAWA, YUKIHIRO
Priority to JP2001215834A priority patent/JP2002091824A/en
Application granted granted Critical
Publication of US6766360B1 publication Critical patent/US6766360B1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0815Cache consistency protocols
    • G06F12/0817Cache consistency protocols using directory methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/25Using a specific main memory architecture
    • G06F2212/254Distributed memory
    • G06F2212/2542Non-uniform memory access [NUMA] architecture

Definitions

  • The present invention also beneficially simplifies the protocol by eliminating evictions of data from the RAC before installing a new cache line. Further, the present invention provides congestion relief since the RAC can be bypassed whenever it is busy. In this situation, the "read" request goes directly to the remote home node, bypassing the RAC. Data is then returned directly to the requestor and is not installed in the RAC. This is possible because if the cache line is present in the RAC, it is in the Shared state; the data in the RAC is always a copy of the data in memory, so the data can be returned from memory when the RAC is bypassed.
  • The RAC mechanism of the present invention provides a greater degree of fault tolerance without incurring any performance overhead, since a RAC access error can simply be treated as a RAC MISS; the error is then handled with no additional overhead.
  • The presence of the RAC does not increase memory access latency. That is, the latency to remote memory with a RAC MISS is the same as the latency to remote memory without a RAC. Therefore, the RAC can only improve performance, even if its miss rate is high.
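The error-handling property above can be sketched as a lookup wrapper that degrades any RAC access error to a MISS. This is an illustrative Python sketch, not the patent's implementation; the dict-like RAC and the `RACError` type are assumptions:

```python
class RACError(Exception):
    """Hypothetical RAC access error (e.g., an uncorrectable tag parity error)."""

MISS = None  # sentinel: the lookup behaves as if the line were not cached

def rac_lookup(rac, address):
    """Return cached data on a HIT, or MISS on a true miss *or* any error.

    Because the RAC holds only clean Shared copies of memory, a MISS is
    always safe: the request simply completes from the remote home node.
    """
    try:
        return rac.get(address, MISS)  # dict-like RAC: address -> data
    except RACError:
        return MISS  # degrade the error to a MISS; no recovery path needed
```

Because a MISS and an error take the same path, no extra fault-handling state is required.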
  • FIG. 1 is a block diagram of one embodiment of an overall architecture of a multi-node network system in accordance with the present invention.
  • FIG. 2 pictorially illustrates remote access cache (RAC) entries in accordance with the present invention.
  • FIG. 3 pictorially illustrates directory entry format recording the global state of a cache line in the system in accordance with the present invention.
  • FIG. 4 pictorially illustrates request outstanding buffer (ROB) entries in accordance with the present invention.
  • FIG. 5 is a flow diagram of one embodiment of a process for manipulating requests for shared data when a cache line is not being cached in a remote access cache (RAC) in accordance with the present invention.
  • FIG. 6 is a flow diagram of one embodiment of a process for manipulating requests for shared data when a cache line is being cached in a remote access cache (RAC) in accordance with the present invention.
  • FIG. 7 is a state transition diagram for the request outstanding buffer (ROB) in accordance with the present invention.
  • FIG. 1 is a block diagram of one embodiment of an overall architecture of a multi-node network system in accordance with the present invention.
  • the multi-node network system includes two local interconnect systems: one for group A ( 10 ), having the first node 12 and the second node 14 , and one for group B ( 40 ), having the first node 42 and the second node 44 .
  • the two local interconnect systems are connected through a global interconnect 50 .
  • the first node 12 and the second node 14 are coupled to each other through an interconnect system 8 .
  • the first node 12 includes a memory agent 22 , a main memory 18 , a directory 20 , one or more processors 16 K- 16 (K+m) (generally 16 ), a request outstanding buffer (ROB) 26 and a remote access cache (RAC) 28 .
  • Each processor 16 includes a processor cache 24 K- 24 (K+m) (generally 24 ).
  • the caches 24 of processors in the first node are coupled with the memory agent 22 .
  • the main memory 18 in the first node 12 is coupled with the memory agent 22 .
  • the directory 20 and the RAC 28 are coupled with the ROB 26 , which is coupled to the memory agent 22 .
  • the second node 14 includes a memory agent 32 , a main memory 38 , a directory 20 , one or more processors 36 L- 36 (L+n) (generally 36 ), a request outstanding buffer (ROB) 26 and a remote access cache (RAC) 28 .
  • Each processor 36 includes a processor cache 34 L- 34 (L+n) (generally 34 ).
  • the caches 34 of processors in the second node 14 are coupled with the memory agent 32 .
  • the main memory 38 in the second node 14 is coupled with the memory agent 32 .
  • the directory 20 and the RAC 28 are coupled with the ROB 26 , which is coupled to the memory agent 32 .
  • the first node 42 and the second node 44 are coupled to each other through an interconnect system 9 .
  • the first node 42 in Group B ( 40 ) includes a memory agent 52 , a main memory 48 , a directory 60 , one or more processors 46 M- 46 (M+m) (generally 46 ), a request outstanding buffer (ROB) 56 and a remote access cache (RAC) 58 .
  • Each processor 46 includes a processor cache 54 M- 54 (M+m) (generally 54 ).
  • the caches 54 of processors in the first node 42 are coupled with the memory agent 52 .
  • the main memory 48 is coupled with the memory agent 52 .
  • the directory 60 and the RAC 58 are coupled with the ROB 56 .
  • the ROB 56 is coupled to the memory agent 52 .
  • the second node 44 in Group B ( 40 ) includes a memory agent 62 , a main memory 68 , a directory 60 , one or more processors 66 N- 66 (N+n) (generally 66 ), a request outstanding buffer (ROB) 56 and a remote access cache (RAC) 58 .
  • Each processor 66 includes a processor cache 64 N- 64 (N+n) (generally 64 ).
  • the caches 64 of processors in the second node 44 are coupled with the memory agent 62 .
  • the main memory 68 is coupled with the memory agent 62 .
  • the directory 60 and the RAC 58 are coupled with the ROB 56 .
  • the ROB 56 is coupled to the memory agent 62 .
  • the processor may be a conventional processor, for example, Intel Pentium®-type processor, Sun SPARC®-type processor, a Motorola PowerPC®-type processor or the like.
  • the processor cache 24 , 34 , 54 , 64 may be a conventional processor cache.
  • the main memory 18 , 38 , 48 , 68 may be conventional memory, for example, a dynamic random access memory (DRAM).
  • the memory agent 22 , 32 , 52 , 62 interfaces the processor to main memory 18 , 38 , 48 , 68 .
  • the memory agent is a memory controller which reads and writes DRAM.
  • the DRAM could be either conventional DDR-SDRAM or RDRAM.
  • the ROB 26 , 56 may be a conventional buffer for tracking a new data request.
  • the directory 20 , 60 may be a conventional directory to record the global state of a cache line in the system.
  • the information in the directory 20 , 60 may be structured in a table or chart format as pictorially illustrated by FIG. 3 .
  • the directory 20 , 60 may be embodied in software, firmware, hardware, or a combination of software, firmware, and/or hardware.
  • the directory 20 , 60 is configured to store information regarding the status of a cache line.
  • a cache line is the smallest unit of data that can be stored in cache and tracked by the directory. Data is supplied through the cache line.
  • the information stored in the directory refers to which node(s) has a particular cache line as well as the status of data in those cache lines.
  • the ADDRESS TAG field is matched with the address of a “read-to-share” request to determine if the RAC access is a HIT or MISS.
  • the STATE field records the state of the cache line.
  • the RAC supports two states, Shared and Invalid, and caches only clean remote data. If the directory state for a cache line is Modified/Exclusive ("ME"), the cache line cannot be present in the RAC.
  • a HIT in the RAC indicates that the state of the cache line in the directory is Shared, but not Modified, Exclusive or Invalid.
  • a MISS in the RAC indicates that the state of the cache line in the directory is not Shared.
  • the DATA field is present to cache clean remote data.
  • FIG. 3 pictorially illustrates the directory entry format recording the global state of a cache line in the system in a tabular format.
  • the table may record the state of a cache line in the directory as Shared ("S") (the line may be present in multiple caches in the system, and the data may be supplied by the memory agent or one of the caches), Modified ("M") (the line is present in one cache, which supplies the data), Invalid ("I") (the data is not cached or shared), or Exclusive ("E") (the line is present in one cache in the system; the data may be supplied by the memory agent or that cache, and the state of the line is downgraded from E to S). Further, the directory maintains a Sharing List.
  • the directory marks a requesting node as a Sharer and adds that node to the Sharing List.
  • the directory entry includes a TRANSACTION ID field, which is used to associate messages with a particular outstanding transaction. Every message carries a Transaction ID.
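As an illustrative sketch, a directory entry carrying the fields described above might look like the following. This is hedged Python, not the patent's hardware format; the class and field names beyond STATE, the Sharing List, and TRANSACTION ID are assumptions:

```python
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class DirectoryEntry:
    """Global state of one cache line, as tracked at its home node."""
    state: str = "I"                    # "ME", "S", or "I"
    owner: Optional[int] = None         # owning node id, meaningful only in ME state
    sharers: Set[int] = field(default_factory=set)  # Sharing List, used in S state
    transaction_id: Optional[int] = None  # associates messages with a transaction

    def add_sharer(self, node_id: int) -> None:
        """Mark a requesting node as a Sharer and move the line to the S state."""
        self.state = "S"
        self.owner = None
        self.sharers.add(node_id)
```

Recording the owner only in the ME state, and the sharer list only in the S state, mirrors the directory behavior described in the text.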
  • FIG. 4 illustrates an entry to record the progress of a transaction in a ROB.
  • the entry in the ROB comprises the following fields: REQUEST, STATE, and TRANSACTION ID.
  • the “REQUEST” field is present to record the requests coming into and out of the ROB.
  • this field records a “read-to-share” request issued by a processor in the requesting node to a memory line in the remote home node. Also, when the ROB issues a “look up” request, it is recorded in the “REQUEST” field.
  • the ROB implements a finite state machine.
  • the STATE field in the ROB shows the current state of the ROB finite state machine.
  • a state transition diagram for the ROB is illustrated in FIG. 7 .
  • In the FREE state, a read-to-share request causes allocation of a free ROB entry and a transition to the S 1 state.
  • A RAC lookup message is issued to the RAC and a Directory lookup message is simultaneously issued to the directory for the home node.
  • In the S 1 state, a RAC MISS causes a transition to the M 1 state.
  • In the S 1 state, Memory Data causes a transition to the S 2 state and a Read Response is returned to the requesting processor.
  • In the S 1 state, a RAC HIT causes a transition to the H 1 state and a Read Response is returned to the requesting processor.
  • In the S 2 state, a RAC MISS causes a transition to the FREE state and the ROB entry is deallocated.
  • In the S 2 state, a RAC HIT causes a transition to the FREE state; the data is discarded and the ROB entry is deallocated.
  • the TRANSACTION ID field uniquely identifies each transaction and is used to distinguish between messages belonging to unrelated transaction flows.
  • Each message carries a TRANSACTION ID field which is used to associate the message with a particular transaction.
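The ROB transitions above can be captured as a small table-driven state machine. This is an illustrative Python sketch under the assumption that only the transitions stated in the text exist; the exits from the M1 and H1 states presumably appear in FIG. 7 but are not described here, so they are omitted:

```python
# (current state, event) -> next state, per the transitions described above
ROB_TRANSITIONS = {
    ("FREE", "read-to-share"): "S1",
    ("S1", "RAC MISS"): "M1",
    ("S1", "Memory Data"): "S2",   # a Read Response goes to the requester
    ("S1", "RAC HIT"): "H1",       # a Read Response goes to the requester
    ("S2", "RAC MISS"): "FREE",    # ROB entry deallocated
    ("S2", "RAC HIT"): "FREE",     # memory data discarded, entry deallocated
}

def rob_step(state: str, event: str) -> str:
    """Advance a ROB entry's STATE field on an incoming event."""
    try:
        return ROB_TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"no transition from {state} on {event!r}")
```

A table-driven encoding makes it easy to check that every message a ROB entry can receive in a given state has a defined successor.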
  • Referring to FIG. 5, there is illustrated a flow diagram of one embodiment of a process for manipulating requests for shared data when a cache line is not being cached in the RAC in accordance with the present invention.
  • a first processor 16 in the first node 12 in group A ( 10 ) issues a “read-to-share” request 1 for data to the ROB 26 .
  • the “read-to-share” request is allocated an entry 1 in the ROB 26 .
  • the STATE field of the ROB transitions to S 1 state.
  • the ROB 26 simultaneously issues a first request 2 to the RAC 28 , which is responsible for satisfying subsequent “read-to-share” requests to that cache line from nodes which access the RAC 28 , and issues a second request 2 a to the directory 60 for the Group B ( 40 ) for the remote home node 42 .
  • the address includes the destination node. If the cache line is not cached in the RAC 28 , the RAC 28 returns a RAC MISS 3 back to the ROB 26 . That is, the RAC 28 reports that it does not have the requested cache line. In response, the ROB 26 transitions the STATE field to the M 1 state to indicate that a RAC MISS occurred and that the RAC 28 did not satisfy the request.
  • Once the directory 60 has received the lookup request 2 a, it looks up the state of the cache line, which is Invalid. The directory 60 issues a "memory read" request 3 a to the first remote node 42 and transitions the state of the directory entry for the requested cache line from Invalid to Shared. Further, the directory 60 adds the first node 12 in group A ( 10 ) to the list of sharers for this cache line 3 b.
  • the first remote node 42 returns data 4 a to the ROB 26 in the requesting node through the global interconnect system 50 .
  • the ROB 26 returns data 5 a to the processor 16 in group A ( 10 ).
  • the ROB 26 installs data 5 a in the RAC 28 so that it will be available to nodes in the same group which subsequently access the RAC 28 when the RAC 28 receives the next “read-to-share” request.
  • the processor “read-to-share” request has been satisfied.
  • it is not necessary to first evict the entry from the RAC 28 before installing the new cache line, because the new cache line is installed in the RAC 28 by simply overwriting the existing entry with the new entry.
  • the entry in the ROB 26 is deallocated and its STATE transitions to FREE state. The process then ends 6 .
  • the RAC can simply be bypassed and the “read” request goes directly to the remote node.
  • This feature allows the system to respond dynamically to the level of congestion at the RAC. Data is then returned directly to the requestor and is not installed in RAC. This is possible because if the cache line is present in the RAC, then it is in the Shared state. The data in the RAC is always a copy of the data in the memory. Therefore, the data can be returned from the memory when the RAC is bypassed.
  • FIG. 6 illustrates a flow diagram of one embodiment of a process for requesting data when the cache line in the RAC is cached in accordance with the present invention.
  • the process starts 0 , for example, as a first processor 36 in the second node 14 in group A sends a "read-to-share" request 1 for data to the memory line in the remote node. The address includes the destination node.
  • the “read-to-share” request is allocated an entry 1 in the ROB.
  • the STATE field in the ROB entry transitions to S 1 state.
  • the ROB simultaneously issues a first request 2 to the remote access cache 28 , which is responsible for satisfying subsequent “read-to-share” requests to that cache line from nodes which access the RAC 28 , and a second request 2 a to the directory 60 for the remote home node.
  • If the cache line is cached in the RAC, the RAC returns a RAC HIT 3 to the ROB 26 .
  • the fact that there is a “HIT” in the cache indicates that the state of the cache line in the directory is Shared, but not Modified, Exclusive, or Invalid.
  • the ROB 26 transitions the STATE field to H 1 state to indicate that a RAC HIT occurred. That is, the RAC 28 satisfied the request.
  • the data then is returned 4 to the requesting node 14 that issued the “read-to-share” request. At this point the processor “read-to-share” request has been satisfied.
  • the directory 60 looks up the state of the line which is Shared. It sends a “memory read” request 3 a to the memory agent 52 for the remote home node 42 . Further, the directory adds the second node 14 in group A to the list of sharers for this cache line 3 b .
  • the remote home node 42 returns data 4 a to the ROB 26 in the requesting node through the global interconnect system 50 .
  • the ROB 26 discards the received data 4 b since the original request was satisfied with the memory line cached in the RAC 28 .
  • the entry in the ROB is deallocated by transitioning its STATE field to FREE state. The process then ends 5 . It should be noted that if the read access from a processor node “hits” in the cache, then data can be returned and used immediately by the processor without waiting for a response from the directory at the remote node.
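The two flows of FIG. 5 and FIG. 6 can be condensed into one sketch. This is illustrative Python under stated assumptions: all names are invented for the example, and interconnect messages are modeled as plain function calls rather than real transactions:

```python
def read_to_share(address, rac, home_memory, sharers, node_id):
    """Service a read-to-share request, issuing the RAC and directory
    requests together. Returns (data, source).

    The directory lookup and memory read proceed regardless of the RAC
    outcome; on a RAC HIT the later-arriving memory data is discarded.
    """
    hit_data = rac.get(address)              # RAC lookup (request 2)
    memory_data = home_memory[address]       # directory + memory read (2a, 3a)
    sharers.setdefault(address, set()).add(node_id)  # add to Sharing List (3b)

    if hit_data is not None:
        return hit_data, "RAC"               # remote data is discarded (4b)
    rac[address] = memory_data               # install the line in the RAC (5a)
    return memory_data, "memory"
```

Note that on a MISS the line is installed by simply overwriting the RAC entry, so no eviction step appears anywhere in the flow.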

Abstract

A computer network system for manipulating requests for shared data includes a plurality of groups; each group has a plurality of nodes, and each node has a plurality of processors. The system further comprises a request outstanding buffer (ROB) for recording data requests, a remote access cache (RAC) for caching the results of prior memory requests which are remote to a requesting node, and a directory for recording the global state of a cache line in the system. The RAC supports only two states, Shared and Invalid, and caches only clean remote data. If the directory state is Modified/Exclusive, the line cannot be in the RAC. The behavior of the RAC is described for two important cases: when the RAC does not have the line cached and when it does. When the line is cached in the RAC, the requested data is supplied to the requesting node from the RAC; when the line is not cached, the requested data is supplied from the remote home node and installed in the RAC. In the case when the data is not present in the RAC, the request to the remote home node is overlapped with the RAC access to minimize remote memory access latency.

Description

FIELD OF THE INVENTION
The present invention relates generally to a memory access and more particularly, to a system and a method for manipulating requests for shared data in a multi-node computer network system.
BACKGROUND OF THE INVENTION
Conventional cache coherent non-uniform memory access ("CC NUMA") is known. In a multi-node system using non-uniform memory access, if a central processing unit ("CPU") accesses memory at its own node, i.e., a local node, the time to access data is short. By contrast, when memory is accessed at a node other than the central processing unit's own node, i.e., a remote node, the time to access the data is long.
A conventional protocol referred to as Modified/Exclusive, Shared, Invalid (“MESI”) evolved to help to increase data access speed. In this protocol, a memory controller stores and keeps track of information about data in a multi-node system. It determines on which node data is presently residing in multi-node systems.
The Remote Access Cache (RAC) caches the data for remote requests in order to speed access to remote data by a subsequent memory request from a node in the same group.
In a conventional cache coherent non-uniform memory access (“CC NUMA”) when a first processor issues a “read-to-share” or a “read-to-own” request to remote memory, it first needs to access the Remote Access Cache (RAC) and then to access the directory for the remote memory agent. A problem with this approach is that it serializes the RAC access and the remote directory access. The existing approach increases remote memory latency by not allowing overlap of these two operations.
Further, the data in the RAC could be present in the Modified (M) or Exclusive (E) state. If the RAC has the line in E state, then it has "read-write" permission for its copy of the line, but it has not yet written to the line. If the RAC has the line in M state, then it has "read-write" permission for its cached copy of the line and it has already modified the line. When the most recent data is in the RAC, the state of the cache line is M and the RAC supplies the cache line in response to a "read-to-share" or "read-to-own" request. A remote "read-to-share" request which hits a line in the M state in the RAC must downgrade the line state from M to S by writing the line back to memory and returning a shared copy to the requestor. A remote "read-to-own" request must send an ownership transfer notification to the directory to indicate who the new owner of the line is. Ownership transfer notification is required because the directory must always track which cache is the exclusive owner of a cache line in the ME state at the directory. However, ownership transfer complicates the protocol.
If the remote “read-to-share” access misses in the RAC, a line which has been modified may first need to be evicted from the RAC in order to create space for the new line to be installed in the RAC. The possibility of cache line eviction requires that the RAC must be read on every “read-to-share” or “read-to-own” access.
In addition, because a cache line can only be present in exactly one RAC in the system in the Exclusive and Modified state, performance does not scale well with a large number of RACs. Once the number of RACs in the system increases, the odds of hitting Exclusive or Modified data in the RAC decline.
Therefore, there is a need for a memory access system and method that reduces latency of remote memory accesses. Such a new system should provide congestion relief by bypassing the RAC when it is busy. In addition, such a new system should simplify the protocol by eliminating eviction of Modified data from the RAC and should eliminate ownership transfer notification of the directory anytime writeback or a HIT to Modified or Exclusive data occurs. Most importantly, such a system should avoid serializing the RAC access and the memory access, thereby reducing memory latency.
SUMMARY OF THE INVENTION
A preferred embodiment of the present invention includes a computer network system for accessing data that includes a plurality of groups, each group including a plurality of nodes that couple through an interconnect system, each node including one or more central processing units (or processors) with each processor having a processor cache. Each node further includes a memory agent, a main memory, and a directory coupled to the processors and processor caches.
The system also includes a directory coupled to a Request Outstanding Buffer (ROB) to record the progress of a memory transaction in the system. A cache line is the smallest unit of data that can be stored in cache and tracked by the directory. Data is supplied through the cache line. The information stored in the directory refers to which node(s) has a particular cache line as well as the status of data in those cache lines. The status of data in the cache line at the directory may be, for example, Modified/Exclusive (“ME”), shared (“S”), or invalid (“I”). Modified/Exclusive state indicates that the line has been read by a caching memory agent for read-write access. Shared state indicates that the line has been read by a caching memory agent for read-only access. Invalid state indicates that the line is not cached in any cache in the system. If the directory state is Modified/Exclusive (ME), the owning node is also recorded in the directory entry. If the directory state is Shared (S), a list of sharing nodes is recorded in the directory entry.
The system further comprises the ROB coupled to a memory agent to record the progress of data requests. The ROB may be connected to remote nodes through the global interconnect system. Entries in the ROB include the following fields: REQUEST, STATE, and TRANSACTION ID.
The system further includes a remote access cache (RAC) to cache remote memory references. The RAC caches only clean remote data in S state and does not cache remote data in the ME state. Entries in the RAC include the following fields: ADDRESS TAG, STATE, and DATA.
A preferred method for accessing data comprises: requesting “read-to-share” data from a memory line in a remote node; issuing two requests simultaneously, one to the RAC and one to the directory for the remote memory node; returning a MISS to the ROB if the cache line is not cached in the RAC; and returning data to the requesting processor from the remote memory node while installing the cache line in the RAC. Alternatively, if the cache line is cached in the RAC, a RAC HIT is returned to the ROB. The fact that there is a “HIT” in the cache indicates that the state of the line in the directory is Shared, not Modified/Exclusive or Invalid. The STATE field in the ROB is then modified accordingly to indicate whether the cache line is cached in the RAC, and data is returned to the requesting node. The data received by the ROB from the remote node is discarded once the original request has been satisfied with the memory line cached in the RAC.
In the present invention, the “read-to-share” request from the first processor is issued to the RAC and simultaneously to the remote home node. Overlapping these two operations avoids serializing the RAC access and the memory access, which beneficially reduces memory latency when the RAC access is a MISS. Thus, if the “read-to-share” access from a processor node hits in the cache, data can be returned and used immediately by the processor without waiting for a response from the directory controller at the remote home node. The fact that there is a HIT in the RAC indicates that the state of the line in the directory is Shared, not Modified/Exclusive or Invalid. Since the data in the RAC is only Shared, this obviates the need to wait for the result of the directory lookup.
The present invention also beneficially simplifies the protocol by eliminating evictions of data in the RAC before installing the new cache line. Further, the present invention allows congestion relief since the RAC can be bypassed whenever the RAC is busy. In this situation, the “read” request goes directly to the remote home node bypassing the RAC. Data is then returned directly to the requestor and is not installed in the RAC. This is possible because if the cache line is present in the RAC, then it is in the shared state. The data in the RAC is always a copy of the data in the memory. Therefore, the data can be returned from the memory when the RAC is bypassed.
Next, the RAC mechanism of the present invention provides a greater degree of fault tolerance without incurring any performance overhead since a RAC access error can be simply treated as a RAC MISS. When the data from memory is installed, the error is corrected with no additional overhead.
Finally, the presence of the RAC does not increase memory access latency. That is, the latency to remote memory with a RAC MISS is the same as the latency to remote memory without a RAC. Therefore, the RAC can only improve performance, even if the RAC miss rate is high.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a block diagram of one embodiment of an overall architecture of a multi-node network system in accordance with the present invention.
FIG. 2 pictorially illustrates remote access cache (RAC) entries in accordance with the present invention.
FIG. 3 pictorially illustrates the directory entry format recording the global state of a cache line in the system in accordance with the present invention.
FIG. 4 pictorially illustrates request outstanding buffer (ROB) entries in accordance with the present invention.
FIG. 5 is a flow diagram of one embodiment of a process for manipulating requests for shared data when a cache line is not being cached in a remote access cache (RAC) in accordance with the present invention.
FIG. 6 is a flow diagram of one embodiment of a process for manipulating requests for shared data when a cache line is being cached in a remote access cache (RAC) in accordance with the present invention.
FIG. 7 is a state transition diagram for the request outstanding buffer (ROB) in accordance with the present invention.
DETAILED DESCRIPTION OF THE INVENTION
The present invention is a system and a method for manipulating requests for shared data from a remote access cache (RAC) or from a remote node. Figure (FIG.) 1 is a block diagram of one embodiment of an overall architecture of a multi-node network system in accordance with the present invention. The multi-node network system includes two local interconnect systems: one for group A (10), having a first node 12 and a second node 14, and one for group B (40), having a first node 42 and a second node 44. The two local interconnect systems are connected through a global interconnect 50.
In Group A (10), the first node 12 and the second node 14 are coupled to each other through an interconnect system 8. The first node 12 includes a memory agent 22, a main memory 18, a directory 20, one or more processors 16K-16(K+m) (generally 16), a request outstanding buffer (ROB) 26 and a remote access cache (RAC) 28. Each processor 16 includes a processor cache 24K-24(K+m) (generally 24). The caches 24 of processors in the first node are coupled with the memory agent 22. The main memory 18 in the first node 12 is coupled with the memory agent 22. The directory 20 and the RAC 28 are coupled with the ROB 26, which is coupled to the memory agent 22.
The second node 14 includes a memory agent 32, a main memory 38, a directory 20, one or more processors 36L-36(L+n) (generally 36), a request outstanding buffer (ROB) 26 and a remote access cache (RAC) 28. Each processor 36 includes a processor cache 34L-34(L+n) (generally 34). The caches 34 of processors in the second node 14 are coupled with the memory agent 32. The main memory 38 in the second node 14 is coupled with the memory agent 32. The directory 20 and the RAC 28 are coupled with the ROB 26, which is coupled to the memory agent 32.
In Group B (40), the first node 42 and the second node 44 are coupled to each other through an interconnect system 9. The first node 42 in Group B (40) includes a memory agent 52, a main memory 48, a directory 60, one or more processors 46M-46(M+m) (generally 46), a request outstanding buffer (ROB) 56 and a remote access cache (RAC) 58. Each processor 46 includes a processor cache 54M-54(M+m) (generally 54). The caches 54 of processors in the first node 42 are coupled with the memory agent 52. The main memory 48 is coupled with the memory agent 52. The directory 60 and the RAC 58 are coupled with the ROB 56. The ROB 56 is coupled to the memory agent 52.
The second node 44 in Group B (40) includes a memory agent 62, a main memory 68, a directory 60, one or more processors 66N-66(N+n) (generally 66), a request outstanding buffer (ROB) 56 and a remote access cache (RAC) 58. Each processor 66 includes a processor cache 64N-64(N+n) (generally 64). The caches 64 of processors in the second node 44 are coupled with the memory agent 62. The main memory 68 is coupled with the memory agent 62. The directory 60 and the RAC 58 are coupled with the ROB 56. The ROB 56 is coupled to the memory agent 62.
It is noted that in each node 12, 14, 42, 44, the processor may be a conventional processor, for example, an Intel Pentium®-type processor, a Sun SPARC®-type processor, a Motorola PowerPC®-type processor, or the like. The processor cache 24, 34, 54, 64 may be a conventional processor cache. The main memory 18, 38, 48, 68 may be conventional memory, for example, a dynamic random access memory (DRAM).
The memory agent 22, 32, 52, 62 interfaces the processors to the main memory 18, 38, 48, 68. The memory agent is a memory controller which reads and writes the DRAM, which may be either conventional DDR SDRAM or RDRAM.
The ROB 26, 56 may be a conventional buffer for tracking a new data request. The directory 20, 60 may be a conventional directory to record the global state of a cache line in the system. The information in the directory 20, 60 may be structured in a table or chart format as pictorially illustrated by FIG. 3. The directory 20, 60 may be embodied in software, firmware, hardware, or a combination of software, firmware, and/or hardware. The directory 20, 60 is configured to store information regarding the status of a cache line. A cache line is the smallest unit of data that can be stored in cache and tracked by the directory. Data is supplied through the cache line. The information stored in the directory identifies which node(s) hold a particular cache line, as well as the status of the data in those cache lines.
The remote access cache 28, 58 is a conventional processor cache modified to support two states, Shared and Invalid. It caches only clean remote data. Referring now to FIG. 2, there is illustrated a RAC entry comprising the following fields: ADDRESS TAG, STATE, and DATA.
The ADDRESS TAG field is matched with the address of a “read-to-share” request to determine if the RAC access is a HIT or MISS.
The STATE field records the state of the cache line. The RAC supports two states, Shared and Invalid, and caches only clean remote data. If the directory state for a cache line is Modified/Exclusive (“ME”), the cache line cannot be present in the RAC. A HIT in the RAC indicates that the state of the cache line in the directory is Shared, not Modified, Exclusive or Invalid. In contrast, a MISS in the RAC indicates only that the cache line is not cached in the RAC. The DATA field is present to cache clean remote data.
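The RAC entry format and its HIT/MISS behavior can be sketched as a small lookup structure. This is an illustrative sketch under the constraints described above, not the patent's implementation; the names (`RemoteAccessCache`, `lookup`, `install`) are assumed:

```python
# Sketch of a remote access cache (RAC) that holds only clean remote
# data in the Shared state. Entries carry ADDRESS TAG (dict key),
# STATE, and DATA, per the entry format above.

SHARED, INVALID = "S", "I"

class RemoteAccessCache:
    def __init__(self):
        self.entries = {}          # ADDRESS TAG -> (STATE, DATA)

    def lookup(self, address):
        """Match the address tag; return ("HIT", data) or ("MISS", None)."""
        state, data = self.entries.get(address, (INVALID, None))
        if state == SHARED:
            return "HIT", data     # line is clean and Shared
        return "MISS", None

    def install(self, address, data):
        # A new line simply overwrites any existing entry; no eviction
        # protocol is needed because RAC data is always a clean copy.
        self.entries[address] = (SHARED, data)

rac = RemoteAccessCache()
assert rac.lookup(0x40)[0] == "MISS"
rac.install(0x40, b"line")
assert rac.lookup(0x40) == ("HIT", b"line")
```

The install-by-overwrite step mirrors the eviction-free protocol the summary describes: because a RAC line is never dirty, discarding it silently is always safe.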
FIG. 3 pictorially illustrates the directory entry format recording the global state of a cache line in the system in a tabular format. The table may represent the state of a cache line in the directory as Shared (“S”) (the line may be present in multiple caches in the system, and the data may be supplied by a memory agent or by one of the caches), Modified (“M”) (the line is present in one cache, which supplies the data), Invalid (“I”) (the data is not being supplied or shared), or Exclusive (“E”) (the line is present in one cache in the system, the data may be supplied by a memory agent or by the cache, and the state of the line is downgraded from E to S). Further, the directory maintains a Sharing List: if the state of a cache line is Shared, the directory marks a requesting node as a Sharer and adds that node to the Sharing List. In addition, the directory entry includes a TRANSACTION ID field, which is used to associate messages with a particular outstanding transaction. Every message carries a Transaction ID.
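Under these definitions, a directory entry and its handling of a read-to-share request might be sketched as follows. Names are illustrative, and the path where a Modified owner must supply the data is omitted:

```python
# Illustrative directory entry per FIG. 3: a per-line state plus either
# an owner (Modified/Exclusive) or a sharing list (Shared).

class DirectoryEntry:
    def __init__(self):
        self.state = "I"        # one of "M", "E", "S", "I"
        self.owner = None       # owning node when Modified/Exclusive
        self.sharers = set()    # Sharing List when Shared

    def read_to_share(self, node):
        """Directory action for a read-to-share request from `node`."""
        if self.state in ("I", "E", "S"):
            # E is downgraded to S on a shared read; I becomes S.
            self.state = "S"
            self.owner = None
            self.sharers.add(node)   # mark the requester as a Sharer
        # (A Modified line would require fetching data from the owner,
        # which is outside this sketch.)

d = DirectoryEntry()
d.read_to_share("A0")
d.read_to_share("A1")
assert d.state == "S" and d.sharers == {"A0", "A1"}
```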
FIG. 4 illustrates an entry to record the progress of a transaction in a ROB. The entry in the ROB comprises the following fields: REQUEST, STATE, and TRANSACTION ID.
The “REQUEST” field is present to record the requests coming into and out of the ROB. In particular, this field records a “read-to-share” request issued by a processor in the requesting node to a memory line in the remote home node. Also, when the ROB issues a “look up” request, it is recorded in the “REQUEST” field.
The ROB implements a finite state machine. The STATE field in the ROB shows the current state of the ROB finite state machine. A state transition diagram for the ROB is illustrated in FIG. 7. In the FREE state, a read-to-share request causes allocation of a free ROB entry and a transition to the S1 state; a RAC lookup message is issued to the RAC and a directory lookup message is simultaneously issued to the directory for the home node. In the S1 state, a RAC MISS causes a transition to the M1 state; a Memory Data message causes a transition to the S2 state, and a Read Response is returned to the requesting processor; a RAC HIT causes a transition to the H1 state, and a Read Response is returned to the requesting processor. In the S2 state, a RAC MISS causes a transition to the FREE state and the ROB entry is deallocated; a RAC HIT causes a transition to the FREE state, the data is discarded, and the ROB entry is deallocated.
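The transitions above can be encoded as a simple table. This is a minimal sketch of the FIG. 7 state machine as the text describes it; the M1 and H1 completions (memory data arriving after the RAC response) are not spelled out explicitly and are filled in here as the surrounding flows imply:

```python
# ROB finite state machine: the RAC response and the memory data can
# arrive in either order, so S1 forks into M1/H1 (RAC answered first)
# or S2 (memory answered first).

TRANSITIONS = {
    ("FREE", "read_to_share"): "S1",  # allocate entry, issue both lookups
    ("S1", "rac_miss"):    "M1",      # still waiting for memory data
    ("S1", "memory_data"): "S2",      # Read Response returned; await RAC
    ("S1", "rac_hit"):     "H1",      # Read Response returned from RAC
    ("S2", "rac_miss"):    "FREE",    # deallocate the entry
    ("S2", "rac_hit"):     "FREE",    # discard duplicate data, deallocate
    ("M1", "memory_data"): "FREE",    # return data, install in RAC (implied)
    ("H1", "memory_data"): "FREE",    # discard memory data (implied)
}

def run(events, state="FREE"):
    """Drive the ROB entry through a sequence of protocol events."""
    for e in events:
        state = TRANSITIONS[(state, e)]
    return state

# Miss path: RAC responds first, then memory data arrives.
assert run(["read_to_share", "rac_miss", "memory_data"]) == "FREE"
# Hit path: the RAC hit satisfies the request; memory data is discarded.
assert run(["read_to_share", "rac_hit", "memory_data"]) == "FREE"
```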
The TRANSACTION ID field uniquely identifies each transaction and is used to distinguish between messages belonging to unrelated transaction flows. Each message carries a TRANSACTION ID field which is used to associate the message with a particular transaction.
Referring now to FIG. 5, there is illustrated a flow diagram of one embodiment of a process for manipulating requests for shared data when a cache line is not being cached in the RAC in accordance with the present invention. Once the process starts 0, a first processor 16 in the first node 12 in group A (10) issues a “read-to-share” request 1 for data to the ROB 26. The “read-to-share” request is allocated an entry 1 in the ROB 26. The STATE field of the ROB transitions to S1 state.
Once the request is received, the ROB 26 simultaneously issues a first request 2 to the RAC 28, which is responsible for satisfying subsequent “read-to-share” requests to that cache line from nodes which access the RAC 28, and a second request 2a to the directory 60 of Group B (40) for the remote home node 42. The address includes the destination node. If the cache line is not being cached in the RAC 28, the RAC 28 returns a RAC MISS 3 back to the ROB 26. That is, the RAC 28 reports that it does not have the requested cache line. In response, the ROB 26 transitions the STATE field to M1 state to indicate that a RAC MISS occurred and that the RAC 28 did not satisfy the request.
Once the directory 60 has received the lookup request 2a, it looks up the state of the cache line, which is Invalid. The directory 60 issues a “memory read” request 3a to the first remote node 42 and transitions the state of the directory entry for the requested cache line from the Invalid state to the Shared state. Further, the directory 60 adds the first node 12 in group A (10) to the list of sharers for this cache line 3b.
Next, the first remote node 42 returns data 4a to the ROB 26 in the requesting node through the global interconnect system 50. The ROB 26, in turn, returns data 5a to the processor 16 in group A (10). The ROB 26 also installs data 5a in the RAC 28 so that it will be available to nodes in the same group which subsequently access the RAC 28 when the RAC 28 receives the next “read-to-share” request. At this point the processor “read-to-share” request has been satisfied. According to the present invention, it is not necessary to first evict the entry from the RAC 28 before installing the new cache line, because the new cache line is installed in the RAC 28 by simply overwriting the existing entry with the new entry. In addition, the entry in the ROB 26 is deallocated and its STATE transitions to FREE state. The process then ends 6.
It should be noted that if the RAC is busy, the RAC can simply be bypassed and the “read” request goes directly to the remote node. This feature allows the system to respond dynamically to the level of congestion at the RAC. Data is then returned directly to the requestor and is not installed in RAC. This is possible because if the cache line is present in the RAC, then it is in the Shared state. The data in the RAC is always a copy of the data in the memory. Therefore, the data can be returned from the memory when the RAC is bypassed.
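The bypass decision can be sketched as follows. This is an assumed illustration of the congestion-relief path, not the patent's circuitry; the `busy` flag and the `read_remote` callback are hypothetical names:

```python
# Congestion relief: when the RAC is busy, the read goes straight to
# the remote home node and the returned data is NOT installed. This is
# coherent because any line the RAC holds is Shared, i.e. memory always
# has an identical valid copy.

def read_to_share(address, rac, read_remote):
    if rac["busy"]:
        return read_remote(address)     # bypass the RAC entirely
    data = rac["lines"].get(address)
    if data is not None:
        return data                     # RAC HIT: satisfied locally
    data = read_remote(address)
    rac["lines"][address] = data        # RAC MISS: install for later sharers
    return data

rac = {"busy": True, "lines": {}}
assert read_to_share(0x80, rac, lambda a: "mem") == "mem"
assert 0x80 not in rac["lines"]         # bypassed, so nothing was installed
```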
FIG. 6 illustrates a flow diagram of one embodiment of a process for requesting data when the cache line is cached in the RAC in accordance with the present invention. The process starts 0, for example, when a first processor 36 in the second node 14 in group A sends a “read-to-share” request 1 for data to the memory line in the remote node. The address includes the destination node. The “read-to-share” request is allocated an entry 1 in the ROB. The STATE field in the ROB entry transitions to S1 state. Once the request is received, the ROB simultaneously issues a first request 2 to the remote access cache 28, which is responsible for satisfying subsequent “read-to-share” requests to that cache line from nodes which access the RAC 28, and a second request 2a to the directory 60 for the remote home node.
If the cache line is being cached in the RAC, the RAC returns a RAC HIT 3 to the ROB 26. The fact that there is a “HIT” in the cache indicates that the state of the cache line in the directory is Shared, not Modified, Exclusive, or Invalid. The ROB 26 then transitions the STATE field to H1 state to indicate that a RAC HIT occurred; that is, the RAC 28 satisfied the request. The data is then returned 4 to the requesting node 14 that issued the “read-to-share” request. At this point the processor “read-to-share” request has been satisfied.
Once the directory 60 has received the lookup request 2a, it looks up the state of the line, which is Shared. It sends a “memory read” request 3a to the memory agent 52 for the remote home node 42. Further, the directory adds the second node 14 in group A to the list of sharers for this cache line 3b. Next, the remote home node 42 returns data 4a to the ROB 26 in the requesting node through the global interconnect system 50. The ROB 26, in turn, discards the received data 4b, since the original request was satisfied with the memory line cached in the RAC 28. In addition, the entry in the ROB is deallocated by transitioning its STATE field to FREE state. The process then ends 5. It should be noted that if the read access from a processor node “hits” in the cache, the data can be returned and used immediately by the processor without waiting for a response from the directory at the remote node.
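The hit path above can be sketched end to end: the RAC answer satisfies the requester immediately, and the memory reply that arrives later is simply discarded. All names here are illustrative, not the patent's implementation:

```python
# Hit path of FIG. 6: on a RAC HIT the ROB returns data at once
# (state H1); the later memory reply is discarded and the ROB entry
# is deallocated (state FREE).

def handle_rac_response(rac_result, rob):
    """Process the RAC response; return data delivered so far and a
    callback for the memory reply that is still in flight."""
    delivered = []
    kind, data = rac_result
    if kind == "HIT":
        rob["state"] = "H1"
        delivered.append(data)          # return data to the requester now

    def on_memory_data(mem_data):
        if rob["state"] != "H1":
            delivered.append(mem_data)  # MISS path: memory data is used
        # HIT path: request already satisfied, so mem_data is discarded
        rob["state"] = "FREE"           # deallocate the ROB entry

    return delivered, on_memory_data

rob = {"state": "S1"}
delivered, on_mem = handle_rac_response(("HIT", b"rac-line"), rob)
on_mem(b"mem-line")
assert delivered == [b"rac-line"]       # the memory data was discarded
assert rob["state"] == "FREE"
```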

Claims (8)

What is claimed is:
1. A method for manipulating requests for shared data in a computer network having a plurality of groups of nodes comprising:
receiving a request for shared data from a node within a first group wherein each group of the plurality has a plurality of nodes, each node having a plurality of processors;
issuing simultaneously a first request for the same shared data to a first remote access cache (RAC) for the first group for storing only clean remote data marked Shared indicating read-only access wherein the clean remote data is received from another group in the computer network and a directory lookup request to another group for the same shared data; and
responsive to the first remote access cache having the data cached
supplying a copy of the requested data from the first remote access cache to the requesting node within the first group, and responsive to receiving the requested data from the other group that received the directory lookup request,
discarding the requested data from the other group.
2. A system for manipulating requests for shared data in a computer network having a plurality of groups of nodes comprising:
means for receiving a request for shared data from a node within a first group wherein each group of the plurality has a plurality of nodes, each node having a plurality of processors;
means for issuing simultaneously a first request for the same shared data to a first remote access cache (RAC) for the first group for storing only clean remote data marked Shared indicating read-only access wherein the clean remote data is received from another group in the computer network and a directory lookup request to another group for the same shared data; and
means for, responsive to the remote access cache having the data cached, supplying a copy of the requested data from the first remote access cache to the requesting node within the first group, and responsive to receiving the requested data from the other group that received the directory lookup request, means for discarding the requested data from the other group.
3. A system for manipulating requests for shared data between groups of nodes in a computer network comprising:
a first request outstanding buffer (ROB) for a first group wherein each group of the plurality has a plurality of nodes, each node having a plurality of processors and a memory agent, the first ROB being communicatively coupled to receive requests from each respective memory agent for each of the nodes in the first group and being communicatively coupled to a network interface for communicating with a second ROB for a second group of nodes;
a first remote access cache (RAC) for storing for the first group only clean remote data marked Shared indicating read-only access wherein the clean remote data is received from the second group in the computer network, the first RAC being communicatively coupled to the first ROB; and
a first directory for the first group of nodes for recording the progress of a memory transaction between the first group and the second group, the first directory being communicatively coupled to the first ROB:
wherein the first ROB issues simultaneously a first request for shared data to the first remote access cache (RAC) for the first group and a directory lookup request to the second group for the same shared data; and
wherein responsive to the first remote access cache having the data cached, the first ROB supplies a copy of the requested data from the first remote access cache to the requesting node within the first group, and responsive to receiving the requested data from the second group that received the directory lookup request, discarding the requested data from the second group.
4. The method of claim 1 further comprising:
responsive to the first remote access cache not having the data cached, responsive to receiving the requested data from the other group that received the directory lookup request, installing the requested data from the other group in the first remote access cache and supplying a copy of the requested data to the requesting node within the first group.
5. The method of claim 4 wherein installing the requested data from the other group in the remote access cache comprises overwriting an existing cache line entry with the requested data.
6. The method of claim 1 further comprising:
receiving the directory lookup request at a second group in the network; and
adding the requesting node to a list of sharers for the requested data.
7. A system for manipulating requests for shared data in a computer network having a plurality of groups of nodes comprising:
means for receiving a request for shared data from a node within a first group wherein each group of the plurality has a plurality of nodes, each node having a plurality of processors;
means for, responsive to a busy state of a first remote access cache (RAC) for the first group for storing only clean remote data marked Shared indicating read-only access wherein the clean remote data is received from another group in the computer network, bypassing the first remote access cache and means for sending a directory lookup request for the same data to another group; and
means for, responsive to the first remote access cache being free, issuing simultaneously a first request for the same shared data to the first remote access cache (RAC) for the first group and a directory lookup request to another group for the same shared data, and
means for, responsive to the first remote access cache having the data cached, supplying a copy of the requested data from the first remote access cache to the requesting node within the first group, and responsive to receiving the requested data from the other group that received the directory lookup request, means for discarding the requested data from the other group.
8. A method for manipulating requests for shared data in a computer network having a plurality of groups of nodes comprising:
receiving a request for shared data from a node within a first group wherein each group of the plurality has a plurality of nodes, each node having a plurality of processors;
responsive to a busy state of a first remote access cache (RAC) for the first group for storing only clean remote data marked Shared indicating read-only access wherein the clean remote data is received from another group in the computer network, bypassing the first remote access cache and sending a directory lookup request for the same data to another group; and
responsive to the first remote access cache being free, issuing simultaneously a first request for the same shared data to the first remote access cache (RAC) for the first group and a directory lookup request to another group for the same shared data, and
responsive to the first remote access cache having the data cached, supplying a copy of the requested data from the first remote access cache to the requesting node within the first group, and responsive to receiving the requested data from the other group that received the directory lookup request, discarding the requested data from the other group.
US09/616,583 2000-07-14 2000-07-14 Caching mechanism for remote read-only data in a cache coherent non-uniform memory access (CCNUMA) architecture Expired - Fee Related US6766360B1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US09/616,583 US6766360B1 (en) 2000-07-14 2000-07-14 Caching mechanism for remote read-only data in a cache coherent non-uniform memory access (CCNUMA) architecture
JP2001215834A JP2002091824A (en) 2000-07-14 2001-07-16 System and method for operating shared data request

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US09/616,583 US6766360B1 (en) 2000-07-14 2000-07-14 Caching mechanism for remote read-only data in a cache coherent non-uniform memory access (CCNUMA) architecture

Publications (1)

Publication Number Publication Date
US6766360B1 true US6766360B1 (en) 2004-07-20

Family

ID=24470117

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/616,583 Expired - Fee Related US6766360B1 (en) 2000-07-14 2000-07-14 Caching mechanism for remote read-only data in a cache coherent non-uniform memory access (CCNUMA) architecture

Country Status (2)

Country Link
US (1) US6766360B1 (en)
JP (1) JP2002091824A (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030217234A1 (en) * 2002-05-15 2003-11-20 Broadcom Corp. System having address-based intranode coherency and data-based internode coherency
US20030217233A1 (en) * 2002-05-15 2003-11-20 Broadcom Corp. Remote line directory which covers subset of shareable CC-NUMA memory space
US20040030842A1 (en) * 2002-06-28 2004-02-12 Sun Microsystems, Inc. Mechanism for starvation avoidance while maintaining cache consistency in computer systems
US20040073754A1 (en) * 2002-06-28 2004-04-15 Cypher Robert E. Computer system supporting read-to-write-back transactions for I/O devices
US20050160430A1 (en) * 2004-01-15 2005-07-21 Steely Simon C.Jr. System and method for updating owner predictors
US20050198187A1 (en) * 2004-01-15 2005-09-08 Tierney Gregory E. System and method for providing parallel data requests
US20060031450A1 (en) * 2004-07-07 2006-02-09 Yotta Yotta, Inc. Systems and methods for providing distributed cache coherence
US20060214027A1 (en) * 2004-06-30 2006-09-28 Micheli Paul R Fluid atomizing system and method
US20070067382A1 (en) * 2005-08-30 2007-03-22 Xian-He Sun Memory server
US7240143B1 (en) * 2003-06-06 2007-07-03 Broadbus Technologies, Inc. Data access and address translation for retrieval of data amongst multiple interconnected access nodes
US20080048055A1 (en) * 2002-08-19 2008-02-28 Illinois Tool Works Inc. Spray gun having mechanism for internally swirling and breaking up a fluid
US20080059724A1 (en) * 2003-06-06 2008-03-06 Stifter Francis J Jr Content distribution and switching amongst data streams
US20090164652A1 (en) * 2007-12-21 2009-06-25 General Instrument Corporation Methods and System for Processing Time-Based Content
US9456243B1 (en) 2003-06-06 2016-09-27 Arris Enterprises, Inc. Methods and apparatus for processing time-based content
US9619303B2 (en) 2012-04-11 2017-04-11 Hewlett Packard Enterprise Development Lp Prioritized conflict handling in a system
US10042762B2 (en) * 2016-09-14 2018-08-07 Advanced Micro Devices, Inc. Light-weight cache coherence for data processors with limited data sharing

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7512742B2 (en) * 2006-01-17 2009-03-31 International Business Machines Corporation Data processing system, cache system and method for precisely forming an invalid coherency state indicating a broadcast scope
US8055847B2 (en) * 2008-07-07 2011-11-08 International Business Machines Corporation Efficient processing of data requests with the aid of a region cache

Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4663706A (en) * 1982-10-28 1987-05-05 Tandem Computers Incorporated Multiprocessor multisystem communications network
US5175839A (en) * 1987-12-24 1992-12-29 Fujitsu Limited Storage control system in a computer system for double-writing
US5280612A (en) * 1991-11-26 1994-01-18 International Business Machines Corporation Multiple version database concurrency control system
US5303362A (en) * 1991-03-20 1994-04-12 Digital Equipment Corporation Coupled memory multiprocessor computer system including cache coherency management protocols
US5465338A (en) 1993-08-24 1995-11-07 Conner Peripherals, Inc. Disk drive system interface architecture employing state machines
US5561780A (en) 1993-12-30 1996-10-01 Intel Corporation Method and apparatus for combining uncacheable write data into cache-line-sized write buffers
US5592671A (en) * 1993-03-02 1997-01-07 Kabushiki Kaisha Toshiba Resource management system and method
US5727150A (en) * 1995-05-05 1998-03-10 Silicon Graphics, Inc. Apparatus and method for page migration in a non-uniform memory access (NUMA) system

Patent Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4663706A (en) * 1982-10-28 1987-05-05 Tandem Computers Incorporated Multiprocessor multisystem communications network
US5175839A (en) * 1987-12-24 1992-12-29 Fujitsu Limited Storage control system in a computer system for double-writing
US5303362A (en) * 1991-03-20 1994-04-12 Digital Equipment Corporation Coupled memory multiprocessor computer system including cache coherency management protocols
US5280612A (en) * 1991-11-26 1994-01-18 International Business Machines Corporation Multiple version database concurrency control system
US5592671A (en) * 1993-03-02 1997-01-07 Kabushiki Kaisha Toshiba Resource management system and method
US5465338A (en) 1993-08-24 1995-11-07 Conner Peripherals, Inc. Disk drive system interface architecture employing state machines
US5561780A (en) 1993-12-30 1996-10-01 Intel Corporation Method and apparatus for combining uncacheable write data into cache-line-sized write buffers
US5829032A (en) * 1994-10-31 1998-10-27 Kabushiki Kaisha Toshiba Multiprocessor system
US5727150A (en) * 1995-05-05 1998-03-10 Silicon Graphics, Inc. Apparatus and method for page migration in a non-uniform memory access (NUMA) system
US5859985A (en) 1996-01-14 1999-01-12 At&T Wireless Services, Inc. Arbitration controller for providing arbitration on a multipoint high speed serial bus using drivers having output enable pins
US6006255A (en) * 1996-04-05 1999-12-21 International Business Machines Corporation Networked computer system and method of communicating using multiple request packet classes to prevent deadlock
US5761460A (en) 1996-07-19 1998-06-02 Compaq Computer Corporation Reconfigurable dual master IDE interface
US6026472A (en) * 1997-06-24 2000-02-15 Intel Corporation Method and apparatus for determining memory page access information in a non-uniform memory access computer system
US5887134A (en) 1997-06-30 1999-03-23 Sun Microsystems System and method for preserving message order while employing both programmed I/O and DMA operations
US6044438A (en) * 1997-07-10 2000-03-28 International Business Machines Corporation Memory controller for controlling memory accesses across networks in distributed shared memory processing systems
US6014690A (en) * 1997-10-24 2000-01-11 Digital Equipment Corporation Employing multiple channels for deadlock avoidance in a cache coherency protocol
US6055605A (en) * 1997-10-24 2000-04-25 Compaq Computer Corporation Technique for reducing latency of inter-reference ordering using commit signals in a multiprocessor system having shared caches
US6085293A (en) * 1998-08-17 2000-07-04 International Business Machines Corporation Non-uniform memory access (NUMA) data processing system that decreases latency by expediting rerun requests
US20020184345A1 (en) 2001-05-17 2002-12-05 Kazunori Masuyama System and Method for partitioning a computer system into domains
US20020186711A1 (en) 2001-05-17 2002-12-12 Kazunori Masuyama Fault containment and error handling in a partitioned system with shared resources
US20030007493A1 (en) 2001-06-28 2003-01-09 Hitoshi Oi Routing mechanism for static load balancing in a partitioned computer system with a fully connected network
US20030023666A1 (en) 2001-06-28 2003-01-30 Conway Patrick N. System and method for low overhead message passing between domains in a partitioned server
US20030005156A1 (en) 2001-06-29 2003-01-02 Sudheer Miryala Scalable and flexible method for address space decoding in a multiple node computer system
US20030007457A1 (en) 2001-06-29 2003-01-09 Farrell Jeremy J. Hardware mechanism to improve performance in a multi-node computer system

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
Abandah, Gheith A., and Davidson, Edward S., "Effects of Architectural and Technological Advances on the HP/Convex Exemplar's Memory and Communication Performance," Proceedings of the 25th International Symposium on Computer Architecture, 1998, pp. 318-329.
Falsafi, Babak, and Wood, David A., "Reactive NUMA: A Design for Unifying S-COMA and CC-NUMA," Proceedings of the 24th International Symposium on Computer Architecture (ISCA '97), CO, USA, 1997, pp. 229-240.
Geralds, John. "Sun enhances partitioning in Starfire Unix server." VNU Business Publishing Limited, Dec. 8, 1999 [retrieved on Apr. 11, 2001]. Retrieved from the internet: URL:http://www.vnunet.com/print/104311.
IBM. "The IBM NUMA-Q enterprise server architecture: Solving issues of latency and scalability in multiprocessor systems." Jan. 19, 2000, 10 pages.
Lovett, Tom, and Clapp, Russell, "StiNG: A CC-NUMA Computer System for the Commercial Marketplace," Proceedings of the 23rd International Symposium on Computer Architecture, PA, USA, 1996, pp. 308-317.
Servers White Paper. "Sun Enterprise™ 1000 Server: Dynamic System Domains." Sun Microsystems, Inc., Palo Alto, CA, USA, 2001 [retrieved on Apr. 11, 2001]. Retrieved from the internet: URL:http://www.sun.com/servers/white-papers/domains.html?pagestyle=print.
Unisys White Paper. "Cellular Multiprocessing Shared Memory: Shared Memory and Windows," Sep. 2000, pp. 1-16.
Willard, Christopher. "Superdome: Hewlett-Packard Extends Its High-End Computing Capabilities," IDC White Paper, 2000, pp. 1-20.

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6965973B2 (en) * 2002-05-15 2005-11-15 Broadcom Corporation Remote line directory which covers subset of shareable CC-NUMA memory space
US20030217233A1 (en) * 2002-05-15 2003-11-20 Broadcom Corp. Remote line directory which covers subset of shareable CC-NUMA memory space
US20030217234A1 (en) * 2002-05-15 2003-11-20 Broadcom Corp. System having address-based intranode coherency and data-based internode coherency
US7003631B2 (en) * 2002-05-15 2006-02-21 Broadcom Corporation System having address-based intranode coherency and data-based internode coherency
US7210006B2 (en) * 2002-06-28 2007-04-24 Sun Microsystems, Inc. Computer system supporting read-to-write-back transactions for I/O devices
US20040073754A1 (en) * 2002-06-28 2004-04-15 Cypher Robert E. Computer system supporting read-to-write-back transactions for I/O devices
US20040030842A1 (en) * 2002-06-28 2004-02-12 Sun Microsystems, Inc. Mechanism for starvation avoidance while maintaining cache consistency in computer systems
US7165149B2 (en) * 2002-06-28 2007-01-16 Sun Microsystems, Inc. Mechanism for starvation avoidance while maintaining cache consistency in computer systems
US20080048055A1 (en) * 2002-08-19 2008-02-28 Illinois Tool Works Inc. Spray gun having mechanism for internally swirling and breaking up a fluid
US9456243B1 (en) 2003-06-06 2016-09-27 Arris Enterprises, Inc. Methods and apparatus for processing time-based content
US9286214B2 (en) 2003-06-06 2016-03-15 Arris Enterprises, Inc. Content distribution and switching amongst data streams
US7240143B1 (en) * 2003-06-06 2007-07-03 Broadbus Technologies, Inc. Data access and address translation for retrieval of data amongst multiple interconnected access nodes
US20080059724A1 (en) * 2003-06-06 2008-03-06 Stifter Francis J Jr Content distribution and switching amongst data streams
US7962696B2 (en) 2004-01-15 2011-06-14 Hewlett-Packard Development Company, L.P. System and method for updating owner predictors
US20050160430A1 (en) * 2004-01-15 2005-07-21 Steely Simon C.Jr. System and method for updating owner predictors
US7240165B2 (en) * 2004-01-15 2007-07-03 Hewlett-Packard Development Company, L.P. System and method for providing parallel data requests
US20050198187A1 (en) * 2004-01-15 2005-09-08 Tierney Gregory E. System and method for providing parallel data requests
US20060214027A1 (en) * 2004-06-30 2006-09-28 Micheli Paul R Fluid atomizing system and method
WO2006014573A3 (en) * 2004-07-07 2008-03-27 Yotta Yotta Inc Systems and methods for providing distributed cache coherence
US7975018B2 (en) * 2004-07-07 2011-07-05 Emc Corporation Systems and methods for providing distributed cache coherence
US20060031450A1 (en) * 2004-07-07 2006-02-09 Yotta Yotta, Inc. Systems and methods for providing distributed cache coherence
US7865570B2 (en) 2005-08-30 2011-01-04 Illinois Institute Of Technology Memory server
US20070067382A1 (en) * 2005-08-30 2007-03-22 Xian-He Sun Memory server
US20090164652A1 (en) * 2007-12-21 2009-06-25 General Instrument Corporation Methods and System for Processing Time-Based Content
US8966103B2 (en) 2007-12-21 2015-02-24 General Instrument Corporation Methods and system for processing time-based content
US9619303B2 (en) 2012-04-11 2017-04-11 Hewlett Packard Enterprise Development Lp Prioritized conflict handling in a system
US10042762B2 (en) * 2016-09-14 2018-08-07 Advanced Micro Devices, Inc. Light-weight cache coherence for data processors with limited data sharing

Also Published As

Publication number Publication date
JP2002091824A (en) 2002-03-29

Similar Documents

Publication Publication Date Title
US6766360B1 (en) Caching mechanism for remote read-only data in a cache coherent non-uniform memory access (CCNUMA) architecture
US7698508B2 (en) System and method for reducing unnecessary cache operations
US7814286B2 (en) Method and apparatus for filtering memory write snoop activity in a distributed shared memory computer
JP5116418B2 (en) Method for processing data in a multiprocessor data processing system, processing unit for multiprocessor data processing system, and data processing system
KR100970229B1 (en) Computer system with processor cache that stores remote cache presence information
US7669010B2 (en) Prefetch miss indicator for cache coherence directory misses on external caches
US8347037B2 (en) Victim cache replacement
US6289420B1 (en) System and method for increasing the snoop bandwidth to cache tags in a multiport cache memory subsystem
US6868481B1 (en) Cache coherence protocol for a multiple bus multiprocessor system
US8209489B2 (en) Victim cache prefetching
US6704843B1 (en) Enhanced multiprocessor response bus protocol enabling intra-cache line reference exchange
JP3898984B2 (en) Non-uniform memory access (NUMA) computer system
US20030009623A1 (en) Non-uniform memory access (NUMA) data processing system having remote memory cache incorporated within system memory
US20030009643A1 (en) Two-stage request protocol for accessing remote memory data in a NUMA data processing system
US20030009634A1 (en) Non-uniform memory access (NUMA) data processing system that provides notification of remote deallocation of shared data
US8327072B2 (en) Victim cache replacement
JP2000250812A (en) Memory cache system and managing method therefor
US9110808B2 (en) Formation of an exclusive ownership coherence state in a lower level cache upon replacement from an upper level cache of a cache line in a private shared owner state
EP1543425A2 (en) Computer system with integrated directory and processor cache
JPH11506852A (en) Reduction of cache snooping overhead in a multi-level cache system having a large number of bus masters and a shared level 2 cache
EP0834129A1 (en) Method and apparatus for reducing cache snooping overhead in a multilevel cache system
US7383398B2 (en) Preselecting E/M line replacement technique for a snoop filter
US7797495B1 (en) Distributed directory cache
US8489822B2 (en) Providing a directory cache for peripheral devices
KR20090053837A (en) Mechanisms and methods of using self-reconciled data to reduce cache coherence overhead in multiprocessor systems

Legal Events

Date Code Title Description
AS Assignment

Owner name: FUJITSU LIMITED, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CONWAY, PATRICK N.;NAKAGAWA, YUKIHIRO;JIANG, JUNG RUNG;REEL/FRAME:010945/0259

Effective date: 20000710

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

CC Certificate of correction
FPAY Fee payment

Year of fee payment: 4

FPAY Fee payment

Year of fee payment: 8

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20160720