US20010034816A1 - Complete and concise remote (ccr) directory - Google Patents

Complete and concise remote (ccr) directory

Info

Publication number
US20010034816A1
Authority
US
United States
Prior art keywords
main memory
shared
compute nodes
maintaining
directory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US09/281,787
Other versions
US6338123B2 (en)
Inventor
Maged M. Michael
Ashwini Nanda
Douglas J. Joseph
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US09/281,787 priority Critical patent/US6338123B2/en
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION reassignment INTERNATIONAL BUSINESS MACHINES CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: JOSEPH, DOUGLAS J., MICHAEL, MAGED M., NANDA, ASHWINI
Priority to KR1020000011508A priority patent/KR100348200B1/en
Priority to JP2000081018A priority patent/JP2000298659A/en
Publication of US20010034816A1 publication Critical patent/US20010034816A1/en
Application granted granted Critical
Publication of US6338123B2 publication Critical patent/US6338123B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0815Cache consistency protocols
    • G06F12/0817Cache consistency protocols using directory methods
    • G06F12/0822Copy directories

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Multi Processors (AREA)

Abstract

A method and structure for a system for maintaining coherence of cache lines in a shared memory multiprocessor system comprises a system area network and a plurality of compute nodes connected to the system area network. Each of the compute nodes includes a local main memory, a local shared cache and a local coherence controller; the shared caches of the other compute nodes are external shared caches from a given node's perspective, and the coherence controller includes shadow directories, each corresponding to one of the external shared caches. Each of the shadow directories includes state information of the local main memory cached in the corresponding external shared cache, and only such state information.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention [0001]
  • The present invention relates to efficient processing of memory requests in cache-based systems. More specifically, the present invention relates to improved processing speed of memory requests (or other coherence requests) in the coherence controller of shared memory multiprocessor servers or in the cache controller of uniprocessor systems. [0002]
  • 2. Description of the Related Art [0003]
  • Conventional computer systems often include on-chip or off-chip cache memories which are used with processors to speed up accesses to system memory. In a shared memory multiprocessor system, more than one processor can store a copy of the same memory location(s) (or line(s)) in its cache memory. A cache coherence mechanism is required to maintain consistency among the multiple cached copies of the same memory line. [0004]
  • In small, bus-based multiprocessor systems, the coherence mechanism is usually implemented as a part of the cache controllers using a snoopy coherence protocol. The snoopy protocol cannot be used in large systems that are connected through an interconnection network due to the lack of a bus. As a result, these systems use a directory-based protocol to maintain cache coherence. The directories are associated with the main memory and maintain the state information of the various caches on the memory lines. This state information includes data indicating which cache(s) has a copy of the line or whether the line has been modified in a cache(s). [0005]
  • Conventionally, these directories are organized as “full map” memory directories, where the state information on every single memory line is stored by mapping each memory line to a unique location in the directory. FIG. 1 is a representation of a “full map” arrangement. A memory directory 100 is provided for main memory 120. In this implementation, entries 140 of the memory directory 100 include state information for each memory line 160 of main memory 120. That is, there is a one-to-one (state) mapping between a main memory line 160 and a memory directory entry 140 (i.e., there is full mapping). [0006]
  • As a result, when the size of main memory 120 increases, the size of the memory directory 100 also increases. If the memory directory 100 is implemented as relatively fast static RAM, tracking the size of main memory 120 becomes prohibitively expensive. If the memory directory 100 is implemented using slow static RAMs or DRAMs, the higher cost is avoided; however, a penalty is incurred in overall system performance due to the slower static RAM or DRAM chips. In fact, each directory access in such implementations will take approximately 5-20 controller cycles to complete. [0007]
  • In order to address this problem, “sparse” memory directories have been conventionally used in place of the (“full map”) memory directories. FIG. 2 is a representation of a sparse directory arrangement. A sparse directory 200 is smaller in size than the memory directory 100 of FIG. 1 and is organized as a subset of the memory directory 100. The sparse directory 200 includes state information entries 240 for only a subset of the memory lines 260 of main memory 220. That is, multiple memory lines are mapped to a single location in the sparse directory 200. Thus, due to its smaller size, a sparse directory 200 can be implemented in an economical fashion using fast static RAMs. [0008]
  • However, when there is contention among memory lines 260 for the same sparse directory entry field 240, the state information of one of the lines 260 must be replaced. There is no backup state information in the sparse directory arrangement. Therefore, when a line 260 is replaced from the sparse directory 200, all the caches in the overall system having a copy of that line must be asked to invalidate their copies. This incomplete directory information leads to both coherence protocol complexity and performance loss. [0009]
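  • To make the contrast concrete, the following C sketch (our illustration, with hypothetical sizes and field layouts, not taken from the patent) models the two conventional organizations: a full map directory indexed directly by line number, and a much smaller sparse directory in which many memory lines compete for each slot, so that a conflict forces the displaced line's cached copies to be invalidated.

      #include <stdint.h>

      /* One directory entry: which nodes hold the line and whether it is dirty. */
      typedef struct {
          uint8_t  presence;  /* bit vector, one bit per node (8 nodes assumed) */
          uint8_t  modified;  /* nonzero if some cache holds a modified copy */
          uint32_t tag;       /* line tag, needed only by the sparse directory */
      } dir_entry_t;

      /* Full map: one entry per memory line, so the directory grows with memory. */
      #define MEM_LINES (1u << 20)             /* scaled-down example memory */
      static dir_entry_t full_map[MEM_LINES];  /* indexed directly by line number */

      /* Sparse: far fewer slots; many memory lines map to each slot. */
      #define SPARSE_SLOTS (1u << 16)
      static dir_entry_t sparse[SPARSE_SLOTS];

      /* A sparse lookup can collide with an unrelated line; before reusing the
       * slot, every cached copy of the displaced line must be invalidated. */
      static dir_entry_t *sparse_lookup(uint32_t line, int *conflict)
      {
          dir_entry_t *e = &sparse[line % SPARSE_SLOTS];
          *conflict = (e->presence != 0 && e->tag != line / SPARSE_SLOTS);
          return e;
      }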
  • Thus, there is a need for a system which improves coherence/caching efficiency without adversely affecting overall system performance and maintains a relatively simple coherence protocol environment. [0010]
  • SUMMARY OF THE INVENTION
  • It is, therefore, an object of the present invention to provide a structure and method for a system for maintaining coherence of cache lines in a shared memory multiprocessor system comprising a system area network and a plurality of compute nodes connected to the system area network. Each of the compute nodes includes a local main memory, a local shared cache and a local coherence controller. The shared caches of compute nodes external to a given compute node are defined as “external” shared caches. The coherence controller includes shadow directories, each corresponding to one of the external shared caches. Each of the shadow directories includes state information of the local main memory cached in the external shared caches. [0011]
  • The shadow directories include only state information of the local main memory cached in the external shared caches. Each of the shadow directories includes a plurality of sets, each of the sets includes a plurality of entries and each of the entries is a memory address of the local main memory. Furthermore, each entry includes tag bits and state bits such as a presence bit and a modified bit. The presence bit indicates whether a line of the local main memory is stored in an external shared cache and the modified bit indicates whether the line of the local main memory is modified in the external cache. [0012]
  • By keeping information on the exact number of remotely cached lines, the CCR directory provides a dynamic full map directory of presently shared lines, but only uses the memory of a sparse directory. Consequently, the CCR directory has all the advantages of a full map directory. In contrast, a conventional sparse directory keeps the state information only on a subset of the memory lines that could have been remotely cached in a full map directory scheme, which leads to inferior performance and a more complex protocol when compared to a conventional full map directory.[0013]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The foregoing and other objects, aspects and advantages will be better understood from the following detailed description of preferred embodiments of the invention with reference to the drawings, in which: [0014]
  • FIG. 1 is a schematic diagram of a full map memory directory structure; [0015]
  • FIG. 2 is a schematic diagram of a sparse directory memory structure; [0016]
  • FIG. 3 is a schematic diagram of compute nodes connected to a system area network; and [0017]
  • FIG. 4 is a schematic diagram of a CCR directory. [0018]
  • DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS OF THE INVENTION
  • Disclosed herein is a new directory structure called a Complete and Concise Remote (CCR) directory. The CCR directory preserves the performance advantage of a full map directory while requiring as little space as a sparse directory. The CCR directory keeps state information only on the memory lines that are currently cached in a remote node (as opposed to all memory lines, in the case of a full map directory). As a result, the CCR directory size is proportional to the size of the caches in the system, instead of the total memory size, and is, therefore, much less expensive than the full map directory. [0019]
  • However, the CCR directory keeps exactly the amount of information necessary to maintain coherence. Thus, the CCR directory never has to force any invalidations and therefore does not present the disadvantages of a sparse directory. [0020]
  • FIG. 3 depicts a multiprocessor system environment in which the CCR directory 350 of the present invention can be implemented. A coherence controller 360 is responsible for maintaining coherence among the caches in the compute node 310. [0021]
  • The compute nodes 310 exist on a system area network (SAN) 300. Each compute node 310 includes one or more processors with associated caches 320, one or more shared/remote caches 330, one or more main memory modules 340, at least one CCR directory 350, at least one coherence controller 360 and several I/O devices (not shown). One skilled in the art will appreciate that memory for a compute node can be located in separate modules independent of the compute node. In that case, the coherence controller 360 and the CCR directory 350 can be disposed with the memory 340 or the processor 320. [0022]
  • The Complete and Concise Remote (CCR) directory 350 keeps state information on the memory lines belonging to the local home memory that are cached in remote nodes. This is done by keeping a shadow of each shared cache directory or remote cache directory 330 in the system (except for the shared or remote cache(s) in the local node) in the local node's CCR directory 350. [0023]
  • For example, the CCR directory could be implemented in a 64-way system built from 8-way nodes, with one coherence controller per node, which would allow seven shadow directories B-H in each coherence controller 360, as shown in FIG. 4. More specifically, FIG. 4 shows the organization of the CCR directory 350 for a given compute node 310 configuration which, in this example, is defined as compute node A. [0024]
  • In this example, the shared cache or the remote cache 330 in each compute node 310 is a 64 MB, 4-way set-associative cache with 64-byte lines. Each such cache therefore has 256K sets (64 MB of 64-byte lines is 1M lines; at 4-way associativity, 1M lines divide into 256K sets). Shadow directories B-H in node A's CCR directory therefore each contain 256K sets, each set 41 containing state bits for four cache lines. [0025]
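  • As a check on the arithmetic above, a short C sketch (variable names ours) derives the shadow directory geometry from the example cache parameters:

      #include <assert.h>
      #include <stdio.h>

      int main(void)
      {
          unsigned cache_bytes = 64u << 20;  /* 64 MB shared/remote cache */
          unsigned line_bytes  = 64;         /* 64-byte lines */
          unsigned ways        = 4;          /* 4-way set associativity */
          unsigned nodes       = 8;          /* nodes A-H in the example */

          unsigned lines = cache_bytes / line_bytes;  /* 1M lines */
          unsigned sets  = lines / ways;              /* 256K sets */
          assert(sets == 256u * 1024u);

          printf("each shadow directory: %u sets x %u entries\n", sets, ways);
          printf("shadow directories per CCR directory: %u\n", nodes - 1);
          return 0;
      }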
  • Even though a shadow directory 40 has enough space to keep the state information on all the lines in the remote cache it represents, it only keeps state information on the lines in the remote cache that belong to the local home memory (e.g., node A). For example, in FIG. 4, the shadow directory C contains the state bits for the lines belonging to home memory A that are presently in remote cache 330 in node C. But the lines in the remote cache 330 in node C belonging to memories 340 for nodes C through H are not represented in the shadow directory 40 of node C in the CCR directory 350 of node A. [0026]
  • In order to maintain an exact shadow of the remote caches 330, the CCR directory 350 needs the remote coherence controller 360 to inform the home coherence controller 360 (e.g., A's coherence controller 360, in this example) containing the shadows B-H when the remote cache 330 evicts a line corresponding to that home node's memory 340. Since the degree of associativity of the shadow directory 40 in the CCR directory 350 is the same as the degree of associativity of the corresponding remote cache 330, and the CCR directory 350 is informed about the evictions from the remote cache 330, it is guaranteed that a CCR directory set 41 in the shadow directory 40 will always have a slot available when the remote cache needs to allocate a new line in that set 41. In other words, since each CCR directory 350 includes a dedicated shadow directory for each remote cache 330, a directory entry is never evicted from the CCR shadow directory 40 unless the line is being evicted in the corresponding remote cache. [0027]
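  • The eviction path just described might look like the following C sketch (type and function names are ours, and the message format is an assumption, not the patent's): when a remote node reports evicting a line whose home is this node, the home coherence controller clears the matching entry in its shadow of that cache, so a set can never be full of live entries when the remote cache allocates.

      /* Hypothetical shadow-directory maintenance at the home node; one shadow,
       * with the same geometry as the remote cache, per remote cache. */
      #define WAYS 4

      typedef struct { unsigned tag; int P, M; } shadow_entry_t;
      typedef struct { shadow_entry_t way[WAYS]; } shadow_set_t;

      /* Called when a remote node reports evicting the line <set_idx, tag>
       * of this home node's memory from its shared/remote cache. */
      void shadow_note_eviction(shadow_set_t *shadow, unsigned set_idx, unsigned tag)
      {
          shadow_set_t *s = &shadow[set_idx];
          for (int w = 0; w < WAYS; w++) {
              if (s->way[w].P && s->way[w].tag == tag) {
                  s->way[w].P = 0;   /* the slot is free again */
                  s->way[w].M = 0;
                  return;
              }
          }
          /* No match: the evicted line belonged to some other node's memory,
           * so this shadow never held it. */
      }

      /* Because every eviction is mirrored and the associativities match, a
       * shadow set always has a free slot when the remote cache allocates a
       * new line in the corresponding set. */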
  • FIG. 4 also illustrates the details of the address fields for accessing a CCR directory 350, assuming a 40-bit system-wide physical address. Each entry 42 in a shadow 40 keeps a 14-bit tag and two state bits. The presence bit P tells if the line is present in the corresponding remote cache and the modified bit M tells if the line is modified in that cache. The P bit in all the CCR directory entries is initialized to 0 at system reset. [0028]
  • The states of a line in the corresponding remote cache interpreted from the P and M bits are shown in the table in FIG. 4. As would be apparent to one ordinarily skilled in the art given this disclosure, the foregoing can be modified to accommodate any sized system. [0029]
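  • Under the example layout (a 40-bit physical address, with 64-byte lines giving a 6-bit offset and 256K sets giving an 18-bit index), a lookup might decompose the address and read the P and M bits as in the following C sketch; the field widths follow the example, the state readings follow the text above, and the helper names are ours (how the remaining address bits are used is not spelled out here).

      #include <stdint.h>

      #define OFFSET_BITS 6    /* 64-byte lines */
      #define INDEX_BITS  18   /* 256K sets */
      #define TAG_BITS    14   /* tag width given in the example */

      typedef struct { uint16_t tag; unsigned P : 1, M : 1; } ccr_entry_t;

      /* Reading the two state bits, as described in the text:
       *   P=0        line not cached in the corresponding remote cache
       *   P=1, M=0   line present in the remote cache, unmodified
       *   P=1, M=1   line present in the remote cache and modified */
      static const char *line_state(ccr_entry_t e)
      {
          if (!e.P) return "not cached remotely";
          return e.M ? "cached and modified" : "cached, unmodified";
      }

      static unsigned set_index(uint64_t paddr)
      {
          return (unsigned)((paddr >> OFFSET_BITS) & ((1u << INDEX_BITS) - 1));
      }

      static uint16_t tag_of(uint64_t paddr)
      {
          return (uint16_t)((paddr >> (OFFSET_BITS + INDEX_BITS))
                            & ((1u << TAG_BITS) - 1));
      }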
  • By keeping the information on the exact number of remotely cached lines, the CCR directory 350 provides a dynamic full map directory of presently shared lines, but only uses the memory of a conventional sparse directory. Consequently, the CCR directory has all the advantages of a full map directory. In contrast, a conventional sparse directory keeps the state information only on a subset of the memory lines that could have been remotely cached in a full map directory scheme, which leads to inferior performance and a more complex protocol when compared to a full map directory. [0030]
  • While it is possible to modify the original sparse directory scheme to keep information equivalent to the CCR directory, substantial problems exist with such an enhanced sparse directory. Such an enhanced sparse directory would receive the evict information from the remote caches and would have sufficient space to shadow the remote caches. However, such an enhanced sparse directory would have to have an associativity of n*w in a system with n remote caches which are w-way set-associative, and would need a huge multiplexor to obtain the presence bit vector when there is a hit. On the other hand, the inventive CCR directory has n w-way shadows that need only small multiplexors to get the directory information. Gathering the presence bit information from the n possible hits is a simple logic operation. Thus the CCR directory avoids the extra latency penalty of the large multiplexor of such an enhanced sparse directory. [0031]
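  • To put numbers on the multiplexor argument, a small C sketch (using the example system's figures; the comparison framing is ours) contrasts the two lookup structures:

      #include <stdio.h>

      int main(void)
      {
          unsigned n = 7;  /* remote caches shadowed (nodes B-H) */
          unsigned w = 4;  /* associativity of each remote cache */

          /* Enhanced sparse directory: one structure holds all remotes' entries,
           * so each lookup searches n*w ways behind one wide multiplexor. */
          printf("enhanced sparse: one %u-way search\n", n * w);

          /* CCR: n independent w-way searches; each shadow yields at most one
           * hit, and ORing the n hit bits gives the presence vector. */
          printf("CCR: %u parallel %u-way searches\n", n, w);
          return 0;
      }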
  • While the invention has been described in terms of preferred embodiments, those skilled in the art will recognize that the invention can be practiced with modification within the spirit and scope of the appended claims. [0032]

Claims (20)

What is claimed is:
1. A system for maintaining coherence of cache lines in a shared memory multiprocessor system comprising:
a system area network; and
a plurality of compute nodes connected to said system area network,
wherein each of said compute nodes includes a main memory, a shared cache and a coherence controller,
wherein said coherence controller includes shadow directories, each corresponding to shared caches of other compute nodes connected to said system area network,
wherein each of said shadow directories includes state information of said main memory cached in said shared caches of said other compute nodes.
2. The system in claim 1, wherein each of said shadow directories includes only state information of said main memory cached in said shared caches of said other compute nodes.
3. The system in claim 1, wherein each of said shadow directories includes a plurality of sets, each of said sets including a plurality of entries, each of said entries comprising a memory address of said main memory.
4. The system in claim 3, wherein each entry includes tag bits and state bits.
5. The system in claim 1, wherein said coherence controller maintains a dynamic full map directory of shared lines in said main memory.
6. A coherence controller for maintaining coherence of cache lines in a shared memory multiprocessor system that includes a system area network and a plurality of compute nodes connected to said system area network, wherein each of said compute nodes includes a main memory, a shared cache and said coherence controller, said coherence controller comprising:
shadow directories, each corresponding to shared caches of other compute nodes, wherein each of said shadow directories includes state information of said main memory cached in said shared caches of said other compute nodes.
7. The coherence controller in claim 6, wherein each of said shadow directories includes only state information of said main memory cached in said shared caches of said other compute nodes.
8. The coherence controller in claim 6, wherein each of said shadow directories includes a plurality of sets, each of said sets including a plurality of entries, each of said entries comprising a memory address of said main memory.
9. The coherence controller in claim 8, wherein each entry includes tag bits and state bits.
10. The coherence controller in claim 6, wherein said coherence controller maintains a dynamic full map directory of shared lines in said main memory.
11. A method for maintaining coherence of cache lines in a shared memory multiprocessor system that includes a system area network and a plurality of compute nodes connected to said system area network, wherein each of said compute nodes includes a main memory, a shared cache and a coherence controller, said method comprising:
maintaining shadow directories in said coherence controller, each of said shadow directories corresponding to shared caches of other compute nodes; and
maintaining, in corresponding ones of said shadow directories, state information of said main memory cached in said shared caches of said other compute nodes.
12. The method in claim 11, wherein said maintaining of said state information maintains only state information of said main memory cached in said shared caches of said other compute nodes.
13. The method in claim 11, wherein said maintaining of said shadow directories includes maintaining a plurality of sets, each of said sets including a plurality of entries, each of said entries comprising a memory address of said main memory.
14. The method in claim 13, wherein each entry includes tag bits and state bits.
15. The method in claim 11, wherein said maintaining of said shadow directories in said coherence controller maintains a dynamic full map directory of shared lines in said main memory.
16. A program storage device readable by machine, tangibly embodying a program of instructions executable by said machine to perform method steps for maintaining coherence of cache lines in a shared memory multiprocessor system that includes a system area network and a plurality of compute nodes connected to said system area network, wherein each of said compute nodes includes a main memory, a shared cache and a coherence controller, said method comprising:
maintaining shadow directories in said coherence controller, each of said shadow directories corresponding to shared caches of other compute nodes; and
maintaining, in corresponding ones of said shadow directories, state information of said main memory cached in said shared caches of said other compute nodes.
17. The program storage device in claim 16, wherein said maintaining of said state information maintains only state information of said main memory cached in said shared caches of said other compute nodes.
18. The program storage device in claim 16, wherein said maintaining of said shadow directories includes maintaining a plurality of sets, each of said sets including a plurality of entries, each of said entries comprising a memory address of said main memory.
19. The program storage device in claim 18, wherein each entry includes tag bits and state bits.
20. The program storage device in claim 16, wherein said maintaining of said shadow directories in said coherence controller maintains a dynamic full map directory of shared lines in said main memory.
US09/281,787 1999-03-31 1999-03-31 Complete and concise remote (CCR) directory Expired - Fee Related US6338123B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US09/281,787 US6338123B2 (en) 1999-03-31 1999-03-31 Complete and concise remote (CCR) directory
KR1020000011508A KR100348200B1 (en) 1999-03-31 2000-03-08 Complete and concise remote (ccr) directory
JP2000081018A JP2000298659A (en) 1999-03-31 2000-03-22 Complete and concise remote(ccr) directory

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US09/281,787 US6338123B2 (en) 1999-03-31 1999-03-31 Complete and concise remote (CCR) directory

Publications (2)

Publication Number Publication Date
US20010034816A1 true US20010034816A1 (en) 2001-10-25
US6338123B2 US6338123B2 (en) 2002-01-08

Family

ID=23078785

Family Applications (1)

Application Number Title Priority Date Filing Date
US09/281,787 Expired - Fee Related US6338123B2 (en) 1999-03-31 1999-03-31 Complete and concise remote (CCR) directory

Country Status (3)

Country Link
US (1) US6338123B2 (en)
JP (1) JP2000298659A (en)
KR (1) KR100348200B1 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030046356A1 (en) * 1999-11-08 2003-03-06 Alvarez Manuel Joseph Method and apparatus for transaction tag assignment and maintenance in a distributed symmetric multiprocessor system
US20040030841A1 (en) * 2002-08-06 2004-02-12 Ashwini Nanda Method and system for organizing coherence directories in shared memory systems
US20050120183A1 (en) * 2003-12-01 2005-06-02 Desota Donald R. Local region table for storage of information regarding memory access by other nodes
US20080147970A1 (en) * 2006-12-14 2008-06-19 Gilad Sade Data storage system having a global cache memory distributed among non-volatile memories within system disk drives
US20090137129A1 (en) * 2005-08-22 2009-05-28 Hitachi Chemical Dupont Microsystems Ltd. Method for manufacturing semiconductor device
WO2010131373A1 (en) * 2009-05-15 2010-11-18 Hitachi,Ltd. Storage subsystem
EP2343655A1 (en) * 2008-10-02 2011-07-13 Fujitsu Limited Memory access method and information processing apparatus
GB2470878B (en) * 2008-04-02 2013-03-20 Intel Corp Adaptive cache organization for chip multiprocessors
GB2499697A (en) * 2012-01-04 2013-08-28 Ibm Multiprocessor system with an index to a second processor's cache in a first processor
US10042804B2 (en) 2002-11-05 2018-08-07 Sanmina Corporation Multiple protocol engine transaction processing
JP2019517687A (en) 2016-05-31 2019-06-24 Advanced Micro Devices, Incorporated Cache coherence for processing in memory

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6662216B1 (en) * 1997-04-14 2003-12-09 International Business Machines Corporation Fixed bus tags for SMP buses
US6738836B1 (en) * 2000-08-31 2004-05-18 Hewlett-Packard Development Company, L.P. Scalable efficient I/O port protocol
US6594744B1 (en) 2000-12-11 2003-07-15 Lsi Logic Corporation Managing a snapshot volume or one or more checkpoint volumes with multiple point-in-time images in a single repository
US6629203B1 (en) * 2001-01-05 2003-09-30 Lsi Logic Corporation Alternating shadow directories in pairs of storage spaces for data storage
FR2820850B1 (en) * 2001-02-15 2003-05-09 Bull Sa CONSISTENCY CONTROLLER FOR MULTIPROCESSOR ASSEMBLY, MODULE AND MULTIPROCESSOR ASSEMBLY WITH MULTIMODULE ARCHITECTURE INCLUDING SUCH A CONTROLLER
US6678809B1 (en) 2001-04-13 2004-01-13 Lsi Logic Corporation Write-ahead log in directory management for concurrent I/O access for block storage
US7370154B2 (en) * 2004-02-24 2008-05-06 Silicon Graphics, Inc. Method and apparatus for maintaining coherence information in multi-cache systems
US7213106B1 (en) * 2004-08-09 2007-05-01 Sun Microsystems, Inc. Conservative shadow cache support in a point-to-point connected multiprocessing node
JP4362454B2 (en) * 2005-04-07 2009-11-11 Fujitsu Limited Cache coherence management device and cache coherence management method
US8099556B2 (en) * 2005-09-13 2012-01-17 Arm Limited Cache miss detection in a data processing apparatus
US20070079072A1 (en) * 2005-09-30 2007-04-05 Collier Josh D Preemptive eviction of cache lines from a directory
US7475193B2 (en) * 2006-01-18 2009-01-06 International Business Machines Corporation Separate data and coherency cache directories in a shared cache in a multiprocessor system
US7543116B2 (en) * 2006-01-30 2009-06-02 International Business Machines Corporation Data processing system, cache system and method for handling a flush operation in a data processing system having multiple coherency domains
US8185724B2 (en) * 2006-03-03 2012-05-22 Arm Limited Monitoring values of signals within an integrated circuit
WO2007101969A1 (en) * 2006-03-06 2007-09-13 Arm Limited Accessing a cache in a data processing apparatus
US7937535B2 (en) * 2007-02-22 2011-05-03 Arm Limited Managing cache coherency in a data processing apparatus
US8037252B2 (en) * 2007-08-28 2011-10-11 International Business Machines Corporation Method for reducing coherence enforcement by selective directory update on replacement of unmodified cache blocks in a directory-based coherent multiprocessor
US7945739B2 (en) * 2007-08-28 2011-05-17 International Business Machines Corporation Structure for reducing coherence enforcement by selective directory update on replacement of unmodified cache blocks in a directory-based coherent multiprocessor
US20100169578A1 (en) * 2008-12-31 2010-07-01 Texas Instruments Incorporated Cache tag memory
JP7100237B2 (en) * 2017-09-11 2022-07-13 Fujitsu Limited Arithmetic processing device and control method of arithmetic processing device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5802578A (en) * 1996-06-12 1998-09-01 Sequent Computer Systems, Inc. Multinode computer system with cache for combined tags

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7529799B2 (en) * 1999-11-08 2009-05-05 International Business Machines Corporation Method and apparatus for transaction tag assignment and maintenance in a distributed symmetric multiprocessor system
US20030046356A1 (en) * 1999-11-08 2003-03-06 Alvarez Manuel Joseph Method and apparatus for transaction tag assignment and maintenance in a distributed symmetric multiprocessor system
US20040030841A1 (en) * 2002-08-06 2004-02-12 Ashwini Nanda Method and system for organizing coherence directories in shared memory systems
US6792512B2 (en) * 2002-08-06 2004-09-14 International Business Machines Corporation Method and system for organizing coherence directories in shared memory systems
US10042804B2 (en) 2002-11-05 2018-08-07 Sanmina Corporation Multiple protocol engine transaction processing
US20050120183A1 (en) * 2003-12-01 2005-06-02 Desota Donald R. Local region table for storage of information regarding memory access by other nodes
US7089372B2 (en) * 2003-12-01 2006-08-08 International Business Machines Corporation Local region table for storage of information regarding memory access by other nodes
US20090137129A1 (en) * 2005-08-22 2009-05-28 Hitachi Chemical Dupont Microsystems Ltd. Method for manufacturing semiconductor device
US20080147970A1 (en) * 2006-12-14 2008-06-19 Gilad Sade Data storage system having a global cache memory distributed among non-volatile memories within system disk drives
US8762636B2 (en) * 2006-12-14 2014-06-24 Emc Corporation Data storage system having a global cache memory distributed among non-volatile memories within system disk drives
GB2470878B (en) * 2008-04-02 2013-03-20 Intel Corp Adaptive cache organization for chip multiprocessors
EP2343655A1 (en) * 2008-10-02 2011-07-13 Fujitsu Limited Memory access method and information processing apparatus
EP2343655A4 (en) * 2008-10-02 2012-08-22 Fujitsu Ltd Memory access method and information processing apparatus
US20110185128A1 (en) * 2008-10-02 2011-07-28 Fujitsu Limited Memory access method and information processing apparatus
US20110153954A1 (en) * 2009-05-15 2011-06-23 Hitachi, Ltd. Storage subsystem
US8954666B2 (en) 2009-05-15 2015-02-10 Hitachi, Ltd. Storage subsystem
WO2010131373A1 (en) * 2009-05-15 2010-11-18 Hitachi,Ltd. Storage subsystem
GB2499697A (en) * 2012-01-04 2013-08-28 Ibm Multiprocessor system with an index to a second processor's cache in a first processor
GB2499697B (en) * 2012-01-04 2014-04-02 Ibm Near neighbor data cache sharing
US8719507B2 (en) 2012-01-04 2014-05-06 International Business Machines Corporation Near neighbor data cache sharing
US8719508B2 (en) 2012-01-04 2014-05-06 International Business Machines Corporation Near neighbor data cache sharing
JP2019517687A (en) * 2016-05-31 2019-06-24 アドバンスト・マイクロ・ディバイシズ・インコーポレイテッドAdvanced Micro Devices Incorporated Cache coherence for processing in memory
JP7160682B2 (en) 2016-05-31 2022-10-25 Advanced Micro Devices, Incorporated Cache coherence for processing in memory

Also Published As

Publication number Publication date
US6338123B2 (en) 2002-01-08
KR100348200B1 (en) 2002-08-09
KR20010006755A (en) 2001-01-26
JP2000298659A (en) 2000-10-24

Similar Documents

Publication Publication Date Title
US6338123B2 (en) Complete and concise remote (CCR) directory
US6408362B1 (en) Data processing system, cache, and method that select a castout victim in response to the latencies of memory copies of cached data
US6901495B2 (en) Cache memory system allowing concurrent reads and writes to cache lines to increase snoop bandwith
US6912628B2 (en) N-way set-associative external cache with standard DDR memory devices
US5926829A (en) Hybrid NUMA COMA caching system and methods for selecting between the caching modes
US6826651B2 (en) State-based allocation and replacement for improved hit ratio in directory caches
US5893144A (en) Hybrid NUMA COMA caching system and methods for selecting between the caching modes
US5136700A (en) Apparatus and method for reducing interference in two-level cache memories
JP4447580B2 (en) Partitioned sparse directory for distributed shared memory multiprocessor systems
US20030005236A1 (en) Imprecise snooping based invalidation mechanism
US6832294B2 (en) Interleaved n-way set-associative external cache
US7117312B1 (en) Mechanism and method employing a plurality of hash functions for cache snoop filtering
US7325102B1 (en) Mechanism and method for cache snoop filtering
US6311253B1 (en) Methods for caching cache tags
US6345344B1 (en) Cache allocation mechanism for modified-unsolicited cache state that modifies victimization priority bits
US6442653B1 (en) Data processing system, cache, and method that utilize a coherency state to indicate the latency of cached data
US9442856B2 (en) Data processing apparatus and method for handling performance of a cache maintenance operation
US6792512B2 (en) Method and system for organizing coherence directories in shared memory systems
US20020002659A1 (en) System and method for improving directory lookup speed
US7047364B2 (en) Cache memory management
US6349369B1 (en) Protocol for transferring modified-unsolicited state during data intervention
US20050033920A1 (en) Cache structure and methodology
JPH06195263A (en) Cache memory system
JPH11296432A (en) Information processor and memory management system

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JOSEPH, DOUGLAS J.;MICHAEL, MAGED M.;NANDA, ASHWINI;REEL/FRAME:009880/0205

Effective date: 19990331

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20060108