GB2500964A - Forward progress mechanism for stores in the presence of load contention in a system favouring loads by state alteration.


Info

Publication number
GB2500964A
GB2500964A GB1300936.0A GB201300936A
Authority
GB
United Kingdom
Prior art keywords
cache line
state
coherence
target cache
memory
Prior art date
Legal status: Granted
Application number
GB1300936.0A
Other versions
GB2500964B (en)
GB201300936D0 (en)
Inventor
Derek Edward Williams
Guy Lynn Guthrie
Hien Minh Le
Hugh Shen
Jeffrey A Stuecheli
Current Assignee
International Business Machines Corp
Original Assignee
International Business Machines Corp
Application filed by International Business Machines Corp
Publication of GB201300936D0
Publication of GB2500964A
Application granted
Publication of GB2500964B


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0815Cache consistency protocols
    • G06F12/0831Cache consistency protocols using a bus scheme, e.g. with bus monitoring or watching means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0842Multiuser, multiprocessor or multiprocessing cache systems for multiprocessing or multitasking

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

Disclosed is a cache coherency protocol for multiprocessor data processing systems 100. The systems have a set of cache memories 230. A cache memory issues a read-type operation for a target cache line. While waiting for receipt of the target cache line, the cache memory monitors to detect a competing store-type operation for the target cache line. In response to receiving the target cache line, the cache memory installs the target cache line in the cache memory, and sets a coherency state of the target cache line installed in the cache memory based on whether the competing store-type operation is detected. The coherence state may be a first state indicating that the cache memory can source copies of the target cache line to requestors. In response to issuing the read-type operation, the cache memory may receive a coherence message indicating the first state; setting the coherence state for the target cache line then comprises the cache memory setting the coherence state to the first state indicated by the coherence message if the competing store-type operation is not detected.

Description

FORWARD PROGRESS MECHANISM FOR STORES IN THE PRESENCE OF LOAD CONTENTION IN A SYSTEM FAVORING LOADS BY STATE ALTERATION
BACKGROUND OF THE INVENTION
1. Technical Field:
[0001] The present invention relates generally to data processing and, in particular, to servicing processor operations in a data processing system. Still more particularly, the present invention relates to dynamically setting a coherency state of a cache line to reduce contention experienced by store-type operations.
2. Description of the Related Art:
[0002] A conventional symmetric multiprocessor (SMP) computer system, such as a server computer system, includes multiple processing units all coupled to a system interconnect, which typically comprises one or more address, data and control buses. Coupled to the system interconnect is a system memory, which represents the lowest level of volatile memory in the multiprocessor computer system and which generally is accessible for read and write access by all processing units. In order to reduce access latency to instructions and data residing in the system memory, each processing unit is typically further supported by a respective multi-level cache hierarchy, the lower level(s) of which may be shared by one or more processor cores.
[0003] Cache memories are commonly utilized to temporarily buffer memory blocks that might be accessed by a processor in order to speed up processing by reducing access latency introduced by having to load needed data and instructions from system memory. In some multiprocessor (MP) systems, the cache hierarchy includes at least two levels. The level one (L1) or upper-level cache is usually a private cache associated with a particular processor core and cannot be accessed by other cores in an MP system. Typically, in response to a memory access instruction such as a load or store instruction, the processor core first accesses the directory of the upper-level cache. If the requested memory block is not found in the upper-level cache, the processor core then accesses lower-level caches (e.g., level two (L2) or level three (L3) caches) for the requested memory block. The lowest level cache (e.g., L3) is often shared among several processor cores.
[0004] Because multiple processor cores may request write access to a same cache line of data and because modified cache lines are not immediately synchronized with system memory, the cache hierarchies of multiprocessor computer systems typically implement a cache coherency protocol to ensure at least a minimum level of coherence among the various processor cores' "views" of the contents of system memory. In particular, cache coherency requires, at a minimum, that after a processing unit accesses a copy of a memory block and subsequently accesses an updated copy of the memory block, the processing unit cannot again access the old copy of the memory block.
[0005] A cache coherency protocol typically defines a set of coherence states stored in association with the cache lines of each cache hierarchy, as well as a set of coherency messages utilized to communicate the coherence state information between cache hierarchies. In many cases, the coherence states and state transitions of the coherence protocol are designed to favor read-type memory access operations over store-type operations. The prioritization of read-type operations over store-type operations can lead to forward progress issues for store-type operations in the presence of significant load contention.
SUMMARY OF THE INVENTION
[0006] A preferred embodiment of the invention provides a multiprocessor data processing system including a plurality of cache memories including a cache memory. The cache memory issues a read-type operation for a target cache line. While waiting for receipt of the target cache line, the cache memory monitors to detect a competing store-type operation for the target cache line. In response to receiving the target cache line, the cache memory installs the target cache line in the cache memory, and sets a coherency state of the target cache line installed in the cache memory based on whether the competing store-type operation is detected.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as a preferred mode of use, further objects and advantages thereof, will best be understood by reference to the following detailed description of an illustrative embodiment when read in conjunction with the accompanying drawings, wherein:
[0008] FIG. 1 is a high-level block diagram of an exemplary data processing system in accordance with one embodiment;
[0009] FIG. 2 is a more detailed block diagram of an exemplary processing unit in accordance with one embodiment;
[0010] FIG. 3 is a detailed block diagram of an L2 cache slice in accordance with one embodiment;
[0011] FIG. 4 is an exemplary timing diagram of the processing of a processor memory access operation in a lower level cache in accordance with one embodiment; and
[0012] FIG. 5 is a high level logical flowchart of an exemplary process of servicing a processor memory access operation in accordance with one embodiment.
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENT(S)
[0013] With reference now to the figures, wherein like reference numerals refer to like and corresponding parts throughout, and in particular with reference to FIG. 1, there is illustrated a high-level block diagram depicting an exemplary data processing system in accordance with one embodiment. The data processing system is depicted as a cache coherent symmetric multiprocessor (SMP) data processing system 100. As shown, data processing system 100 includes multiple processing nodes 102a, 102b for processing data and instructions. Processing nodes 102 are coupled to a system interconnect 110 for conveying address, data and control information. System interconnect 110 may be implemented, for example, as a bused interconnect, a switched interconnect or a hybrid interconnect.
[0014] In the depicted embodiment, each processing node 102 is realized as a multi-chip module (MCM) containing four processing units 104a-104d, each preferably realized as a respective integrated circuit. The processing units 104 within each processing node 102 are coupled for communication to each other and system interconnect 110 by a local interconnect 114, which, like system interconnect 110, may be implemented, for example, with one or more buses and/or switches.
[0015] As described below in greater detail with reference to FIG. 2, processing units 104 each include a memory controller 106 coupled to local interconnect 114 to provide an interface to a respective system memory 108. Data and instructions residing in system memories 108 can generally be accessed and modified by a processor core in any processing unit 104 of any processing node 102 within data processing system 100. In alternative embodiments, one or more memory controllers 106 (and system memories 108) can be coupled to system interconnect 110 rather than a local interconnect 114.
[0016] Those skilled in the art will appreciate that SMP data processing system 100 of FIG. 1 can include many additional non-illustrated components, such as interconnect bridges, non-volatile storage, ports for connection to networks or attached devices, etc. Because such additional components are not necessary for an understanding of the described embodiments, they are not illustrated in FIG. 1 or discussed further herein. It should also be understood, however, that the enhancements described herein are applicable to cache coherent data processing systems of diverse architectures and are in no way limited to the generalized data processing system architecture illustrated in FIG. 1.
[0017] Referring now to FIG. 2, there is depicted a more detailed block diagram of an exemplary processing unit 104 in accordance with one embodiment. In the depicted embodiment, each processing unit 104 is an integrated circuit including two processor cores 200a, 200b for processing instructions and data. In a preferred embodiment, each processor core is capable of independently executing multiple hardware threads of execution simultaneously. As depicted, each processor core 200 includes one or more execution units, such as load-store unit (LSU) 202, for executing instructions. The instructions executed by LSU 202 include memory access instructions that request access to a memory block or cause the generation of a request for access to a memory block.
[0018] The operation of each processor core 200 is supported by a multi-level volatile memory hierarchy having at its lowest level a shared system memory 108 accessed via an integrated memory controller 106, and at its upper levels, one or more levels of cache memory, which in the illustrative embodiment include a store-through level one (L1) cache 226 within and private to each processor core 200, and a respective store-in level two (L2) cache 230 for each processor core 200a, 200b. In order to efficiently handle multiple concurrent memory access requests to cacheable addresses, each L2 cache 230 can be implemented with multiple L2 cache slices 230a1-230aN, each of which handles memory access requests for a respective set of real memory addresses.
[0019] Although the illustrated cache hierarchy includes only two levels of cache, those skilled in the art will appreciate that alternative embodiments may include additional levels (L3, L4, etc.) of on-chip or off-chip in-line or lookaside cache, which may be fully inclusive, partially inclusive, or non-inclusive of the contents of the upper levels of cache.
[0020] Each processing unit 104 further includes an integrated and distributed fabric controller 216 responsible for controlling the flow of operations on local interconnect 114 and system interconnect 110 and for implementing the coherency communication required to implement the selected cache coherency protocol. Processing unit 104 further includes an integrated I/O (input/output) controller 214 supporting the attachment of one or more I/O devices (not depicted).
[0021] In operation, when a hardware thread under execution by a processor core 200 includes a memory access instruction requesting a specified memory access operation to be performed, LSU 202 executes the memory access instruction to determine the target real address of the memory access request. LSU 202 then transmits to hash logic 206 within its processor core 200 at least the memory access request, which includes at least a transaction type (ttype) and a target real address. Hash logic 206 hashes the target real address to identify the appropriate destination (e.g., L2 cache slice 230a1-230aN) and dispatches the request for servicing to the appropriate destination.
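The specification leaves the slice-selection hash itself unspecified. The following is a minimal C++ sketch of the idea, assuming 128-byte cache lines and eight L2 slices (both illustrative values), with the hypothetical function select_slice standing in for hash logic 206:

```cpp
#include <cstdint>
#include <cstdio>

// Illustrative parameters -- the patent specifies neither the line size,
// the slice count, nor the hash, only that the target real address is
// hashed to pick a destination slice.
constexpr unsigned kLineBits  = 7;  // assume 128-byte cache lines
constexpr unsigned kNumSlices = 8;  // assume 8 L2 slices per cache

// Every address within a given cache line must map to the same slice, so
// the intra-line offset bits are discarded before hashing.
unsigned select_slice(std::uint64_t real_addr) {
    return static_cast<unsigned>((real_addr >> kLineBits) % kNumSlices);
}

int main() {
    // 0x1000 and 0x1040 lie in the same 128-byte line -> same slice.
    std::printf("%u %u\n", select_slice(0x1000), select_slice(0x1040));
    return 0;
}
```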
[0022] With reference now to FIG. 3, there is illustrated a more detailed block diagram of an exemplary embodiment of one of L2 cache slices 230a1-230aN (referred to generically as L2 cache slice 230a) in accordance with one embodiment. As shown in FIG. 3, L2 cache slice 230a includes a cache array 302 and a directory 308 of the contents of cache array 302. Although not explicitly illustrated, cache array 302 preferably is implemented with a single read port and single write port to reduce the die area required to implement cache array 302.
[0023] Assuming cache array 302 and directory 308 are set associative as is conventional, memory locations in system memories 108 are mapped to particular congruence classes within cache array 302 utilizing predetermined index bits within the system memory (real) addresses. The particular memory blocks stored within the cache lines of cache array 302 are recorded in cache directory 308, which contains one directory entry for each cache line. While not expressly depicted in FIG. 3, it will be understood by those skilled in the art that each directory entry in cache directory 308 includes various fields, for example, a tag field that identifies the real address of the memory block held in the corresponding cache line of cache array 302, a state field that indicates the coherency state of the cache line, an LRU (Least Recently Used) field indicating a replacement order for the cache line with respect to other cache lines in the same congruence class, and inclusivity bits indicating whether the memory block is held in the associated L1 cache 226.
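A directory entry of this kind might be modeled as below; the field widths and member names are illustrative assumptions, not values given in the patent:

```cpp
#include <cstdint>

// Coherence states from Table II of the specification.
enum class CohState : std::uint8_t { M, Me, T, Te, SL, S, I };

// One directory entry per cache line, per paragraph [0023].
struct DirectoryEntry {
    std::uint64_t tag;     // real-address tag of the cached memory block
    CohState      state;   // coherency state of the cache line
    std::uint8_t  lru;     // replacement order within the congruence class
    bool          incl_l1; // inclusivity: block also held in the L1 cache 226?
};
```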
[0024] L2 cache slice 230a includes multiple (e.g., 16) Read-Claim (RC) machines 312a-312n for independently and concurrently servicing load (LD) and store (ST) requests received from the affiliated processor core 200. In order to service remote memory access requests originating from processor cores 200 other than the affiliated processor core 200, L2 cache slice 230a also includes multiple snoop machines 311a-311m. Each snoop machine 311 can independently and concurrently handle a remote memory access request "snooped" from local interconnect 114. As will be appreciated, the servicing of memory access requests by RC machines 312 may require the replacement or invalidation of memory blocks within cache array 302. Accordingly, L2 cache slice 230a includes CO (castout) machines 310 that manage the removal and writeback of memory blocks from cache array 302.
[0025] L2 cache slice 230a further includes an arbiter 305 that controls multiplexers M1-M2 to order the processing of local memory access requests received from affiliated processor core 200 and remote requests snooped on local interconnect 114. Memory access requests, including local load and store operations and remote read and write operations, are forwarded in accordance with the arbitration policy implemented by arbiter 305 to dispatch logic, such as a dispatch pipeline 306, which processes each read/load and store request with respect to directory 308 and cache array 302 over a given number of cycles.
[0026] L2 cache slice 230a also includes an RC queue 320 and a CPI (castout push intervention) queue 318 that respectively buffer data being inserted into and removed from the cache array 302. RC queue 320 includes a number of buffer entries that each individually correspond to a particular one of RC machines 312 such that each RC machine 312 that is dispatched retrieves data from only the designated buffer entry. Similarly, CPI queue 318 includes a number of buffer entries that each individually correspond to a particular one of the castout machines 310 and snoop machines 311, such that each CO machine 310 and each snooper 311 that is dispatched retrieves data from only the respective designated CPI buffer entry.
[0027] Each RC machine 312 also has assigned to it a respective one of multiple RC data (RCDAT) buffers 322 for buffering a memory block read from cache array 302 and/or received from local interconnect 114 via reload bus 323. The RCDAT buffer 322 assigned to each RC machine 312 is preferably constructed with connections and functionality corresponding to the memory access requests that may be serviced by the associated RC machine 312. RCDAT buffers 322 have an associated store data multiplexer M4 that selects data bytes from among its inputs for buffering in the RCDAT buffer 322 in response to unillustrated select signals generated by arbiter 305.
[0028] In operation, processor store requests comprising a transaction type (ttype), target real address and store data are received from the affiliated processor core 200 within a store queue (STQ) 304. From STQ 304, the store data are transmitted to store data multiplexer M4 via data path 324, and the store type and target address are passed to multiplexer M1. Multiplexer M1 also receives as inputs processor load requests from processor core 200 and directory write requests from RC machines 312. In response to unillustrated select signals generated by arbiter 305, multiplexer M1 selects one of its input requests to forward to multiplexer M2, which additionally receives as an input a remote request received from local interconnect 114 via remote request path 326. Arbiter 305 schedules local and remote memory access requests for processing and, based upon the scheduling, generates a sequence of select signals 328. In response to select signals 328 generated by arbiter 305, multiplexer M2 selects either the local request received from multiplexer M1 or the remote request snooped from local interconnect 114 as the next memory access request to be processed.
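A brief C++ sketch of the final selection at multiplexer M2 follows. The patent states only that arbiter 305 schedules local and remote requests; the round-robin alternation below is an assumed policy (chosen so that neither source starves the other), and the type and member names are illustrative:

```cpp
#include <cstdint>
#include <optional>

// Minimal model of a memory access request reaching multiplexer M2.
struct Req { std::uint64_t addr; bool remote; };

// Hypothetical stand-in for arbiter 305 driving multiplexer M2.
struct Arbiter {
    bool prefer_remote = false;

    std::optional<Req> select(std::optional<Req> local,
                              std::optional<Req> remote) {
        std::optional<Req> pick;
        if (remote && (prefer_remote || !local)) pick = remote;
        else if (local) pick = local;
        prefer_remote = !prefer_remote;  // alternate priority each cycle
        return pick;
    }
};
```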
[0029] Referring now to FIG. 4, there is depicted a time-space diagram of an exemplary operation on the interconnect fabric of data processing system 100 of FIG. 1. The operation begins with a request phase 450 in which a master 400, for example, an RC machine 312 of an L2 cache 230, issues a request 402 on the interconnect fabric. Request 402 preferably includes at least a transaction type indicating a type of desired access and a resource identifier (e.g., real address) indicating a resource to be accessed by the request. Common types of requests preferably include those set forth below in Table I.
TABLE I
READ: Requests a copy of the image of a memory block for query purposes
RWITM (Read-With-Intent-To-Modify): Requests a unique copy of the image of a memory block with the intent to update (modify) it and requires destruction of other copies, if any
DCLAIM (Data Claim): Requests authority to promote an existing query-only copy of a memory block to a unique copy with the intent to update (modify) it and requires destruction of other copies, if any
DCBZ (Data Cache Block Zero): Requests authority to create a new unique copy of a memory block without regard to its present state and subsequently modify its contents; requires destruction of other copies, if any
CASTOUT: Copies the image of a memory block from a higher level of memory to a lower level of memory in preparation for the destruction of the higher level copy
WRITE: Requests authority to create a new unique copy of a memory block without regard to its present state and immediately copy the image of the memory block from a higher level memory to a lower level memory in preparation for the destruction of the higher level copy
PARTIAL WRITE: Requests authority to create a new unique copy of a partial memory block without regard to its present state and immediately copy the image of the partial memory block from a higher level memory to a lower level memory in preparation for the destruction of the higher level copy
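The request payload named in paragraph [0029] — a ttype plus a resource identifier — together with the Table I transaction types might be modeled as below; the enum encoding and struct layout are assumptions for illustration only:

```cpp
#include <cstdint>

// Transaction types from Table I.
enum class TType : std::uint8_t {
    READ, RWITM, DCLAIM, DCBZ, CASTOUT, WRITE, PARTIAL_WRITE
};

// Minimal request 402: a type of desired access plus the resource
// (real address) to be accessed.
struct Request {
    TType         ttype;
    std::uint64_t real_addr;
};
```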
[0030] Request 402 is received by snoopers 404, for example, snoopers 311a-311m of L2 cache slice 230a. In general, with some exceptions, snoopers 311 in the same L2 cache 230a as the master of request 402 do not snoop request 402 (i.e., there is generally no self-snooping) because a request 402 is transmitted on the interconnect fabric only if the request 402 cannot be serviced internally by a processing unit 104.
[0031] The operation continues with a partial response phase 455. During partial response phase 455, snoopers 404 that receive and process requests 402 each provide a respective partial response 406 representing the response of at least that snooper 404 to request 402. A snooper 404 within an integrated memory controller 106 determines the partial response 406 to provide based, for example, upon whether that snooper 404 is responsible for the request address and whether it has resources available to service the request. A snooper 404 of an L2 cache 230 may determine its partial response 406 based on, for example, the availability of its L2 cache directory 308, the availability of a snoop logic instance 311 within the snooper 404 to handle the request, and the coherency state associated with the request address in L2 cache directory 308.
[0032] The operation continues with a combined response phase 460. During combined response phase 460, the partial responses 406 of snoopers 404 are logically combined either in stages or all at once by one or more instances of response logic 422 to determine a system-wide combined response (referred to herein as "CR" or "Cresp") 410 to request 402. In one preferred embodiment, which will be assumed hereinafter, the instance of response logic 422 responsible for generating combined response 410 is located in the processing unit 104 containing the master 400 that issued request 402, for example, in fabric controller 216. Response logic 422 provides combined response 410 to master 400 and snoopers 404 via the interconnect fabric to indicate the system-wide response (e.g., success, failure, retry, etc.) to request 402. If the CR 410 indicates success of request 402, CR 410 may indicate, for example, a data source for a requested memory block, a coherence state in which the requested memory block is to be cached by master 400, and whether "cleanup" operations invalidating the requested memory block in one or more L2 caches 230 are required.
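As a hedged sketch of the "all at once" combining performed by response logic 422 — the concrete combining rules are implementation-specific and not given in this patent — the convention below simply lets any retrying snooper force a system-wide Retry, otherwise declaring success:

```cpp
#include <vector>

// Possible partial and combined responses; the patent names success,
// failure, and retry as example system-wide outcomes.
enum class Presp { Null, Ack, Retry };
enum class Cresp { Success, Retry };

// Assumed combining rule: a single busy snooper retries the whole request.
Cresp combine(const std::vector<Presp>& presps) {
    for (Presp p : presps) {
        if (p == Presp::Retry) return Cresp::Retry;
    }
    return Cresp::Success;
}
```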
[0033] In response to receipt of combined response 410, one or more of master 400 and snoopers 404 typically perform one or more operations in order to service request 402. These operations may include supplying data to master 400, invalidating or otherwise updating the coherency state of data cached in one or more L2 caches 230, performing castout operations, writing back data to a system memory 108, etc. If required by request 402, a requested or target memory block may be transmitted to or from master 400 before or after the generation of combined response 410 by response logic 422.
[0034] In the following description, the partial response 406 of a snooper 404 to a request 402 and the operations performed by the snooper 404 in response to the request 402 and/or its combined response 410 will be described with reference to whether that snooper is a Highest Point of Coherency (HPC), a Lowest Point of Coherency (LPC), or neither with respect to the request address specified by the request. An LPC is defined herein as a memory device or I/O device that serves as the ultimate repository for a memory block. In the absence of a caching participant that holds a copy of the memory block, the LPC holds the only image of that memory block. In the absence of an HPC caching participant for the memory block, the LPC has the sole authority to grant or deny requests to modify the memory block. In addition, an LPC, when the LPC data is current and in the absence of a caching participant that can provide the data, provides that data to requests to either read or modify the memory block. If a caching participant has a more current copy of the data, but is unable to provide it to a request, the LPC does not provide stale data and the request is retried. For a typical request in the data processing system embodiment, the LPC will be the memory controller 106 for the system memory 108 holding the referenced memory block. An HPC is defined herein as a uniquely identified device that caches a true image of the memory block (which may or may not be consistent with the corresponding memory block at the LPC) and has the authority to grant or deny a request to modify the memory block. Descriptively, the HPC (even if its copy is consistent with main memory behind the LPC) also provides a copy of the memory block to a requestor in response to any request to read or modify the memory block (cache to cache transfers are faster than LPC to cache transfers).
Thus, for a typical request in the data processing system embodiment, the HPC, if any, will be an L2 cache 230. Although other indicators may be utilized to designate an HPC for a memory block, a preferred embodiment designates the HPC, if any, for a memory block utilizing selected cache coherency state(s) within the L2 cache directory 308 of an L2 cache 230. In a preferred embodiment, the coherency states within the coherency protocol, in addition to providing (1) an indication of whether a cache is the HPC for a memory block, also indicate (2) whether the cached copy is unique (i.e., is the only cached copy systemwide), (3) whether and when the cache can provide a copy of the memory block to a master of a request for the memory block, and (4) whether the cached image of the memory block is consistent with the corresponding memory block at the LPC (system memory). These four attributes can be expressed, for example, in an exemplary variant of the well-known MESI (Modified, Exclusive, Shared, Invalid) protocol summarized below in Table II. Further information regarding the coherency protocol may be found, for example, in U.S. Patent No. 7,389,388, which is hereby incorporated by reference.
TABLE II
Coherence state | HPC? | Unique? | Data source? | Consistent with LPC? | Legal concurrent states
M | yes | yes | yes, before CR | no | I (& LPC)
Me | yes | yes | yes, before CR | yes | I (& LPC)
T, Te | yes | unknown | yes, after CR if none provided before CR | no | SL, S, I (& LPC)
SL | no | unknown | yes, before CR | unknown | T, S, I (& LPC)
S | no | unknown | no | unknown | T, SL, S, I (& LPC)
I | no | n/a | no | n/a | M, Me, T, SL, S, I (& LPC)
[0035] Of note in Table II above are the T, SL and S states, which are all "shared" coherency states in that a cache memory may contemporaneously hold a copy of a cache line held in any of these states by another cache memory. The T or Te state identifies an HPC cache memory that formerly held the associated cache line in one of the M or Me states, respectively, and sourced a query-only copy of the associated cache line to another cache memory. As an HPC, a cache memory holding a cache line in the T or Te coherence state has the authority to modify the cache line or to give such authority to another cache memory. A cache memory holding a cache line in the Tx state (e.g., T or Te) serves as the cache data source of last resort (after Cresp) for query-only copies of that cache line in that the cache memory will only source a query-only copy to another cache memory if no cache memory holding the cache line in the SL state is available to serve as a data source (before Cresp).
[0036] The SL state is formed at a cache memory in response to that cache memory receiving a query-only copy of a cache line from a cache memory in the T coherence state. Although the SL state is not an HPC coherence state, a cache memory holding a cache line in the SL state has the ability to source a query-only copy of that cache line to another cache memory and can do so prior to receipt of Cresp. In response to sourcing a query-only copy of a cache line to another cache memory (which assumes the SL state), the cache memory sourcing the query-only copy of the cache line updates its coherency state for the cache line from SL to S. Thus, implementation of the SL coherence state can cause numerous query-only copies of frequently queried cache lines to be created throughout a multiprocessor data processing system, advantageously decreasing latencies of query-only access to those cache lines.
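The SL handoff of paragraph [0036] can be captured in a few lines of C++; this is a minimal sketch with illustrative names, assuming only the state transitions described above:

```cpp
#include <cassert>

// Coherence states from Table II.
enum class CohState { M, Me, T, Te, SL, S, I };

struct CacheCopy { CohState state; };

// A cache holding the line in SL may source a query-only copy before
// Cresp; the recipient assumes SL while the sourcing cache steps down to S.
bool source_query_copy(CacheCopy& source, CacheCopy& requester) {
    if (source.state != CohState::SL) return false;  // only SL sources early
    source.state    = CohState::S;
    requester.state = CohState::SL;
    return true;
}

int main() {
    CacheCopy a{CohState::SL}, b{CohState::I};
    assert(source_query_copy(a, b));
    assert(a.state == CohState::S && b.state == CohState::SL);
    return 0;
}
```

Note how the SL designation migrates with each handoff: the newest recipient becomes the early data source, which is exactly what multiplies query-only copies through the system.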
[0037] Referring again to FIG. 4, the HPC, if any, for a memory block referenced in a request 402, or in the absence of an HPC, the LPC of the memory block, preferably has the responsibility of protecting the transfer of ownership of a memory block, if necessary, in response to a request 402. In the exemplary scenario shown in FIG. 4, a snooper 404n at the HPC (or in the absence of an HPC, the LPC) for the memory block specified by the request address of request 402 protects the transfer of ownership of the requested memory block to master 400 during a protection window 412a that extends from the time that snooper 404n determines its partial response 406 until snooper 404n receives combined response 410 and during a subsequent window extension 412b extending a programmable time beyond receipt by snooper 404n of combined response 410. During protection window 412a and window extension 412b, snooper 404n protects the transfer of ownership by providing partial responses 406 to other requests specifying the same request address that prevent other masters from obtaining ownership (e.g., a retry partial response) until ownership has been successfully transferred to master 400. Master 400 likewise initiates a protection window 413 to protect its ownership of the memory block requested in request 402 following receipt of combined response 410.
[0038] Because snoopers 404 all have limited resources for handling the CPU and I/O requests described above, several different levels of partial responses and corresponding CRs are possible.
For example, if a snooper within a memory controller 106 that is responsible for a requested memory block has a queue available to handle a request, the snooper may respond with a partial response indicating that it is able to serve as the LPC for the request. If, on the other hand, the snooper has no queue available to handle the request, the snooper may respond with a partial response indicating that it is the LPC for the memory block, but is unable to currently service the request. Similarly, a snooper 311 in an L2 cache 230 may require an available instance of snoop logic and access to L2 cache directory 308 in order to handle a request. Absence of access to either (or both) of these resources results in a partial response (and corresponding CR) signaling an inability to service the request due to absence of a required resource.
[0039] As discussed above, read-type operations are generally prioritized over store-type operations in data processing systems because the time-critical path through a software program is generally determined by load latency. The prioritization of read-type operations over store-type operations can be expressed in a number of data processing system attributes, including the hardware architecture, memory model and coherency protocol implemented by a given data processing system. For example, the coherency protocol summarized in Table II reflects this prioritization by favoring the formation of numerous distributed query-only (SL or S) copies of a frequently queried cache line throughout a multiprocessor data processing system. While the availability of numerous sources of a query-only copy of a cache line reduces access latency of non-storage-modifying operations, the presence of many copies of the cache line distributed throughout the data processing system can increase the access latency of storage-modifying accesses because any one of the cache memories holding a query-only copy of the cache line and servicing a request for query-only access can force a competing request for storage-modifying access to be retried. In certain cases, repeated retry of the storage-modifying access can slow or even halt forward progress of the program (e.g., if the storage-modifying access is required to release a highly contended lock).
[0040] As described in detail below with reference to FIG. 5, performance issues associated with read prioritization can be addressed by reducing contention experienced by store-type operations for selected memory blocks. In particular, contention for store-type operations in a data processing system can be reduced by limiting the replication of shared copies of a cache line that is the target of competing read-type and store-type operations throughout the system, by setting the coherence state of the cache line to a coherence state indicating that the cache memory holding the cache line cannot source additional copies of the cache line to other requestors. By reducing the replication of additional copies of the cache line, the probability that a store-type operation targeting the cache line will be forced to be retried by a snooping cache memory is also reduced.
[0041] Turning now to FIG. 5, there is illustrated a high level logical flowchart of an exemplary process by which a cache memory dynamically sets a coherence state for a cache line that is the target of competing read-type and store-type operations to limit replication of additional copies of the cache line. For clarity, the description of the flowchart will occasionally refer back to FIGS. 1-4.
[0042] The process depicted in FIG. 5 begins at block 500 and then proceeds to block 501, which illustrates an RC machine 312 in an L2 cache memory 230 issuing a read-type request on local interconnect 114, for example, in response to a processor load request that misses in directory 308. The read-type operation may be a READ request as previously described or any other non-storage-modifying access to a target cache line. As depicted at block 502, while RC machine 312 awaits receipt of the combined response for its read-type request, RC machine 312 remains in a busy state. As described above with reference to FIG. 4, the read-type request is received by snoopers 404, each of which provides to response logic 422 a Presp indicating the ability of that snooper 404 to service the read-type request. Response logic 422 generates a Cresp from the Presps received from snoopers 404 and provides the Cresp to the RC machine 312 and snoopers 404. The Cresp may designate, for example, a data source that will supply the requested (target) cache line to the requesting L2 cache memory 230 and a coherence state to be associated with the cache line at the requesting L2 cache memory 230.
[0043] In response to the RC machine 312 detecting a Cresp for the read-type request, the process proceeds to block 510, which is described below. While the Cresp has not yet been received, the RC machine 312 monitors to detect any competing store-type operation (i.e., any storage-modifying operation) directed to the same target cache line as the read-type request (block 506). If no such competing store is detected, the process continues at block 502, which has been described. Otherwise, if a competing store-type operation is detected at block 506 before a Cresp is received at block 504, then the process continues at block 508.
[0044] At block 508, RC machine 312 sets an override flag to indicate that if an SL coherence state is designated by the forthcoming Cresp, the designated SL state should be overridden and an S coherence state should instead be associated with the target cache line at the requesting L2 cache 230. More generally, RC machine 312 sets a flag to be prepared to override any designated coherence state that indicates that its L2 cache memory 230 is permitted to source copies of the target cache line to future requestors with a substitute coherence state that indicates that the cache memory may not source copies of the target cache line to future requestors. Holding the cache line in a coherence state that indicates that the L2 cache memory 230 cannot source copies of the cache line to requestors potentially reduces the number of shared cache snoopers that may intervene data before Cresp to future requests, instead causing the HPC cache to serve as an intervention data source of last resort. Following block 508, the process returns to block 502, which has been described.
[0045] Referring now to block 510, RC machine 312 determines whether or not the coherence state designated for the requested cache line would allow the L2 cache memory 230 to source the cache line to future requestors before Cresp (e.g., is the SL state). If the Cresp does not designate a coherence state that would allow the requesting L2 cache memory 230 to source the cache line to such future requestors, then RC machine 312 updates the entry in directory 308 corresponding to the target cache line to the coherence state indicated by the Cresp (block 516). Thereafter, the process passes to block 518, which depicts RC machine 312 clearing the override flag, if required. After block 518, the process continues at block 520, and the RC machine 312 continues to process the read-type operation normally. For example, at block 520, the RC machine 312 installs the target cache line received in response to the read-type request in cache array 302. Following block 520, the RC machine 312 returns to an idle state, and the process ends at block 522.
[0046] Returning to block 510, if the Cresp designates a coherence state for the requested cache line that indicates that the L2 cache memory 230 would be permitted to source a copy of the target cache line to another requestor (e.g., SL), the process continues at block 512. At block 512, RC machine 312 determines whether the override flag was set at block 508. However, in some instances, block 508 will never be reached and, thus, the override flag will never be set. If, at block 512, the RC machine 312 determines the override flag was not set, then the process continues at block 516, which is described above. If RC machine 312 determines at block 512 that the override flag was set, then the process continues at block 514, which illustrates the RC machine 312 setting the entry in directory 308 to a state that indicates that the L2 cache memory 230 cannot source copies of the cache line to requestors, for example, the Shared (S) state. The process continues at block 518, which has been described above.
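The essence of the FIG. 5 flow — arm an override flag on a snooped competing store, then demote a Cresp-designated sourcing state at install time — condenses to the following C++ sketch; the type and member names are illustrative, not from the patent:

```cpp
// Only the states the FIG. 5 flow inspects are modeled here.
enum class CohState { SL, S, Other };

struct RCMachine {
    bool override_flag = false;

    // Blocks 506/508: a competing store-type operation snooped while the
    // read-type request is outstanding arms the override flag.
    void on_competing_store_snooped() { override_flag = true; }

    // Blocks 510-516: choose the directory state to install once the Cresp
    // arrives. A designated sourcing state (SL) is demoted to S when the
    // override flag is set; otherwise the designated state is honored.
    CohState state_to_install(CohState cresp_state) const {
        if (cresp_state == CohState::SL && override_flag)
            return CohState::S;   // block 514: refuse to become a data source
        return cresp_state;       // block 516
    }
};
```

The design point worth noting is that the override is purely local: the requesting cache still receives the line and services its own load, but quietly declines the data-sourcing role, so contenders for store access face one fewer early intervener.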
[0047] As has been described, a multiprocessor data processing system includes a plurality of cache memories including a cache memory. The cache memory issues a read-type operation for a target cache line. While waiting for receipt of the target cache line, the cache memory monitors to detect a competing store-type operation for the target cache line. In response to receiving the target cache line, the cache memory installs the target cache line in the cache memory, and sets a coherency state of the target cache line installed in the cache memory based on whether the competing store-type operation is detected.
[0048] While various embodiments have been particularly shown and described, it will be understood by those skilled in the art that various changes in form and detail may be made therein, and these alternate implementations all fall within the scope of the appended claims.

Claims (19)

  1. A method in a multiprocessor data processing system including a plurality of cache memories including a cache memory, the method comprising: issuing, by the cache memory, a read-type operation for a target cache line; while waiting for receipt of the target cache line requested by the read-type operation, monitoring, by the cache memory, to detect a competing store-type operation for the target cache line; in response to receiving the target cache line: installing the target cache line in the cache memory; and setting a coherence state for the target cache line installed in the cache memory based on whether the competing store-type operation is detected.
  2. The method of claim 1, wherein the coherence state is a first state indicating that the target cache line can source copies of the target cache line to requestors.
  3. The method of claim 1 or 2, further comprising: in response to issuing the read-type operation, the cache memory receiving a coherence message indicating the first state, wherein setting the coherence state for the target cache line comprises the cache memory setting the coherence state to the first state indicated by the coherence message if the competing store-type operation is not detected.
  4. The method of claim 1, 2 or 3, wherein the coherence state is a second state indicating that the cache line cannot source copies of the cache line to requestors.
  5. The method of claim 4, wherein setting the coherence state comprises the cache memory setting the coherence state to the second state if the competing store-type operation is detected.
  6. The method of claim 1, further comprising: receiving a system-wide coherence message indicating a first state for the target cache line, wherein the first state indicates that the target cache line can source copies of the target cache line to requestors, wherein setting the coherence state comprises setting the coherence state to a second state indicating that the target cache line cannot source copies of the target cache line to requestors.
  7. A cache memory for a multiprocessor data processing system, the cache memory comprising: a data array; a directory of contents of the data array; and a Read-Claim (RC) machine that processes requests received from an associated processor core, wherein the RC machine: issues a read-type operation for a target cache line; while waiting for receipt of the target cache line, monitors to detect a competing store-type operation for the target cache line; in response to receiving the target cache line: installs the target cache line in the cache memory; and sets a coherence state for the target cache line installed in the cache memory based on whether the competing store-type operation is detected.
  8. The cache memory of claim 7, wherein the coherence state is a first state indicating that the target cache line can source copies of the target cache line to requestors.
  9. The cache memory of claim 8, wherein the RC machine, responsive to issuing the read-type operation, receives a coherence message indicating the first state, and wherein the RC machine sets the coherence state to the first state indicated by the coherence message if the competing store-type operation is not detected.
  10. The cache memory of claim 7, 8 or 9, wherein the coherence state is a second state indicating that the cache line cannot source copies of the cache line to requestors.
  11. The cache memory of claim 10, wherein the RC machine sets the coherence state to the second state if the competing store-type operation is detected.
  12. The cache memory of claim 7, wherein the RC machine receives a system-wide coherence message indicating a first state for the target cache line, wherein the first state indicates that the target cache line can source copies of the target cache line to requestors, and wherein the RC machine sets the coherence state to a second state indicating that the target cache line cannot source copies of the target cache line to requestors.
  13. A processing unit, comprising: a cache memory according to any of claims 7 to 12; and the associated processor core coupled to the cache memory.
  14. A multi-processor data processing system, comprising: an interconnect fabric; and a plurality of processing units coupled to the interconnect fabric, wherein each of the plurality of processing units includes a respective one of a plurality of cache memories, wherein a cache memory among the plurality of cache memories includes a Read-Claim (RC) machine that processes operations received from an interconnect, wherein the RC machine: issues a read-type operation for a target cache line; while waiting for receipt of the target cache line, monitors to detect a competing store-type operation for the target cache line; in response to receiving the target cache line: installs the target cache line in the cache memory; and sets a coherence state for the target cache line installed in the cache memory based on whether the competing store-type operation is detected.
  15. The multi-processor data processing system of claim 14, wherein the coherence state is a first state indicating that the target cache line can source copies of the target cache line to requestors.
  16. The multi-processor data processing system of claim 15, wherein the RC machine, responsive to issuing the read-type operation, receives a coherence message indicating the first state, and wherein the RC machine sets the coherence state to the first state indicated by the coherence message if the competing store-type operation is not detected.
  17. The multi-processor data processing system of claim 14, 15 or 16, wherein the coherence state is a second state indicating that the cache line cannot source copies of the cache line to requestors.
  18. The multi-processor data processing system of claim 17, wherein the RC machine sets the coherence state to the second state if the competing store-type operation is detected.
  19. The multi-processor data processing system of claim 14, wherein the RC machine receives a system-wide coherence message indicating a first state for the target cache line, wherein the first state indicates that the target cache line can source copies of the target cache line to requestors, and wherein the RC machine sets the coherence state to a second state indicating that the target cache line cannot source copies of the target cache line to requestors.
GB1300936.0A 2012-02-08 2013-01-18 Forward progress mechanism for stores in the presence of load contention in a system favoring loads by state alteration Active GB2500964B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB201336898 2012-02-08

Publications (3)

Publication Number Publication Date
GB201300936D0 GB201300936D0 (en) 2013-03-06
GB2500964A true GB2500964A (en) 2013-10-09
GB2500964B GB2500964B (en) 2014-06-11

Family

ID=47843561

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1300936.0A Active GB2500964B (en) 2012-02-08 2013-01-18 Forward progress mechanism for stores in the presence of load contention in a system favoring loads by state alteration

Country Status (1)

Country Link
GB (1) GB2500964B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0923031A1 (en) * 1997-12-11 1999-06-16 BULL HN INFORMATION SYSTEMS ITALIA S.p.A. Method for reading data from a shared memory in a multiprocessor computer system
US20060265466A1 (en) * 2005-05-17 2006-11-23 Takashi Yasui Shared memory multiprocessor system
US20070083716A1 (en) * 2005-10-06 2007-04-12 Ramakrishnan Rajamony Chained cache coherency states for sequential non-homogeneous access to a cache line with outstanding data response

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7546422B2 (en) * 2002-08-28 2009-06-09 Intel Corporation Method and apparatus for the synchronization of distributed caches
US7404046B2 (en) * 2005-02-10 2008-07-22 International Business Machines Corporation Cache memory, processing unit, data processing system and method for filtering snooped operations
US7447845B2 (en) * 2006-07-13 2008-11-04 International Business Machines Corporation Data processing system, processor and method of data processing in which local memory access requests are serviced by state machines with differing functionality

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0923031A1 (en) * 1997-12-11 1999-06-16 BULL HN INFORMATION SYSTEMS ITALIA S.p.A. Method for reading data from a shared memory in a multiprocessor computer system
US20060265466A1 (en) * 2005-05-17 2006-11-23 Takashi Yasui Shared memory multiprocessor system
US20070083716A1 (en) * 2005-10-06 2007-04-12 Ramakrishnan Rajamony Chained cache coherency states for sequential non-homogeneous access to a cache line with outstanding data response

Also Published As

Publication number Publication date
GB2500964B (en) 2014-06-11
GB201300936D0 (en) 2013-03-06

Similar Documents

Publication Publication Date Title
US8806148B2 (en) Forward progress mechanism for stores in the presence of load contention in a system favoring loads by state alteration
US8793442B2 (en) Forward progress mechanism for stores in the presence of load contention in a system favoring loads
TWI391821B (en) Processor unit, data processing system and method for issuing a request on an interconnect fabric without reference to a lower level cache based upon a tagged cache state
JP5078396B2 (en) Data processing system, cache system, and method for updating invalid coherency state in response to operation snooping
US8140770B2 (en) Data processing system and method for predictively selecting a scope of broadcast of an operation
US7389388B2 (en) Data processing system and method for efficient communication utilizing an in coherency state
US7584329B2 (en) Data processing system and method for efficient communication utilizing an Ig coherency state
US7467323B2 (en) Data processing system and method for efficient storage of metadata in a system memory
US8140771B2 (en) Partial cache line storage-modifying operation based upon a hint
US8108619B2 (en) Cache management for partial cache line operations
JP5105863B2 (en) Data processing system, method, and memory controller for processing flash operations in a data processing system having multiple coherency domains
US7484042B2 (en) Data processing system and method for predictively selecting a scope of a prefetch operation
US8117401B2 (en) Interconnect operation indicating acceptability of partial data delivery
US7454577B2 (en) Data processing system and method for efficient communication utilizing an Tn and Ten coherency states
US7958309B2 (en) Dynamic selection of a memory access size
US8230178B2 (en) Data processing system and method for efficient coherency communication utilizing coherency domain indicators
US20060179249A1 (en) Data processing system and method for predictively selecting a scope of broadcast of an operation utilizing a location of a memory
KR101072174B1 (en) System and method for implementing an enhanced hover state with active prefetches
US7366844B2 (en) Data processing system and method for handling castout collisions
US20090198910A1 (en) Data processing system, processor and method that support a touch of a partial cache line of data
JP5063059B2 (en) Method, data processing system, memory controller (data processing system and method enabling pipelining of I / O write operations and multiple operation ranges)
GB2500964A (en) Forward progress mechanism for stores in the presence of load contention in a system favouring loads by state alteration.

Legal Events

Date Code Title Description
746 Register noted 'licences of right' (sect. 46/1977)

Effective date: 20140619