CN1149494C - Method and system for keeping coherency of cache buffer memory - Google Patents

Method and system for keeping coherency of cache buffer memory

Info

Publication number
CN1149494C
CN1149494C CNB001188542A CN00118854A
Authority
CN
China
Prior art keywords
system bus
cache memory
cache
storage operation
write-through
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB001188542A
Other languages
Chinese (zh)
Other versions
CN1278625A (en)
Inventor
J·M·努内斯
T·A·彼得森
M·J·沙利文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Publication of CN1278625A publication Critical patent/CN1278625A/en
Application granted granted Critical
Publication of CN1149494C publication Critical patent/CN1149494C/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F12/0868Data transfer between cache memory and other subsystems, e.g. storage devices or host systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0811Multiuser, multiprocessor or multiprocessing cache systems with multilevel cache hierarchies
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38Information transfer, e.g. on bus
    • G06F13/40Bus structure
    • G06F13/4063Device-to-bus coupling
    • G06F13/4068Electrical coupling
    • G06F13/4072Drivers or receivers

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A method and system for maintaining cache coherency for write-through store operations in a data processing system. A write-through store operation is passed from a particular processor to the system bus through any caches of said multiple levels of cache which are interposed between the particular processor and the system bus. The write-through store operation is performed in any of the interposed caches in which a cache hit for the write-through store operation is obtained. All caches of said multiple levels of cache, which are not interposed between the particular processor and the system bus, are snooped from an external snoop path of the system bus with a data address of said write-through operation until the write-through operation is successful, wherein the cache coherency point for the memory hierarchy is set at the system bus for write-through store operations such that the write-through operation is completed successfully prior to completion of any other instructions to the same data address.

Description

Method and system for maintaining cache coherency
Technical field
The present invention relates generally to an improved method and system for data processing, and in particular to an improved method and system for maintaining cache coherency in a multiprocessor data processing system. More particularly, the present invention relates to a method and system for maintaining cache coherency during write-through store operations in a multiprocessor system.
Background art
Most modern high-performance data processing system architectures include multiple levels of cache memory within the memory hierarchy. Caches are used so that frequently accessed data can be retrieved more quickly than from system memory, thereby improving overall performance. The successive cache levels are typically organized with progressively longer access latencies: smaller, faster caches occupy the levels closest to the processor or processors, while larger, slower caches occupy the levels closest to system memory.
In a conventional symmetric multiprocessor (SMP) data processing system, all of the processors are generally identical: they all utilize a common instruction set and communication protocol, have similar hardware architectures, and are generally provided with similar memory hierarchies. For example, a conventional SMP data processing system comprises a system memory; a plurality of processing units, each including a processor and one or more levels of cache memory; and a system bus coupling the processing units to each other and to the system memory. Many such systems include at least one level of cache that is shared between two or more processors. To obtain valid execution results in an SMP data processing system, it is important to maintain a coherent memory hierarchy, that is, to provide the same view of the memory contents to all processors.
Although such systems are designed to maintain cache coherency by snooping, a "retry" response can cause processor operation errors. In particular, for a write-through store, once the write update has been performed and a subsequent load has been allowed to read the new data, retrying that same write-through store is problematic.
There is therefore a need for a method of maintaining cache coherency in a multiprocessor system, and in particular of maintaining cache coherency for write-through store operations in the presence of retries.
Summary of the invention
It is therefore an object of the present invention to provide an improved method and system for data processing. It is another object of the present invention to provide an improved method and system for maintaining cache coherency in a multiprocessor data processing system.
It is a further object of the present invention to provide an improved method and system for maintaining cache coherency during write-through operations in a multiprocessor system.
The foregoing objects are achieved by the method and system described herein. The method and system of the present invention maintain cache coherency during write-through store operations in a data processing system, wherein the data processing system comprises a plurality of processors coupled to a system bus through a memory hierarchy, and wherein the memory hierarchy comprises multiple levels of cache memory. A write-through store operation is passed from a particular processor to the system bus through any caches of the multiple levels that are interposed between that processor and the system bus. The write-through store operation is performed in any interposed cache in which a cache hit for the operation is obtained. All caches of the multiple levels that are not interposed between the particular processor and the system bus are snooped, via an external snoop path of the system bus, with the data address of the write-through operation until the write-through operation succeeds. The cache coherency point of the memory hierarchy is thereby set at the system bus for write-through store operations, so that the write-through operation completes successfully before any other instruction to the same data address completes.
The above as well as additional objects, features, and advantages of the present invention will become apparent from the following detailed description.
Description of drawings
The novel features of the invention are set forth in the appended claims. The invention itself, however, as well as a preferred mode of use and further objects and advantages thereof, will best be understood by reference to the following detailed description of illustrative embodiments when read in conjunction with the accompanying drawings, wherein:
Fig. 1 is a timing diagram showing an error that occurs when a write-through store instruction is retried under a prior snooping technique;
Fig. 2 is a high-level block diagram of a multiprocessor data processing system in accordance with the present invention;
Fig. 3 is a timing diagram showing the performance of a write-through store instruction with a self-snooping technique; and
Fig. 4 is a high-level logic flowchart of a write-through store operation process.
Detailed description of the embodiments
With reference now to the figures, and in particular with reference to Fig. 2, there is shown a high-level block diagram of a multiprocessor data processing system in accordance with the present invention. As illustrated, data processing system 8 includes a number of processor cores 10a-10n paired with a further number of processor cores 11a-11n, each preferably comprising one of the PowerPC line of processors available from International Business Machines Corporation. In addition to the conventional registers, instruction flow logic, and execution units utilized to execute program instructions, each of processor cores 10a-10n and 11a-11n also includes an on-board level one (L1) cache 12a-12n or 13a-13n, which temporarily stores instructions and data that are likely to be accessed by the associated processor. Although the L1 caches 12a-12n and 13a-13n are illustrated in Fig. 2 as unified caches storing both instructions and data (hereinafter referred to simply as data), those skilled in the art will appreciate that each of the L1 caches could alternatively be implemented as separate instruction and data caches.
In order to minimize access latency, data processing system 8 also includes one or more additional levels of cache memory, such as level two (L2) caches 14a-14n, which are utilized to stage data to the L1 caches 12a-12n and 13a-13n. In other words, L2 caches 14a-14n function as intermediate storage between system memory 18 and the L1 caches, and can typically store a much larger amount of data than the L1 caches, but at a longer access latency. For example, L2 caches 14a-14n may have a storage capacity of 256 or 512 kilobytes, while L1 caches 12a-12n and 13a-13n may have a storage capacity of 64 or 128 kilobytes. As noted above, although Fig. 2 illustrates only two levels of cache, the memory hierarchy of data processing system 8 could be expanded to include additional levels (L3, L4, etc.) of serially-connected or lookaside caches.
As shown, data processing system 8 further includes input/output (I/O) devices 20, system memory 18, and non-volatile storage 22, which are each coupled to interconnect 16. I/O devices 20 comprise conventional peripheral devices, such as a display device, keyboard, and graphical pointer, which are connected to interconnect 16 via conventional adapters. Non-volatile storage 22 stores an operating system and other software, which are loaded into volatile system memory 18 when data processing system 8 is powered on. Of course, those skilled in the art will appreciate that data processing system 8 can include many additional components that are not shown in Fig. 2, such as serial and parallel ports for connection to networks or attached devices, a memory controller regulating access to system memory 18, and the like.
Interconnect 16, which comprises one or more buses including a system bus, serves as a conduit for communication among L2 caches 14a-14n, system memory 18, I/O devices 20, and non-volatile storage 22. A typical communication transaction on interconnect 16 includes a source tag indicating the source of the transaction, a destination tag specifying the intended recipient, an address, and/or data. Each device coupled to interconnect 16 preferably snoops all communication transactions on interconnect 16 to determine whether its coherency state should be updated in response to the transaction. An external snoop path from the system bus of interconnect 16 to each cache is preferably provided.
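As an illustrative aid only (not part of the original disclosure; field names are hypothetical), such a transaction on interconnect 16 might be represented in C as:

    #include <stdint.h>

    /* One communication transaction on interconnect 16: a source tag, a
     * destination tag, an address to be snooped, and an optional data payload. */
    typedef struct {
        uint32_t source_tag;       /* indicates the source of the transaction  */
        uint32_t destination_tag;  /* specifies the intended recipient         */
        uint64_t address;          /* data address snooped by attached devices */
        uint8_t  data[64];         /* data payload, if the transaction has one */
    } bus_transaction_t;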
A coherent memory hierarchy is maintained through the use of a selected memory coherency protocol, for example the MESI protocol. In the MESI protocol, an indication of coherency state is stored in association with each coherency granule (e.g., cache line or sector) of at least all upper-level (cache) memories. Each coherency granule can have one of four states, modified (M), exclusive (E), shared (S), or invalid (I), which can be encoded by two bits in the cache directory. The modified state indicates that a coherency granule is valid only in the cache storing the modified granule and that the value of the modified granule has not yet been written to system memory. When a coherency granule is indicated as exclusive, the granule resides only in that cache, of all the caches at that level of the memory hierarchy; the data in the exclusive state is, however, consistent with system memory. If a coherency granule is marked as shared in the cache directory, the granule resides in the associated cache and possibly in other caches at the same level of the memory hierarchy, and all copies of the granule are consistent with system memory. Finally, the invalid state indicates that neither the data nor the address tag associated with a coherency granule resides in the cache.
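The following minimal C sketch (illustrative only; identifiers are hypothetical and not part of the original disclosure) encodes the four MESI states in the two directory bits described above, together with the response of a locally held granule to a store snooped from another processor:

    /* Two-bit MESI encoding held in the cache directory. */
    typedef enum {
        MESI_INVALID   = 0,
        MESI_SHARED    = 1,
        MESI_EXCLUSIVE = 2,
        MESI_MODIFIED  = 3
    } mesi_state_t;

    /* A store snooped from another processor invalidates the local copy so that
     * subsequent loads must fetch the newly written data. */
    static mesi_state_t on_snooped_remote_store(mesi_state_t current)
    {
        (void)current;          /* M, E, or S all collapse to invalid */
        return MESI_INVALID;
    }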
In the SMP system, each cache line (block) of data preferably includes an address tag field, a state bit field, an inclusivity bit field, and a value field for storing the actual instructions or data. The state bit field and inclusivity bit field are used to maintain cache coherency in the multiprocessor computer system (indicating whether the value stored in the cache is valid). The address tag is a subset of the full address of the corresponding memory block. A match between an incoming address and one of the tags within the address tag field, with the entry in a valid state, indicates a cache "hit".
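A minimal illustrative sketch (hypothetical field names, not part of the original disclosure) of such a cache line and of the tag comparison that produces a hit:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint32_t tag;          /* address tag: subset of the block's full address */
        uint8_t  state;        /* state bits (two MESI bits; 0 encodes invalid)   */
        bool     inclusive;    /* inclusivity bit                                 */
        uint8_t  value[64];    /* value field: the cached instructions or data    */
    } cache_line_t;

    /* A hit requires both a tag match and a valid (non-invalid) state. */
    static bool cache_hit(const cache_line_t *line, uint32_t incoming_tag)
    {
        return line->state != 0 && line->tag == incoming_tag;
    }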
In maintaining cache coherency, a write-through cache store does not allocate a cache line or gain ownership (the E or M state of the MESI protocol) before storing into the cache. In particular, a write-through or store-through cache works by providing, during a processor write operation, a write to both the cache and main memory, thereby guaranteeing consistency between the data in the cache and in main memory. To maintain cache coherency, a coherent write-through store must invalidate any valid cache line for the same address in the caches above the coherency point that lie outside the path of the initiating processor, to guarantee that subsequent loads from all processors obtain the newly updated data.
Typically, a bus "snooping" technique is used to invalidate cache lines in the caches above the coherency point. Each cache preferably includes snoop logic for this purpose. Whenever a read or write is performed, the address of the data is propagated by the initiating processor core to all other caches sharing a common bus. Each snoop logic unit snoops the address from the bus and compares it with the address tag array of its cache. If a hit occurs, a snoop response is returned and further operations are allowed in order to maintain cache coherency, for example invalidating the cache line in question. In addition, when a cache has a condition that prevents it from snooping properly, such as a modified copy that must first be pushed out of the cache, a "retry" snoop response is issued by the cache's bus snoop logic. On a retry, the processor core that initiated the data address retries the read or write.
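An illustrative sketch (hypothetical names, not part of the original disclosure) of the snoop handling just described: the snooped address is compared against the tag array, a matching line is invalidated, and a retry response is issued when the cache cannot service the snoop, for example because a modified copy must first be pushed out:

    #include <stdbool.h>
    #include <stdint.h>

    typedef enum { SNOOP_CLEAN, SNOOP_HIT_INVALIDATED, SNOOP_RETRY } snoop_response_t;

    typedef struct {
        uint32_t tag;
        uint8_t  state;            /* MESI state bits; 0 encodes invalid      */
        bool     castout_pending;  /* modified data must be pushed out first  */
    } tag_entry_t;

    static snoop_response_t snoop(tag_entry_t *tags, int n_lines, uint32_t snooped_tag)
    {
        for (int i = 0; i < n_lines; i++) {
            if (tags[i].state != 0 && tags[i].tag == snooped_tag) {
                if (tags[i].castout_pending)
                    return SNOOP_RETRY;        /* cannot snoop properly yet   */
                tags[i].state = 0;             /* invalidate the local copy   */
                return SNOOP_HIT_INVALIDATED;
            }
        }
        return SNOOP_CLEAN;                    /* no copy held; nothing to do */
    }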
Fig. 1 is a timing diagram showing an error that occurs when a write-through store instruction is retried under a snooping technique that is an alternative to the preferred embodiment. In this example, an SMP structure is assumed that has a processor core 0 and a processor core 1, an L1 cache associated with each core, and an L2 cache shared by both cores. The point at which cache coherency is maintained is located at the L2 cache in this example. Additional processor cores and levels of cache may also be present, but are not utilized for the purposes of Fig. 1.
For this example, the pseudo-code sequence is:

Processor core 0:
    Store 2 to A

Processor core 1:
    loop: load A
          if A != 2, goto loop
          Store 3 to A
If the store from processor core 0 is performed but retried, and the load and store from processor core 1 are allowed to proceed before the store from processor core 0 is performed again, the resulting coherent stored value of address A is 2, which is incorrect.
In the first clock cycle 60 shown in the timing diagram, the bus is arbitrated by core 0 for the write-through store operation (core 0 WTST), whereby the address and data (RA) of the write-through store are sent to the L2 cache. Thereafter, at reference numeral 62, the data address of the write-through store is propagated on the system bus to all non-initiating cores (core 1), whereby the non-initiating cores snoop the data address. In addition, during the same cycle, at reference numeral 64, the data address is compared with the L2 tag array to determine whether a previous version of the data resides in the L2 cache. In the third cycle, at reference numeral 66, the snooped address is compared with the L1 tag array in the L1 cache associated with core 1. In addition, an L2 hit is returned in the L2 cache, as depicted at reference numeral 68. Thereafter, the L2 data write is performed by placing the write command in the pipeline, updating the L2 cache to "A=2", as depicted at reference numeral 70. Next, during the fourth clock cycle, the snoop response of core 1's L1 cache is returned as a retry, as depicted at reference numeral 72.
Note that, under the non-preferred snooping technique described, the write-through store updates the L2 cache before the snoop response indicating a retry is returned. The retry may be returned for reasons including a snoop hit on a sector in the M state or a snoop hit on a queued invalidation operation. When the retry is returned by core 1's L1 cache, core 0 sets the write-through store operation for retry. Since cache coherency is maintained at the L2 cache, the retried write-through store updates the L2 cache and any higher level of cache before the write-through operation is sent to the bus.
When " A!=2 " time, processor cores 1 is waited in circulation.When writing the L2 cache memory from the storage operation of kernel 0, even retry is arranged in the kernel 0, the bus that kernel 1 ruling is loaded is also propagated the data address shown in the reference number 74.Next, the L2 mark array of address and L2 cache memory compares, shown in reference number 76.After this, receive the L2 cache-hit, shown in reference number 78.At last, carry out reading of data in the L2 cache memory, wherein " A=2 " is shown in reference number 80.Through reading of data after the delay period 81, kernel 1 is ended circulation, carries out the storage operation of " storing A into 3 ".
Core 1 arbitrates for the bus to transmit the write-through store operation, whereby the data address of the write-through store is propagated, as depicted at reference numeral 82. Next, the L2 tag compare is performed, as depicted at reference numeral 84. Thereafter, an L2 cache hit is received, as depicted at reference numeral 86. Finally, the data is committed to the L2 cache pipeline to be written as "A=3", as depicted at reference numeral 88.
Because the load and store operations from core 1 arbitrated for the local bus, the retry of core 0's "Store 2 to A" operation is delayed until the bus interface becomes available. Core 0 then reissues the write-through store operation, which is received by the L2 cache, as depicted at reference numeral 90. The data address is sent locally, whereby core 1 is snooped, as depicted at reference numeral 92. Thereafter, the L1 tag compare is performed in core 1's L1 cache, as depicted at reference numeral 94. Next, the L2 tag compare is performed in the L2 cache, as depicted at reference numeral 96. A cache hit is returned by the L2, as depicted at reference numeral 98. Finally, the data is written into the L2 cache again, whereby "A=2" is depicted at reference numeral 100.
As described above, if a write-through store is snooped locally and the store is retried, another processor core can arbitrate for the bus, perform a load that sees the updated data in the L2 cache, and perform its own write-through store before the original store receives arbitration to be performed again. The first write-through store then overwrites the data of the second store, which depended on the first store.
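The erroneous ordering of Fig. 1 can be replayed as a short illustrative C program (this merely restates the sequence above and is not part of the original disclosure):

    #include <stdio.h>

    int main(void)
    {
        int A = 0;

        A = 2;               /* core 0: "Store 2 to A" updates the L2, but the  */
                             /* store is retried by core 1's L1 snoop response  */
        while (A != 2) { }   /* core 1: exits its loop, since the L2 already    */
                             /* holds A == 2                                    */
        A = 3;               /* core 1: "Store 3 to A"                          */
        A = 2;               /* core 0: the retried write-through store is      */
                             /* replayed, clobbering core 1's later store       */

        printf("A = %d (incorrect; the final value should be 3)\n", A);
        return 0;
    }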
One possible solution to the problem illustrated in Fig. 1 is to delay the L2 data and address pipelines so that the data commit stage follows the retry stage. To implement this solution, either the L2 read is separated from the L2 write, or the L2 read is delayed. In the first case, the complexity of L2 arbitration increases significantly. In the second case, two additional cycles are added to every L2 cache hit, causing an undesirable performance loss.
Another solution is to flush the committed L2 update back to the original state of the write-through operation, in a manner similar to that used in register renaming schemes, which are known in the art. For a cache, this solution adds extra, undesirable complexity and can reduce the speed of the cache.
With reference now to Fig. 3, there is shown, in accordance with a preferred embodiment of the present invention, a timing diagram of the performance of a write-through store instruction with a self-snooping technique. Fig. 3 shows the same processor operations as Fig. 1; in Fig. 3, however, self-snooping is used to eliminate the error caused by the retry. Core 0 issues the write-through store operation, which is received at the L2 cache, whereby L2 arbitration is performed, as depicted at reference numeral 110. Next, the L2 tag is compared with the L2 tag array, as depicted at reference numeral 112. Next, a tagged cache hit in the L2 tag array is received, as depicted at reference numeral 114. Thereby, the data to be written to the L2 cache is placed in the pipeline for execution, as depicted at reference numeral 116. After a delay 117, while the write-through store operation is arbitrating for the system bus to write main memory, a self-snoop is arbitrated along the system bus, as depicted at reference numeral 118. In Fig. 1 the cache coherency point for write-through store operations is the L2 cache; in the present embodiment, however, the cache coherency point for write-through store operations is the system bus. With the cache coherency point at the system bus for write-through store operations, if a retry is presented during the snoop, the write-through operation is snooped again on the system bus as many times as needed, until no retry is returned, regardless of other waiting instructions. In particular, the system bus includes bus arbitration logic that guarantees the snooping device continues to access the bus until the write-through store has completed coherently in all caches, so that the data can then be written to main memory.
In addition to the self-snoop, the local data address of the write-through store operation is propagated along the external snoop path to the non-initiating core, as depicted at reference numeral 120. Thereafter, the L1 tag is compared with the L1 tag array, as depicted at reference numeral 122. In the following cycle, the response to the L1 tag compare is returned, as depicted at reference numeral 124. If the response is a retry, the address of the write-through store continues to arbitrate for the system bus, from which it is snooped, until the L1 cache returns a non-retry response.
Once a non-retry response is returned, core 1 arbitrates for the local bus for its load, as depicted at reference numeral 126. In another embodiment, core 1's load need not wait until the store has been committed to the system bus without retry. For example, the load of core 1 may begin after the L2 hit has been committed and the L2 data write depicted at reference numeral 116 has been performed, provided that the load cannot destroy the coherency of the data. Thereafter, the L2 tag is compared with the L2 tag array, as depicted at reference numeral 128. Next, a tagged L2 hit in the L2 tag array is returned, as depicted at reference numeral 130. Thereafter, the data is read from the L2, as depicted at reference numeral 132. After a delay 133, core 1 arbitrates for the local bus for its write-through store, as depicted at reference numeral 134. Thereafter, the L2 tag is compared with the L2 tag array, as depicted at reference numeral 136. Next, a tagged L2 hit in the L2 tag array is returned, as depicted at reference numeral 138. Thereafter, the L2 data write is committed, as depicted at reference numeral 140. As with core 1's write-through store, after the L2 data write depicted at reference numeral 140, the write-through store operation proceeds to the system bus to update main memory, whereby cache coherency is maintained through self-snooping performed via the system bus.
Fig. 4 is a high-level logic flowchart of a write-through store operation process. The process starts at block 150 and thereafter proceeds to block 152. Block 152 depicts the processor core arbitrating for the local bus and sending the address of the write-through store operation to the lower-level cache. Thereafter, block 154 depicts comparing the address with the tag array in the lower-level cache. Next, block 156 depicts determining whether there is a tagged hit in the lower-level cache. If there is a tagged hit in the lower-level cache, the process passes to block 158. Block 158 depicts the data being committed for writing in the lower-level cache. Thereafter the process passes to block 160. Returning to block 156, if there is no tagged hit in the lower-level cache, the process passes to block 160. Although not shown, the process depicted at blocks 154, 156, and 158 may be performed at multiple levels of lower-level cache.
Block 160 depicts passing the write-through store operation to the system bus. Next, block 162 depicts arbitrating for the system bus to send the write-through store operation to memory and performing the self-snoop of the system bus. Thereafter, block 164 depicts snooping the address, via the external snoop path, in the caches not passed through. For example, any cache not passed through is a cache that does not lie in the path from the processor core issuing the write-through store operation to the system bus. Next, block 166 depicts comparing the snooped address with the tag array in the caches not passed through. Thereafter, block 168 depicts determining whether the snoop returns a retry. If the snoop returns a retry, the process passes back to block 162. If the snoop does not return a retry, the process passes to block 170. Block 170 depicts committing the write-through store to main memory. Thereafter, block 172 depicts releasing the system bus to the next operation, after which the process returns.
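A compact illustrative simulation of the flow of Fig. 4 (all names and interfaces are hypothetical; this is a sketch of the described process, not the patented implementation): the store is performed in every interposed cache that hits, the data address is then self-snooped on the system bus until no cache returns a retry, and only then is the store committed to main memory.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NLINES 4

    typedef struct {
        bool     valid[NLINES];
        uint64_t addr[NLINES];
        uint64_t data[NLINES];
        bool     busy;          /* a pending castout forces a retry response;   */
                                /* in a real system this condition clears later */
    } cache_t;

    static uint64_t main_memory;    /* stand-in for one word of main memory */

    /* Blocks 154-158: on a tagged hit in an interposed cache, commit the write. */
    static void write_if_hit(cache_t *c, uint64_t addr, uint64_t val)
    {
        for (int i = 0; i < NLINES; i++)
            if (c->valid[i] && c->addr[i] == addr)
                c->data[i] = val;
    }

    /* Blocks 164-168: snoop one non-interposed cache; invalidate on a hit, or
     * answer retry if the cache cannot service the snoop yet. */
    static bool snoop_returns_retry(cache_t *c, uint64_t addr)
    {
        if (c->busy)
            return true;
        for (int i = 0; i < NLINES; i++)
            if (c->valid[i] && c->addr[i] == addr)
                c->valid[i] = false;
        return false;
    }

    /* Blocks 152-172: the complete write-through store operation. */
    static void write_through_store(cache_t **interposed, int n_interposed,
                                    cache_t **others, int n_others,
                                    uint64_t addr, uint64_t val)
    {
        for (int i = 0; i < n_interposed; i++)     /* blocks 152-158 */
            write_if_hit(interposed[i], addr, val);

        bool retry;                                /* blocks 160-168: self-snoop */
        do {                                       /* until no retry is returned */
            retry = false;
            for (int i = 0; i < n_others; i++)
                retry |= snoop_returns_retry(others[i], addr);
        } while (retry);

        main_memory = val;                         /* block 170: commit to memory */
        /* block 172: release the system bus to the next operation (not modeled) */
    }

    int main(void)
    {
        cache_t l2       = { .valid = { true }, .addr = { 0x100 }, .data = { 1 } };
        cache_t other_l1 = { .valid = { true }, .addr = { 0x100 }, .data = { 1 } };
        cache_t *interposed[] = { &l2 };
        cache_t *others[]     = { &other_l1 };

        write_through_store(interposed, 1, others, 1, 0x100, 2);
        printf("memory = %llu, snooped L1 line still valid = %d\n",
               (unsigned long long)main_memory, (int)other_l1.valid[0]);
        return 0;
    }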
While the invention has been shown and described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention. For example, an alternative embodiment allows system bus requests to be pipelined, so that a subsequent request to the same address may be arbitrated before a pending request has been committed (received a non-retry response) or completed (read or written the associated data), as long as the requests are committed in the same order in which they appear on the system bus and the data ordering is thereby preserved.

Claims (8)

1. A method of maintaining cache coherency during write-through store operations in a data processing system, wherein said data processing system comprises a plurality of processors and a memory hierarchy coupled to a system bus, and wherein said memory hierarchy comprises a plurality of levels of cache memory, said method comprising the steps of:
passing a write-through store operation from a processor to said system bus through the caches interposed between said processor and said system bus;
performing said write-through store operation in any of said interposed caches in which a cache hit for said write-through store operation is obtained; and
snooping, with the data address of said write-through operation and via an external snoop path of said system bus, the caches that are not interposed between said processor and said system bus until said write-through operation succeeds, whereby cache coherency is maintained.
2. The method of maintaining cache coherency during write-through store operations according to claim 1, wherein said step of passing a write-through store operation from a processor to said system bus through the caches interposed between said processor and said system bus further comprises the step of:
arbitrating for a local bus for said processor, whereby said data address of said write-through store operation is sent to said interposed caches.
3. The method of maintaining cache coherency during write-through store operations according to claim 1, wherein said step of performing said write-through store operation in any of said interposed caches in which a cache hit for said write-through store operation is obtained further comprises the steps of:
comparing said data address of said write-through operation with an address tag array in each of said interposed caches; and
returning a cache hit if said data address matches any tag in said address tag array.
4. The method of maintaining cache coherency during write-through store operations according to claim 1, wherein said step of snooping, with the data address of said write-through operation and via an external snoop path of said system bus, the caches that are not interposed between said processor and said system bus until said write-through operation succeeds, whereby cache coherency is maintained, further comprises the steps of:
arbitrating for said system bus for said write-through store operation;
sending said data address of said write-through operation to said external snoop path of said system bus;
comparing said data address with an address tag array in each of the caches that are not interposed between said processor and said system bus;
maintaining said data address on said external snoop path in response to any retry response returned to said system bus; and
completing said write-through store operation in the system memory of said memory hierarchy in response to said snoop returning no retry condition to said system bus.
5. A system for maintaining cache coherency during write-through store operations in a data processing system, wherein said data processing system comprises a plurality of processors and a memory hierarchy coupled to a system bus, and wherein said memory hierarchy comprises a plurality of levels of cache memory, said system comprising:
means for passing a write-through store operation from a processor to said system bus through the caches interposed between said processor and said system bus;
means for performing said write-through store operation in any of said interposed caches in which a cache hit for said write-through store operation is obtained; and
means for snooping, with the data address of said write-through operation and via an external snoop path of said system bus, the caches that are not interposed between said processor and said system bus until said write-through operation succeeds, whereby cache coherency is maintained.
6. The system for maintaining cache coherency during write-through store operations according to claim 5, wherein said means for passing a write-through store operation from a processor to said system bus through the caches interposed between said processor and said system bus further comprises:
arbitration means for arbitrating a local bus for said processor, whereby said data address of said write-through store operation is sent to said interposed caches.
7. The system for maintaining cache coherency during write-through store operations according to claim 5, wherein said means for performing said write-through store operation in any of said interposed caches in which a cache hit for said write-through store operation is obtained further comprises:
means for comparing said data address of said write-through operation with an address tag array in each of said interposed caches; and
means for returning a cache hit if said data address matches any tag in said address tag array.
8. The system for maintaining cache coherency during write-through store operations according to claim 5, wherein said means for snooping, with the data address of said write-through operation and via an external snoop path of said system bus, the caches that are not interposed between said processor and said system bus until said write-through operation succeeds, whereby cache coherency is maintained, further comprises:
means for arbitrating for said system bus for said write-through store operation;
means for sending said data address of said write-through operation to said external snoop path of said system bus;
means for comparing said data address with an address tag array in each of the caches that are not interposed between said processor and said system bus;
means for maintaining said data address on said external snoop path in response to any retry response returned to said system bus; and
means for completing said write-through store operation in the system memory of said memory hierarchy in response to said snoop returning no retry condition to said system bus.
CNB001188542A 1999-06-18 2000-06-15 Method and system for keeping coherency of cache buffer memory Expired - Fee Related CN1149494C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US33651699A 1999-06-18 1999-06-18
US09/336516 1999-06-18
US09/336,516 1999-06-18

Publications (2)

Publication Number Publication Date
CN1278625A CN1278625A (en) 2001-01-03
CN1149494C true CN1149494C (en) 2004-05-12

Family

ID=23316452

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB001188542A Expired - Fee Related CN1149494C (en) 1999-06-18 2000-06-15 Method and system for keeping coherency of cache buffer memory

Country Status (4)

Country Link
JP (1) JP2001043133A (en)
KR (1) KR100380674B1 (en)
CN (1) CN1149494C (en)
TW (1) TW548547B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1320464C (en) * 2003-10-23 2007-06-06 英特尔公司 Method and equipment for maintenance of sharing consistency of cache memory
US7788451B2 (en) 2004-02-05 2010-08-31 Micron Technology, Inc. Apparatus and method for data bypass for a bi-directional data bus in a hub-based memory sub-system
US7257683B2 (en) 2004-03-24 2007-08-14 Micron Technology, Inc. Memory arbitration system and method having an arbitration packet protocol
US7725619B2 (en) * 2005-09-15 2010-05-25 International Business Machines Corporation Data processing system and method that permit pipelining of I/O write operations and multiple operation scopes
US7568073B2 (en) * 2006-11-06 2009-07-28 International Business Machines Corporation Mechanisms and methods of cache coherence in network-based multiprocessor systems with ring-based snoop response collection
GB0623276D0 (en) * 2006-11-22 2007-01-03 Transitive Ltd Memory consistency protection in a multiprocessor computing system
US20120254541A1 (en) * 2011-04-04 2012-10-04 Advanced Micro Devices, Inc. Methods and apparatus for updating data in passive variable resistive memory
US10970225B1 (en) * 2019-10-03 2021-04-06 Arm Limited Apparatus and method for handling cache maintenance operations

Also Published As

Publication number Publication date
JP2001043133A (en) 2001-02-16
TW548547B (en) 2003-08-21
KR100380674B1 (en) 2003-04-18
KR20010015008A (en) 2001-02-26
CN1278625A (en) 2001-01-03

Similar Documents

Publication Publication Date Title
US5155832A (en) Method to increase performance in a multi-level cache system by the use of forced cache misses
US5551005A (en) Apparatus and method of handling race conditions in mesi-based multiprocessor system with private caches
CN101097544B (en) Global overflow method for virtualized transactional memory
US8706973B2 (en) Unbounded transactional memory system and method
US5751995A (en) Apparatus and method of maintaining processor ordering in a multiprocessor system which includes one or more processors that execute instructions speculatively
US5095424A (en) Computer system architecture implementing split instruction and operand cache line-pair-state management
JP3067112B2 (en) How to reload lazy push into copy back data cache
EP0514024B1 (en) Method and apparatus for an improved memory architecture
JP3285644B2 (en) Data processor with cache memory
US5634027A (en) Cache memory system for multiple processors with collectively arranged cache tag memories
CN101470629A (en) Mechanism for strong atomicity in a transactional memory system
CN1279456C (en) Localized cache block flush instruction
CN103150206A (en) Efficient and consistent software transactional memory
JPH10283261A (en) Method and device for cache entry reservation processing
JPH0619786A (en) Method and apparatus for maintenance of cache coference
US5155828A (en) Computing system with a cache memory and an additional look-aside cache memory
US5293602A (en) Multiprocessor computer system with dedicated synchronizing cache
CN1149494C (en) Method and system for keeping coherency of cache buffer memory
US20060149940A1 (en) Implementation to save and restore processor registers on a context switch
KR20060102565A (en) System and method for canceling write back operation during simultaneous snoop push or snoop kill operation in write back caches
US6704833B2 (en) Atomic transfer of a block of data
US11436150B2 (en) Method for processing page fault by processor
EP0271187B1 (en) Split instruction and operand cache management
US20040064643A1 (en) Method and apparatus for optimizing line writes in cache coherent systems
JP3006204B2 (en) Information processing device

Legal Events

Date Code Title Description
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C06 Publication
PB01 Publication
C14 Grant of patent or utility model
GR01 Patent grant
C19 Lapse of patent right due to non-payment of the annual fee
CF01 Termination of patent right due to non-payment of annual fee