GB2216305A - Cache block transfer in a computer system - Google Patents

Cache block transfer in a computer system

Info

Publication number
GB2216305A
GB2216305A GB8903959A GB8903959A GB2216305A GB 2216305 A GB2216305 A GB 2216305A GB 8903959 A GB8903959 A GB 8903959A GB 8903959 A GB8903959 A GB 8903959A GB 2216305 A GB2216305 A GB 2216305A
Authority
GB
United Kingdom
Prior art keywords
information
cache
block
data
recited
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB8903959A
Other versions
GB8903959D0 (en)
Inventor
Jon Rubinstein
Glen S Miranker
Richard Lowenthal
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ardent Computer Corp
Original Assignee
Ardent Computer Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ardent Computer Corp filed Critical Ardent Computer Corp
Publication of GB8903959D0 publication Critical patent/GB8903959D0/en
Publication of GB2216305A publication Critical patent/GB2216305A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0844Multiple simultaneous or quasi-simultaneous cache accessing
    • G06F12/0855Overlapped cache accessing, e.g. pipeline
    • G06F12/0859Overlapped cache accessing, e.g. pipeline with reload from main memory
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0815Cache consistency protocols
    • G06F12/0831Cache consistency protocols using a bus scheme, e.g. with bus monitoring or watching means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0844Multiple simultaneous or quasi-simultaneous cache accessing
    • G06F12/0846Cache with multiple tag or data arrays being simultaneously accessible
    • G06F12/0851Cache with interleaved addressing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Description

METHOD FOR INCREASING CACHE BLOCK SIZE IN A COMPUTER SYSTEM
BACKGROUND OF THE INVENTION.
1. Field of the Invention.
This invention relates to the field of methods and apparatus for enforcing cache consistency across multiple devices in a computer system.
2. Prior Art.
In computer systems, cache memory may be utilized to speed access to certain often used data and programs. A cache memory is a high speed memory, typically capable of keeping up with the speed of the CPU. It acts as a buffer between the CPU and the slower main memory. Typically, often used data is kept in the cache memory and accessed by the CPU during read operations. A write operation will cause a write to the cache memory and, at some point in time, a write to the main memory. There are two generally utilized methods of updating the main memory from the cache memory: write-through cache and write-back cache.
In a write-through cache mechanism, any write to cache memory will cause a corresponding write to main memory at approximately the same time as the write to cache memory. In a write-back cache scheme, main memory is updated with the new data at a point in time not necessarily correlated with the write to cache memory. An example of write-back cache will be discussed below in connection with smart cache protocols.
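The two update policies can be contrasted in a minimal Python sketch; the class and method names here are illustrative, not from the patent:

```python
class WriteThroughCache:
    """Every cache write is propagated to main memory immediately."""
    def __init__(self, main_memory):
        self.lines = {}          # cached address -> value
        self.main = main_memory  # shared dict standing in for main memory

    def write(self, addr, value):
        self.lines[addr] = value
        self.main[addr] = value  # main memory updated at the same time


class WriteBackCache:
    """Writes mark the line dirty; main memory is updated only on flush."""
    def __init__(self, main_memory):
        self.lines = {}
        self.dirty = set()
        self.main = main_memory

    def write(self, addr, value):
        self.lines[addr] = value
        self.dirty.add(addr)     # main memory is stale until a flush

    def flush(self):
        for addr in self.dirty:
            self.main[addr] = self.lines[addr]
        self.dirty.clear()
```

With write-through, main memory never lags the cache; with write-back, the two correspond only after a flush, which is why the smart cache protocols discussed below must track dirty entries.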
The use of cache memory requires that at some point in time the cache memory and main memory correspond exactly. In the case of a single CPU system having no co-execution units, this is not a major issue. However, when several memory mutators (units, such as CPUs, which are capable of modifying memory) share a single memory, cache consistency becomes a major problem. For example, if data is read by processor 1 into its cache memory and is subsequently read by processor 2 into its cache memory, and then the data is updated by processor 1, it is necessary that processor 2 update or invalidate its version of the data or the data will not be consistent between processor 1 and processor 2.
Several methods are well known for enforcing cache consistency where multiple processors share a single main memory and have separate cache memories. One method is to have software enforce the cache consistency. Typically, software algorithms for enforcing cache consistency utilize time stamps on the data in main memory and in cache. Software algorithms suffer from a number of shortcomings including using clock cycles to enforce cache consistency, adding complexity to the system software and adding the overhead of a time stamp or similar marking to each block of data in memory.
A second method utilizes bus protocols or smart cache protocols that insure that all caches in the system remain consistent by keeping track of all entries in cache that are dirty (i.e. have been updated). There may only be one dirty version of a particular unit of data in the system. If a processor which does not own the dirty version attempts to read the data, the system must enforce that the dirty copy is written to main memory and then the second processor is allowed to read the data. This leads to several complications such as requiring the system to know where each copy of the data resides and requiring enforcement of rules such as allowing no reads of data from cache after a dirty version of the data exists. Additional complexity is added by the use of multiple data buses and bus masters without caches.
A third method of enforcing cache consistency is the use of bus watchers with write-through cache. Typically, a bus watcher is associated with each CPU which has cache memory. The bus watcher is responsible for watching write transactions on the system bus and determining whether the write transaction should update or invalidate data in its processor's cache memory.
Further, in a typical computer system utilizing cache memory it is advantageous to fetch multiple words from main memory for placement into cache memory with a single fetch instruction. Such an arrangement typically provides advantages in bus and memory management. For example, instructions are typically accessed sequentially during a program's execution and branch instructions are relatively uncommon in comparison to sequentially executed instructions. As such, fetching a block of instructions leads to system efficiency.
Design of systems which allow fetching of blocks of data into cache are well known. However, with the progress of microprocessors, control of the cache memory is moving inside the microprocessors. In a system utilizing such a microprocessor, it is typical for the microprocessor to fetch a single word at a time. It is desired to develop a method and apparatus which allows use of a standard microprocessor in a computer system, while performing efficient caching operations.
In addition, in a computer system it is known to utilize a bus with a first bandwidth, for example a 64 bit bus, when communicating between modules in a computer system, while utilizing a bus with a second bandwidth, for example 32 bits, when communicating within a module.
In such a computer system, funnelling data from the bus of the larger bandwidth to the bus of the smaller bandwidth often becomes a bottleneck in the system.
SUMMARY OF THE INVENTION
It is therefore an object of the present invention to allow data for cache memory to be fetched in blocks of data utilizing a processor to control access to the cache memory.
The present invention discloses a method and apparatus for accessing information in main memory of a computer system and storing such information in a cache memory associated with a processor. The method comprises the microprocessor requesting a unit of information from the main memory. Such a microprocessor employs a memory retry mechanism which may be utilized by the computer system if the data requested by the microprocessor is received in error.
The present invention further comprises a computer system in which a system bus is utilized for communicating information between the main memory and the processor. The system bus is some multiple n words wide. In the present invention, the processor may request a single word of data. The memory supplies the requested word of data as one element of a block of data, the block of data being n units of data long.
The block of data is received and placed in a buffer or on the system bus such that each word of data is delivered in a separate time slice, each word sharing the system bus during a time slice with words from other blocks of data. The block of data is received one word at a time and placed by the processor into a cache memory. The processor is then informed the wrong data has been supplied and the memory retry mechanism is employed to receive the next word of information. The process of receiving a word of information, storing the word in cache memory, and retrying memory access is continued for n-1 words of data.
Data is organized on the bus or by the cache controller such that the nth word received is the requested word. This word is received by the processor and written to cache memory. Processing is then allowed to continue. Utilizing this method of the present invention, blocks of data may be transmitted from main memory and stored in cache memory as a result of a single memory access request.
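The retry-driven block fill just summarized can be sketched in Python. `block_fill_via_retry` and the dict-based memory and cache are hypothetical names for illustration; the sketch assumes the bus orders the block so that the requested word arrives last, as the paragraph above describes:

```python
def block_fill_via_retry(memory, cache, block_addrs, requested):
    """Deliver a block of n words one per time slice, the requested
    word last. Every word but the last triggers a 'wrong data' retry
    on the processor; each delivered word is stored into the cache."""
    # Order the block so the requested word is the nth (last) delivery.
    ordered = [a for a in block_addrs if a != requested] + [requested]
    retries = 0
    for addr in ordered[:-1]:
        cache[addr] = memory[addr]  # word cached; processor told to retry
        retries += 1                # memory retry mechanism invoked
    cache[ordered[-1]] = memory[ordered[-1]]  # nth word: the one requested
    return retries                  # n-1 retries for a block of n words
```

A single request thus pulls the whole block into cache, at the cost of n-1 retry cycles.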
Further, data may be supplied in a time-staggered manner which allows a system bus of a first bandwidth and a data pipeline on a processor board of a second bandwidth, smaller than the first bandwidth, to be utilized without utilizing a separate buffer or creating a bottleneck in the system.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a block diagram illustrating a computer system comprising multiple central processing units with cache memory and bus watchers coupled with a central system bus as may be utilized by the present invention.
Figure 2 is a flowchart illustrating a method of reading data and updating cache memory where the processor does not notify the system whether the read is a cache read as may be utilized by the present invention.
Figure 3 is a flowchart illustrating a method of insuring cache consistency during write transactions in a system where the processor does not notify the system of whether the write transaction is a cache write as may be utilized by the present invention.
Figure 4 is a flowchart illustrating a method of monitoring write transactions from foreign processors to insure cache consistency as may be utilized by the present invention.
Figure 5 is a flowchart illustrating a method of supplying blocks of data to a cache memory when a single word is requested by a processor as may be utilized by the present invention.
Figure 6 is a schematic diagram illustrating data organization on a bus as may be utilized by the present invention.
DETAILED DESCRIPTION OF THE PRESENT INVENTION
A cache consistency method and apparatus is described. In the following description, numerous specific details are set forth such as bandwidths, types of processors, etc. in order to provide a thorough understanding of the present invention. It will be obvious, however, to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well known circuits and techniques have not been shown in detail in order not to unnecessarily obscure the present invention.
The present invention describes a method and apparatus for ensuring cache consistency in a computer system with multiple processors having separate write-through cache memories. Such a computer system may have any number of devices capable of accessing a system bus and updating memory (memory mutators). Each of these memory mutators may optionally have a cache memory associated with it.
In general, cache memories are utilized in computer systems as a high speed memory coupled directly with a central processing unit in order to buffer read requests for frequently accessed data and instructions. The processor unit of the preferred embodiment of the present invention comprises an integer processor unit (IPU) with both an instruction and data cache and an interface to the system bus. In addition, the processor may have a floating point processor unit (FPU). The instruction cache of the IPU is read only and the data cache is write through with write buffering.
In the computer system of the preferred embodiment there may be more than one IPU and each IPU may have its own data cache. In addition, the FPU associated with each IPU is capable of modifying memory and does not reference cache memory. Therefore, a problem of cache consistency among the various data caches in the system exists. The preferred embodiment maintains cache consistency among the data caches by a form of bus watching on the system bus. Bus watching ensures that any data which resides in cache matches the data stored in main memory.
Figure 1 illustrates a system bus 10 and a processor unit 11 as may be utilized by the present invention. A system bus 10 of the preferred embodiment is a 64 bit bus and is coupled with a processor 11 by a bus interface unit 14. The bus interface unit 14 is coupled with a bus watcher 15 through address lines 24.
The bus watcher 15 comprises an even address tag array 16 and an odd address tag array 17. Each entry in the even address tag array 16 and the odd address tag array 17 represents an address associated with 32 bits of data. Data in the system is transferred over a 64 bit bus. Therefore, data in cache and corresponding entries in the address tag arrays may be updated 64 bits (two words) per clock cycle. If a double word bus transfer occurs both the even address tag array 16 and the odd address tag array 17 are checked to determine if there is a match. If a single word or a sub-word transfer occurs, only one of the address tag arrays 16 or 17 is checked. The particular address tag array to be checked is determined by whether the address starts on an even word boundary or an odd word boundary.
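The parity-based selection of tag arrays can be sketched as follows; the function names and the 4-byte word size are assumptions for illustration, consistent with the 32-bit words described above:

```python
WORD_BYTES = 4  # 32 bits = 1 word, as in the preferred embodiment

def tag_array_for(addr):
    """Pick the tag array matching the word parity of a byte address."""
    word_index = addr // WORD_BYTES
    return "even" if word_index % 2 == 0 else "odd"

def arrays_to_check(addr, double_word):
    """A double-word transfer touches both arrays; a single or sub-word
    transfer touches only the array matching the starting word's parity."""
    return ("even", "odd") if double_word else (tag_array_for(addr),)
```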
The bus watcher 15 is coupled with a data cache memory 18 through line 26. In the preferred embodiment, the data cache memory is 16K bytes in size. The cache memory 18 is comprised of an even address tag array 19, an odd address tag array 20, an even data array 21 and an odd data array 22. The data cache is direct mapped (i.e. an entry in address tag array 19 or 20 has a one to one correspondence with an entry in data array 21 or 22).
The data cache 18 is coupled with a processor 23 through line 27. The preferred embodiment of the present invention uses a processor manufactured by MIPS Computers of Sunnyvale, California. The processor 23 is coupled with the bus interface unit 14 through line 25. Line 25 is a bidirectional line allowing two directional communication between the processor 23 and the bus interface 14. In addition to the processor 23, a floating point processor unit (FPU) 30 may be coupled with the bus interface unit 14. The system bus 10 may have various other devices 12 and 13 coupled with it. The FPU 30 and each of the other devices 12 and 13 are capable of updating main memory, although in the preferred embodiment they do not have separate cache memories. Such devices, whether they have cache memory or not, may be referred to as memory mutators.
Figure 2 more fully discloses a typical read operation. During a read operation the processor may first attempt a cache read, block 40, by supplying a physical address to the data cache memory. The data cache memory compares the supplied physical address with the physical addresses in either its even or odd address tag array. The determination of which address tag array to compare with is made by determining whether the physical address begins on an even or odd word boundary.
If a cache hit occurs, i.e. there is a successful cache read, block 41, branch 42 is taken and the data is supplied to the microprocessor from the data cache arrays.
If the data does not exist in the data cache memory a memory fetch is executed by the microprocessor. The MIPS Computers microprocessor utilized by the preferred embodiment does not supply information on whether the data being requested by the memory fetch will be stored in cache memory after a successful memory access.
Therefore, any time a memory fetch is done by the processor, the bus watcher updates its address tags with the address of the data to be fetched. This may cause some other entry in the bus watcher address tag arrays to be overwritten. If so, corresponding data in the data cache array is invalidated, block 45. Data may then be received from memory, block 46.
The processor then determines whether cache is to be updated, block 47. If cache is to be updated, branch 48 is taken and cache is updated, block 49. Otherwise cache is not updated, branch 50. In the case of cache being updated, block 49, the data in cache continues to match the address tags in the bus watcher. In the case of cache not being updated, branch 50, the address tags in the bus watcher are a superset of the address tags in cache. This method ensures that the bus watcher is able to determine whether or not the data exists in its cache memory.
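The read flow of Figure 2 might be approximated by the following Python sketch, with the bus watcher's direct-mapped tag arrays simplified to a set and all names invented for illustration:

```python
def processor_read(addr, dcache, watcher_tags, memory, update_cache):
    """Sketch of Figure 2: try the cache first; on a miss, fetch from
    memory while the bus watcher records the fetched address."""
    if addr in dcache:               # block 41: cache hit, branch 42
        return dcache[addr]
    # The processor does not say whether the fetch will be cached, so
    # the watcher tags every fetch (tags become a superset of the cache).
    watcher_tags.add(addr)
    data = memory[addr]              # block 46: data received from memory
    if update_cache:                 # block 47
        dcache[addr] = data          # block 49: cache now matches watcher
    return data
```

In the real hardware an overwritten watcher tag also invalidates the corresponding data cache entry (block 45); that direct-mapped eviction is omitted from this set-based simplification.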
Referring briefly to Figures 5 and 6, the method utilized by the preferred embodiment of the present invention for loading information from main memory into cache memory is described in more detail.
In the preferred embodiment, the system bus has a two word (64 bits, 32 bits = 1 word) bandwidth. Data is organized on the system bus 10 as described in Figure 6.
In a given clock cycle, two words are communicated over the system bus 10, a high order word 82 and a low order word 81. Information is communicated over the bus such that low order word 81 may be communicated in clock cycle 0 and corresponding high order word 82 may be communicated in clock cycle 1. This scheme offers several inventive advantages. For example, in a computer system it may be advantageous to request information from main memory in blocks of words rather than a single word at a time. In such a system, access to main memory may be controlled by a microprocessor which requests information a single word at a time. It is desired to provide information to a cache memory associated with such a microprocessor in blocks of words.
The preferred embodiment comprises a MIPS Computer microprocessor 23 with a data cache memory 18 as described in connection with Figure 1. The microprocessor may make a request for data and a cache miss may occur, block 90, requiring main memory to be accessed. If the microprocessor makes a request for a word aligned on a double-word boundary (i.e. even addressed word), branch 91, a read double-word instruction is initiated on the system bus, block 92. A double word may correspond to high order word 82 and low order word 81.
The system will then select the odd data cache array 22 as the cache to be written into, block 94, and low order word 81 is received during clock cycle 0 and written to cache memory, block 95. The processor had originally requested a word aligned on a double-word boundary, that is, high order word 82. The processor is instructed it has received incorrect data and a memory access retry will occur, block 96.
The even data cache array 21 is then selected, block 97 and high order word 82 is received during clock cycle 1. High order word 82 is then written to cache memory, block 98. The processor may then utilize the requested information, high order word 82. It will be obvious to one of ordinary skill that the particular ordering of words returned may be modified in other embodiments of the present invention. In particular, in the computer system of the preferred embodiment byte ordering is such that byte 0 is the left most byte.
This may be referred to as a big-endian system and is compatible with Motorola 68000 processor conventions.
The methods and apparatus of the present invention are equally applicable to a computer system in which byte ordering is such that byte 0 is the right most byte. This may be referred to as a little-endian system. In a little-endian system words may be returned in reverse order of the method described for the preferred embodiment.
As a computer system often requires accessing information sequentially, when a request is subsequently made for low order word 81, a cache hit will occur.
In the preferred embodiment, a main memory access requires 10 clock cycles. Utilizing the method of the present invention, 11 clock cycles are required for the double word access, an additional clock cycle being necessary due to the retry. However, assuming the processor requires access to low order word 81, 9 clock cycles are saved.
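The cycle accounting in the preceding paragraph can be checked with a small calculation, using the figures stated for the preferred embodiment:

```python
ACCESS = 10  # clock cycles per main memory access (preferred embodiment)
RETRY = 1    # one extra cycle introduced by the memory retry

# Fetching both words of the double word via the retry scheme:
double_word_via_retry = ACCESS + RETRY        # 11 cycles

# Without the scheme, the later request for the low order word
# would cost a second full memory access:
two_separate_accesses = 2 * ACCESS            # 20 cycles

saving = two_separate_accesses - double_word_via_retry  # 9 cycles saved
```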
Further, the present invention may utilize bus bandwidths of 32 bits (1 word) on boards in the system, such as processor board 11, while utilizing a system bus with a bandwidth of some multiple number of words. As such, a bottleneck may develop in the system as information is moved from the system bus onto the boards. The retry method of the present invention described above allows sequential words in a block of data to be delivered to a board in the system in a time staggered manner. This alleviates bottleneck problems when data path width is reduced without compromising system bus performance.
It will be obvious to one of ordinary skill in the art the present invention may be utilized in computer systems comprising system bus bandwidths of other than 2 words. For example, a computer system may utilize a system bus bandwidth of 128 bits (4 words). In such a system, one to four data cache memory arrays may be utilized and data may be requested in blocks of four words. The data may be time-staggered over four clock cycles, one word per clock cycle and placed into cache memory.
As an alternative, a block of data may be received from the bus and placed in a buffer. The data is then received into the cache memory from the buffer.
Figure 3 discloses a method of bus watchers processing write transactions from their own processor, as may be utilized by the present invention. Since the path from the bus watcher to cache is a single directional path, the bus watcher is not notified directly when a write to cache occurs. Instead, when the processor sends a write transaction to a write buffer, block 70, the write buffer is then responsible for storing information stating a cache write has occurred, block 75, and sending the write transaction to the bus interface, block 71. In parallel with sending the write transaction to the bus interface, block 71, cache may be updated with the new data if this is a write to cache transaction, block 72.
The write buffer of the preferred embodiment has a small queue to avoid stopping the processor when the system bus is not available. The bus interface then notifies the bus watcher that a cache write has occurred, block 73, and places the transaction on the system bus when the system bus and memory are available. The bus watcher is then responsible for updating its address tag arrays with the address being written to memory, block 74.
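A rough Python sketch of this write path, with the write buffer as a simple list and all names invented for illustration:

```python
def processor_write(addr, value, write_buffer):
    """Figure 3 sketch: the processor hands each write to a small write
    buffer so it need not stall while the system bus is busy."""
    write_buffer.append((addr, value))

def drain_write_buffer(write_buffer, memory, dcache, watcher_tags):
    """Drain queued writes: update cache for write-to-cache transactions,
    notify the bus watcher, and place the write on the system bus."""
    while write_buffer:
        addr, value = write_buffer.pop(0)
        if addr in dcache:        # block 72: cache updated in parallel
            dcache[addr] = value
        watcher_tags.add(addr)    # blocks 73-74: watcher records the write
        memory[addr] = value      # transaction placed on the system bus
```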
Figure 4 illustrates a method for invalidating cache entries when another processor, FPU, I/O device or graphics device writes information to main memory. In the preferred embodiment of the present invention, the processors utilize a write through cache mechanism in which data is written to main memory in direct correlation with data being written to cache memory. In other words, data is not kept in cache memory as dirty data until some future point in time when the cache memory is flushed to main memory, but rather the data is written to main memory as soon as possible after a write transaction to cache occurs. In the preferred embodiment, the computer system may have several processor units, each with separate cache, each of which must be kept consistent with each other. Therefore, when a write transaction occurs by a memory mutator the corresponding cache entries must be invalidated.
In the preferred embodiment, the bus watcher monitors the system bus for write transactions and examines the address to be written, block 60. The bus watcher determines whether the address matches an address tag in this bus watcher, block 61. It does this by comparing the address with either its even or odd address tag arrays depending on whether the address begins at an even or odd word boundary. Again, some transactions may be double word transactions and the bus watcher must check both the even and odd address tag arrays. Since the address tag arrays of the bus watcher contain a superset of the data contained in the data cache for the processor, if the address is not found in the address tag arrays of the bus watcher the address does not exist in the data cache for the processor. In that case, branch 62 is taken and this write transaction may be ignored.
Otherwise, branch 63 is taken and the corresponding entry in the bus watcher is invalidated, block 64. The entry in the data cache memory is also invalidated, block 65. The next time the processor requests a read of this data there will not be a cache hit and the data will need to be fetched from memory as described in conjunction with Figure 2.
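The invalidation flow of Figure 4 can be sketched similarly, again with the watcher tags simplified to a set and hypothetical names:

```python
def snoop_foreign_write(addr, watcher_tags, dcache):
    """Figure 4 sketch: on a foreign write, invalidate matching entries.
    The watcher tags are a superset of the cache tags, so a miss in the
    watcher means the write can safely be ignored."""
    if addr not in watcher_tags:  # block 61 -> branch 62: ignore
        return False
    watcher_tags.discard(addr)    # block 64: invalidate watcher entry
    dcache.pop(addr, None)        # block 65: invalidate cache entry
    return True                   # next read of addr will miss and refetch
```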
Thus, a method for ensuring cache consistency in a computer system having several processing units with independent cache memories is described.
CLAIMS
1. A method for communication of information in a computer system comprising the steps of: requesting a first word of information; placing a block of information on a system bus in response to said request, said block of information comprising n words of information; receiving each of said n words of information; whereby, a block of information is received in response to a request for a single word.

Claims (1)

  2. The method, as recited by Claim 1, wherein each word in said n words is received in a time staggered manner.
    3. The method, as recited by Claim 2, wherein said block of information is stored in a cache memory as it is received.
    4. The method, as recited by Claim 3, wherein said n words are sequential words of information.
    5. The method, as recited by Claim 4, wherein said cache memory comprises n cache arrays.
    6. The method, as recited by Claim 5, wherein n is 2.
    7. The method, as recited by Claim 1, wherein said n words are received in a block and placed in a buffer.
    8. The method, as recited by Claim 7, wherein each word in said block is fetched in a time staggered manner from said buffer.
    9. In a method for communicating information in a computer system, said computer system comprising a first means for requesting information coupled with a system bus and a second means for supplying information coupled with said system bus, an improvement comprising the steps of: said first means requesting a first word of information; said second means supplying a block of information on said system bus in response to said request, said block of information comprising n words of contiguous information; said first means receiving said block of information; whereby a block of information is received in response to a request for a single word.
    10. The improvement, as recited by Claim 9, wherein said block of information is received in a cache memory.
    11. The improvement, as recited by Claim 10, wherein words of said block of information are accepted by said first means in response to a retry instruction.
    12. The improvement, as recited by Claim 11, wherein said cache memory comprises n cache memory arrays, each of said n cache memory arrays for receiving a single word of information in response to said request.
    13. The improvement, as recited by Claim 12, wherein said first means is a processor module.
    14. The improvement, as recited by Claim 13, wherein said second means is a memory.
    15. A method for supplying information in a computer system, said computer system comprising a first means for receiving information coupled with a system bus, said system bus for supplying said information, said first means capable of receiving n bits of information per clock cycle, said system bus capable of supplying m times n bits of information per clock cycle, comprising the step of: arranging said information on said bus such that n bits are supplied to said first means in each of m clock cycles.
    16. A method for communication of information in a computer system substantially as hereinbefore described.
    Published 1989 at The Patent Office, State House, 66/71 High Holborn, London WC1R 4TP. Further copies may be obtained from The Patent Office Sales Branch, St Mary Cray, Orpington, Kent BR5 3RD. Printed by Multiplex Techniques Ltd, St Mary Cray, Kent, Con. 1/87
GB8903959A 1988-03-01 1989-02-22 Cache block transfer in a computer system Withdrawn GB2216305A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US16274988A 1988-03-01 1988-03-01

Publications (2)

Publication Number Publication Date
GB8903959D0 GB8903959D0 (en) 1989-04-05
GB2216305A true GB2216305A (en) 1989-10-04

Family

ID=22586982

Family Applications (1)

Application Number Title Priority Date Filing Date
GB8903959A Withdrawn GB2216305A (en) 1988-03-01 1989-02-22 Cache block transfer in a computer system

Country Status (6)

Country Link
JP (1) JPH0210449A (en)
AU (1) AU3075589A (en)
DE (1) DE3906277A1 (en)
FR (2) FR2628235A1 (en)
GB (1) GB2216305A (en)
IT (1) IT1229127B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4131940A (en) * 1977-07-25 1978-12-26 International Business Machines Corporation Channel data buffer apparatus for a digital data processing system
GB2011679A (en) * 1977-12-22 1979-07-11 Honeywell Inf Systems Cache memory in system in data processor
EP0054888A2 (en) * 1980-12-23 1982-06-30 Hitachi, Ltd. Data-processing system with main and buffer storage control
US4445172A (en) * 1980-12-31 1984-04-24 Honeywell Information Systems Inc. Data steering logic for the output of a cache memory having an odd/even bank structure
WO1985004737A1 (en) * 1984-04-11 1985-10-24 American Telephone & Telegraph Company Interleaved set-associative memory
US4724518A (en) * 1983-07-29 1988-02-09 Hewlett-Packard Company Odd/even storage in cache memory


Also Published As

Publication number Publication date
DE3906277A1 (en) 1989-09-14
GB8903959D0 (en) 1989-04-05
IT1229127B (en) 1991-07-22
AU3075589A (en) 1989-09-07
FR2628235A1 (en) 1989-09-08
FR2631472A1 (en) 1989-11-17
IT8919609A0 (en) 1989-03-01
JPH0210449A (en) 1990-01-16

Similar Documents

Publication Publication Date Title
CA1322058C (en) Multi-processor computer systems having shared memory and private cache memories
US5325504A (en) Method and apparatus for incorporating cache line replacement and cache write policy information into tag directories in a cache system
US4959777A (en) Write-shared cache circuit for multiprocessor system
US7032074B2 (en) Method and mechanism to use a cache to translate from a virtual bus to a physical bus
US5740400A (en) Reducing cache snooping overhead in a multilevel cache system with multiple bus masters and a shared level two cache by using an inclusion field
US5537575A (en) System for handling cache memory victim data which transfers data from cache to the interface while CPU performs a cache lookup using cache status information
KR100194253B1 (en) How to Use Mesh Data Coherency Protocol and Multiprocessor System
US5784590A (en) Slave cache having sub-line valid bits updated by a master cache
JP3067112B2 (en) How to reload lazy push into copy back data cache
EP0347040B1 (en) Data memory system
US5778422A (en) Data processing system memory controller that selectively caches data associated with write requests
US5524208A (en) Method and apparatus for performing cache snoop testing using DMA cycles in a computer system
EP0434250A2 (en) Apparatus and method for reducing interference in two-level cache memories
GB2193356A (en) Cache directory and control
EP0464994A2 (en) Cache memory exchange protocol
US6065099A (en) System and method for updating the data stored in a cache memory attached to an input/output system
US6223266B1 (en) System and method for interfacing an input/output system memory to a host computer system memory
EP0474450A2 (en) Processor system with improved memory transfer means
US5717894A (en) Method and apparatus for reducing write cycle wait states in a non-zero wait state cache system
EP0738977B1 (en) Method and apparatus for quickly initiating memory accesses in a multiprocessor cache coherent computer system
US5920889A (en) Apparatus and method for write miss processing in a copy-back data cache with an allocating load buffer and a non-allocating store buffer
EP0741356A1 (en) Cache architecture and method of operation
US6918021B2 (en) System of and method for flow control within a tag pipeline
US6134635A (en) Method and apparatus of resolving a deadlock by collapsing writebacks to a memory
US6021466A (en) Transferring data between caches in a multiple processor environment

Legal Events

Date Code Title Description
732 Registration of transactions, instruments or events in the register (sect. 32/1977)
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)