US20070186051A1 - Memory system and method for controlling the same, and method for maintaining data coherency - Google Patents
- Publication number
- US20070186051A1 (application US 11/276,004)
- Authority
- US
- United States
- Prior art keywords
- data
- memory
- bus
- cache
- buffer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/0815—Cache consistency protocols
- G06F12/0831—Cache consistency protocols using a bus scheme, e.g. with bus monitoring or watching means
- G06F12/0835—Cache consistency protocols using a bus scheme, e.g. with bus monitoring or watching means for main memory peripheral accesses (e.g. I/O or DMA)
Abstract
A memory system including a bus 10, 11, a memory 17, a memory controller 16, a first device 13 having a cache, and a second device 15, all connected to the bus, wherein the memory controller includes a buffer 20 for temporarily storing cache data and write data that the second device writes in the memory. The buffer of the memory controller temporarily stores cached data and the write data to be written on write access to the memory by the second device, which enables maintenance of data coherency while avoiding a write access retry by the second device.
Description
- The present invention generally relates to a memory system and a method for controlling the memory system and, in particular, to a method for improving the efficiency of write access to a memory through a bus while maintaining data coherency.
- In personal computer systems (PCs), a CPU and a memory (such as a DRAM) are interconnected through a bus. Each device acts as a master device (bus master) to access the memory in which data is stored. While such memories (system memories) configured as DRAMs have a large storage capacity, they provide slower access. In order to achieve faster access to frequently used data, a CPU uses a cache memory (hereinafter "cache") implemented with a memory such as an SRAM (static random access memory). Although a cache has a smaller storage capacity than a DRAM system memory, it can provide faster access than the DRAM system memory.
- In a system having a cache, coherency between the cache and the main memory (data consistency) must be maintained. One algorithm for maintaining data coherency is a snooping algorithm.
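As a minimal illustration of the snoop responses a snooping algorithm relies on (state names from the standard MESI protocol; the transition subset, function name, and string encoding are simplifications for exposition, not the patent's mechanism):

```python
# Simplified MESI snoop responses: what a snooping cache must do when it
# observes another bus master accessing an address it holds.
def snoop_response(state: str, remote_write: bool) -> tuple[bool, str]:
    """Return (must_write_back, next_state) for the snooped line."""
    if state == "I":                 # no copy held: nothing to do
        return (False, "I")
    if state == "M":                 # dirty copy: memory is stale, so the
        return (True, "I")           # line must be written back first
    # E or S: the copy is clean; a remote write merely invalidates it
    return (False, "I" if remote_write else state)
```

Only a line in the M (modified) state forces a write-back, which is the case that triggers the retry sequence addressed by this patent.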
- FIG. 1 is a diagram for illustrating a conventional snoop operation. In FIG. 1, a CPU bus 1 and a system bus 2 are interconnected through a bus bridge 3. CPU #0 and CPU #2 are coupled onto CPU bus 1. Each of the two CPUs has a cache. Coupled onto system bus 2 are a device #2, a memory controller, and a memory.
- According to the snooping algorithm, CPU #0, which has a cache, watches (snoops 5) for the address of data access 4 from another device #2 (master device) (FIG. 1(a)). CPU #0 issues a retry request 6 only if the access address matches the address of data in the cache of CPU #0 and the state of that data has been changed (updated) in accordance with a protocol such as the standard MESI protocol (FIG. 1(b)). In response to the retry request 6, the in-progress access from master device #2 is aborted (FIG. 1(b)). Furthermore, the cache line consisting of multiple data at contiguous addresses, including the matching address, is first written back from the cache to the memory (FIGS. 1(c) and 1(d)). Then, master device #2 accesses the memory again to transfer the data, thereby maintaining data coherency (FIGS. 1(e) and 1(f)).
- As can be seen from the operation shown in FIG. 1, if a retry request is issued by a watched (snooped) device, the device that is transferring data must abort the access and then make the access again. The additional operational delay caused by a snoop hit on a write access therefore decreases the bus utilization rate, increases the latency seen by the device, and degrades the performance of the memory system as a whole.
- A conventional technique for increasing the memory access rate in a multiprocessor system using the snooping approach is disclosed in Japanese Published Unexamined Patent Application No. 06-222993, for example, which is incorporated herein by reference. However, that publication does not disclose a technique for reducing the operation delay or alleviating the decrease in bus utilization rate caused by an access retry on a snoop hit.
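The retry sequence of FIG. 1 can be sketched as a toy model (the function, step labels, and data values are illustrative assumptions, not the patent's signal names). Note that the writing master ends up needing three bus tenures: the aborted attempt, the snooper's write-back, and the retried access:

```python
# Toy model of the conventional snoop-and-retry write sequence of FIG. 1.
def conventional_write(memory, cache, addr, data):
    """Return the list of bus transactions needed for one write access."""
    transactions = ["write-attempt"]          # (a) master drives the access
    if addr in cache and cache[addr]["dirty"]:
        transactions.append("retry/abort")    # (b) snoop hit: access aborted
        memory[addr] = cache[addr]["data"]    # (c)(d) cache line written back
        cache[addr]["dirty"] = False
        transactions.append("write-back")
        transactions.append("retried-write")  # (e)(f) master accesses again
    memory[addr] = data                       # the write finally lands
    return transactions

memory = {0x100: b"old"}
cache = {0x100: {"data": b"dirty", "dirty": True}}
print(conventional_write(memory, cache, 0x100, b"new"))
# -> ['write-attempt', 'retry/abort', 'write-back', 'retried-write']
```

Without a snoop hit the same call returns a single `'write-attempt'`, which is the gap in efficiency the invention targets.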
- An object of the present invention is to improve the efficiency of memory access, including write access, while maintaining data coherency.
- Another object of the present invention is to alleviate problems such as operation delay and decrease in bus utilization rate due to operational latency occurring during an access retry when a cache hit (snoop hit) occurs on write access in snoop mode.
- The present invention provides a memory system including: a bus; and a memory, a memory controller, a first device having a cache, and a second device which are connected to the bus; wherein the memory controller includes a buffer for temporarily storing cache data and write data that the second device writes in the memory.
- The present invention can avoid a write access retry by the second device while maintaining data coherency by temporarily storing, in the buffer of the memory controller, the cache data and the write data to be written on write access to a memory by the second device.
- The present invention can avoid write access retries and accordingly can alleviate operational delay and the concomitant decrease in bus utilization rate caused by access retry operations.
- The novel features believed to be characteristic of this invention are set forth in the appended claims. The invention itself, however, as well as other objects and advantages thereof, may be best understood by reference to the following detailed description of an illustrated preferred embodiment to be read in conjunction with the accompanying drawings.
- FIG. 1 illustrates a schematic diagram of a prior art snoop operation for a computer system with distributed memory.
- FIG. 2 depicts a schematic diagram illustrating a sequence of operations of a memory system according to a preferred embodiment.
- FIG. 3 depicts a functional block diagram illustrating a configuration of a memory system according to the preferred embodiment.
- The present invention will be described with reference to the accompanying drawings.
- FIG. 2 is a diagram for illustrating an overview of a method (operation) of the present invention. In FIG. 2, a CPU bus 10 and a system bus 11 are interconnected through a bus bridge 12. Coupled onto the CPU bus 10 are CPU #0 (13) and CPU #2 (14). Each of the two CPUs has a cache. Coupled onto the system bus 11 are a device #2 (15), a memory controller (16), and a memory (17). The memory (17) is a system memory such as a DRAM. The memory controller (16) has a buffer 20 for temporarily storing data. While the configuration in FIG. 2 includes the two buses, the CPU bus 10 and the system bus 11, a configuration in which the devices are coupled onto a single system bus may be used. Furthermore, any number of devices may be connected to a bus, provided that at least two master devices that can occupy the bus are connected to it.
- In the snooping algorithm, CPU #0 (13), which has a cache, monitors (snoops 19) for the address of data access 18 from another master device #2 (15) (FIG. 2(a)). If the access address matches the address of data in the cache of CPU #0 (13) and the state of that data has been changed (updated) in accordance with a protocol such as the standard MESI protocol, CPU #0 (13) issues a retry request. However, master device #2 (15) does not abort its in-progress access. Device #2 (15) writes the write data into the buffer 20 in the memory controller (FIG. 2(b)). The data at the matching address in the cache of CPU #0 is written back into the buffer 20 (FIGS. 2(c) and 2(d)). Then, the write data and the cache data in the buffer 20 are written into the memory (17) as a single piece of contiguous data (FIG. 2(d)).
- In this way, the present invention does not require termination of the bus access associated with a retry request due to a snoop hit on write access. Data coherency is maintained by temporarily storing the cache data in the buffer of the memory controller before it is written back to the memory. This can reduce the number of arbitration and address phases on the system bus 11, as compared with the conventional method shown in FIG. 1. Furthermore, on the memory bus between the memory controller 16 and the memory 17, the number of RAS address transfer periods (which would otherwise be three), the number of CAS address transfer periods (which would otherwise be two), and the number of data transfer periods (which would otherwise be two, when a transfer of successive data is counted as one period) can each be reduced to one. The access time between the start and completion of a write access can thus be reduced by approximately 20 to 30%, depending on the bus architecture and memory speed.
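The saving stated above can be tallied with a back-of-the-envelope sketch (the phase counts come from the paragraph above; the dictionary layout is an illustrative assumption, not measured hardware behavior):

```python
# Memory-bus transfer periods for one snoop-hit write access.
conventional = {"RAS": 3, "CAS": 2, "data": 2}  # FIG. 1: abort + write-back + retry
buffered     = {"RAS": 1, "CAS": 1, "data": 1}  # FIG. 2: one merged burst write

saved = {k: conventional[k] - buffered[k] for k in conventional}
print(saved)        # -> {'RAS': 2, 'CAS': 1, 'data': 1}
print(sum(saved.values()))  # -> 4 fewer transfer periods on the memory bus
```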
- FIG. 3 is a block diagram showing a configuration of the present invention. FIG. 3 contains a retry control circuit 101, an arbitration circuit 102, a memory controller 104, a tag control circuit 106, and a buffer 108. It should be noted that while the tag control circuit 106 and the buffer 108 are contained in the memory controller 104 in practice, they are shown as separate blocks for purposes of illustration. All of these circuits are coupled onto a system bus (11 in FIG. 2).
- The retry control circuit 101 watches for an address retry signal and delivers it to another device. If a retry signal is input due to a snoop hit on a write access, a Retry-Hold signal and its associated Priority signal are asserted without asserting a retry output. If a snoop hit does not occur on a write access, the retry input is passed through as a retry output without change. The arbitration circuit 102 has the function of giving the highest priority to a request from a device in response to a Priority signal from the retry control circuit 101. In the absence of a Priority signal, the arbitration circuit 102 performs normal arbitration. The memory controller 104 provides timing control for the memory. The memory controller 104 may be a conventional DRAM control circuit. After the completion of an access, the memory controller 104 outputs an Access Complete signal.
- The tag control circuit 106 records the location in the buffer 108 of the write data when a snoop hit occurs. The unit of data in the buffer 108 is equal to the size of a cache line (32 bytes, for example). Accordingly, if the size of a cache line is 32 bytes, the position given by the low-order 5 bits of the address (2^5 = 32) is recorded as the data location. The tag control circuit 106 also generates a Write Strobe signal for writing the subsequent data from the cache (cache-out data), in addition to the write data, into the buffer 108. The tag control circuit 106 also generates an Output Select signal for writing the cache-out data into the memory after it has been latched in the buffer 108. The buffer 108 latches data (write data or cache-out data) on the bus in response to a Write Strobe signal from the tag control circuit 106. The buffer 108 provides the latched data to the memory in response to the Output Select signal.
- Description of the signals shown in FIG. 3 is given below.
- Retry-Hold: Indicates that a retry on write access has been accepted. This signal is cleared on the completion of the cache-out access.
- Priority-X: When a retry on a write access is accepted, this signal gives the highest priority to the device X that issued the retry so that the device X accesses next.
- Access Complete: Indicates the end of a memory access cycle.
- Data Strobe: A timing signal for data input from the bus and data output to the memory.
- Data DIR: Indicates the data transfer direction.
- Write Strobe: Specifies the byte to be latched by an address and byte-enable.
- Output Select: Specifies data to be output on a memory write.
- Bus Request: A bus request signal from a device.
- Bus Grant: A bus grant signal to a device.
- Retry from X: A retry signal from device X having a cache.
- Retry to X: A retry signal to device X.
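The Write Strobe derivation performed by the tag control circuit can be sketched as follows (a hedged illustration: the 32-byte line size follows the example in the text, while the function name and the byte-position set representation are assumptions):

```python
# Sketch: which byte lanes of the 32-byte line buffer a write strobe selects,
# from the low-order 5 bits of the bus address plus the byte-enable width.
LINE_SIZE = 32  # bytes per cache line, as in the example above (2**5 = 32)

def write_strobe(bus_address: int, byte_enable_width: int) -> set[int]:
    """Return the byte positions within the line to be latched."""
    offset = bus_address & (LINE_SIZE - 1)   # low-order 5 bits of the address
    return set(range(offset, offset + byte_enable_width))

print(sorted(write_strobe(0x1008, 4)))  # -> [8, 9, 10, 11]
```

Recording this set per access is what later lets the cache-out data fill the rest of the line without touching these positions.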
- Operation of the present invention in the configuration shown in FIG. 3 will be described below.
- Device C (Device #2) provides a Bus Request C signal (110) to the arbitration circuit 102 and receives a Bus Grant C signal (112) in response. It also provides an address (Bus Address 114) onto the bus in order to write data into the memory.
- The cache of each of the devices, such as device A (CPU #0), watches (snoops) for an address on the bus (Bus Address 114).
- If an address in the cache of device A (CPU #0) is hit (snoop hit), device A (CPU #0) activates a Retry from A signal (116) to the retry control circuit 101.
- The retry control circuit 101 receives from the memory controller 104 a Write Access signal (118) indicating that the access from device C (Device #2) is a write access. Even though the retry control circuit 101 receives the Retry from A signal (116), it does not activate the Retry to C signal (120), the signal that would abort the write access from device C.
- Instead, the retry control circuit 101 activates a Retry-Hold signal (122) to the tag control circuit 106. The retry control circuit 101 also sends (activates) to the arbitration circuit 102 a Priority-A signal (124) associated with device A (CPU #0), from which it received the Retry from A signal (116).
- Device C (Device #2) uses the Bus Address signal (114), a Byte Enable signal (126), and a Bus Control signal (128) to direct the memory controller 104 to write the write data provided on Bus Data In (130) into the buffer 108. The write location is specified by a Write Strobe signal (136), which is provided by the tag control circuit 106 in response to a Data Strobe signal and a Data DIR signal (134) from the memory controller 104. In particular, the write location depends on the low-order 5 bits (in the case of a 32-byte cache line) of the Bus Address (138) and on the data width given by the Byte Enable signal (140). The byte position information is recorded in the tag control circuit 106 at this point.
- Because the memory controller 104 has received the Retry-Hold signal (122), it waits for the cache-out data from the cache without writing the data into the memory. On the completion of the write to the buffer 108 by device C (Device #2), the memory controller 104 activates an Access Complete signal (144).
- Device A (CPU #0), which has requested a retry, now requests a cache-out. The arbitration circuit 102, which has received the Priority-A signal (124), gives the highest priority to the Bus Request A (146) from device A (CPU #0). The cache-out request from device A (CPU #0) is therefore accepted immediately after the access by device C (Device #2).
- Device A (CPU #0) caches out the snoop-hit data into the buffer 108. The write location is determined by the Write Strobe signal (136) in such a manner that the data previously written by device C is not overwritten.
- On the completion of the write to the buffer 108 by device A (CPU #0), the memory controller 104 provides the Access Complete signal (144) to the retry control circuit 101.
- The retry control circuit 101 then inactivates the Retry-Hold signal (122) and the Priority-A signal (124).
- The memory controller 104 writes the data (write access data and cache-out data) latched in the buffer 108 into the memory as a sequence of data, on the basis of the address (TAG) information from the tag control circuit 106. This completes the write access operation while maintaining data coherency.
- While the invention has been described with reference to a preferred embodiment or embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment disclosed as the best mode contemplated for carrying out this invention, but that the invention will include all embodiments falling within the scope of the appended claims.
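The merge performed in the buffer 108 described above can be sketched end to end (a toy model: the shortened line size, names, and byte values are illustrative; only the no-overwrite rule and the final single write follow the description):

```python
# Toy model of buffer 108: device C's write data is latched first, then the
# cache-out line fills only the bytes device C did not write, and the merged
# line goes to memory as one sequential write.
LINE_SIZE = 8  # shortened line for readability (the text uses 32 bytes)

buffer = bytearray(LINE_SIZE)
written = set()               # byte positions recorded by the tag control

def latch(data: bytes, offset: int, overwrite: bool) -> None:
    """Latch bytes into the line buffer, honoring the no-overwrite rule."""
    for i, b in enumerate(data):
        pos = offset + i
        if overwrite or pos not in written:
            buffer[pos] = b
            written.add(pos)

latch(b"\xAA\xAA", 2, overwrite=True)            # device C: 2 bytes at offset 2
latch(b"\x11" * LINE_SIZE, 0, overwrite=False)   # cache-out fills the rest
print(buffer.hex())  # -> '1111aaaa11111111' (one merged line for the memory)
```

The single merged line is what the memory controller then writes to memory in one burst, which is where the RAS/CAS phase savings come from.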
Claims (17)
1. A memory system comprising:
a bus; and
a memory, a memory controller, a first device having a cache, and a second device which are connected to the bus,
wherein the memory controller includes a buffer for temporarily storing a cache data and a write data that the second device writes in the memory.
2. The memory system according to claim 1 , further comprising a control circuit for causing the write data to be temporarily stored in the buffer if a cache hit occurs in which the write data matches the cache data.
3. The memory system according to claim 2 , wherein the control circuit comprises a retry control circuit for preventing the second device from performing a retry in response to an access retry request from the first device if the cache hit occurs.
4. The memory system according to claim 3 , further comprising a tag control circuit for storing a write location of the write data written in the buffer and causing the cache data to be temporarily stored in the buffer without overwriting the write data in the write location in the buffer.
5. The memory system according to claim 4 , wherein the cache data to be temporarily stored in the buffer is data which has been updated in the cache.
6. The memory system according to claim 4 , wherein the tag control circuit causes the write data and the cache data stored in the buffer to be stored in the memory as sequential data.
7. The memory system according to claim 1 , wherein the bus includes a CPU local bus and a system bus which are interconnected through a bus bridge, and the first device includes a CPU connected to the CPU local bus.
8. In a memory system comprising a bus, and a memory, a memory controller, a first device having a cache, and a second device which are connected to the bus, a method for controlling the memory system when the second device makes write access to the memory, comprising the steps of:
(a) comparing the address of a write data with the address of data in the cache;
(b) if the address of the write data and the address of the cache data match each other, determining whether or not data stored at the matching address in the cache has been changed;
(c) if the data has been changed, temporarily storing the write data in a buffer without allowing the second device to make a retry access;
(d) temporarily storing the changed data contained in the cache into the buffer without overwriting the write data temporarily stored in the buffer; and
(e) writing the changed data and the write data which are temporarily stored in the buffer into the memory as sequential data.
9. The method according to claim 8 , wherein the comparing step (a) comprises the step of the cache of the first device monitoring whether the second device performs a write access.
10. In a system in which a memory, a memory controller having a buffer, a plurality of bus masters, and a cache memory are interconnected through a bus, a method for maintaining data coherency by using a snooping algorithm, comprising the step of:
if a write access by a bus master results in a snoop hit and the hit data in the cache memory has been updated, storing temporarily the write data of the bus master and the updated data in the cache memory into the buffer and then writing the write data and the updated data in the memory as sequential data, without executing an access retry by the bus master.
11. A distributed memory system, comprising:
a first device having a cache memory;
a first bus coupled to the first device;
a second bus adapted to interface with a plurality of devices;
a bus bridge interconnecting the first bus and the second bus;
a system memory coupled to the second bus;
a second device coupled to the second bus; and
a memory controller coupled to the second bus and including a buffer for temporarily storing a cache data and a write data that the second device writes in the system memory.
12. The distributed memory system according to claim 11 , further comprising a control circuit for causing the write data to be temporarily stored in the buffer if a cache hit occurs in which the write data matches the cache data.
13. The memory system according to claim 12 , wherein the control circuit comprises a retry control circuit for preventing the second device from performing a retry in response to an access retry request from the first device if the cache hit occurs.
14. The memory system according to claim 13 , further comprising a tag control circuit for storing a write location of the write data written in the buffer and causing the cache data to be temporarily stored in the buffer without overwriting the write data in the write location in the buffer.
15. The memory system according to claim 14 , wherein the cache data to be temporarily stored in the buffer is data which has been updated in the cache.
16. The memory system according to claim 14 , wherein the tag control circuit causes the write data and the cache data stored in the buffer to be stored in the memory as sequential data.
17. The memory system according to claim 11 , wherein the first bus comprises a CPU local bus and the second bus comprises a system bus interconnected through the bus bridge, and the first device includes a CPU connected to the CPU local bus.
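To make the bus-efficiency argument of claim 10 concrete, the toy comparison below counts bus events in each scheme. The event names and counts are illustrative assumptions, not measurements from the patent: in a conventional snoop-push path the bus master is forced off the bus and must re-arbitrate for a second tenure, while the buffered path folds the cast-out and the write into one sequential flush with no retry.

```python
# Hypothetical event traces for a write that snoop-hits a modified cache line.

def conventional_snoop_hit():
    """Conventional scheme: the cache requests a retry, so the master pays
    a second bus tenure for the same write."""
    return [
        "master: write attempt",      # snoop hit on a modified line
        "cache: retry request",       # master is forced to back off
        "cache: cast-out to memory",  # snoop push of the dirty line
        "master: retried write",      # second tenure for the same data
    ]

def buffered_snoop_hit():
    """Buffered scheme of claim 10: write data is latched, the cast-out is
    merged behind it, and one sequential flush reaches memory."""
    return [
        "master: write attempt",         # write data latched in the buffer
        "cache: cast-out to buffer",     # merged without overwriting the write
        "controller: sequential flush",  # single ordered burst to memory
    ]
```

Under these assumptions the buffered scheme always needs strictly fewer bus events, which is the advantage the claim recites as "without executing an access retry by the bus master."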
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US11/276,004 US7406571B2 (en) | 2006-02-09 | 2006-02-09 | Memory system and method for controlling the same, and method for maintaining data coherency |
Publications (2)
Publication Number | Publication Date |
---|---|
US20070186051A1 true US20070186051A1 (en) | 2007-08-09 |
US7406571B2 US7406571B2 (en) | 2008-07-29 |
Family
ID=36263486
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/276,004 Expired - Fee Related US7406571B2 (en) | 2006-02-09 | 2006-02-09 | Memory system and method for controlling the same, and method for maintaining data coherency |
Country Status (1)
Country | Link |
---|---|
US (1) | US7406571B2 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7406571B2 (en) * | 2006-02-09 | 2008-07-29 | International Business Machines Corporation | Memory system and method for controlling the same, and method for maintaining data coherency |
US20150143049A1 (en) * | 2013-11-20 | 2015-05-21 | Electronics And Telecommunications Research Institute | Cache control apparatus and method |
US9824017B2 (en) * | 2013-11-20 | 2017-11-21 | Electronics And Telecommunications Research Institute | Cache control apparatus and method |
WO2016209268A1 (en) * | 2015-06-26 | 2016-12-29 | Hewlett Packard Enterprise Development Lp | Self-tune controller |
US10740270B2 (en) | 2015-06-26 | 2020-08-11 | Hewlett Packard Enterprise Development Lp | Self-tune controller |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10599339B2 (en) * | 2018-07-30 | 2020-03-24 | International Business Machines Corporation | Sequential write management in a data storage system |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5355467A (en) * | 1991-06-04 | 1994-10-11 | Intel Corporation | Second level cache controller unit and system |
US5617556A (en) * | 1993-09-20 | 1997-04-01 | International Business Machines Corporation | System and method to prevent the occurrence of a snoop push during read and write operations |
US6216193B1 (en) * | 1998-09-03 | 2001-04-10 | Advanced Micro Devices, Inc. | Apparatus and method in a network interface for recovering from complex PCI bus termination conditions |
US6275885B1 (en) * | 1998-09-30 | 2001-08-14 | Compaq Computer Corp. | System and method for maintaining ownership of a processor bus while sending a programmed number of snoop cycles to the processor cache |
US6732236B2 (en) * | 2000-12-18 | 2004-05-04 | Redback Networks Inc. | Cache retry request queue |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7406571B2 (en) * | 2006-02-09 | 2008-07-29 | International Business Machines Corporation | Memory system and method for controlling the same, and method for maintaining data coherency |
Also Published As
Publication number | Publication date |
---|---|
US7406571B2 (en) | 2008-07-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12061562B2 (en) | Computer memory expansion device and method of operation | |
US6708257B2 (en) | Buffering system bus for external-memory access | |
US5463753A (en) | Method and apparatus for reducing non-snoop window of a cache controller by delaying host bus grant signal to the cache controller | |
US5353415A (en) | Method and apparatus for concurrency of bus operations | |
US6321296B1 (en) | SDRAM L3 cache using speculative loads with command aborts to lower latency | |
US5903911A (en) | Cache-based computer system employing memory control circuit and method for write allocation and data prefetch | |
US7945737B2 (en) | Memory hub with internal cache and/or memory access prediction | |
US5644788A (en) | Burst transfers using an ascending or descending only burst ordering | |
US5664150A (en) | Computer system with a device for selectively blocking writebacks of data from a writeback cache to memory | |
KR100950871B1 (en) | Memory hub and access method having internal row caching | |
KR970010368B1 (en) | Cache line replace apparatus and method | |
US5659709A (en) | Write-back and snoop write-back buffer to prevent deadlock and to enhance performance in an in-order protocol multiprocessing bus | |
US5918069A (en) | System for simultaneously writing back cached data via first bus and transferring cached data to second bus when read request is cached and dirty | |
WO1994008297A9 (en) | Method and apparatus for concurrency of bus operations | |
JP2000250813A (en) | Data managing method for i/o cache memory | |
US6748493B1 (en) | Method and apparatus for managing memory operations in a data processing system using a store buffer | |
US5590310A (en) | Method and structure for data integrity in a multiple level cache system | |
US5974497A (en) | Computer with cache-line buffers for storing prefetched data for a misaligned memory access | |
JP4106664B2 (en) | Memory controller in data processing system | |
US7406571B2 (en) | Memory system and method for controlling the same, and method for maintaining data coherency | |
US5287512A (en) | Computer memory system and method for cleaning data elements | |
US5923857A (en) | Method and apparatus for ordering writeback data transfers on a bus | |
JPH0830546A (en) | Bus controller | |
US7757046B2 (en) | Method and apparatus for optimizing line writes in cache coherent systems | |
US20040123021A1 (en) | Memory control apparatus executing prefetch instruction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:HARADA, NOBUYUKI;REEL/FRAME:017146/0400 Effective date: 20060209 |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
REMI | Maintenance fee reminder mailed | ||
FPAY | Fee payment |
Year of fee payment: 4 |
|
SULP | Surcharge for late payment | ||
REMI | Maintenance fee reminder mailed | ||
LAPS | Lapse for failure to pay maintenance fees | ||
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20160729 |