GB2190220A - Multi-level storage system - Google Patents
- Publication number
- GB2190220A (application GB08611307A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- storage level
- item
- data item
- count value
- storage
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/52—Program synchronisation; Mutual exclusion, e.g. by means of semaphores
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/0808—Multiuser, multiprocessor or multiprocessing cache systems with cache invalidating means
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Software Systems (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
A multi-level memory system is described, consisting of a main store and a plurality of cache stores. Each data item in the cache stores has a count value associated with it, indicating the number of times the item has been accessed since it was loaded into the cache. Whenever a datum is fetched from the main store and written into a cache, an associated count value in count store 15 is set to an initial value N. Subsequently, the count value is reduced by one, by subtractor 16, each time that datum is accessed in the cache. When the count value has declined to zero, a logic circuit 17 signals to AND gate 18, which no longer provides a VALID signal. The corresponding datum is then fetched from the main store and reloaded into the cache. This avoids the need for marking shared data items, such as semaphores, as "non-cached".
Description
SPECIFICATION
Multi-level storage system
This invention relates to a multi-level storage system, comprising a first storage level, and a second storage level of faster access time than the first level. Generally, the first storage level is referred to as the main store, and the second level as the cache or slave store.
In operation of such a system, whenever a store access is made, a check is made to determine whether the required data item is one of the data items already in the second storage level. If it is, the item can be accessed directly from the second level, for reading or writing, without the necessity for accessing the slower first storage level. If not, the first storage level is accessed, and the item may also be copied into the second storage level so that it is available there for future access.
The second storage level may comprise a plurality of cache stores, for use by different units within a processing system. For example, in a multiprocessor system each processor may have its own private cache store positioned between it and a main store, which is shared between the processors.
One problem that arises with such a system is that if a data item is being used as a signal between two units (e.g. between two processors) it is usually necessary to ensure that any accesses to that item are to the main store, rather than to the cache stores, so as to ensure that each unit accesses the most recently updated copy of the signalling item. This usually involves marking the copy of the item in the main store as "non-cached", and ensuring that all accesses to items marked in this way go to the main store. In a virtual store system, this can be done by assigning a bit in the segment or page table descriptor to signify "non-cached". However, when the units are dealing with real store, this solution is not possible.
The object of the present invention is to provide a way of overcoming this problem without the necessity for marking data items as "non-cached".
Summary of the invention
According to the invention, there is provided a storage system comprising a first storage level, and a second storage level having a faster access speed than the first storage level, the second storage level holding copies of data items from the first storage level, wherein each data item in the second storage level has a count value associated with it, indicating the number of times that item has been accessed since it was loaded into the second storage level, and wherein, when any of the count values reaches a predetermined value the associated data item in the second storage level is invalidated and a new copy of that item is fetched from the first storage level.
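The claimed arrangement can be sketched in a few lines. Python is used purely for illustration; the names `CacheLine` and `INITIAL_COUNT`, and the decrement-to-zero convention taken from the embodiment, are our own assumptions, not claim wording:

```python
from dataclasses import dataclass

INITIAL_COUNT = 3  # the predetermined initial value N (illustrative)

@dataclass
class CacheLine:
    """One second-level storage location: a data item plus its count value."""
    data: int
    count: int = INITIAL_COUNT  # set to N when the item is loaded

    def is_valid(self) -> bool:
        # The item is treated as invalidated once its count reaches zero.
        return self.count > 0

    def access(self) -> int:
        # Each access reduces the count value by one.
        self.count -= 1
        return self.data

line = CacheLine(data=42)
accesses = 0
while line.is_valid():
    line.access()
    accesses += 1
# After N accesses the line invalidates itself, forcing a re-fetch
# from the first storage level on the next reference.
```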
Brief description of drawings
One storage system in accordance with the invention will now be described with reference to the accompanying drawings.
Figure 1 shows a data processing system including a multi-level storage system, comprising a main store and a plurality of cache stores.
Figure 2 shows one of the cache stores in detail.
Description of an embodiment of the invention
Fig. 1 shows a data processing system comprising a main processing unit 1 and a plurality of input/output (I/O) processing units 2, sharing a main store 3. The main processing unit 1 has a cache store 4 positioned between it and the main store 3, and similarly each of the I/O processing units 2 has its own cache store 5 between it and the main store. The main processing unit and its cache store are positioned physically adjacent to the main store, while the I/O processing units and their cache stores may be at remote locations.
The main store 3 may, for example, comprise a 4 megabyte dynamic random access memory (RAM) with an access time of 200 nanoseconds. The cache stores 4,5 may be static RAMs with an access time of 50 nanoseconds. The cache store 4 may typically hold 64 kilobytes, while each of the cache stores 5 may hold one or two kilobytes of data.
The purpose of the cache stores 4,5 is first to increase the speed of access to the data held in the main store 3 and, secondly, to reduce the number of competing access requests to the main store.
Whenever the main processing unit or one of the I/O processing units requires access to a data item, it sends the address of the desired item to its associated cache store 4 or 5. If the item is currently resident in the cache, the cache returns a VALID signal to the processing unit. The processing unit can then access the data item, for reading or writing as required, without reference to the main store.
If the required data item is not present in the cache store, a main store access is performed, to fetch a copy of the item into the cache, displacing one of the data items already there. Operation then proceeds as before.
Whenever one of the processing units updates a data item in its associated cache store 4 or 5, the updated value of that item is also sent to the main store, so as to update that item in the main store as well.
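A minimal sketch of this write-through behaviour (Python for illustration; the dictionary-based stores and the `write` helper are hypothetical):

```python
def write(cache, main_store, addr, value):
    """Write-through: an update made in a cache store 4 or 5 is also
    propagated to the main store 3, so the main-store copy of the
    item is always current."""
    cache[addr] = value       # update the cached copy...
    main_store[addr] = value  # ...and the main-store copy as well

cache, main = {}, {0x10: 0}
write(cache, main, 0x10, 7)
```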
The cache store 4 contains logic for detecting updates from any of the cache stores 5 relating to data items, copies of which are held in the cache store 4. When such an update is detected, the copy of the data item in the cache store 4 is invalidated, since it is no longer the most up-to-date version of that item.
As described so far, the system shown in Fig. 1 is conventional. In particular, the mechanisms for fetching data items from the main store into the cache stores, and for invalidating the items in the cache store 4, are conventional and so will not be described in any further detail.
In operation, the processing units 1,2 communicate with each other by way of shared data items held in the storage system. For example, these shared data items may be signalling items for performing semaphore operations to signal events from one processing unit to another. In the past, it has been necessary to mark such shared data items as being "non-cached", meaning that they should be accessed from the main store, rather than from the cache stores. However, in the present system, the shared items are not marked in this way, and the processing units are permitted to access these items from the cache stores.
Referring now to Fig. 2, this shows one of the cache stores 5 associated with an I/O processor 2.
The cache store comprises a random access memory (RAM) having three sections 10,11,15, referred to respectively as the data store 10, the tag store 11 and the count store 15. The RAM consists of a plurality of individually addressable lines, each of which holds a data item (from the data store 10), a tag value (from the tag store 11) and a count value (from the count store 15).
The cache store receives an address from its associated I/O processing unit, and this is loaded into an address register 14. The address consists of two parts, referred to as the TAG and the INDEX. The INDEX is decoded by a decoder circuit 12, so as to select one line of the RAM, containing a data item and its associated tag and count values.
The tag value from the selected line is read out of the tag store 11, and is compared with the TAG from the address register 14, by means of a comparator 13. At the same time, the count value from the selected line is read out of the count store 15 and is applied to a logic circuit 17 which detects when the count value is non-zero. The outputs of the comparator 13 and logic circuit 17 are combined in an AND gate 18 to produce the VALID signal, which indicates that the data item in the selected line of the RAM is valid and therefore may be accessed.
It can be seen that VALID is true only if the TAG from the address register 14 matches the tag value of the data item and, at the same time, the count value of the data item is non-zero.
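The Fig. 2 hit logic can be modelled as follows. This is an illustrative sketch: the 4-bit INDEX width and the tuple layout of a RAM line are our own assumptions, not taken from the specification:

```python
# Sketch of the Fig. 2 hit logic (illustrative bit widths: a 4-bit
# INDEX selecting one of 16 RAM lines; the remaining address bits
# form the TAG).

INDEX_BITS = 4
NUM_LINES = 1 << INDEX_BITS

# Each RAM line holds (tag value, data item, count value); None = empty.
ram = [None] * NUM_LINES

def split_address(addr: int) -> tuple:
    """Divide the address into TAG and INDEX, as in address register 14."""
    index = addr & (NUM_LINES - 1)  # decoder 12 selects one line
    tag = addr >> INDEX_BITS
    return tag, index

def valid(addr: int) -> bool:
    """VALID is true only if the stored tag matches the address TAG
    (comparator 13) AND the count value is non-zero (logic circuit 17),
    the two conditions being combined by AND gate 18."""
    tag, index = split_address(addr)
    line = ram[index]
    return line is not None and line[0] == tag and line[2] != 0

ram[5] = (7, 123, 2)                  # line 5: item 123, tag 7, count 2
hit = valid((7 << INDEX_BITS) | 5)    # address with TAG=7, INDEX=5
miss = valid((8 << INDEX_BITS) | 5)   # same line, different TAG
```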
Whenever a data item is fetched from the main store and written into the cache store, its associated count value is set to an initial value N.
Each time a data item is accessed in the cache store, its associated count value in the count store 15 is decremented by one, by means of a subtractor circuit 16, and is then written back into the same location.
It can therefore be seen that a data item in the cache store can be accessed up to N times in the normal manner. However, at the next ((N+1)th) access, the count value will be zero, and so the VALID signal will be false.
Thus, the data item will appear not to be present in the cache, and so the normal mechanism for fetching that item from the main store and loading it into the cache store will be activated.
For example, suppose that in operation the main processing unit 1 requires to signal to one of the I/O processing units 2. To do so, it updates an appropriate signalling item (semaphore) in its cache store 4, which in turn causes the item to be updated in the main store 3. However, the new value of the signalling item will not yet be communicated to the I/O cache stores 5, since there is no mechanism for doing this. Thus, each I/O processor will continue to access the old value of the signalling item in its own cache store 5, waiting for it to change. Eventually, after the I/O processor has accessed the item N times, the count value of that item will be zero and hence the item will be invalidated. Therefore, a main store access will now be performed, fetching the updated value of the signalling item into the cache store 5.
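The signalling sequence above can be simulated as follows. This is an illustrative sketch: `PollingCache`, the one-entry-per-address dictionary, and the value N = 3 are assumptions chosen for brevity:

```python
N = 3  # initial count value (illustrative; the text suggests e.g. 15)

main_store = {0x10: 0}  # shared signalling item, initially 0

class PollingCache:
    """Simplified I/O-processor cache 5 with count-based validity."""
    def __init__(self, store):
        self.store = store
        self.entry = {}  # addr -> [data, count]

    def read(self, addr):
        e = self.entry.get(addr)
        if e is None or e[1] == 0:
            # Invalid (absent, or count exhausted): re-fetch from the
            # main store and reset the count value to N.
            e = [self.store[addr], N]
            self.entry[addr] = e
        e[1] -= 1  # subtractor 16 decrements on every access
        return e[0]

cache = PollingCache(main_store)
history = [cache.read(0x10) for _ in range(2)]   # polls the old value
main_store[0x10] = 1                             # semaphore is raised
history += [cache.read(0x10) for _ in range(4)]  # stale for at most N reads
```

In this run the update becomes visible after a single stale read, because part of the count had already been used up; in the worst case the old value is returned N times before the forced re-fetch.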
In summary, it can be seen that signals are communicated between the processing units without the necessity for marking the signalling items as being "non-cached".
The initial count value N must be chosen to be sufficiently small so as not to lead to an excessive loss of performance as a result of the fact that the I/O processor has to access the signalling item N times before it detects any change in the item. On the other hand, the initial count value N must not be too small, since this may lead to a loss of performance as a result of the fact that, if a non-shared data item is accessed more than N times in the cache store, it will be invalidated and hence will have to be re-fetched from the main store. The choice of the value N will depend on the particular application; a typical value of N might be 15.
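A back-of-envelope view of this trade-off, using the access times given for the embodiment (the cost model itself is our own illustration, not part of the specification):

```python
import math

MAIN_NS, CACHE_NS = 200, 50  # access times from the embodiment

def worst_stale_polls(n: int) -> int:
    # A waiting processor may see the old value for at most n accesses
    # before the count reaches zero and forces a re-fetch.
    return n

def extra_fetches(k: int, n: int) -> int:
    # A non-shared item accessed k times needs ceil(k/n) main-store
    # fetches instead of the single initial one, i.e. ceil(k/n) - 1
    # needless extra fetches.
    return max(0, math.ceil(k / n) - 1)

# With the typical value N = 15, a signal is seen after at most 15
# polls, and a non-shared item accessed 150 times costs 9 extra
# fetches, i.e. 9 * (200 - 50) = 1350 ns of added latency.
extra_ns = extra_fetches(150, 15) * (MAIN_NS - CACHE_NS)
```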
Claims (5)
1. A storage system comprising a first storage level, and a second storage level having a faster access speed than the first storage level, the second storage level holding copies of data items from the first storage level, wherein each data item in the second storage level has a count value associated with it, indicating the number of times that item has been accessed since it was loaded into the second storage level, and wherein, when any of the count values reaches a predetermined value the associated data item in the second storage level is invalidated and a new copy of that item is fetched from the first storage level.
2. A system according to Claim 1 wherein the first storage level comprises a random access memory and wherein the second storage level comprises a plurality of separate random access memories which can be independently accessed.
3. A system according to either preceding claim wherein the count value associated with each data item comprises a plurality of bits held in the same storage location of the second storage level as the data item.
4. A system according to any preceding claim wherein each count value is set to a predetermined value when the associated data item is loaded into the second storage level, and is then decremented whenever the item is accessed in the second storage level.
5. A data processing system comprising a plurality of processing units, and a main memory shared by all the processing units, each unit having a separate cache store, each cache store comprising a plurality of storage locations each of which holds a copy of a data item from the main store and also holds a count value indicating the number of times the data item in that location has been accessed since it was loaded into the cache store, each cache store having means operative whenever one of its storage locations is accessed, to update the count value in that location, and means operative whenever the count value in any of the locations reaches a predetermined value to invalidate the data item in that location and to fetch a new copy of that item from the main memory.
5. A system according to Claim 4 wherein an item in the second storage level is invalidated when its count value reaches zero.
6. A data processing system comprising a plurality of processing units and a main memory shared by all the processing units, each unit having a separate cache store for holding copies of data items from the main store, wherein each data item in the cache stores has a count value associated with it indicating the number of times that item has been accessed since it was loaded into the cache store, and wherein, when any of the count values reaches a predetermined value, the associated data item in the cache store is invalidated and a new copy of that item is fetched from the main memory.
7. A cache store for use in a data processing system, the cache store comprising a memory having a plurality of locations for holding data items, each data item having a count value associated with it, the count value being set to a predetermined initial value whenever the associated data item is loaded into the memory, and being modified whenever the data item is accessed in the memory so as to keep a record of the number of times that item has been accessed since it was loaded into the memory, and wherein, when any of the count values reaches a predetermined value, the associated data item is invalidated.
8. A multi-level storage system substantially as hereinbefore described with reference to the accompanying drawings.
9. A data processing system substantially as hereinbefore described with reference to the accompanying drawings.
10. A cache store substantially as hereinbefore described with reference to Fig. 2 of the accompanying drawings.
Amendments to the claims have been filed, and have the following effect:
Claims 1, 3, 6 and 7 above have been deleted or textually amended.
New or textually amended claims have been filed as follows:
Claims 4, 5, 8, 9 and 10 above have been re-numbered as 3, 4, 6, 7 and 8 and their appendancies corrected.
1. A storage system comprising a first storage level and a second storage level having a faster access speed than the first storage level, the second storage level comprising a plurality of storage locations each of which holds a copy of a data item from the first storage level and also holds a count value indicating the number of times the data item in that location has been accessed since it was loaded into the second storage level, the second storage level having means operative whenever one of the storage locations is accessed, to update the count value in that location, and means operative whenever the count value in any of the storage locations reaches a predetermined value, to invalidate the data item in that location and to fetch a new copy of that item from the first storage level.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB08611307A GB2190220A (en) | 1986-05-09 | 1986-05-09 | Multi-level storage system |
Publications (2)
Publication Number | Publication Date |
---|---|
GB8611307D0 GB8611307D0 (en) | 1986-06-18 |
GB2190220A true GB2190220A (en) | 1987-11-11 |
Family
ID=10597565
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB08611307A Withdrawn GB2190220A (en) | 1986-05-09 | 1986-05-09 | Multi-level storage system |
Country Status (1)
Country | Link |
---|---|
GB (1) | GB2190220A (en) |
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB1488980A (en) * | 1974-04-01 | 1977-10-19 | Xerox Corp | Memory and buffer arrangement for digital computers |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0507571A2 (en) * | 1991-04-05 | 1992-10-07 | Fujitsu Limited | Receiving buffer control system |
US5765187A (en) * | 1991-04-05 | 1998-06-09 | Fujitsu Limited | Control system for a ring buffer which prevents overrunning and underrunning |
EP0507571B1 (en) * | 1991-04-05 | 1998-09-23 | Fujitsu Limited | Receiving buffer control system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |