GB2190220A - Multi-level storage system - Google Patents

Multi-level storage system

Info

Publication number
GB2190220A
GB2190220A
Authority
GB
United Kingdom
Prior art keywords
storage level
item
data item
count value
storage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB08611307A
Other versions
GB8611307D0 (en)
Inventor
Alan William Walton
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujitsu Services Ltd
Original Assignee
Fujitsu Services Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Services Ltd filed Critical Fujitsu Services Ltd
Priority to GB08611307A priority Critical patent/GB2190220A/en
Publication of GB8611307D0 publication Critical patent/GB8611307D0/en
Publication of GB2190220A publication Critical patent/GB2190220A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/52Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • GPHYSICS
    • G06COMPUTING; CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0806Multiuser, multiprocessor or multiprocessing cache systems
    • G06F12/0808Multiuser, multiprocessor or multiprocessing cache systems with cache invalidating means

Abstract

A multi-level memory system is described, consisting of a main store and a plurality of cache stores. Each data item in the cache stores has a count value associated with it, for indicating the number of times the item has been accessed since it was loaded into the cache. Whenever a datum is fetched from main store and written into cache, an associated count value in count store 15 is set to an initial value N. Subsequently, the count value is reduced by one, by subtractor 16, each time that datum is accessed in cache. When the count value has declined to zero, a logic circuit 17 signals to AND gate 18, which no longer provides a VALID signal. The corresponding datum is then fetched from the main store and reloaded into cache. This avoids the need for marking shared data items, such as semaphores, as "non-cached".

Description

SPECIFICATION

Multi-level storage system

This invention relates to a multi-level storage system, comprising a first storage level, and a second storage level of faster access time than the first level. Generally, the first storage level is referred to as the main store, and the second level as the cache or slave store.

In operation of such a system, whenever a store access is made, a check is made to determine whether the required data item is one of the data items already in the second storage level. If it is, the item can be accessed directly from the second level, for reading or writing, without the necessity for accessing the slower first storage level. If not, the first storage level is accessed, and the item may also be copied into the second storage level so that it is available there for future access.

The second storage level may comprise a plurality of cache stores, for use by different units within a processing system. For example, in a multiprocessor system each processor may have its own private cache store positioned between it and a main store, which is shared between the processors.

One problem that arises with such a system is that if a data item is being used as a signal between two units (e.g. between two processors) it is usually necessary to ensure that any accesses to that item are to the main store, rather than to the cache stores, so as to ensure that each unit accesses the most recently updated copy of the signalling item. This usually involves marking the copy of the item in the main store as "non-cached", and ensuring that all accesses to items marked in this way go to the main store. In a virtual store system, this can be done by assigning a bit in the segment or page table description to signify "non-cached". However, when the units are dealing with real store, this solution is not possible.

The object of the present invention is to provide a way of overcoming this problem without the necessity for marking data items as "non-cached".

Summary of the invention

According to the invention, there is provided a storage system comprising a first storage level, and a second storage level having a faster access speed than the first storage level, the second storage level holding copies of data items from the first storage level, wherein each data item in the second storage level has a count value associated with it, indicating the number of times that item has been accessed since it was loaded into the second storage level, and wherein, when any of the count values reaches a predetermined value the associated data item in the second storage level is invalidated and a new copy of that item is fetched from the first storage level.

Brief description of drawings

One storage system in accordance with the invention will now be described with reference to the accompanying drawings.

Figure 1 shows a data processing system including a multi-level storage system, comprising a main store and a plurality of cache stores.

Figure 2 shows one of the cache stores in detail.

Description of an embodiment of the invention

Fig. 1 shows a data processing system comprising a main processing unit 1 and a plurality of input/output (I/O) processing units 2, sharing a main store 3. The main processing unit 1 has a cache store 4 positioned between it and the main store 3, and similarly each of the I/O processing units 2 has its own cache store 5 between it and the main store. The main processing unit and its cache store are positioned physically adjacent to the main store, while the I/O processing units and their cache stores may be at remote locations.

The main store 3 may, for example, comprise a 4 megabyte dynamic random access memory (RAM) with an access time of 200 nanoseconds. The cache stores 4,5 may be static RAMs with an access time of 50 nanoseconds. The cache store 4 may typically hold 64 kilobytes, while each of the cache stores 5 may hold one or two kilobytes of data.

The purpose of the cache stores 4,5 is first to increase the speed of access to the data held in the main store 3 and, secondly, to reduce the number of competing access requests to the main store.

Whenever the main processing unit or one of the I/O processing units requires access to a data item, it sends the address of the desired item to its associated cache store 4 or 5. If the item is currently resident in the cache, the cache returns a VALID signal to the processing unit. The processing unit can then access the data item, for reading or writing as required, without reference to the main store.

If the required data item is not present in the cache store, a main store access is performed, to fetch a copy of the item into the cache, displacing one of the data items already there. Operation then proceeds as before.

Whenever one of the processing units updates a data item in its associated cache store 4 or 5, the updated value of that item is also sent to the main store, so as to update that item in the main store as well.
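This write-through behaviour can be modelled with a minimal Python sketch; the function name and the dictionary representation of the stores are illustrative, not taken from the patent:

```python
def write_through(cache: dict, main_store: dict, address, value):
    """Write-through update: refresh the cached copy if one is
    present, and always forward the new value to the main store."""
    if address in cache:
        cache[address] = value   # update the copy in the cache store
    main_store[address] = value  # update the item in the main store too
```

An item that is not resident in the cache is simply written to the main store; a resident item is updated in both places, so the main store always holds the most recent value.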

The cache store 4 contains logic for detecting updates from any of the cache stores 5 relating to data items, copies of which are held in the cache store 4. When this is detected, the copy of the data item in the cache store is invalidated, since it is no longer the most up-to-date version of that item.

As described so far, the system shown in Fig. 1 is conventional. In particular, the mechanisms for fetching data items from the main store into the cache stores, and for invalidating the items in the cache store 4, are conventional and so will not be described in any further detail.

In operation, the processing units 1,2 communicate with each other by way of shared data items held in the storage system. For example, these shared data items may be signalling items for performing semaphore operations to signal events from one processing unit to another. In the past, it has been necessary to mark such shared data items as being "non-cached", meaning that they should be accessed from the main store, rather than from the cache stores. However, in the present system, the shared items are not marked in this way, and the processing units are permitted to access these items from the cache stores.

Referring now to Fig. 2, this shows one of the cache stores 5 associated with an I/O processor 2.

The cache store comprises a random access memory (RAM) having three sections 10,11,15, referred to respectively as the data store 10, the tag store 11 and the count store 15. The RAM consists of a plurality of individually addressable lines, each of which holds a data item (from the data store 10), a tag value (from the tag store 11) and a count value (from the count store 15).

The cache store receives an address from its associated I/O processing unit, and this is loaded into an address register 14. The address consists of two parts referred to as the TAG and the INDEX. The INDEX is decoded by a decoder circuit 12, so as to select one line of the RAM, containing a data item and its associated tag and count values.

The tag value from the selected line is read out of the tag store 11, and is compared with the TAG from the address register 14, by means of a comparator 13. At the same time, the count value from the selected line is read out of the count store 15 and is applied to a logic circuit 17 which detects when the count value is non-zero. The outputs of the comparator 13 and logic circuit 17 are combined in an AND gate 18 to produce the VALID signal, which indicates that the data item in the selected line of the RAM is valid and therefore may be accessed.

It can be seen that VALID is true only if the TAG from the address register 14 matches the tag value of the data item and at the same time the count value of the data item is non-zero.
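As a minimal sketch, the VALID computation of Fig. 2 can be modelled as follows; the function and parameter names are chosen here for illustration and do not appear in the patent:

```python
def valid_signal(address_tag: int, line_tag: int, count: int) -> bool:
    """Model of comparator 13, logic circuit 17 and AND gate 18:
    the selected line is valid only when the stored tag matches the
    TAG from the address register AND the count value is non-zero."""
    tag_match = (address_tag == line_tag)  # comparator 13
    count_nonzero = (count != 0)           # logic circuit 17
    return tag_match and count_nonzero     # AND gate 18
```

A tag mismatch (ordinary cache miss) and an exhausted count both make VALID false, and both are handled by the same refetch mechanism.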

Whenever a data item is fetched from the main store and written into the cache store, its associated count value is set to an initial value N.

Each time a data item is accessed in the cache store, its associated count value in the count store 15 is decremented by one, by means of a subtractor circuit 16, and is then written back into the same location.

It can therefore be seen that a data item in the cache store can be accessed up to N times in the normal manner. However, at the next ((N+1)th) access, the count value will be zero, and so the VALID signal will be false.

Thus, the data item will appear not to be present in the cache, and so the normal mechanism for fetching that item from the main store and loading it into the cache store will be activated.
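The complete fetch, decrement and refetch cycle described above can be sketched as a small Python model; this is an illustration of the scheme, not the patent's hardware, and the class name, parameter names and keying scheme are invented for the example:

```python
class CountingCache:
    """Illustrative model of the cache of Fig. 2: each line holds a
    tag, a data item and a count value (tag store 11, data store 10
    and count store 15)."""

    def __init__(self, main_store: dict, initial_count: int = 15):
        self.main = main_store   # models main store 3
        self.n = initial_count   # initial count value N
        self.lines = {}          # INDEX -> (tag, data, count)

    def read(self, index: int, tag: int):
        line = self.lines.get(index)
        if line is not None and line[0] == tag and line[2] != 0:
            # Hit: decrement the count (subtractor 16) and write it
            # back into the same location of the count store.
            t, d, c = line
            self.lines[index] = (t, d, c - 1)
            return d
        # Miss, or count declined to zero: fetch a fresh copy from
        # the main store and reset the count to N.
        data = self.main[(tag, index)]
        self.lines[index] = (tag, data, self.n)
        return data
```

With `initial_count=2`, for example, an item loaded into the cache can be read twice before the third read forces a fresh copy from the main store, so an update made in the meantime by another unit is eventually observed.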

For example, suppose that in operation the main processing unit 1 requires to signal to one of the I/O processing units 2. To do so, it updates an appropriate signalling item (semaphore) in its cache store 4, which in turn causes the item to be updated in the main store 3. However, the new value of the signalling item will not yet be communicated to the I/O cache stores 5, since there is no mechanism for doing this. Thus, each I/O processor will continue to access the old value of the signalling item in its own cache store 5, waiting for it to change. Eventually, after the I/O processor has accessed the item N times, the count value of that item will be zero and hence the item will be invalidated. Therefore, a main store access will now be performed, fetching the updated value of the signalling item into the cache store 5.

In summary, it can be seen that signals are communicated between the processing units without the necessity for marking the signalling items as being "non-cached".

The initial count value N must be chosen to be sufficiently small so as not to lead to an excessive loss of performance as a result of the fact that the I/O processor has to access the signalling item N times before it detects any change in the item. On the other hand, the initial count value N must not be too small, since this may lead to a loss of performance as a result of the fact that, if a non-shared data item is accessed more than N times in the cache store, it will be invalidated and hence will have to be re-fetched from the main store. The choice of the value N will depend on the particular application; a typical value of N might be 15.
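As illustrative arithmetic only, using the example access times quoted earlier (50 ns cache, 200 ns main store) and the typical value N = 15, the two sides of this trade-off can be quantified:

```python
# Illustrative arithmetic only; the timings are the example figures
# quoted earlier in the description, and N = 15 is the typical value.
CACHE_NS, MAIN_NS, N = 50, 200, 15

# A heavily used non-shared item incurs one extra main-store fetch
# per N cache accesses, so its average access time rises to roughly
# 63 ns instead of 50 ns:
avg_access_ns = CACHE_NS + MAIN_NS / N

# A waiting I/O processor reads a stale semaphore at most N times
# before the count declines to zero and a fresh copy is fetched:
worst_case_stale_reads = N
```

A larger N lowers the amortised refetch overhead on non-shared items but lengthens the worst-case delay before a change in a shared item is observed; a smaller N does the reverse.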

Claims (5)

1. A storage system comprising a first storage level, and a second storage level having a faster access speed than the first storage level, the second storage level holding copies of data items from the first storage level, wherein each data item in the second storage level has a count value associated with it, indicating the number of times that item has been accessed since it was loaded into the second storage level, and wherein, when any of the count values reaches a predetermined value the associated data item in the second storage level is invalidated and a new copy of that item is fetched from the first storage level.
2. A system according to Claim 1 wherein the first storage level comprises a random access memory and wherein the second storage level comprises a plurality of separate random access memories which can be independently accessed.
3. A system according to either preceding claim wherein the count value associated with each data item comprises a plurality of bits held in the same storage location of the second storage level as the data item.
4. A system according to any preceding claim wherein each count value is set to a predetermined value when the associated data item is loaded into the second storage level, and is then decremented whenever the item is accessed in the second storage level.
5. A data processing system comprising a plurality of processing units, and a main memory shared by all the processing units, each unit having a separate cache store, each cache store comprising a plurality of storage locations each of which holds a copy of a data item from the main store and also holds a count value indicating the number of times the data item in that location has been accessed since it was loaded into the cache store, each cache store having means operative whenever one of its storage locations is accessed, to update the count value in that location, and means operative whenever the count value in any of the locations reaches a predetermined value to invalidate the data item in that location and to fetch a new copy of that item from the main memory.
5. A system according to Claim 4 wherein an item in the second storage level is invalidated when its count value reaches zero.
6. A data processing system comprising a plurality of processing units and a main memory shared by all the processing units, each unit having a separate cache store for holding copies of data items from the main store, wherein each data item in the cache stores has a count value associated with it indicating the number of times that item has been accessed since it was loaded into the cache store, and wherein, when any of the count values reaches a predetermined value, the associated data item in the cache store is invalidated and a new copy of that item is fetched from the main memory.
7. A cache store for use in a data processing system, the cache store comprising a memory having a plurality of locations for holding data items, each data item having a count value associated with it, the count value being set to a predetermined initial value whenever the associated data item is loaded into the memory, and being modified whenever the data item is accessed in the memory so as to keep a record of the number of times that item has been accessed since it was loaded into the memory, and wherein, when any of the count values reaches a predetermined value, the associated data item is invalidated.
8. A multi-level storage system substantially as hereinbefore described with reference to the accompanying drawings.
9. A data processing system substantially as hereinbefore described with reference to the accompanying drawings.
10. A cache store substantially as hereinbefore described with reference to Fig. 2 of the accompanying drawings.
Amendments to the claims have been filed, and have the following effect: Claims 1, 3, 6 and 7 above have been deleted or textually amended.
New or textually amended claims have been filed as follows: Claims 4, 5, 8, 9 and 10 above have been re-numbered as 3, 4, 6, 7 and 8 and their appendancies corrected.
1. A storage system comprising a first storage level and a second storage level having a faster access speed than the first storage level, the second storage level comprising a plurality of storage locations each of which holds a copy of a data item from the first storage level and also holds a count value indicating the number of times the data item in that location has been accessed since it was loaded into the second storage level, the second storage level having means operative whenever one of the storage locations is accessed, to update the count value in that location, and means operative whenever the count value in any of the storage locations reaches a predetermined value, to invalidate the data item in that location and to fetch a new copy of that item from the first storage level.
GB08611307A 1986-05-09 1986-05-09 Multi-level storage system Withdrawn GB2190220A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB08611307A GB2190220A (en) 1986-05-09 1986-05-09 Multi-level storage system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB08611307A GB2190220A (en) 1986-05-09 1986-05-09 Multi-level storage system

Publications (2)

Publication Number Publication Date
GB8611307D0 GB8611307D0 (en) 1986-06-18
GB2190220A true GB2190220A (en) 1987-11-11

Family

ID=10597565

Family Applications (1)

Application Number Title Priority Date Filing Date
GB08611307A Withdrawn GB2190220A (en) 1986-05-09 1986-05-09 Multi-level storage system

Country Status (1)

Country Link
GB (1) GB2190220A (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB1488980A (en) * 1974-04-01 1977-10-19 Xerox Corp Memory and buffer arrangement for digital computers

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0507571A2 (en) * 1991-04-05 1992-10-07 Fujitsu Limited Receiving buffer control system
US5765187A (en) * 1991-04-05 1998-06-09 Fujitsu Limited Control system for a ring buffer which prevents overrunning and underrunning
EP0507571B1 (en) * 1991-04-05 1998-09-23 Fujitsu Limited Receiving buffer control system

Also Published As

Publication number Publication date
GB8611307D0 (en) 1986-06-18

Similar Documents

Publication Publication Date Title
US4394731A (en) Cache storage line shareability control for a multiprocessor system
US5257361A (en) Method and apparatus for controlling one or more hierarchical memories using a virtual storage scheme and physical to virtual address translation
US5265232A (en) Coherence control by data invalidation in selected processor caches without broadcasting to processor caches not having the data
US8909871B2 (en) Data processing system and method for reducing cache pollution by write stream memory access patterns
US6282617B1 (en) Multiple variable cache replacement policy
US3764996A (en) Storage control and address translation
US5454093A (en) Buffer bypass for quick data access
US4959777A (en) Write-shared cache circuit for multiprocessor system
JP3651857B2 (en) Cache coherent DMA write method
US4928239A (en) Cache memory with variable fetch and replacement schemes
JP3651857B6 (en) Cache coherent DMA write method
EP0095598B1 (en) Multiprocessor with independent direct cache-to-cache data transfers
US5822763A (en) Cache coherence protocol for reducing the effects of false sharing in non-bus-based shared-memory multiprocessors
US5247649A (en) Multi-processor system having a multi-port cache memory
EP0596636B1 (en) Cache tag memory
US5802582A (en) Explicit coherence using split-phase controls
US6757784B2 (en) Hiding refresh of memory and refresh-hidden memory
US5410669A (en) Data processor having a cache memory capable of being used as a linear ram bank
JP2684196B2 (en) Workstation
JP2825550B2 (en) Multiple virtual space address control method and computer system
EP0118828B1 (en) Instruction fetch apparatus and method of operating same
US5226144A (en) Cache controller for maintaining cache coherency in a multiprocessor system including multiple data coherency procedures
US5689679A (en) Memory system and method for selective multi-level caching using a cache level code
US4445174A (en) Multiprocessing system including a shared cache
US3735360A (en) High speed buffer operation in a multi-processing system

Legal Events

Date Code Title Description
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)