WO1993021580A1 - Method for managing disk track image slots in cache memory - Google Patents

Method for managing disk track image slots in cache memory

Info

Publication number
WO1993021580A1
Authority
WO
WIPO (PCT)
Prior art keywords
track
queue
available
referenced
entries
Prior art date
Application number
PCT/US1992/003336
Other languages
French (fr)
Inventor
Leonard Joseph Kurzawa
Gregory William Peterson
Original Assignee
Storage Technology Corporation
Priority date
Filing date
Publication date
Application filed by Storage Technology Corporation filed Critical Storage Technology Corporation
Priority to AU22928/92A (AU2292892A)
Priority to PCT/US1992/003336 (WO1993021580A1)
Publication of WO1993021580A1

Links

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00: Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02: Addressing or allocation; Relocation
    • G06F 12/08: Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0866: Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache


Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

A method is described for efficiently utilizing slots for storing disk track images in a disk drive system (10) having an associated cache memory subsystem (20). In addition to the customary referenced track queue (40) used to store pointers to disk tracks referenced by a computer system (30), a second queue (50) is employed which provides readily available track slots for newly referenced tracks. A designated number of least-recently-used entries (42') are moved from the referenced track queue (40) to the second queue in one step. A designated number of entries are subsequently copied from the bottom of the second queue (50) to a destage queue, where they are queued to be written back to the disk drive system (10). By the time an entry has migrated to the bottom of the second queue (50), the corresponding track image will generally have been destaged, and the associated track slot is then made available for reuse. A block of available track slots is thus created, and repetitive searching of the referenced track queue (40) is avoided.

Description

METHOD FOR MANAGING DISK TRACK IMAGE SLOTS IN CACHE MEMORY
FIELD OF THE INVENTION
This invention relates generally to computer disk memory systems with cache memory, and more particularly, to methods for efficiently utilizing slots for storing disk track images in a cache memory subsystem associated with a disk memory system.
PROBLEM
Disk memory systems having disk drives with associated cache memory subsystems utilize areas of cache memory for storing or buffering images of disk tracks containing data required by a host computer system. Since there is generally much less memory available in cache memory than in the associated disk drive, these cache memory areas, or "track image slots," must be efficiently managed in order to maximize utilization of cache memory and to optimize throughput of data written to and read from the disk memory system. Many prior art systems use a queue of referenced track image slots (the "referenced track queue") having entries containing pointers to the most-recently-referenced or used ("MRU") disk track images. Because the most-recently-referenced disk track images are put onto the top of the referenced track queue, entries containing pointers to the least-recently-referenced or used ("LRU") track images migrate toward the bottom of the queue. As a disk track is referenced by the computer system, the entry in the queue associated with a referenced track image in cache memory is placed on the top of the referenced track queue. Eventually, the least-recently-referenced entries in the queue will thus migrate toward the bottom of the queue. As new tracks are referenced by the computer system, the slots corresponding to the bottom, or LRU end, of the queue can be re-used for the newly referenced track images.
A problem is encountered when an entry at the LRU end of the queue refers to a track image which has been modified by the computer system. Before a track slot can be re-used to contain a new track image, the modified track image must be destaged (written back to disk). The longer the destaging process takes, the more inefficient cache memory usage becomes; therefore, prior art systems have generally scheduled the modified track images to be destaged, while keeping all of the corresponding entries in the queue. In the prior art, a sequential search is then made through the queue for an unmodified entry to be used as an available track slot, the modified entries being bypassed in the search.
Searching through the referenced track queue becomes increasingly inefficient and time consuming as modified entries migrate to the LRU end of the queue. This inefficiency occurs because the entries in the queue awaiting destaging must be bypassed on each search, and these entries may constitute the majority of entries that have migrated to the LRU end, which is where each search originates.
Finally, the referenced track queue is usually built in cache memory, which takes longer to access, and which must be locked against other access while a search of the queue is in progress.
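To make the cost of this prior-art approach concrete, the following Python sketch (a rough illustration; the class and method names are assumptions, not terminology from the patent) models a single referenced track queue in which every slot acquisition must scan from the LRU end past entries still awaiting destaging.

```python
from collections import OrderedDict

# Rough model of the prior-art scheme described above: one referenced
# track queue ordered LRU -> MRU, where every slot acquisition scans past
# modified entries that are still awaiting destaging.

class PriorArtReferencedTrackQueue:
    def __init__(self):
        # Maps track id -> "modified" flag; insertion order is LRU -> MRU.
        self._entries = OrderedDict()

    def reference(self, track, modified=False):
        # A referenced track is (re)placed at the MRU end of the queue.
        self._entries.pop(track, None)
        self._entries[track] = modified

    def find_reusable_slot(self):
        # Sequential search from the LRU end, bypassing modified entries
        # that cannot be reused until their track images are destaged.
        victim = next((t for t, m in self._entries.items() if not m), None)
        if victim is not None:
            del self._entries[victim]
        return victim


if __name__ == "__main__":
    q = PriorArtReferencedTrackQueue()
    q.reference(2, modified=True)
    q.reference(6, modified=True)
    q.reference(4)
    q.reference(9)
    # The two modified entries at the LRU end are bypassed on every search.
    print(q.find_reusable_slot())   # -> 4
```

The longer the run of modified entries at the LRU end, the longer every such scan becomes, and the scan must hold the queue locked for its duration.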

Claims

SOLUTION
The present invention employs, in addition to the prior art concept of a referenced track queue, a second queue of available track slots (the "available track slot queue") which provides readily available track slots for newly referenced tracks. This second queue serves as a pre-destaging area, and comprises a plurality of track slot pointers which have been removed from the LRU end of the referenced track queue. Each entry removed from the referenced track queue is put on the top of the available track slot queue. An important innovation of the present invention is that a designated plurality of referenced track queue entries can be removed from the referenced track queue and put onto the top of the available track slot queue in one step.
Furthermore, with the present method, any of the track images corresponding to an entry (which is awaiting destaging) in the available track slot queue may still be referenced, removed from the available track slot queue, and put onto the MRU end (i.e., the top) of the referenced track queue. Therefore, all of the track images in cache memory may still be referenced. A designated number of the entries at the bottom of the available track slot queue are further queued, in a destage queue, to be destaged to the disk memory system on a periodic basis. Each destaged available track slot queue entry is then made available for re-use by the cache subsystem. As new track slots are required, they are removed from the bottom of the available track slot queue. An important facet of the present method is that the size of the available track slot queue is such that, by the time an available track slot queue entry migrates to the bottom of the queue, there will have been sufficient time for the track image associated with the entry to have been destaged. Thus, a block of available track image slots is created, and repetitive searching of the referenced track queue is avoided.
As entries are removed from the available track slot queue, a count of available track image slots is decremented. When this count reaches a minimum threshold level, a designated number of track image slots are removed from the LRU end of the referenced track queue and put onto the top of the available track slot queue.
Use of the available track slot queue ensures that there are a certain number of track image slots normally available in cache memory. Therefore, the method of the present invention obviates the need for time-consuming searches for available track image slots, thus providing a significant improvement in system efficiency and a reduction in slot acquisition time over the prior art.
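As a rough illustration of the solution just outlined, the Python sketch below moves a designated block of LRU entries from the referenced track queue onto the top of the available track slot queue in a single operation whenever the available queue falls to a minimum threshold. The queue contents, block size, and threshold are invented for this example and are not values from the patent.

```python
from collections import deque

# Illustrative sketch of the block move described above.

REPLENISH_THRESHOLD = 2     # minimum entries kept in the available queue
BLOCK_SIZE = 3              # designated number of LRU entries moved at once

referenced_track_queue = deque([7, 5, 2, 6, 4])   # left = MRU, right = LRU
available_track_slot_queue = deque()              # left = top, right = bottom

def replenish_available_queue():
    """Move BLOCK_SIZE LRU entries to the available queue in one operation."""
    if len(available_track_slot_queue) <= REPLENISH_THRESHOLD:
        for _ in range(min(BLOCK_SIZE, len(referenced_track_queue))):
            # Detach from the LRU end of the referenced track queue and
            # push onto the top of the available track slot queue.
            available_track_slot_queue.appendleft(referenced_track_queue.pop())

replenish_available_queue()
print(list(referenced_track_queue))       # [7, 5]
print(list(available_track_slot_queue))   # [2, 6, 4]  (4 is nearest the bottom)
```

Because the block is detached in one operation, no per-slot search of the referenced track queue is needed when a new track slot is required.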
BRIEF DESCRIPTION OF THE DRAWING
Figure 1 is a block diagram showing a disk drive system connected to a cache memory subsystem, which is also connected to a host computer system; Figure 2 shows cache memory containing a referenced track queue and an available track slot queue, both queues having a plurality of entries, each of which contains a pointer to an area in cache memory containing a disk track image which has been read from a disk drive system into cache memory; and Figure 3 shows cache memory containing an available track slot queue, a destage queue, a plurality of disk track image slots, a free space stack, and an associated disk drive system.
DETAILED DESCRIPTION
Figure 1 is a block diagram showing a disk drive system 10 connected via bus 12 to a cache memory subsystem 20, which is connected via bus 32 to a host computer system 30.
The cache memory subsystem ("cache memory") 20 contains a plurality of disk track image slots 60 for storing copies, or images, of tracks read from the disk drive system 10. Cache memory 20 also contains a referenced track queue 40, an available track slot queue 50, and a destage queue 70, the specific functions of which are described below.
Figure 2 shows a referenced track queue 40 in cache memory 20, containing a plurality of entries 42, each of which contains a pointer 45 which points to an area in cache memory 20, called a "slot" 60, containing all or part of a disk track image which has been read ("referenced") from a disk memory system 10 into cache memory 20, as the result of a read or write request by the host computer system 30. The referenced track queue 40 is maintained as a list which has the most recently referenced ("MRU") tracks at one end 44, and the least recently referenced ("LRU") tracks at the other end 46. The entries 42 in the referenced track queue 40 are doubly-linked; that is, each entry 42 has a pointer to the next entry, as well as the preceding entry, in the queue 40.
When a track in the disk drive system 10 is referenced by the computer system 30, either for a read or a write operation, a check is first made to determine whether the corresponding track image is already in cache memory. If the track image is not currently in cache memory 20, then the image for the referenced track is read from the disk drive system 10 into an available slot 60' in cache memory 20, and an entry 42 corresponding to the track image is linked onto the MRU end 44 of the referenced track queue 40. This entry 42 contains a pointer 45 to the track image slot 60 containing the corresponding track image. As new tracks continue to be referenced by the computer system 30, however, all available disk track image slots 60' eventually become used up, because there are many more tracks in the disk drive system 10 than there are slots 60, 60' in cache memory 20. Therefore, previously used track image slots 60 must be made available for reuse. The method of the present invention provides an efficient way of managing the limited amount of space in cache memory 20 and providing new available track slots 60'.
Pre-Destage Queue
Before a given track slot 60 can be reused to store a new track image, it must first be determined that the slot 60 does not correspond to a track image containing a record which has been modified by the computer system 30. In order to allow reuse of a track image slot 60 whose track image has been modified, that track image must first be destaged (written) back to the disk drive system 10. Destaging is accomplished by processes well-known in the art. The destaging process is facilitated by use of a pre-destage queue (hereinafter called the "available track slot queue") 50, which is also kept in cache memory 20. The available track slot queue 50 contains a doubly linked list of pointers 52, each pointer 52 pointing to a slot 60 for storing disk track images in cache memory 20. The available track slot queue 50 comprises entries 42' from the referenced track queue 40 which have migrated to the least-recently-referenced (LRU) end 46 of the referenced track queue 40. A designated number of entries 42 in the referenced track queue 40 are removed therefrom in a single operation 49 and written to the top of the available track slot queue 50 when the number of entries 42 in the available track slot queue 50 reaches a predetermined minimum threshold level.
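Before turning to the Figure 2 example that follows, here is a minimal Python sketch of the reference path described above, assuming invented names and a simple free-slot pool: a cache hit relinks the existing entry at the MRU end 44, while a miss stages the track image into an available slot and links a new entry at the MRU end. An OrderedDict stands in for the doubly-linked queue 40.

```python
from collections import OrderedDict

# Minimal sketch of the reference path: hit -> relink at MRU; miss -> take
# a free slot and link a new entry at MRU.  Names are illustrative only.

class ReferencedTrackQueue:
    def __init__(self, free_slots):
        self._entries = OrderedDict()        # track id -> slot id, MRU first
        self._free_slots = list(free_slots)  # stand-in for available slots 60'

    def reference(self, track):
        if track in self._entries:
            # Cache hit: relink the existing entry at the MRU end.
            self._entries.move_to_end(track, last=False)
            return self._entries[track]
        if not self._free_slots:
            raise RuntimeError("no available track slot; replenish first")
        # Cache miss: stage the track image from disk into an available slot
        # (the staging I/O itself is outside the scope of this sketch).
        slot = self._free_slots.pop()
        self._entries[track] = slot
        self._entries.move_to_end(track, last=False)
        return slot


if __name__ == "__main__":
    q = ReferencedTrackQueue(free_slots=["slot-0", "slot-1"])
    print(q.reference(12))   # miss: track 12 staged into a free slot
    print(q.reference(12))   # hit: same slot, entry relinked at the MRU end
```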
Figure 2 shows entries 42' for tracks 2, 6, and 4 being moved to the top of the available track slot queue 50. Although not shown on the drawing, it should be noted that once these entries 42' have been moved to the available track slot queue 50, the entries are removed from the referenced track queue. The threshold level at which to add entries 42' to the available track slot queue 50, and the number of entries 42' to be added to the queue, can be modified during system operation, in order to achieve optimum system performance.
The available track slot queue 50 provides a mechanism which increases the efficiency of data transfer between the disk drive system 10 and the computer system 30. By removing entries 42' from the LRU end of the referenced track queue, many of which correspond to tracks containing modified records and have migrated to the LRU end 46 of the queue 40, each search through the referenced track queue 40 is made considerably shorter. This is because (1) the search for available track slots 60' starts at the LRU end 46 of the queue 40; (2) destaging of track images corresponding to entries 42' in the queue 40 also starts at the LRU end 46 of the queue 40; and (3) the memory space used by entries 42' awaiting destaging cannot be reused for new track slots 60' until the track image associated with the entry 42' in the queue 40 has in fact been destaged to the disk drive system 10. Thus, by removing a plurality of not-currently-reusable entries 42' from the referenced track queue 40 to the available track slot queue 50, further searches for available-for-reuse slots 60' are shortened.
The method of the present invention moves a block of these not-currently-reusable entries 42' from the referenced track queue 40 to a second queue (the available track slot queue) 50, thereby reducing search time, and thus reducing system overhead. The removal of entries 42' from the referenced track queue 40 to the available track slot queue 50 does not mean that the corresponding track image is unavailable. Track images in the available track slot queue 50 may still be referenced, in which case they are removed from the available track slot queue 50 and put back onto the MRU end 44 of the referenced track queue 40. Therefore, each of the tracks in the available track slot queue 50 may still be referenced even after a given track is destaged. Entries written from the referenced track queue 40 to the available track slot queue 50 contain an indicator bit 56 which indicates whether the associated track image has been modified (i.e., whether a write operation has been performed on at least one record in the associated track since it was read from the disk drive system 10). For purposes of illustration, indicator bits 56 represented by "M" indicate that the associated track image has been modified, and those represented by "NM" indicate that the associated track image has not been modified, or that the corresponding track has been destaged and its slot 60 is available for re-use.
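The sketch below illustrates, under the same kind of invented assumptions as the earlier sketches, how an entry carrying its "M"/"NM" indicator can be pulled out of the available track slot queue and relinked at the MRU end of the referenced track queue when its track is referenced again; the dictionary layout of an entry is a hypothetical stand-in, not the patent's actual data structure.

```python
from collections import deque

# Sketch of the re-reference path: an entry still sitting in the available
# track slot queue is moved back onto the MRU end of the referenced track
# queue when its track is referenced again.

referenced_track_queue = deque()            # left = MRU end 44
available_track_slot_queue = deque([        # left = top, right = bottom
    {"track": 2, "indicator": "M"},
    {"track": 6, "indicator": "M"},
    {"track": 4, "indicator": "NM"},
])

def re_reference(track):
    """Move a still-cached track back onto the MRU end when referenced."""
    for entry in list(available_track_slot_queue):
        if entry["track"] == track:
            available_track_slot_queue.remove(entry)
            referenced_track_queue.appendleft(entry)
            return entry
    return None   # track is no longer in the available track slot queue

re_reference(6)
print([e["track"] for e in referenced_track_queue])       # [6]
print([e["track"] for e in available_track_slot_queue])   # [2, 4]
```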
Cache Memory With Free Space Stack
Figure 3 shows cache memory 20 containing an available track slot queue 50, a destage queue 70, a plurality of disk track image slots 60, 60', a free space stack 80, and an associated disk drive system 10.
In order to create space for new track images read from the disk drive system 10, each entry in the available track slot queue 50 having a track modified indicator bit 56 set (indicating that the associated track image has been modified) is scheduled for destaging. As soon as an entry whose corresponding track image has been modified is placed in the available track slot queue, a copy of the entry is placed in the destage queue 70, as shown by arrows 57. Each entry in the destage queue is subsequently destaged, as shown by arrow 90, back to the disk drive system 10. Figure 3 illustrates tracks 2, 6, and 4 being destaged. Note that Figure 2 shows that entries 42 for tracks 2 and 6 in the referenced track queue 40 have track modified indicator bits 56 set to "M", thus indicating that the corresponding track images have been modified. When a track image has been destaged 90, the track modified indicator bit 56 is reset in each corresponding entry in the available track slot queue 50.
Figure 3 also shows the state of the available track slot queue 50 after the destaging operation 90 has been completed, with the track modified indicator bits 56 for tracks 2 and 6 reset to "NM", thus indicating that the track image slots 60 corresponding to tracks 2 and 6 are now available for reuse. Even though the slots 60 are available for re-use, they may still be referenced until removed from the available track slot queue 50. Periodically, as shown by arrow 92, a predetermined number of disk track image slots 60' corresponding to unmodified or "reset" entries in the available track slot queue 50 are removed from the queue 50 and placed in the free space stack 80. The free space stack 80 is used to supply memory, as shown by arrow 94, for future required disk track image slots 60'. Since free space for new available track slots 60' is removed from the bottom end of the available track slot queue 50, there is generally enough time for a destage operation to have been completed, due to the length of the queue 50, before a given entry migrates to the bottom of the available track slot queue.
As track slots are removed from the available track slot queue 50, after the corresponding tracks have been destaged, a count of available track slot entries is decremented accordingly. When the count reaches a minimum threshold level, a predesignated number of track slots are removed from the referenced track queue 40 and put onto the top of the available track slot queue 50, and the count of available track slot entries is accordingly incremented.
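A rough Python sketch of the destage and free-space path described above follows; the dictionaries and function names are illustrative assumptions rather than the patented implementation. Modified entries are copied to the destage queue (arrows 57), destaged and reset (arrow 90), and unmodified entries at the bottom of the available track slot queue are then moved to the free space stack (arrow 92).

```python
from collections import deque

# Illustrative sketch of the destage / free-space path described above.

available_track_slot_queue = deque([        # left = top, right = bottom
    {"track": 2, "modified": True},
    {"track": 6, "modified": True},
    {"track": 4, "modified": False},
])
destage_queue = deque()
free_space_stack = []

def schedule_destage():
    # Copy modified entries into the destage queue (arrows 57 in Figure 3).
    for entry in available_track_slot_queue:
        if entry["modified"]:
            destage_queue.append(entry)

def destage_all():
    # Write each queued track image back to disk (arrow 90), then reset the
    # track-modified indicator in the available track slot queue entry.
    while destage_queue:
        entry = destage_queue.popleft()
        # ... the actual write back to the disk drive system would go here ...
        entry["modified"] = False

def reclaim_slots(count):
    # Move up to `count` unmodified entries from the bottom of the available
    # track slot queue onto the free space stack (arrow 92).
    while count and available_track_slot_queue:
        if available_track_slot_queue[-1]["modified"]:
            break   # bottom entry still awaits destaging
        free_space_stack.append(available_track_slot_queue.pop()["track"])
        count -= 1

schedule_destage()
destage_all()
reclaim_slots(2)
print(free_space_stack)                                     # [4, 6]
print([e["track"] for e in available_track_slot_queue])     # [2]
```

Replenishing the available track slot queue from the referenced track queue, when the count of available entries reaches its minimum threshold, would then follow the block move sketched earlier.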
It is to be expected that those skilled in the art will develop other embodiments of the present invention that differ from the disclosed embodiment, but which will nevertheless fall within the scope of the appended claims.
WE CLAIM:
1. In a disk drive system (10) having at least one disk drive and a cache memory subsystem (20), each said disk drive containing a plurality of disk tracks, said cache memory subsystem (20) having a pre-destaging area (50), a destaging area (70), and a referenced track queue (40) comprising a list of entries (42, 44, 46), each said entry (42, 44, 46) having a pointer containing indicia of location of a different disk track image slot (60) in said cache memory subsystem (20), each said slot (60) containing an image of one of said disk tracks which has been staged from one of said disk drives to said cache memory subsystem (20), a method for efficiently utilizing said disk track image slots (60) which are available for re-use, comprising the steps of: removing the n least-recently-referenced said entries (42') from said referenced track queue (40), and moving them to a pre-destaging area (50), where n is a predetermined integer equal to or greater than 1; and indicating as available for re-use, a disk track image slot (60) corresponding to each pre-destaging area entry whose corresponding disk track image has been destaged to one of said disk drives.
2. The method of claim 1, including the additional steps of: writing the contents of each said pre-destaging area entry whose corresponding disk track image has been modified, to a destaging area (70); and destaging each of the disk track images corresponding to the entries in said destaging area (70), by writing said disk track images to one of said disk drives.
3. In a disk drive system (10) having at least one disk drive containing a plurality of tracks, said disk drive system (10) also having a cache memory subsystem (20) having a referenced track queue (40) containing a plurality of entries (42, 44, 46), each of said entries (42, 44, 46) containing a pointer to a different disk track image which has been read from one of said disk drives to said cache memory subsystem (20) and subsequently modified, said cache memory subsystem (20) also having a plurality of slots (60) for storing all or part of a said disk track image, said referenced track queue (40) being maintained in a manner such that a pointer to one of said disk track images is put at the top (44) of the queue when said disk track image is referenced by said cache memory subsystem (20), thereby causing the least-recently-referenced track image pointers (42) to migrate toward the bottom (46) of said queue, a method for efficiently utilizing said disk track image slots, comprising the steps of: allocating an available track slot queue (50) in said cache memory subsystem (20), said available track slot queue (50) having a plurality of locations, each said location being of sufficient size to contain one of said entries (42, 44, 46) in said referenced track queue (40) pointing to one of said disk track image slots; writing the contents of a predetermined number of said entries (42') located at the bottom of said referenced track queue (40) to the top of said available track slot queue (50), when said available track slot queue (50) contains a minimum threshold number of entries; and indicating that the disk track image slots corresponding to a predetermined number of said entries (42') in the bottom of said available track slot queue (50) are available for storing said disk track images.
4. The method of claim 3, including the additional steps of: allocating a track destage queue (70) accessible to said cache memory subsystem (20), having a plurality of entries, each said entry having indicia of location of a different one of said disk track image slots containing a said track to be written to a given said disk drive; further defining said available track slot queue (50) such that each said entry in said queue also has a track-modified indicator (56) for indicating the need for de-staging said corresponding disk track image, and indicating an availability for re-use of said entry; and writing the contents of each said available track slot queue entry whose associated disk track image has been modified, to said track destage queue (70), and zeroing, when said de-staging is complete, said modified indicator in said modified entry in said available track slot queue (50).
5. The method of claim 3, wherein said predetermined number of said entries in said referenced track queue (40) is greater than one.
6. The method of claim 3, including the additional steps of: defining a disk track image list containing a plurality of entries, each of said entries containing indicia of location of one of said disk track image slots which is available for storing one of said disk track images; and writing the contents of a predetermined number of said entries located at the bottom of said available track slot queue (50) to said disk track image list.
7. The method of claim 3, including the additional step of: varying said minimum threshold number of entries, at which said entries are written from said referenced track queue (40) to said available track slot queue (50), during operation of said disk drive system (10) to improve performance of said disk drive system (10).
8. The method of claim 6 wherein the step of writing the contents of a predetermined number of said entries in said referenced track queue (40) to said available track slot queue (50) includes writing said contents to said available track slot queue (50) when the number of entries in said available track slot queue (50) reaches a predetermined minimum threshold number.
9. The method of claim 8, including the additional step of: varying said predetermined minimum threshold number during operation of said disk drive system (10) to improve performance of said disk drive system (10).
10. The method of claim 3 wherein: said available track slot queue (50) is in the form of a doubly-linked list wherein each said entry in the queue (50) has indicia of location of both a previous entry in the queue and a next entry in the queue, excepting the first and last entries, which have indicia of location of only the next entry and the previous entry, respectively.

Priority Applications (2)

Application Number Priority Date Filing Date Title
AU22928/92A AU2292892A (en) 1992-04-22 1992-04-22 Method for managing disk track image slots in cache memory
PCT/US1992/003336 WO1993021580A1 (en) 1992-04-22 1992-04-22 Method for managing disk track image slots in cache memory

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US1992/003336 WO1993021580A1 (en) 1992-04-22 1992-04-22 Method for managing disk track image slots in cache memory

Publications (1)

Publication Number Publication Date
WO1993021580A1 (en) 1993-10-28

Family

ID=22231005

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1992/003336 WO1993021580A1 (en) 1992-04-22 1992-04-22 Method for managing disk track image slots in cache memory

Country Status (2)

Country Link
AU (1) AU2292892A (en)
WO (1) WO1993021580A1 (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2107092A (en) * 1981-10-02 1983-04-20 Western Electric Co Data processing systems
EP0301211A2 (en) * 1987-07-02 1989-02-01 International Business Machines Corporation Cache management for a peripheral data storage subsystem
EP0389151A2 (en) * 1989-03-22 1990-09-26 International Business Machines Corporation System and method for partitioned cache memory management
EP0469202A1 (en) * 1990-08-02 1992-02-05 Digital Equipment Corporation Sequential reference management for cache memories

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"PREVENTING OVERFLOW IN A WRAPAROUND BUFFER.", IBM TECHNICAL DISCLOSURE BULLETIN, INTERNATIONAL BUSINESS MACHINES CORP. (THORNWOOD), US, vol. 32., no. 11., 1 April 1990 (1990-04-01), US, pages 81/82., XP000097616, ISSN: 0018-8689 *

Also Published As

Publication number Publication date
AU2292892A (en) 1993-11-18


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU CA JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IT LU MC NL SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA