WO1995032473A1 - Set associative block management disk cache - Google Patents
Set associative block management disk cache
- Publication number
- WO1995032473A1 (PCT/US1995/006510)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- block
- data
- disk drive
- cache
- address
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0864—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using pseudo-associative means, e.g. set-associative or hashing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/31—Providing disk cache in a specific location of a storage system
- G06F2212/312—In storage controller
Definitions
- the present invention relates generally to memory management, and particularly to memory cache management relative to a disk device.
- a disk drive memory device, while having massive storage capacity, is relatively slow in its access speed.
- a disk drive stores information on a rotating media divided into concentric tracks with each track divided into sectors.
- the fundamental data unit within a disk drive storage media is a sector, i.e., physical access to the rotating media is by reading and, in the case of modifiable media, writing data relative to sectors of the disk drive.
- Slow access speed with respect to such rotating storage media results from a need to first position read/write heads of the disk drive relative to a track of the storage media and then wait for the appropriate disk sector to pass by the read/write heads.
- a cache is an effective means to reduce access time relative to a disk drive, i.e., relative to a rotating media.
- a cache is a semiconductor memory holding a copy of selected portions of information held on the rotating disk drive media.
- a cache may be implemented in a variety of semiconductor memory, such as dynamic random access memory (DRAM) or static random access memory (SRAM). Because the semiconductor memory device is much faster than the disk drive device, the cache advantageously reduces access time when a data request may be taken from the cache rather than the much slower disk drive. When the cache can satisfy the data request, a cache "hit" occurs. The more cache hits occurring, the greater the improvement in overall disk drive access time, i.e., overall access time is reduced.
- DRAM dynamic random access memory
- SRAM static random access memory
- the percentage of data requests serviced from the disk cache is the "hit rate."
- a cache operates with close to a 100 percent hit rate.
- the cache must be limited in size, i.e., can only store a portion of data from the disk drive, and therefore cannot satisfy every data request.
- the subject matter of the present invention relates to a cache memory and the management thereof within a disk drive, i.e., a disk cache as an integral component of the disk drive device and transparent to the control host making use of the disk drive device.
- cache memory or to management thereof shall be with respect to an internal semiconductor memory component of a disk drive, and not a portion of host device memory managed external of the disk drive, i.e., such as in a memory management scheme within a control host memory architecture.
- a data block is a logical data structure used by a host device when interacting with the disk drive, i.e., the host exchanges data with the disk drive in a sequence of consecutive logical address data blocks.
- a disk drive can include, for example, a 512 KB RAM buffer partitioned into three 128 KB data block segments and a resident microprocessor control program/data segment.
- the buffer provides an intermediate holding area for data in transit between the control host and the actual disk drive rotating storage media.
- One of the 128 KB segments of the DRAM buffer is a repository for microprocessor variables.
- the remaining three 128 KB segments contain a copy of disk drive data, and may also contain prefetch data. Prefetch data is obtained from the disk drive rotating media in anticipation of future access.
- a data hit occurs when requested data exists in one of the three segments stored in the 512 KB RAM disk drive buffer.
- a data miss occurs if the requested data is not found in the disk drive buffer.
- the disk drive microprocessor accesses the disk drive storage media and writes the collected data into one of the 128 KB segments of the disk buffer, along with any prefetch data associated therewith.
- each buffer segment holds a logical address sequence of sectors beginning at a designated starting address.
- the disk drive microprocessor scans each of the disk drive buffer segments to determine whether or not the requested data exists in the disk drive. Generally, each segment boundary is checked against the requested data address and against the transfer length to determine whether all or a portion of the requested data exists in a given disk drive buffer segment. Disk drive buffers are generally limited to storage of a small number of segments, e.g., 3 to 6, due to the high amount of overhead required to manage such segments, i.e., large number of address comparisons to determine data hits.
- the requested data may be transferred directly from the disk drive buffer to the requesting device. If a cache miss occurs, however, the requested data must be transferred between the cache and the much slower rotating disk storage media. Furthermore, if the requested data does not exist in any of the buffer segments, then one segment is replaced entirely with the requested data. The process of flushing a full disk buffer segment and replacing it with, for example, a single data block leads to "holes" in the disk drive buffer. For example, a single 512 byte data block could replace an entire 128 KB buffer segment, flushing away a mass of potentially useful information previously stored in that buffer segment.
- the subject matter of the present invention provides a disk drive cache and method of management, thereby expanding use of the disk drive buffer beyond a mere repository of data in transit and establishing an overall increase in the number of cache hits relative to data requests issued by a control host to the disk drive.
- Disk drive cache management under the present invention is set associative block-by-block management minimizing cache fragmentation associated with a segmented disk drive buffer and ultimately improving the hit rate and overall access speed.
- a preferred embodiment of the present invention in a first aspect is a disk drive bifurcated into an interface engine and a disk engine.
- the interface engine includes an interface block interacting directly with a control host, and managing directly a set associative cache memory contained within the interface engine.
- the disk engine, including a rotating media storage element, stands ready to service any data requests not satisfied by the interface engine.
- the interface engine may autonomously satisfy data requests from the control host, thereby leaving the disk engine free to perform disk management functions relative to the storage media.
- FIG. 1 illustrates generally by block diagram a disk drive architecture including a set associative block management disk cache in accordance with the present invention.
- FIG. 2 illustrates a relationship between a logical block address presented to the disk drive of FIG. 1 and a block tag table of the set associative cache of the present invention.
- FIG. 3 is a flow chart illustrating a disk read command applied to the disk drive of FIG. 1, including manipulation of the set associative cache thereof.
- FIG. 4 illustrates by block diagram an alternative arrangement for the disk drive of FIG. 1 according to a second embodiment of the present invention.
- a preferred embodiment of the present invention as illustrated in the drawings comprises a method and apparatus for management of a disk drive cache.
- Cache management is conducted on a block-by-block basis, i.e., for each logical block address presented to the disk drive, a corresponding interrogation of the disk drive cache yields a cache hit or a cache miss.
- the following disclosure describes a technique to manage a disk cache with less overhead and improved hit rate, resulting in improved overall access time for a disk drive device.
- the invention will be illustrated by description of implementations in both hardware, i.e., ASIC, and software, i.e., ROM firmware.
- FIG. 1 illustrates a disk drive 10 servicing data requests 12 issued by a control host 14. More particularly, control host 14 issues data requests by reference to logical block address values 12a and transfer length 12b. Disk drive 10 responds by executing the requested disk drive activity, i.e., a read or a write operation, relative to data stored at a physical location corresponding to the logical block address 12a.
- the following discussion will focus, however, on the collection of data from disk drive 10, i.e., a read command.
- Improved performance by use of a disk memory cache results from prompt response by disk drive 10 to a read command, i.e., the requested data being found in and extracted from the disk drive 10 cache memory.
- a write command cannot be satisfied by reference to the disk drive cache memory because the write command requires change in disk drive content, not mere collection of disk drive 10 content.
- a write to disk can be postponed or delayed once the data is transferred into the cache memory.
- successive hits on write commands may result in the same block being updated several times while it remains in cache awaiting transfer to disk, thereby eliminating repetitive disk accesses for those blocks being updated with new information.
- the size, i.e., number of bits, required in logical block address 12a is a function of the total storage capacity of hard disk drive 10 and the "block size" declared. By increasing the block size, a smaller number of logical block addresses need be used to reference all data held by disk drive 10. Conversely, a smaller block size requires more logical block addresses, i.e., requires more bits in the logical block address 12a.
- Disk drive 10 includes an interface engine 10a and a disk engine 10b.
- the present invention allows significant decoupling, i.e., autonomous operation, of the interface engine 10a relative to the disk engine 10b.
- the interface engine 10a can thereby handle direct interaction with control host 14 in the event of a cache hit.
- the disk engine 10b further supports interaction with control host 14 in the event of a cache miss.
- Interface engine 10a includes an interface ASIC 20 and a RAM buffer 22.
- RAM buffer 22 (which may be DRAM or SRAM or flash memory, for example) operates generally in the fashion of a data buffer, i.e., a holding place for data taken from the disk engine 10b or data provided by control host 14 and to be written to disk engine 10b. Under the present invention, however, DRAM buffer 22 also serves as a set associative disk cache 24.
- Interface ASIC 20 interacts directly with control host 14.
- Interface ASIC 20 also has direc access to the DRAM buffer 22.
- DRAM buffer 22 provides the disk cache 24 as four memory banks 24a managed with reference to a block tag table 24b also maintained in buffer 22.
- the disk drive cache 24 is a four-way set associative cache whereby a given data block is stored at a given offset within any one of the four memory banks 24a.
- the block tag table 24b includes one entry for each potential storage location within the memory banks 24a. Under a four-way set associative management scheme, entries in table 24b are organized in four member sets, each set being found at a given offset within block tag table 24b. The offset is a function of the logical block address. Reference to block tag table 24b at the given offset provides indication of whether data at the requested logical block address 12a may be found in the disk drive cache 24.
- Disk engine 10b includes at least one rotating disk media 30 and its associated read/write heads positioned over concentric tracks by an actuator structure, a read/write channel block 32 and a microprocessor block 34.
- Microprocessor block 34 interacts directly with each of the interface ASIC block 20 and the DRAM buffer 22 of interface engine 10a.
- Read/write channel block 32 interacts directly with the microprocessor block 34 of disk engine 10b and the DRAM buffer 22 of interface engine 10a.
- the read/write channel 32 handles actual manipulation of the media 30 in response to microprocessor block 34 control.
- microprocessor 34 orchestrates collection of information from and delivery of information to media 30 according to a variety of control schemes.
- microprocessor 34 maintains a command queue organizing read and write commands and associated data relative to media 30.
- a given I/O command may be executed generally under the control of microprocessor 34 when necessary to access the media 30.
- I/O commands relative to media 30 may be unnecessary.
- by cache hit is meant that the information being sought exists in the buffer 22 and is available for transfer without need for accessing the storage disk media 30.
- FIG. 2 illustrates generally the relationship between a logical block address 12a and block tag table 24b in determining a disk drive cache hit, i.e., determining whether data associated with a given logical block address 12a may be taken from one of memory banks 24a.
- Logical block address 12a is loaded into a cache address register 40, a most significant portion as tag 40a and a least significant portion as offset 40b.
- Offset 40b provides an index into block tag table 24b. More particularly, block tag table 24b is divided into four columns, each column providing a tag table for a corresponding one of the four memory banks 24a, individually identified as bank 0, bank 1, bank 2, and bank 3. Thus, offset 40b designates a set of four block tags 42 within block tag table 24b. If the tag 40a of the logical block address 12a matches one of the block tags 42 in table 24b, as specified by offset 40b, then a hit is determined to occur within disk drive cache 24 relative to the logical block address 12a.
- block tags 42a, 42b, 42c, and 42d each corresponding to one of the four memory banks 24a designated herein as bank 0, bank 1, bank 2, and bank 3, respectively. If, for example, block tag 42b matches tag 40a of cache address register 40, i.e., matches the most significant portion of logical block address 12a, then the requested data may be obtained from memory bank 1 beginning at a location therein corresponding to offset 40b.
- the organization of disk drive cache 24 is a four-way set associative cache utilizing four memory banks 24a to establish a set of four locations available for storage of any given data block.
- the logical block address least significant portion, i.e., the offset 40b, defines a set of four block tags 42, and a match between any member of the designated block tag set with the tag 40a, i.e., the most significant portion of logical block address 12a, indicates a block hit in cache 24.
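The tag/offset matching described above may be sketched in software as follows. This is an illustrative Python model, not the patent's implementation; the names split_lba and lookup, and the constants, are assumptions matching the 192-row, four-bank example discussed below:

```python
# Illustrative 4-way set associative lookup over a block tag table.
NUM_BANKS = 4    # four memory banks 24a
NUM_ROWS = 192   # rows in block tag table 24b
OFFSET_BITS = 8  # least significant bits of the logical block address

def split_lba(lba):
    """Split a logical block address into (tag, offset)."""
    offset = lba & ((1 << OFFSET_BITS) - 1)  # least significant portion
    tag = lba >> OFFSET_BITS                 # most significant portion
    return tag, offset

def lookup(tag_table, lba):
    """Return the bank number holding the block, or None on a cache miss.

    tag_table is a list of NUM_ROWS rows, each a list of NUM_BANKS stored
    tags (None marks an empty entry)."""
    tag, offset = split_lba(lba)
    row = tag_table[offset % NUM_ROWS]       # the four-member set
    for bank, stored_tag in enumerate(row):
        if stored_tag == tag:
            return bank                      # cache hit in this bank
    return None                              # cache miss
```

For example, with tag 3 recorded for bank 1 at offset 5, a request for logical block address (3 << 8) | 5 resolves to a hit in bank 1.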
- disk drive 10 has a 1 gigabyte storage capacity and 512 byte block size.
- logical block address 12a comprises 21 bits and cache address register 40 need be a 21 bit register.
- DRAM buffer 22 is a 512 KB buffer with 128 KB thereof reserved for microprocessor 34 data.
- the remaining 384 KB (768 blocks) are divided into the four memory banks 24a, i.e., each memory bank 24a holding 96 KB (192 blocks).
- Block tag table 24b requires 768 entries, i.e., one entry for each of 768 blocks of data stored in the memory banks 24a.
- Table 24b is broken into four columns, one for each of memory banks 24a, with each column containing 192 entries.
- the offset 40b of cache address register 40 is, therefore, an 8 bit field specifying one and only one of the 192 rows of table 24b.
- offset 40b values in excess of the number of rows in table 24b would be mapped, e.g., modulo the number of rows, to specify one and only one offset, i.e., row, in table 24b.
- the tag 40a of cache register 40 is a 13 bit field.
- additional bits may be employed in each entry of block tag table 24b as, for example, validity bits, dirty bits, and bits in support of a given replacement policy.
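The field widths quoted in this example follow directly from the stated capacity and block size, as the following sketch (illustrative arithmetic only) confirms:

```python
# Worked field widths for the example: 1 GB capacity, 512-byte blocks,
# 192 rows in the block tag table.
import math

capacity = 1 << 30                        # 1 gigabyte disk drive 10
block_size = 512                          # bytes per data block
total_blocks = capacity // block_size     # 2**21 addressable blocks
lba_bits = total_blocks.bit_length() - 1  # 21 bit logical block address 12a

rows = 192                                # rows of block tag table 24b
offset_bits = math.ceil(math.log2(rows))  # 8 bit offset 40b
tag_bits = lba_bits - offset_bits         # 13 bit tag 40a

print(lba_bits, offset_bits, tag_bits)    # 21 8 13
```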
- FIG. 3 illustrates a first implementation of the present invention provided by programming, i.e., ROM firmware, of the microprocessor 34 of disk drive 10 and use of a conventional interface ASIC 20.
- Microprocessor 34 scans the cache 24 by masking the offset 40b and indexing into the tag table 24b.
- the cache tag 40a is compared to each of the four tags 42 taken from the block tag table 24b. If any of the tags 42 match, a data block transfer begins from the corresponding cache location, i.e., from one of banks 24a at the offset 40b. If more than one block is requested, then the offset is incremented, and the next group of four tags 42 are compared to the most significant portion of the logical block address. This process continues until a cache miss occurs.
- a disk read procedure receives a logical block address (LBA) and a length datum (LENGTH).
- microprocessor 34 masks to obtain the least significant portion, i.e., bits 0-7, of the logical block address as the variable OFFSET, and masks to obtain the most significant portion, i.e., bits 8-20, of the logical block address as the variable TAG.
- a variable BANK is initialized to reference the first one of memory banks 24a within cache 24.
- microprocessor 34 compares the variable TAG with the entry in the tag table 24b corresponding to the current value of variable BANK and the variable OFFSET.
- the BANK variable identifies one of four columns within table 24b and the OFFSET variable identifies a row in table 24b. If the block tag 42 taken from tag table 24b does not match the variable TAG, then processing branches through block 56 where the variable BANK is incremented to reference the next one of memory banks 24a and continues to decision block 58. In decision block 58, microprocessor 34 determines whether additional columns of table 24b remain to be interrogated. Thus, if the variable BANK remains less than four, under the present example of a four-way set associative cache, then processing returns to decision block 54. If, however, all columns of table 24b have been interrogated, i.e., the variable BANK equals the value 4, then a cache miss exists.
- each of the four tags 42 at a given offset within table 24b could be extracted concurrently and a single comparison performed within microprocessor 34 to determine whether and which one of the four block tags 42 matches the tag 40a.
- variable LENGTH is decremented and, in block 64, compared to a terminal value 0. If the LENGTH variable has not yet reached a value 0, then processing branches through block 66 where the logical block address (LBA) is incremented to reference the next logical block, in essence incrementing the variable OFFSET, and processing returns to block 50.
- LBA logical block address
- the algorithm of FIG. 3 returns block-by-block the requested data blocks directly from cache 24.
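The block-by-block read loop of FIG. 3 may be modeled as follows. This is a hedged Python sketch; read_from_bank and read_from_media are stand-in callbacks for buffer and media access, not names from the patent:

```python
# Sketch of the FIG. 3 read procedure: serve each requested block from
# the set associative cache until a miss, then fall back to the media.
OFFSET_BITS = 8
NUM_BANKS = 4

def disk_read(tag_table, lba, length, read_from_bank, read_from_media):
    """Return the requested blocks, cache first, media on a miss."""
    data = []
    while length > 0:
        offset = lba & ((1 << OFFSET_BITS) - 1)   # blocks 50-52: mask LBA
        tag = lba >> OFFSET_BITS
        row = tag_table[offset % len(tag_table)]  # the four-member set
        hit_bank = next(
            (b for b in range(NUM_BANKS) if row[b] == tag), None)
        if hit_bank is None:
            # Cache miss: remaining blocks are collected from media 30.
            data.extend(read_from_media(lba, length))
            return data
        data.append(read_from_bank(hit_bank, offset))  # cache hit
        lba += 1        # block 66: next logical block
        length -= 1     # block 62: decrement LENGTH
    return data
```

A usage sketch: seed the table so two consecutive blocks hit, and the loop transfers both from the banks without touching the media callback.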
- microprocessor 34 initiates access to the media 30 and returns data corresponding to the requested logical block address.
- the collection of data from media 30 and delivery to control host 14 as represented in block 68 of FIG. 3 may proceed according to conventional operation of disk drive 10.
- microprocessor 34 would schedule collection of data according to the then current value for the logical block address (LBA) and the then current value for the variable LENGTH.
- microprocessor 34 executes any replacement algorithms required in light of the cache miss occurrence. Generally, the requested data is collected from media 30 by way of read/write channel 32. If cache 24 has sufficient room, then the requested data is placed in an available one of memory banks 24a, beginning at the required offset. Microprocessor 34 also updates the content of block tag table 24b to reflect the new content of cache 24. Interface ASIC 20 then completes the final step of the read operation by collecting the data from cache 24 for delivery to control host 14.
- If cache 24 is full, a replacement policy is executed, e.g., such as one known in the art including the methods of least recently used, least frequently used, and randomly selected. A variety of replacement policies may be implemented, however, within the scope of the present invention.
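One such replacement choice may be sketched per set as follows. This is illustrative only; the last_used timestamps are an assumed bookkeeping detail, and least recently used is shown as one of the policies named above:

```python
# Illustrative victim selection within one four-member set: prefer an
# empty entry, otherwise evict the least recently used (LRU) member.
def choose_victim(row_tags, last_used):
    """Return the bank index within the set to receive the new block.

    row_tags: the four stored tags for this offset (None = empty entry).
    last_used: per-bank timestamps of most recent use at this offset."""
    for bank, tag in enumerate(row_tags):
        if tag is None:
            return bank  # unused entry: no eviction needed
    # All four members occupied: evict the oldest (smallest timestamp).
    return min(range(len(row_tags)), key=lambda b: last_used[b])
```

A random or least frequently used policy would differ only in this selection step; the surrounding tag-table update is unchanged.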
- FIG. 4 illustrates the preferred implementation of the present invention as the disk drive 10'.
- a modified interface ASIC 20' corresponds generally to that of FIG. 1 except that interface ASIC 20' includes the block tag table 24b'.
- the block size is increased relative to the previous example from 512 bytes to 2 KB. This allows the resulting 192 entries of cache tag table 24b' to reside entirely within the interface ASIC 20'.
- Cache 24 is then organized within DRAM buffer 22 as four memory banks 24a' of 32 blocks each. With a 2 KB block size, the 1 gigabyte disk drive 10 uses a 19 bit logical block address 12a', and interface ASIC 20' requires a corresponding 19 bit cache address register (CAR) 40'.
- CAR cache address register
- the requested logical block address 12a' is loaded into cache address register (CAR) 40' of the interface ASIC 20' during the disk I/O command phase.
- the interface ASIC 20' then generates the cache tag 40a' and offset 40b' from the cache address register 40'.
- the offset 40b' is applied as an index to the block tag table 24b' of ASIC 20'.
- the table 24b' read registers 90, individually 90a, 90b, 90c, and 90d, receive the set of four block tags 42a', 42b', 42c', and 42d', respectively.
- the tag 40a' is then loaded into each of tag registers 92a, 92b, 92c, and 92d.
- Each of the registers 90 is coupled to the corresponding register 92 by a comparison function 94, individually 94a-94d.
- Each of the comparison functions 94 may be applied to a hit calculation block 96.
- the four-bit output 98 of hit calculation block 96 reflects whether or not a match occurred between one of the tags 42' and the tag 40a', and, if a match occurred, which of block tags 42' matched tag 40a'.
- the four block tags 42' are thereby compared concurrently to the tag 40a'. If one of the block tags 42' matches the tag 40a', then a cache hit exists and interface ASIC 20' collects directly from DRAM buffer 22 the requested data and delivers it to control host 14.
- hit calculation block 96 output 98 is applied to a memory access block 100 having direct access to the DRAM buffer 22.
- the hit calculation output 98 provided by block 96 provides sufficient information to identify one of the banks 24a of DRAM buffer 22 containing the requested data.
- Block 100 utilizes the offset 40b' to index into the appropriate one of banks 24a to initiate collection of a data block thereat.
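The concurrent comparison performed by comparison functions 94 and hit calculation block 96 may be modeled as follows. This is an illustrative sketch; a one-hot encoding of the four-bit output 98 is an assumption consistent with the description above:

```python
# Sketch of FIG. 4's concurrent tag comparison: all four stored tags are
# compared against the requested tag at once, producing a four-bit
# one-hot hit vector analogous to output 98.
def hit_vector(stored_tags, tag):
    """Return a 4-bit value with bit n set when bank n's tag matches."""
    bits = 0
    for bank, stored in enumerate(stored_tags):
        if stored == tag:
            bits |= 1 << bank  # comparison function 94 for this bank
    return bits

def hit_bank(vector):
    """Decode the one-hot vector to a bank number, or None on a miss."""
    return vector.bit_length() - 1 if vector else None
```

In hardware the four comparators operate in parallel, so the hit determination costs one comparison delay rather than four sequential firmware comparisons.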
- If hit calculation block 96 indicates a cache miss, it generates a block transfer interrupt (BTI) signal 102 for application to microprocessor 34. Microprocessor 34 in turn obtains the requested data in conventional fashion from the storage media 30 by way of read/write channel 32.
- microprocessor 34 must have read/write access to the block tag table 24b' to update table 24b' relative to information collected from media 30. More particularly, each data block collected from media 30 is placed in one of memory banks 24a at the required offset, and the block tag table 24b' must be updated to reflect the new content of DRAM buffer 22.
- interface ASIC 20' collects the requested data from DRAM buffer 22 and delivers it to control host 14 in conventional fashion.
- a write operation need not determine the content of cache 24, i.e., no need to determine a "cache hit" with respect to a write operation.
- a write procedure begins by placing the new data in a selected one of memory banks 24a, i.e., at the required offset therein according to a given replacement policy, and modifying of the block tag table 24b to reflect the new content of cache 24.
- a write command is scheduled by microprocessor 34 to take the new data from buffer 22 and write it appropriately upon the media 30.
- the new data remains in cache 24 and is available if requested without reference to the media 30.
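The postponed-write behavior described above may be sketched as follows. This is illustrative Python, not the patent's firmware; WriteBackCache and its dirty-set bookkeeping are names invented for this example:

```python
# Sketch of postponed writes: new data lands in the cache immediately
# and is marked dirty; repeated writes to the same block coalesce into
# a single deferred media write.
class WriteBackCache:
    def __init__(self):
        self.blocks = {}    # lba -> current data held in cache
        self.dirty = set()  # lbas awaiting transfer to the media

    def write(self, lba, data):
        """Satisfy the host write from cache; media write is deferred."""
        self.blocks[lba] = data
        self.dirty.add(lba)

    def read(self, lba):
        """New data remains available without reference to the media."""
        return self.blocks.get(lba)

    def flush(self, write_to_media):
        """Perform the scheduled media writes, one per dirty block."""
        for lba in sorted(self.dirty):
            write_to_media(lba, self.blocks[lba])
        self.dirty.clear()
```

Two host writes to the same block before a flush result in only one media access, which is the repetitive-access elimination noted above.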
- an improved apparatus and method for disk drive management has been shown and described incorporating a set associative cache function into the disk drive data buffer.
- the disk drive cache is managed on a block-by-block basis to allow an interface engine portion of the disk drive to interact directly with the control host, thereby freeing the disk engine portion of the disk drive to conduct other activities, i.e., moving data onto or off of the storage media.
- the interface engine operates autonomously, satisfying data requests without requiring intervention by the disk drive microprocessor.
- the present invention provides an overall increase in cache hits while concurrently reducing fragmentation of the disk drive cache, thereby increasing overall performance of the disk drive in reduction of average data access time.
- a 2ⁿ-way set associative cache may be utilized in which the sets are associated by powers of two.
- Other bases and powers are also included within the scope of the present invention, such as three-way, or five-way, set associative disk caches, for example.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Memory System Of A Hierarchy Structure (AREA)
Abstract
Cache management within a disk drive buffer memory resource (22 and 24) under a set associative block management (40 and 24b) is shown and described. The disclosure includes implementation in both software and hardware. Cache management is conducted on a block-by-block basis to reduce overall fragmentation within the disk drive cache, and thereby increase cache hit rates and decrease disk drive access time.
Description
SET ASSOCIATIVE BLOCK MANAGEMENT DISK CACHE
Field of the Invention
The present invention relates generally to memory management, and particularly to memory cache management relative to a disk device.
Background of the Invention
A disk drive memory device, while having massive storage capacity, is relatively slow in its access speed. A disk drive stores information on a rotating media divided into concentric tracks with each track divided into sectors. The fundamental data unit within a disk drive storage media is a sector, i.e., physical access to the rotating media is by reading and, in the case of modifiable media, writing data relative to sectors of the disk drive. Slow access speed with respect to such rotating storage media results from a need to first position read/write heads of the disk drive relative to a track of the storage media and then wait for the appropriate disk sector to pass by the read/write heads.
A cache is an effective means to reduce access time relative to a disk drive, i.e., relative to a rotating media. A cache is a semiconductor memory holding a copy of selected portions of information held on the rotating disk drive media. A cache may be implemented in a variety of semiconductor memory, such as dynamic random access memory (DRAM) or static access random memory (SRAM). Because the semiconductor memory device is much faster than the disk drive device, the cache advantageously reduces access time when a data request may be taken from the cache rather than the much slower disk drive. When the cache can satisfy the data request, a cache "hit" occurs. The more cache hits occurring, the greater the improvement in overall disk drive access time, i.e., overall access time is reduced. The percentage of data requests serviced from the disk cache is the "hit rate." In an ideal case, a cache operates with close to a 100 percent hit rate. Unfortunately, the cache must be
limited in size, i.e., can only store a portion of data from the disk drive, and therefore cannot satisfy every data request. One way of expressing reduced access time using a cache is as follows: t_ACCESS = P(HIT) * t_CACHE_ACCESS + P(MISS) * t_DISK_ACCESS, where P(HIT) + P(MISS) = 1 and where t_ACCESS represents total access time; P(HIT) * t_CACHE_ACCESS represents the product of the number of cache hits times the cache access time; and P(MISS) * t_DISK_ACCESS represents the product of the number of cache misses times the average disk access time.
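Plugging assumed timings into the access-time relation above, e.g., a 0.1 ms cache access, a 12 ms average disk access, and an 80 percent hit rate (all three figures are assumptions of this example, not values from the disclosure):

```python
# Worked example of t_ACCESS = P(HIT)*t_CACHE_ACCESS + P(MISS)*t_DISK_ACCESS.
def t_access(p_hit, t_cache, t_disk):
    p_miss = 1.0 - p_hit                  # since P(HIT) + P(MISS) = 1
    return p_hit * t_cache + p_miss * t_disk

# 0.8 * 0.1 + 0.2 * 12.0 gives about 2.48 ms, versus 12 ms uncached.
print(t_access(0.8, 0.1, 12.0))
```

The example illustrates why raising the hit rate dominates overall performance: each additional percentage point of hits replaces a slow disk access with a fast cache access.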
The subject matter of the present invention relates to a cache memory and the management thereof within a disk drive, i.e., a disk cache as an integral component of the disk drive device and transparent to the control host making use of the disk drive device. Thus, further reference herein to the term cache memory or to management thereof shall be with respect to an internal semiconductor memory component of a disk drive, and not a portion of host device memory managed external of the disk drive, i.e., such as in a memory management scheme within a control host memory architecture.
Current disk buffers are managed generally on a segment-by-segment basis, with each segment containing a sequence of data blocks. By "segment" is meant a single group of data blocks that is managed as an entity. A "data block" corresponds to the information stored in one, or a multiple number of, disk sectors. A data block, therefore, is a logical data structure used by a host device when interacting with the disk drive, i.e., the host exchanges data with the disk drive in a sequence of consecutive logical address data blocks.
A disk drive can include, for example, a 512 KB RAM buffer partitioned into three 128 KB data block segments and a resident microprocessor control program/data segment. The buffer provides an intermediate holding area for data in transit between the control host and the actual disk drive rotating storage media. One of the 128 KB segments of the DRAM buffer is a repository for microprocessor variables. The remaining three 128 KB segments contain a copy of disk drive data, and may also contain prefetch data. Prefetch data is obtained from the disk drive rotating media in anticipation of
future access. In any event, a data hit occurs when requested data exists in one of the three segments stored in the 512 KB RAM disk drive buffer. A data miss occurs if the requested data is not found in the disk drive buffer. After a miss, one of the 128 KB segments is replaced entirely with the requested data, i.e., the disk drive microprocessor accesses the disk drive storage media and writes the collected data into one of the 128 KB segment of the disk buffer, along with any prefetch data associated therewith.
Internal fragmentation becomes a problem in such implementations, because entire buffer segments may be replaced by a single data block or relatively few data blocks. Under such fragmentation, subsequent reference to the disk buffer is less likely to be a data hit and overall access time is undesirably increased.
The segment-by-segment organization under current implementations associates with each segment of the RAM buffer a starting address datum and a number of sectors datum. Thus, each buffer segment holds a logical address sequence of sectors beginning at a designated starting address.
The disk drive microprocessor scans each of the disk drive buffer segments to determine whether or not the requested data exists in the disk drive buffer. Generally, each segment boundary is checked against the requested data address and against the transfer length to determine whether all or a portion of the requested data exists in a given disk drive buffer segment. Disk drive buffers are generally limited to storage of a small number of segments, e.g., 3 to 6, due to the high amount of overhead required to manage such segments, i.e., the large number of address comparisons needed to determine data hits. The following equation determines whether or not requested data exists in a disk drive buffer segment:
IF (Starting_Address <= Requested_Address <= Starting_Address + Sectors_in_Cache - 1)
// where Sectors_in_Cache represents the number of sectors in a particular cache segment
THEN a cache hit occurred
ELSE a cache miss occurred
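The segment hit test above can be sketched as follows. The inclusive bounds and the segment-descriptor layout (a starting address and sector count per segment) follow the description in the text; the function names are illustrative:

```python
def segment_hit(starting_address, sectors_in_cache, requested_address):
    # Hit when Starting_Address <= Requested_Address <= Starting_Address + Sectors_in_Cache - 1
    return starting_address <= requested_address <= starting_address + sectors_in_cache - 1

def buffer_lookup(segments, requested_address):
    """Scan every (starting_address, sectors_in_cache) segment descriptor, as the
    disk drive microprocessor does; return the index of the hit segment, or None."""
    for index, (start, sectors) in enumerate(segments):
        if segment_hit(start, sectors, requested_address):
            return index
    return None
```

Because every request must be compared against every segment boundary, the management overhead grows with the segment count, which is why such buffers hold only a handful of segments.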
If a cache hit occurs, the requested data may be transferred directly from the disk drive buffer to the requesting device. If a cache miss occurs, however, the requested data must be transferred between the cache and the much slower rotating disk storage media. Furthermore, if the requested data does not exist in any of the buffer segments, then one segment is replaced entirely with the requested data. The process of flushing a full disk buffer segment and replacing it with, for example, a single data block leads to "holes" in the disk drive buffer. For example, a single 512 byte data block could replace an entire 128 KB buffer segment, flushing away a mass of potentially useful information previously stored in that buffer segment.
The subject matter of the present invention provides a disk drive cache and method of management, thereby expanding use of the disk drive buffer beyond a mere repository of data in transit and establishing an overall increase in the number of cache hits relative to data requests issued by a control host to the disk drive.
Summary of the Invention with Objects
Disk drive cache management under the present invention is set associative, block-by-block management, minimizing the cache fragmentation associated with a segmented disk drive buffer and ultimately improving the hit rate and overall access speed.
A preferred embodiment of the present invention in a first aspect is a disk drive bifurcated into an interface engine and a disk engine. The interface engine includes an interface block interacting directly with a control host, and
managing directly a set associative cache memory contained within the interface engine. The disk engine, including a rotating media storage element, stands ready to service any data requests not satisfied by the interface engine. Under such an arrangement of the present invention, the interface engine may autonomously satisfy data requests from the control host, thereby leaving the disk engine free to perform disk management functions relative to the storage media.
It is an object of the present invention to provide a disk cache requiring less management overhead and improving cache hit rates, for an overall reduced access time for a disk drive device.
It is a further object of the present invention to provide a bifurcated disk drive including an interface engine and a disk engine whereby the interface engine may autonomously handle data requests when such data requests are available in a disk cache of the interface engine portion of the disk drive.
The subject matter of the present invention is particularly pointed out and distinctly claimed in the concluding portion of this specification. However, both the organization and method of operation of the invention, together with further advantages and objects thereof, may best be understood by reference to the following description taken with the accompanying drawings wherein like reference characters refer to like elements.
Brief Description of the Drawings
In the drawings:
FIG. 1 illustrates generally by block diagram a disk drive architecture including a set associative block management disk cache in accordance with the present invention.
FIG. 2 illustrates a relationship between a logical block address presented to the disk drive of FIG. 1 and a block tag table of the set associative cache of the present invention.
FIG. 3 is a flow chart illustrating a disk read command applied to the disk drive of FIG. 1, including manipulation of the set associative cache thereof.
FIG. 4 illustrates by block diagram an alternative arrangement for the disk drive of FIG. 1 according to a second embodiment of the present invention.
Detailed Description of a Preferred Embodiment
A preferred embodiment of the present invention as illustrated in the drawings comprises a method and apparatus for management of a disk drive cache. Cache management is conducted on a block-by-block basis, i.e., for each logical block address presented to the disk drive, a corresponding interrogation of the disk drive cache yields a cache hit or a cache miss.
The following disclosure describes a technique to manage a disk cache with less overhead and improved hit rate, resulting in improved overall access time for a disk drive device. The invention will be illustrated by description of implementations in both hardware, i.e., ASIC, and software, i.e., ROM firmware.
FIG. 1 illustrates a disk drive 10 servicing data requests 12 issued by a control host 14. More particularly, control host 14 issues data requests by reference to logical block address values 12a and transfer length 12b. Disk drive 10 responds by executing the requested disk drive activity, i.e., a read or a write operation, relative to data stored at a physical location corresponding to the logical block address 12a. The following discussion will focus, however, on the collection of data from disk drive 10, i.e., a read command. Improved performance by use of a disk memory cache results from prompt response by disk drive 10 to a read command, i.e., the requested data being found in and extracted from the disk drive 10 cache memory. A write command, however,
cannot be satisfied by reference to the disk drive cache memory because the write command requires a change in disk drive content, not mere collection of disk drive 10 content. However, a write to disk can be postponed or delayed once the data is transferred into the cache memory. In fact, successive hits on write commands may result in the same block being updated several times while it remains in cache awaiting transfer to disk, thereby eliminating repetitive disk accesses for those blocks being updated with new information.
The size, i.e., number of bits, required in logical block address 12a is a function of the total storage capacity of hard disk drive 10 and the "block size" declared. By increasing the block size, a smaller number of logical block addresses need be used to reference all data held by disk drive 10. Conversely, a smaller block size requires more logical block addresses, i.e., requires more bits in the logical block address 12a.
Disk drive 10 includes an interface engine 10a and a disk engine 10b. As will be explained more fully hereafter, the present invention allows significant decoupling, i.e., autonomous operation, of the interface engine 10a relative to the disk engine 10b. The interface engine 10a can thereby handle direct interaction with control host 14 in the event of a cache hit. The disk engine 10b further supports interaction with control host 14 in the event of a cache miss.
Interface engine 10a includes an interface ASIC 20 and a RAM buffer 22. RAM buffer 22 (which may be DRAM or SRAM or flash memory, for example) operates generally in the fashion of a data buffer, i.e., a holding place for data taken from the disk engine 10b or data provided by control host 14 and to be written to disk engine 10b. Under the present invention, however, DRAM buffer 22 also serves as a set associative disk cache 24. Interface ASIC 20 interacts directly with control host 14. Interface ASIC 20 also has direct access to the DRAM buffer 22. DRAM buffer 22 provides the disk cache 24 as four memory banks 24a managed with reference to a block tag table 24b also maintained in buffer 22. Generally, the disk drive cache 24 is a four-way set associative cache whereby a given data block is stored at a given offset within any one of the four memory banks 24a. The block tag table 24b includes one
entry for each potential storage location within the memory banks 24a. Under a four-way set associative management scheme, entries in table 24b are organized in four member sets, each set being found at a given offset within block tag table 24b. The offset is a function of the logical block address. Reference to block tag table 24b at the given offset provides an indication of whether data at the requested logical block address 12a may be found in the disk drive cache 24.
Disk engine 10b includes at least one rotating disk media 30 and its associated read/write heads positioned over concentric tracks by an actuator structure, a read/write channel block 32 and a microprocessor block 34. Microprocessor block 34 interacts directly with each of the interface ASIC block 20 and the DRAM buffer 22 of interface engine 10a. Read/write channel block 32 interacts directly with the microprocessor block 34 of disk engine 10b and the DRAM buffer 22 of interface engine 10a. As may be appreciated, the read/write channel 32 handles actual manipulation of the media 30 in response to microprocessor block 34 control. As may be appreciated, microprocessor 34 orchestrates collection of information from and delivery of information to media 30 according to a variety of control schemes. For example, microprocessor 34 maintains a command queue organizing read and write commands and associated data relative to media 30. For the present discussion, however, it will be understood that a given I/O command may be executed generally under the control of microprocessor 34 when necessary to access the media 30. In the event of a cache hit, however, such I/O commands relative to media 30 may be unnecessary. By "cache hit" is meant that the information being sought exists in the buffer 22 and is available for transfer without need for accessing the storage disk media 30.
FIG. 2 illustrates generally the relationship between a logical block address 12a and block tag table 24b in determining a disk drive cache hit, i.e., determining whether data associated with a given logical block address 12a may be taken from one of memory banks 24a.
Logical block address 12a is loaded into a cache address register 40, a most significant portion as tag 40a and a least significant portion as offset 40b. Offset 40b provides an index into block tag table 24b. More particularly, block tag table 24b is divided into four columns, each column providing a tag table for a corresponding one of the four memory banks 24a, individually identified as bank 0, bank 1, bank 2, and bank 3. Thus, offset 40b designates a set of four block tags 42 within block tag table 24b. If the tag 40a of the logical block address 12a matches one of the block tags 42 in table 24b, as specified by offset 40b, then a hit is determined to occur within disk drive cache 24 relative to the logical block address 12a.
For example, consider the block tags 42a, 42b, 42c, and 42d each corresponding to one of the four memory banks 24a designated herein as bank 0, bank 1, bank 2, and bank 3, respectively. If, for example, block tag 42b matches tag 40a of cache address register 40, i.e., matches the most significant portion of logical block address 12a, then the requested data may be obtained from memory bank 1 beginning at a location therein corresponding to offset 40b.
Thus, the organization of disk drive cache 24 is a four-way set associative cache utilizing four memory banks 24a to establish a set of four locations available for storage of any given data block. The logical block address least significant portion, i.e., the offset 40b, defines a set of four block tags 42, and a match between any member of the designated block tag set with the tag 40a, i.e., the most significant portion of logical block address 12a, indicates a block hit in cache 24.
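The tag/offset split and four-way comparison described above can be sketched as follows. The list-of-rows representation of the block tag table and the use of None for an empty entry are assumptions for illustration, not structures from the patent:

```python
def split_address(lba, offset_bits):
    """Split a logical block address into tag 40a (most significant portion)
    and offset 40b (least significant portion)."""
    offset = lba & ((1 << offset_bits) - 1)
    tag = lba >> offset_bits
    return tag, offset

def cache_lookup(block_tag_table, lba, offset_bits):
    """block_tag_table[offset] holds the set of four block tags 42, one per
    memory bank 24a; return the matching bank number on a hit, None on a miss."""
    tag, offset = split_address(lba, offset_bits)
    for bank, block_tag in enumerate(block_tag_table[offset]):
        if block_tag == tag:
            return bank
    return None
```

A hit identifies both that the block is cached and which bank holds it, so the data can be read from that bank at the same offset.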
As may be appreciated, total capacity for disk drive 10, block size, cache 24 size, and the number of memory banks 24a dictate the overall size of cache address register 40 as well as the sizes of its sub-components tag 40a and offset 40b. The following equations indicate generally the size, in bits rounded up, of the cache address register (CAR) 40, tag 40a and offset 40b:
sizeof(CAR) = LOG2(DiskCapacity / BlockSize)
sizeof(OFFSET) = LOG2(CacheSize / (4 * BlockSize))
sizeof(TAG) = sizeof(CAR) - sizeof(OFFSET)
The description of a specific implementation of the present invention will follow with reference to a specific disk capacity and block size as implemented in a four-way set associative disk cache. For example, disk drive 10 has a 1 gigabyte storage capacity and 512 byte block size. Accordingly, logical block address 12a comprises 21 bits and cache address register 40 need be a 21 bit register. DRAM buffer 22 is a 512 KB buffer with 128 KB thereof reserved for microprocessor 34 data. The remaining 384 KB (768 blocks) are divided into the four memory banks 24a, i.e., each memory bank 24a holding 96 KB (192 blocks). Block tag table 24b requires 768 entries, i.e., one entry for each of 768 blocks of data stored in the memory banks 24a. Table 24b is broken into four columns, one for each of memory banks 24a, with each column containing 192 entries. The offset 40b of cache address register 40 is, therefore, an 8 bit field specifying one and only one of the 192 rows of table 24b. As may be appreciated, an offset 40b value in excess of the number of rows in table 24b would still be mapped to one and only one offset, i.e., row, in table 24b. The tag 40a of cache register 40 is a 13 bit field. As may be appreciated, additional bits may be employed in each entry of block tag table 24b as, for example, validity bits, dirty bits, and bits in support of a given replacement policy.
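The sizing equations can be checked against this worked example; the rounding up to whole bits follows the text:

```python
from math import ceil, log2

def sizeof_car(disk_capacity, block_size):
    """sizeof(CAR) = LOG2(DiskCapacity / BlockSize), rounded up to whole bits."""
    return ceil(log2(disk_capacity // block_size))

def sizeof_offset(cache_size, block_size, banks=4):
    """sizeof(OFFSET) = LOG2(CacheSize / (banks * BlockSize)), rounded up."""
    return ceil(log2(cache_size // (banks * block_size)))

# Worked example: 1 gigabyte capacity, 512 byte blocks, 384 KB of cache in 4 banks.
car = sizeof_car(2**30, 512)             # 21 bit cache address register
offset = sizeof_offset(384 * 1024, 512)  # 8 bits, since 192 rows round up to 2^8
tag = car - offset                       # 13 bit tag
```

These computed widths (21, 8, and 13 bits) match the register, offset, and tag sizes given for the example.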
FIG. 3 illustrates a first implementation of the present invention provided by programming, i.e., ROM firmware, of the microprocessor 34 of disk drive 10 and use of a conventional interface ASIC 20. Generally, microprocessor 34 scans the cache 24 by masking the offset 40b and indexing into the tag table 24b. The cache tag 40a is compared to each of the four tags 42 taken from the block tag table 24b. If any of the tags 42 match, a data block transfer begins from the corresponding cache location, i.e., from one of banks 24a at the offset 40b. If more than one block is requested, then the offset is incremented, and the next group of four tags 42 is compared to the most significant portion of the logical block address. This process continues until a cache miss occurs.
In FIG. 3, a disk read procedure receives a logical block address (LBA) and a length datum (LENGTH). In block 50, microprocessor 34 masks to obtain the least significant portion, i.e., bits 0-7, of the logical block address as the variable OFFSET, and masks to obtain the most significant portion, i.e., bits 8-20, of the logical block address as the variable TAG. In block 52, a variable BANK is initialized to reference the first one of memory banks 24a within cache 24. In decision block 54, microprocessor 34 compares the variable TAG with the entry in the tag table 24b corresponding to the current value of variable BANK and the variable OFFSET. More particularly, the BANK variable identifies one of four columns within table 24b and the OFFSET variable identifies a row in table 24b. If the block tag 42 taken from tag table 24b does not match the variable TAG, then processing branches through block 56 where the variable BANK is incremented to reference the next one of memory banks 24a and continues to decision block 58. In decision block 58, microprocessor 34 determines whether additional columns of table 24b remain to be interrogated. Thus, if the variable BANK remains less than four, under the present example of a four-way set associative cache, then processing returns to decision block 54. If, however, all columns of table 24b have been interrogated, i.e., the variable BANK equals the value 4, then a cache miss exists.
While illustrated herein as an iterative interrogation of each of four banks within table 24b, it will be understood that each of the four tags 42 at a given offset within table 24b could be extracted concurrently and a single comparison performed within microprocessor 34 to determine whether and which one of the four block tags 42 matches the tag 40a.
Returning to decision block 54, if one of the block tags 42 taken from tag table 24b matches the variable TAG, then a cache hit exists and processing branches from decision block 54 to block 60 where microprocessor 34 accesses the appropriate memory bank 24a at the given offset and initiates return of a block of data therefrom to the control host 14. Continuing to block 62, the variable LENGTH is decremented and, in block 64, compared to a terminal
value 0. If the LENGTH variable has not yet reached a value 0, then processing branches through block 66 where the logical block address (LBA) is incremented to reference the next logical block, in essence incrementing the variable OFFSET, and processing returns to block 50.
Thus, so long as the cache 24 holds data blocks corresponding to the requested logical block addresses, the algorithm of FIG. 3 returns, block by block, the requested data blocks directly from cache 24.
If, however, a data cache miss occurs, then processing, beginning with the YES branch of block 58, advances to block 68 where microprocessor 34 initiates access to the media 30 and returns data corresponding to the requested logical block address. The collection of data from media 30 and delivery to control host 14 as represented in block 68 of FIG. 3 may proceed according to conventional operation of disk drive 10. Thus, microprocessor 34 would schedule collection of data according to the then current value for the logical block address (LBA) and the then current value for the variable LENGTH.
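The read loop of FIG. 3 can be sketched as follows. The cache_lookup, read_from_media, and send_to_host helpers are hypothetical stand-ins for the structures described above, and replacement of cache contents after the miss is omitted:

```python
def disk_read(lba, length, cache_lookup, read_from_media, send_to_host):
    """Return requested blocks from cache 24 block by block; on the first miss,
    fall back to the media for the remainder of the request."""
    while length > 0:
        block = cache_lookup(lba)
        if block is None:            # cache miss: YES branch of decision block 58
            send_to_host(read_from_media(lba, length))
            return
        send_to_host(block)          # cache hit: block 60 returns the block directly
        length -= 1                  # block 62 decrements LENGTH
        lba += 1                     # block 66 advances to the next logical block
```

The first miss ends the block-by-block cache transfer, matching the flow chart's single fall-through to the media access of block 68.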
Also, in block 68 microprocessor 34 executes any replacement algorithms required in light of the cache miss occurrence. Generally, the requested data is collected from media 30 by way of read/write channel 32. If cache 24 has sufficient room, then the requested data is placed in an available one of memory banks 24a, beginning at the required offset. Microprocessor 34 also updates the content of block tag table 24b to reflect the new content of cache 24. Interface ASIC 20 then completes the final step of the read operation by collecting the data from cache 24 for delivery to control host 14. If cache 24 is full, i.e., there is no place to put the data collected from media 30, then a replacement policy is executed, e.g., such as one known in the art, including the methods of least recently used, least frequently used, and randomly selected. A variety of replacement policies may be implemented, however, within the scope of the present invention.
FIG. 4 illustrates the preferred implementation of the present invention as the disk drive 10'. In the embodiment of FIG. 4, a modified interface ASIC 20' corresponds generally to that of FIG. 1 except that interface ASIC 20' includes the block tag table 24b'. To reduce the size of block tag table 24b' for incorporation into the interface ASIC 20', the block size is increased relative to the previous example from 512 bytes to 2 KB. This allows the resulting 192 entries of cache tag table 24b' to reside entirely within the interface ASIC 20'. Cache 24 is then organized within DRAM buffer 22 as four memory banks 24a' of 48 blocks each. With a 2 KB block size, the 1 gigabyte disk drive 10 uses a 19 bit logical block address 12a', and interface ASIC 20' requires a corresponding 19 bit cache address register (CAR) 40'.
The requested logical block address 12a' is loaded into cache address register (CAR) 40' of the interface ASIC 20' during the disk I/O command phase. The interface ASIC 20' then generates the cache tag 40a' and offset 40b' from the cache address register 40'. The offset 40b' is applied as an index to the block tag table 24b' of ASIC 20'. As a result, the table 24b' read registers 90, individually 90a, 90b, 90c, and 90d, receive the set of four block tags 42a', 42b', 42c', and 42d', respectively. The tag 40a' is then loaded into each of tag registers 92a, 92b, 92c, and 92d. Each of the registers 90 is coupled to the corresponding register 92 by a comparison function 94, individually 94a-94d. Each of the comparison functions 94 may be applied to a hit calculation block 96. Thus, the four-bit output 98 of hit calculation block 96 reflects whether or not a match occurred between one of the tags 42' and the tag 40a', and if a match occurred, which of block tags 42' matched tag 40a'. The four block tags 42' are thereby compared concurrently to the tag 40a'. If one of the block tags 42' matches the tag 40a', then a cache hit exists and interface ASIC 20' collects directly from DRAM buffer 22 the requested data and delivers it to control host 14. More particularly, hit calculation block 96 output 98 is applied to a memory access block 100 having direct access to the DRAM buffer 22. The hit calculation output 98 provided by block 96 provides sufficient information to identify one of the banks 24a' of DRAM buffer 22 containing the requested data. Block 100 utilizes the offset 40b' to index into the appropriate one of banks 24a' to initiate collection of a data block thereat.
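A software model of the concurrent comparison (comparison functions 94a-94d feeding hit calculation block 96) might look like this; the encoding of the four-bit output as one bit per bank is an assumption consistent with the description:

```python
def hit_output(block_tags, tag):
    """Model of the four-bit output 98: bit N is set when the block tag read
    into register 90 for bank N matches the tag 40a' in the tag registers 92.
    In a well-formed cache at most one bit is set; zero means a cache miss."""
    return sum(1 << bank for bank, block_tag in enumerate(block_tags) if block_tag == tag)
```

In the ASIC the four comparisons happen in parallel hardware rather than in a loop; the loop here merely models the combined result.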
If hit calculation block 96 indicates a cache miss, it generates a block
transfer interrupt (BTI) signal 102 for application to microprocessor 34. Microprocessor 34 in turn obtains the requested data in conventional fashion from the storage media 30 by way of read/write channel 32. As may be appreciated, microprocessor 34 must have read/write access to the block tag table 24b' to update table 24b' relative to information collected from media 30. More particularly, each data block collected from media 30 is placed in one of memory banks 24a' at the required offset, and the block tag table 24b' must be updated to reflect the new content of DRAM buffer 22. Eventually, interface ASIC 20' collects the requested data from DRAM buffer 22 and delivers it to control host 14 in conventional fashion.
The above described implementations of the present invention have focused on the reading of data from disk drive 10. A write operation need not determine the content of cache 24, i.e., there is no need to determine a "cache hit" with respect to a write operation. Generally, a write procedure begins by placing the new data in a selected one of memory banks 24a, i.e., at the required offset therein according to a given replacement policy, and modifying the block tag table 24b to reflect the new content of cache 24. Once the new data is placed in buffer 22, a write command is scheduled by microprocessor 34 to take the new data from buffer 22 and write it appropriately upon the media 30. As may be appreciated, however, the new data remains in cache 24 and is available if requested without reference to the media 30.
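The write path described here might be sketched as follows; the pick_bank callback standing in for the replacement policy, and the container layout, are illustrative assumptions:

```python
def cache_write(banks, block_tag_table, lba, data, offset_bits, pick_bank):
    """Place new data in cache 24 at the required offset of a bank chosen by the
    replacement policy, and update block tag table 24b to match; the later media
    write is scheduled separately by the microprocessor and omitted here."""
    offset = lba & ((1 << offset_bits) - 1)
    tag = lba >> offset_bits
    bank = pick_bank(block_tag_table[offset])   # replacement policy chooses the bank
    banks[bank][offset] = data                  # new data stays available for later hits
    block_tag_table[offset][bank] = tag         # table 24b reflects the new cache content
    return bank
```

Because the tag table is updated at write time, a subsequent read of the same logical block address hits in cache without touching the media.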
Thus, an improved apparatus and method for disk drive management has been shown and described incorporating a set associative cache function into the disk drive data buffer. The disk drive cache is managed on a block-by-block basis to allow an interface engine portion of the disk drive to interact directly with the control host, thereby freeing the disk engine portion of the disk drive to conduct other activities, i.e., moving data onto or off of the storage media. For data available in the disk drive cache, the interface engine operates autonomously, satisfying data requests without requiring intervention by the disk drive microprocessor. The present invention provides an overall increase in cache hits while concurrently reducing fragmentation of the disk drive cache, thereby increasing overall performance of the disk drive in reduction of
average data access time.
It will be appreciated that the present invention is not restricted to the particular embodiment that has been described and illustrated, and that variations may be made therein without departing from the scope of the invention as found in the appended claims and equivalents thereof. For example, a 2^N-way set associative cache may be utilized in which the sets are associated by powers of two. Other bases and powers are also included within the scope of the present invention, such as three-way or five-way set associative disk caches, for example.
What is claimed is:
Claims
1. A disk drive responsive to data block requests from a host, said data block requests including at least a requested data address specifying a data block, said disk drive comprising: a storage engine having capacity to hold a plurality of data blocks, each data block being identified by a data address presentable to said disk drive as said requested data address; and an interface engine including a cache memory, said cache memory storing a copy of selected ones of data blocks held by said storage engine, said interface engine including an identification of said selected ones of said data blocks, said disk drive providing to said host in response to said requested data address a cache memory copy of the associated data block when available in said cache memory.
2. A disk drive according to claim 1 wherein said data address includes a logical block address datum and a length datum, the logical block address specifying a given data block of said storage engine and said length datum specifying a logical sequence of data blocks held by said storage engine beginning with said given data block.
3. A disk drive according to claim 2 wherein said interface engine returns a first portion of said logical sequence of data blocks sequentially to said host, said first portion of said logical sequence of data blocks being present in said cache memory, and a next one of said logical sequence of data blocks following said first portion being not present in said cache memory.
4. A disk drive according to claim 1 wherein said identification of selected ones of said data blocks comprises a block tag table, said block tag table including a block tag entry for each data block storage location of said cache memory, and including at a given offset therein a block tag value corresponding to a most significant portion of a data address associated with a data block held in a corresponding location in said cache memory.
5. A disk drive according to claim 1 wherein said storage engine comprises: a processor element; a media storage element having storage locations for said plurality of data blocks; and a data channel coupled to said media element and providing access to said storage locations as a function of the corresponding data address.
6. A disk drive according to claim 1 wherein said interface engine comprises an interface block coupled to said host and coupled to said cache memory, said interface block including said identification of said selected ones of said data blocks and providing autonomous interaction with said host in satisfying a requested data address corresponding to data held in said cache memory.
7. A method of disk drive management, the disk drive responsive to a host device issuing data block requests including at least a requested data block address corresponding to a data block held by said disk drive, the method comprising the steps: maintaining within said disk drive a memory cache holding copies of selected ones of data blocks stored on a storage media of said disk drive; maintaining within said disk drive a block tag table including a block tag entry for each data block held by said memory cache, said block tag table indicating presence of a given data block within said cache memory; and responding to said data block request by interrogating said block tag table to determine presence of the corresponding data block within said memory cache and providing a corresponding data block if present in said cache memory.
8. A method according to claim 7 wherein said step of maintaining said memory cache allows for any given data block a limited number of storage locations within said memory cache, a least significant portion of said requested data block address specifying said limited number of storage locations.
9. A method according to claim 8 wherein said step of maintaining within said disk drive a block tag table includes maintaining each block tag entry as a most significant portion of a data block address for a data block maintained within said memory cache.
10. A method according to claim 7 wherein said step of maintaining within said disk drive said block tag table includes maintaining a most significant portion of a data block address at a given offset within said block tag table, said most significant portion of said data block address corresponding to a most significant portion of a data block address for a data block maintained in said cache memory at said given offset.
11. A method according to claim 7 wherein said step of responding to said data block request includes responding to a data block request comprising said data block address and further a length datum specifying a sequence of data blocks beginning at said data block address, said step of responding including the step of providing a first portion of said sequence as data blocks taken from said memory cache when available therein, and a second portion beginning with a data block not available in said memory cache.
12. In a disk drive responsive to a data address specifying a data block, an improvement comprising: a cache memory within said disk drive and including N memory banks, each memory bank having capacity to hold M data blocks; a block tag table within said disk drive and including NxM block tag entries organized as N columns and M rows, an index applied to said block tag table identifying one row of N block tag entries, each block tag entry holding a most significant portion of a data address for a data block held in said cache memory at an offset therein corresponding to the offset of the block tag entry row; and a cache management element applying a least significant portion of a received data address as said index to said block tag table and comparing each of the identified N block tags to a most significant portion of said data address, and upon correspondence therebetween providing from said cache memory the corresponding data block in response to said data address.
13. An improvement according to claim 12 wherein said disk drive is responsive to said data block address in conjunction with a length datum, said length datum specifying a sequence of data blocks beginning at said data address, said cache management element returning a first portion of said sequence of data blocks as taken from said cache memory when available therein, and a second portion of said sequence of data blocks beginning with a data block not present in said cache memory, said second portion of said sequence being then taken from a storage media of said disk drive.
14. An improvement according to claim 13 wherein said storage media is a rotating storage media divided into concentric tracks, each track divided into sectors, each data block corresponding to a multiple of said sectors.
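Claims 12-14 describe an N-way set-associative block cache inside the drive: the least significant portion of a block address indexes one row of the tag table, the most significant portion is compared against the N tags in that row, and a sequential read is served from the cache only up to the first missing block, after which the remainder comes from the storage media. A minimal sketch of that lookup and sequential-read behavior follows; all names (`SetAssociativeCache`, `read_sequence`, `read_disk`) are illustrative and not taken from the patent, and since these claims do not specify a replacement policy, the caller picks the victim way on insert.

```python
# Sketch of the N-way set-associative disk cache of claims 12-13.
# Names are illustrative, not from the patent text.

class SetAssociativeCache:
    def __init__(self, n_ways=4, n_rows=256):
        self.n_ways = n_ways    # N memory banks (ways)
        self.n_rows = n_rows    # M block tag rows per bank
        # Block tag table: M rows x N entries; None marks an empty slot.
        self.tags = [[None] * n_ways for _ in range(n_rows)]
        # Cache memory: each data block sits at the same (row, way)
        # offset as its tag entry.
        self.data = [[None] * n_ways for _ in range(n_rows)]

    def _split(self, block_addr):
        # Least significant portion of the address indexes the tag
        # table; the most significant portion is stored as the tag.
        return block_addr % self.n_rows, block_addr // self.n_rows

    def lookup(self, block_addr):
        row, tag = self._split(block_addr)
        for way in range(self.n_ways):
            if self.tags[row][way] == tag:
                return self.data[row][way]   # hit: serve from cache
        return None                          # miss

    def insert(self, block_addr, block, way):
        # The claims leave replacement unspecified; the caller
        # supplies the victim way.
        row, tag = self._split(block_addr)
        self.tags[row][way] = tag
        self.data[row][way] = block

    def read_sequence(self, start_addr, length, read_disk):
        # Claim 13: serve the leading blocks from the cache; at the
        # first block not present, fetch the remainder of the
        # sequence from the disk media in one request.
        blocks = []
        for i in range(length):
            hit = self.lookup(start_addr + i)
            if hit is None:
                blocks.extend(read_disk(start_addr + i, length - i))
                break
            blocks.append(hit)
        return blocks
```

For example, with blocks 5 cached but 6 and 7 absent, `read_sequence(5, 3, read_disk)` returns block 5 from the cache and blocks 6-7 from the (simulated) media, matching the two-portion response of claims 11 and 13.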
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US24713994A | 1994-05-20 | 1994-05-20 | |
US08/247,139 | 1994-05-20 | 1994-05-20 | |
Publications (1)
Publication Number | Publication Date |
---|---|
WO1995032473A1 (en) | 1995-11-30 |
Family
ID=22933735
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US1995/006510 WO1995032473A1 (en) | Set associative block management disk cache | 1994-05-20 | 1995-05-19 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO1995032473A1 (en) |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4920478A (en) * | 1985-05-29 | 1990-04-24 | Kabushiki Kaisha Toshiba | Cache system used in a magnetic disk controller adopting an LRU system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP3898782B2 (en) | | Information recording / reproducing device |
US4466059A (en) | | Method and apparatus for limiting data occupancy in a cache |
US5133060A (en) | | Disk controller includes cache memory and a local processor which limits data transfers from memory to cache in accordance with a maximum look ahead parameter |
US5596736A (en) | | Data transfers to a backing store of a dynamically mapped data storage system in which data has nonsequential logical addresses |
US6862660B1 (en) | | Tag memory disk cache architecture |
US6601137B1 (en) | | Range-based cache control system and method |
JP2554449B2 (en) | | Data processing system having cache memory |
US5991775A (en) | | Method and system for dynamic cache allocation between record and track entries |
JP3697149B2 (en) | | How to manage cache memory |
US5233702A (en) | | Cache miss facility with stored sequences for data fetching |
JP3183993B2 (en) | | Disk control system |
JPH06342395A (en) | | Method and medium for storage of structured data |
JP4060506B2 (en) | | Disk controller |
US5420983A (en) | | Method for merging memory blocks, fetching associated disk chunk, merging memory blocks with the disk chunk, and writing the merged data |
JP2002140231A (en) | | Extended cache memory system |
JPH05303528A (en) | | Write-back disk cache device |
US5717888A (en) | | Accessing cached data in a peripheral disk data storage system using a directory having track and cylinder directory entries |
US6092145A (en) | | Disk drive system using sector buffer for storing non-duplicate data in said sector buffer |
AU707876B2 (en) | | System and method for sequential detection in a cache management system |
WO1995032473A1 (en) | | Set associative block management disk cache |
JP3111912B2 (en) | | Disk cache control method |
JPH04246746A (en) | | Storage device system |
JPH1011337A (en) | | Method for controlling data in storage device |
JPH0460730A (en) | | Cache control system |
US20060047901A1 (en) | | Access control method, disk control unit and storage apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| AK | Designated states | Kind code of ref document: A1. Designated state(s): JP KR |
| AL | Designated countries for regional patents | Kind code of ref document: A1. Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE |
| DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) | |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
| 122 | Ep: pct application non-entry in european phase | |