WO1994022134A1 - Buffer control for data transfer within hard disk during idle periods - Google Patents


Info

Publication number
WO1994022134A1
Authority
WO
WIPO (PCT)
Prior art keywords
hard disk
storage
disk drive
reserved
information segments
Prior art date
Application number
PCT/US1994/002980
Other languages
French (fr)
Inventor
Michael Anderson
Original Assignee
Micropolis Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Micropolis Corporation filed Critical Micropolis Corporation
Publication of WO1994022134A1 publication Critical patent/WO1994022134A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0866Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/31Providing disk cache in a specific location of a storage system
    • G06F2212/312In storage controller
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656Data buffering arrangements

Definitions

  • This invention relates to the field of hard disk drive digital storage systems. More particularly, this invention relates to an improved high speed, rapid storing disk drive and cache storage system.
  • In the field of digital information storage systems, the hard disk drive has become the staple mass storage medium for most personal and commercial computer systems. As hard disk storage capacity and central processing unit (CPU) clocking speeds continue to increase, many techniques have been developed to increase the data transfer rate between the CPU and mass storage disk drive.
  • Write delays occur for a number of reasons, including delays associated with positioning of the write head over the specified encoding track on the disk (seek time), the time to rotate the disk to the specific location on the track (latency delay), and the time required to physically transfer the information to the disk. Disk drive utilization continues only after a "write complete" signal is received by the CPU confirming a successful transfer of the information to the hard disk. Thus, even the most powerful high-speed computers suffer from data transfer inefficiencies associated with the process of writing data to the hard disk.
  • a principal objective of the present invention is to provide a rapid storing hard disk drive system which provides high-performance write capabilities while being devoid of seek and latency delays during multiple write operations.
  • a reserved area of contiguous memory locations is provided on the hard disk, and is combined with a cache buffer to form an effective two-stage, fast access data cache system which optimizes hard disk performance by performing write operations during periods of disk drive inactivity.
  • a hard disk drive storage unit, a controller for the disk drive, and a cache buffer operate together to provide rapid storing of digital information to consecutive locations in a reserved area on the hard disk, the information eventually being transferred to final storage locations elsewhere on the hard disk.
  • An important feature of the invention is the storing of a plurality of digital information segments received from a host data processor in a cache buffer.
  • the information segments awaiting eventual storage are accumulated in the cache buffer during periods of high disk drive utilization.
  • the cache buffer memory may have a storage capacity of 32K bytes, although the memory size may be increased or decreased depending on design requirements.
  • when a predetermined cache buffer capacity limit, preferably seventy-five percent of total buffer capacity, is exceeded, the information segments residing in the cache buffer are transferred to a reserved area of contiguous memory on the hard disk.
  • the reserved area is preferably located along the periphery of the hard disk.
  • a further aspect of the present invention involves the redistribution of information segments from the reserved area to final storage areas, located elsewhere on the disk, during periods of disk drive inactivity.
  • periods of inactivity are associated with user "think time," or periods during which the workstation awaits user interaction.
  • the transfer of data from the reserved area to final storage locations occurs in the background during these periods of inactivity, unbeknownst to the user.
  • twenty seconds may be required to transfer eight megabytes of data from the reserved area to final storage locations during these idle periods.
  • the cache buffer and reserved area are available to accept the next series of information segments when hard disk activity resumes.
  • the present invention optimizes hard disk performance by taking full advantage of idle periods by performing data storage operations in the background.
  • the disk drive continues to respond to subsequent read and write requests during periods of high disk drive demand.
  • the transfer of information segments between the cache buffer and the reserved area is managed by an indirection table which may be included in the disk drive controller electronics.
  • Each information segment to be written to the hard disk includes a final storage area address which indicates the eventual storage location on the hard disk assigned to that segment.
  • an indirect or temporary address is assigned to each information segment.
  • the indirection table is updated to reflect the segment's current location. The indirection table is again updated when the segment is eventually transferred out of the reserved area and to its final storage area address on the hard disk.
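The indirection-table bookkeeping described above can be sketched as follows. This is an illustrative model only: the class and method names are assumptions, not the patent's implementation, and an actual controller would hold the table in the non-volatile memory described later.

```python
# Hypothetical sketch of the indirection table: each segment is keyed by
# its final disk address, and the entry records where the segment's data
# currently resides (cache buffer, reserved area, or final storage area).

CACHE, RESERVED, FINAL = "cache", "reserved", "final"

class IndirectionTable:
    def __init__(self):
        self._entries = {}  # final_address -> (location, indirect_address)

    def on_write_to_cache(self, final_addr, cache_slot):
        # A new (or updated) segment now lives in the cache buffer.
        self._entries[final_addr] = (CACHE, cache_slot)

    def on_move_to_reserved(self, final_addr, reserved_addr):
        # Updated when the segment is flushed to the reserved area.
        self._entries[final_addr] = (RESERVED, reserved_addr)

    def on_flush_to_final(self, final_addr):
        # Segment reached its final storage location; the indirect entry
        # is no longer needed.
        self._entries.pop(final_addr, None)

    def lookup(self, final_addr):
        # Untracked segments are assumed to reside at their final address.
        return self._entries.get(final_addr, (FINAL, final_addr))
```

A read or modify request consults `lookup` first, so the final storage area never has to be searched while the segment is still in transit.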
  • Yet another aspect of the present invention involves the capability to read and modify information segments resident in either the cache buffer or the reserved area of the hard disk. Because the indirection table maintains the indirect address of each information segment located in the cache buffer or the reserved area, the final storage area of the hard disk need not be searched and accessed, thereby avoiding appreciable time delays. Since the desired information to be read or modified is completely resident in either the cache buffer or the relatively small high-speed reserved area, the average access time associated with read and modify operations is substantially reduced.
  • the cache buffer is comprised of high-speed random access memory (RAM).
  • the reserved area typically comprises between five and fifteen percent of the total hard disk capacity. Typically, ten megabytes of memory may be dedicated for reserved area processing. However, substantially greater capacity may be required for reserved area processing depending on the particular application and design requirements. Thus, for a specific example, a ten percent reserved area for a 1.5 gigabyte, 3 1/2 inch drive would be about 150 megabytes. Further, the reserved area is preferably located along the periphery of the hard disk, which is often the most reliable and readily accessible portion, and may be comprised of multiple tracks, although a single track may be appropriate in some applications.
  • information segments may be written to, and accessed from, the reserved area in such a way as to simulate the operation of a first-in-first-out (FIFO) memory stack.
  • Two reserved area pointers may be employed to manage the transfer of data between the cache buffer and the reserved area.
  • a pointer is a register in memory which contains the address of a particular segment of data. Thus, the address in the pointer register literally "points" to the location in the reserved area where the data in question resides.
  • the first information segment written to the reserved area is given priority over subsequently written segments when segments are transferred from the reserved area to the cache buffer, hence the term first-in-first-out.
  • One pointer (the "beginning" pointer) contains the address of this first segment transferred to the reserved area.
  • the other pointer (the "ending" pointer) contains the address of the last, or most recently transferred, segment.
  • all transfer operations out of the reserved area begin at the location indicated by the beginning pointer address. All transfer operations into the reserved area begin at the address immediately following the ending pointer address.
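The two-pointer FIFO scheme just described might be modeled as a ring of fixed-size segment slots. All names and the slot layout are hypothetical; for simplicity the sketch treats the ending pointer as "one past the last written segment," which matches the rule that transfers into the reserved area begin immediately after the ending address.

```python
# Illustrative FIFO model of the reserved area with beginning/ending
# pointers (slot granularity and names are assumptions).

class ReservedAreaFIFO:
    def __init__(self, num_slots):
        self.slots = [None] * num_slots
        self.beginning = 0   # oldest segment, next to be redistributed
        self.ending = 0      # slot immediately after the newest segment
        self.count = 0

    def write_segment(self, segment):
        # Transfers in start at the address following the ending pointer.
        if self.count == len(self.slots):
            raise RuntimeError("reserved area full; redistribute first")
        self.slots[self.ending] = segment
        self.ending = (self.ending + 1) % len(self.slots)
        self.count += 1

    def read_oldest(self):
        # Transfers out start at the beginning pointer: first in, first out.
        if self.count == 0:
            return None
        segment = self.slots[self.beginning]
        self.beginning = (self.beginning + 1) % len(self.slots)
        self.count -= 1
        return segment
```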
  • information segments may be written to, and accessed from, the reserved area in such a way as to simulate the operation of a last-in-first-out (LIFO) memory stack.
  • a reserved area pointer (the "ending" pointer) is set to the address of the last segment written to the reserved area.
  • the last segment written into the reserved area is given priority over previously stored segments, and is the first segment transferred to the cache buffer.
  • the pointer is then set to the address of the next most recently stored segment in the reserved area.
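The single-pointer LIFO alternative can be sketched the same way. Again the names and slot layout are illustrative assumptions, not the patent's circuitry; here the ending pointer directly holds the index of the most recently written segment.

```python
# Illustrative LIFO model of the reserved area managed by a single
# ending pointer (names and layout are assumptions).

class ReservedAreaLIFO:
    def __init__(self, num_slots):
        self.slots = [None] * num_slots
        self.ending = -1  # index of the most recently written segment

    def write_segment(self, segment):
        if self.ending + 1 == len(self.slots):
            raise RuntimeError("reserved area full")
        self.ending += 1
        self.slots[self.ending] = segment  # pointer marks the last segment

    def read_most_recent(self):
        # The last segment written is the first transferred back out; the
        # pointer then falls back to the next most recently stored segment.
        if self.ending < 0:
            return None
        segment = self.slots[self.ending]
        self.ending -= 1
        return segment
```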
  • non-volatile solid state memory comprising at least the cache buffer, indirection table, and reserved area pointers.
  • the non-volatile memory is comprised of the combination of random access memory (RAM), preferably static RAM (SRAM), and a lithium battery cell.
  • Yet another aspect of the invention involves the generation of a "write complete" signal informing the host processor that the information segment was properly transferred to the cache buffer. Upon the successful transfer of each information segment from the host processor to the cache buffer, the host processor is informed that the transfer was completed. Transferring data to the cache buffer for eventual storage, rather than the hard disk, avoids the processing delays inherent in traditional approaches. Traditional storage devices generate a write complete signal only after each information segment is transferred to the hard disk. Consequently, seek time and latency delays are incurred for each transfer of data to the hard disk.
  • the transfer of information segments from the cache buffer to the reserved area continues until the reserved area is filled.
  • the transfer operation is temporarily suspended and a "write complete" signal is sent to the host processor only after the reserved area contents, or a portion thereof, have been redistributed to the appropriate final storage area locations.
  • Another feature of the invention may include a procedure for storing sequentially received information segments, as contrasted to randomly received segments, in the final storage area of the hard disk.
  • a sequence of information segments is sequentially ordered in the cache buffer after being received. Rather than being transferred to the reserved area to await future distribution in the final storage area, the sequentially ordered information segments are written to consecutive locations in the final storage area without any lost revolutions of the hard disk. By writing to contiguous address locations in the final storage area, write efficiency is greatly increased when compared to the conventional method of storing fragments of data in random storage locations on the hard disk.
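The routing decision described above, where sequentially ordered segments bypass the reserved area and go straight to contiguous final addresses, might be sketched as follows. The segment representation (a final address paired with data) and the function names are assumptions for illustration.

```python
# Hedged sketch: detect whether buffered segments form a sequential run,
# so they can be written directly to contiguous final storage addresses
# rather than staged in the reserved area.

def is_sequential(segments, segment_size=1):
    # segments: list of (final_address, data) tuples in receive order.
    addrs = [addr for addr, _ in segments]
    return all(b - a == segment_size for a, b in zip(addrs, addrs[1:]))

def route_segments(segments):
    # Sequential runs take the fast path to the final storage area;
    # random segments take the two-stage path through the reserved area.
    return "final_area_direct" if is_sequential(segments) else "reserved_area"
```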
  • FIG. 1 is a perspective view of a Winchester or hard disk drive with the upper cover removed;
  • FIG. 2 is a graphical depiction in block diagram form of the components comprising the rapid storing architecture;
  • FIG. 3 is a depiction of the storage and addressing process in which data is transferred between the disk controller and the physical hard disk;
  • FIG. 4 is a flow chart setting forth the successive steps accomplished in the rapid storing disk drive architecture.
  • FIG. 1 is a schematic showing of a Winchester or conventional disk drive storage unit 8 with the cover removed.
  • the storage unit of FIG. 1 includes a plurality of storage disks 10 arranged in a stack which rotate in unison about a common spindle at relatively high speeds.
  • the read/write apparatus includes several magnetic read/write heads 9 individually attached to corresponding suspension arms 7 which move in unison with respect to the stack of hard disks 10 as the unitary positioner 11 rotates.
  • FIG. 2 is a diagrammatic block diagram of the high-speed rapid storing disk drive system.
  • Hard disk 10 is comprised of at least a reserved area 12 of contiguous storage addresses and a final storage area 13.
  • the reserved area 12 typically comprises between five and fifteen percent of total hard disk 10 storage capacity, although this percentage may vary depending on the specific system application.
  • Digital information segments sent from host central processing unit (CPU) 28 are transmitted along a SCSI (Small Computer Systems Interface) data bus 26 and received by disk controller 14, which controls the transmission of information segments between host CPU 28 and hard disk 10.
  • Disk controller 14 is comprised of at least indirection table 18, cache memory buffer 16, and reserved area ending pointer 20.
  • a reserved area beginning pointer 22 operates in concert with reserved area ending pointer 20 to coordinate information exchanged between reserved area 12 of hard disk 10 and disk controller 14.
  • non-volatile solid state memory 24 comprising at least indirection table 18, cache memory buffer 16, reserved area ending pointer 20, and reserved area beginning pointer 22 included within disk controller 14.
  • the non-volatile memory portion 24 of disk controller 14 is preferably comprised of static random access memory (SRAM).
  • a lithium battery (not shown) is provided to supply sufficient power to ensure that the non-volatile memory contents are not lost should standard system power be interrupted.
  • non-volatile memory ensures complete information retention should supply power be interrupted.
  • In FIG. 3, a diagrammatic representation of the addressing and storing processes of the present invention is provided.
  • Information segments sent from host CPU 28 are received by cache memory buffer 16.
  • Cache memory buffer 16 is configured to accept a plurality of information segments sent from host CPU 28 for eventual storage on hard disk 10.
  • Indirection table 18 coordinates addressing and storing of information segments in reserved area 12 and final storage area 13 of hard disk 10.
  • Indirection table 18 is comprised of at least three address registers: final disk address register 34, indirect cache register 30, and indirect reserved area register 32.
  • Each information segment received by cache memory buffer 16 includes a final disk address which represents the specific destination address in the final disk area 13 unique for each received information segment. The final disk address for each information segment is maintained in final address register 34.
  • a corresponding indirect cache address for each information segment is maintained in indirect cache register 30, while said information segment data resides in cache memory buffer 16.
  • the final disk address associated with that information segment is loaded into final address register 34.
  • a corresponding indirect cache address is generated and maintained in indirect cache register 30 identifying the location of the information segment data residing in cache memory buffer 16. Additional information segments are received by cache memory buffer 16 from host CPU 28 until a predetermined capacity limit has been exceeded. In a preferred embodiment, the cache memory buffer 16 capacity limit is seventy-five percent of total cache memory buffer 16 storage capacity.
  • the plurality of information segments accumulated in cache memory buffer 16 are transferred to reserved area 12 of hard disk 10.
  • Indirection table 18 is updated to reflect the transfer of each information segment from cache memory buffer 16 to reserved area 12.
  • An indirect reserved area address is generated and maintained in indirect reserved area register 32 for each information segment transferred to reserved area 12 on hard disk 10. Transfer of information segments from cache memory buffer 16 to reserved area 12 continues until cache memory buffer 16 is empty. However, in the event that host CPU 28 sends new information segments to cache memory buffer 16 prior to completing the entire transfer, said transfer operation is temporarily suspended or interrupted. Priority is given to acceptance of new information segments from host CPU 28 by cache memory buffer 16 in this situation. Transfer of information segments from cache memory buffer 16 to reserved area 12 resumes after the new information segments are received. In an alternative embodiment, transferring of information segments from cache memory buffer 16 to reserved area 12 occurs simultaneously or concurrently with accepting of new information segments by cache memory buffer 16 from host CPU 28.
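The priority rule described above, that draining the cache buffer into the reserved area is suspended whenever the host sends new segments, can be sketched as a simple loop. All names are hypothetical, and the concurrent-transfer variant mentioned as an alternative embodiment is not modeled here.

```python
# Illustrative sketch of the suspend-on-host-traffic rule: moving segments
# from the cache buffer to the reserved area yields whenever the host has
# new segments pending (function and parameter names are assumptions).

def drain_cache(cache, reserved, host_pending):
    """Move segments from the cache buffer to the reserved area, giving
    priority to acceptance of new host segments. Returns segments moved."""
    moved = 0
    while cache:
        if host_pending():
            break  # suspend the transfer; host traffic comes first
        reserved.append(cache.pop(0))
        moved += 1
    return moved
```

The caller would resume `drain_cache` after the new host segments are received, exactly as the text describes.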
  • cache memory buffer 16 and reserved area 12 overcomes deficiencies inherent in prior designs by providing means to continuously accept information segments from host CPU 28 for eventual storage on hard disk 10 without the transfer delays associated with writing each information segment to hard disk 10 prior to accepting additional information segments.
  • the inherent inefficiencies of writing information to the hard disk, namely seek time and latency delays, associated with prior art designs are thus avoided and overcome.
  • randomly received information segments are accepted by cache memory buffer 16 and sequentially ordered in said cache memory buffer 16 in the order received.
  • the plurality of ordered information segments are written to contiguous memory addresses in reserved area 12 thereby preserving the ordering scheme.
  • Reserved area address register 32 contains the corresponding addresses of each transferred information segment residing in reserved area 12.
  • sequentially received information segments are accepted by cache memory buffer 16 and sequentially ordered in cache memory buffer 16 in the order of acceptance.
  • the successive data blocks are transferred directly to contiguous final disk area addresses in the final disk area 13 without any lost revolutions of hard disk 10.
  • This embodiment of the present invention overcomes seek time and latency deficiencies of prior art designs by transferring a plurality of sequentially received, successive data blocks to contiguous final address locations 13 on hard disk 10, rather than individually transferring each information segment to non-contiguous hard disk 10 memory locations.
  • Yet another embodiment of the present invention involves the transfer of information segments residing in reserved area 12 to final disk area locations 13 on hard disk 10. This transfer operation occurs during periods in which host CPU 28 discontinues sending new information segments to cache memory buffer 16. Typically, such periods of inactivity are associated with user "think time," or periods during which the workstation awaits user interaction. Upon sensing such a period of inactivity, preferably defined as a duration of more than one second in which no new information segments are received by cache memory buffer 16, transfer of information segments from reserved area 12 to final storage locations 13 occurs in the background, unbeknownst to the user.
  • information segments are transferred out of reserved area 12 and into cache memory buffer 16 prior to being transferred to final storage locations 13.
  • the transfer of information segments from reserved area 12 to final storage locations 13 is accomplished without the intermediate transfer step involving cache memory buffer 16.
  • the transfer operation continues until all information segments residing in reserved area 12 are transferred to predetermined final disk area addresses 13. In the event that new information segments are sent from host CPU 28 to cache memory buffer 16 during this transfer operation, said transfer operation is temporarily suspended or interrupted. In an alternative embodiment, acceptance of new information segments by cache memory 16 occurs simultaneously or concurrently with the transfer of reserved area 12 information segments to final disk area address locations 13.
  • a reserved area beginning pointer 22 and reserved area ending pointer 20 operate in concert to manage addressing duties associated with information segments transferred between reserved area 12 and cache memory buffer 16.
  • Reserved area beginning pointer 22 contains the indirect reserved area address of the first information segment transferred to, and residing in, reserved area 12.
  • Reserved area ending pointer 20 contains the indirect address of the last information segment transferred to, and residing in, reserved area 12. Use of both beginning pointer 22 and ending pointer 20 provides an efficient method of addressing and storing information segments in reserved area 12.
  • Information segments may be written to, and accessed from, reserved area 12 in such a way as to simulate the operation of a first-in-first-out (FIFO) memory stack.
  • two reserved area pointers are used to efficiently manage data transfers into and out of reserved area 12.
  • Reserved area beginning pointer 22 points to the address of the first information segment written to reserved area 12.
  • Reserved area ending pointer 20 points to the address of the last information segment written to reserved area 12.
  • Ending pointer 20 advances to the new ending address upon completion of the transfer of the new data segment.
  • reserved area 12 address locations starting at the address contained in reserved area beginning pointer 22. Beginning pointer 22 is advanced to the next successive information segment awaiting transfer. Information segments are sequentially transferred from reserved area 12 until reserved area 12 is empty.
  • information segments may be written to, and transferred from, reserved area 12 in such a way as to simulate the operation of a last-in-first-out (LIFO) memory stack.
  • a single ending pointer 20 is used to manage data transfers into and out of reserved area 12.
  • ending pointer 20 contains the address of the last, or most recent, segment written to reserved area 12.
  • ending pointer 20 advances to contain the address of the latest segment written to reserved area 12.
  • information segments are transferred out of reserved area 12, said transfer begins with the most recently stored data segment and continues with the next most recently stored segment. The transfer operation continues until the reserved area 12 is empty.
  • ending pointer 20 advances to contain the address of the next most recently stored segment.
  • information segments residing in reserved area 12 are transferred to cache memory buffer 16 prior to being stored in final disk area addresses 13 on hard disk 10. This transfer operation occurs during periods of inactivity during which host CPU 28 discontinues sending information segments to cache memory buffer 16.
  • the predetermined order is maintained within cache memory buffer 16, said transfer being reflected in indirection table 18.
  • Indirection table 18 is updated to reflect the transfer of each information segment from reserved area 12 to cache memory buffer 16.
  • Indirect cache register 30 is updated to contain the indirect address of each information segment now residing in cache memory buffer 16.
  • Final address register 34 contains the final disk area 13 address of each information segment awaiting transfer from cache memory buffer 16 to final disk area 13. The final disk address of each information segment transferred to final disk area 13 is removed from final address register 34 upon completion of said transfer.
  • a host CPU initially sends a "write" command for a specific information segment, as indicated at 40.
  • the information segment includes an address portion and a data portion which is transferred to the cache buffer, as at 42.
  • a look-up to the indirection table is performed to determine if the information segment address is currently maintained in the indirection table, indicating an earlier version of the particular information segment, as at 44. If said address is located in the indirection table, the indirection address is updated to point to the current information segment data loaded in the cache buffer, as at 46.
  • a "write complete" signal is transmitted to host CPU, as at 48.
  • Information segments accumulate in the cache memory buffer until a predetermined capacity limit is exceeded, as at 50.
  • the capacity limit is preferably set to seventy-five percent of total cache memory buffer storage capacity.
  • the predetermined limit is exceeded, the information segments stored in the cache memory buffer are transferred to the reserved area on the hard disk, as at 52.
  • the indirection table is updated, as at 54, to reflect the current location of each information segment in the reserved area of the hard disk.
  • information segments stored in the cache memory buffer are transferred to the reserved area on the hard disk, as at 58.
  • the indirection table is updated, as at 60, to reflect the current address of each information segment in the reserved area of the hard disk.
  • the reserved area beginning pointer is set to the address location of the first information segment transferred to the reserved area, as at 60. Starting with the first information segment transferred to the reserved area, the information segments in the reserved area are transferred to the cache memory buffer, as at 62.
  • the information segments stored in the cache memory buffer are then transferred to final address locations within the final disk area using the "elevator seek" process, a known method for efficiently storing sequentially related data, as at 64.
  • the indirection table is updated to reflect the completed transfer of information segments from the cache memory buffer to the final disk area, as at 66.
  • the beginning reserved area pointer is updated to point to the reserved area address which will be occupied by the next information segment transferred from the cache memory buffer to the reserved area, as at 68.
  • the beginning reserved area pointer will point to the next information segment in the reserved area to be transferred to the final disk area, as at 68.
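The FIG. 4 write path described in the steps above can be condensed into a short sketch: each write is staged in the cache buffer, acknowledged immediately with a "write complete," and the buffer is flushed to the reserved area once the preferred seventy-five percent capacity limit is exceeded. The class, the in-memory structures, and the 32K-byte sizing are illustrative assumptions drawn from the preferred embodiment, not a definitive implementation.

```python
# Compact sketch of the FIG. 4 flow (steps 40-52), with assumed names.

CAPACITY = 32 * 1024          # e.g. a 32K-byte cache buffer
LIMIT = int(CAPACITY * 0.75)  # preferred 75% capacity limit

class RapidStoreController:
    def __init__(self):
        self.cache = {}       # final_address -> data (indirection view)
        self.used = 0         # bytes currently staged in the cache buffer
        self.reserved = []    # segments awaiting idle-time redistribution

    def handle_write(self, final_addr, data):
        # Steps 42-46: if an earlier version of this segment is already
        # cached, overwrite it in place; otherwise stage the new segment.
        if final_addr not in self.cache:
            self.used += len(data)
        self.cache[final_addr] = data
        # Steps 50-52: once the limit is exceeded, flush the buffered
        # segments to contiguous reserved-area locations.
        if self.used > LIMIT:
            self.reserved.extend(sorted(self.cache.items()))
            self.cache.clear()
            self.used = 0
        return "write complete"  # step 48: acknowledge the host at once
```

Redistribution from `reserved` to final addresses would then run in the background during idle periods, as the text describes.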
  • the transfer of information segments from the reserved area to the final disk area occurs in the background, unbeknownst to the user, during periods of workstation inactivity. Utilization of this idle time constitutes a novel and unique aspect of the present invention which overcomes the inherent seek time and latency delays associated with prior art designs. It is to be understood that the foregoing description of the accompanying drawings shall relate to preferred and illustrated embodiments of the invention. Other embodiments may be utilized without departing from the spirit and scope of the invention.
  • the reserved area portion of the hard disk may exceed fifteen percent or twenty percent of total hard disk storage capacity depending on the specific application of the present invention.
  • portions of the hard disk other than the outer periphery may be designated as the reserved area.
  • sequentially received information segments or successive data blocks may be written to the reserved area in the same way as randomly received information segments.
  • transfer of information segments from the reserved area to the final disk area may be accomplished without passing through the cache memory buffer.
  • organization of information segments written to the reserved area can be accomplished by means other than those simulating a LIFO or FIFO memory stack methodology.
  • the information segments written to the reserved area may be randomly organized rather than sequentially organized.
  • the number of pointers employed to manage the addressing and storing duties may vary from one to several.
  • the information segments may include optical data which are written to and accessed from an optical storage disk. Accordingly, it is to be understood that the detailed description and drawings set forth hereinabove are for illustrative purposes only and do not constitute a limitation on the scope of the invention.

Abstract

A high speed disk drive system includes a hard disk drive storage unit (8), a controller (14), and a cache memory buffer (16) which provide rapid storing of digital information to consecutive locations in a reserved area (12) on the hard disk (10). A host data processor (28) sends data segments to a non-volatile cache memory buffer (16) until a predetermined capacity limit is exceeded, at which time the data segments in the cache buffer are transferred contiguously to a relatively small, fast access reserved memory area (12) on the hard disk (10). Data is redistributed from the reserved area (12) to final storage (13) elsewhere on the hard disk (10) in background operations or during idle periods, while the system continues to respond to subsequent requests from the host processor (28).

Description

BUFFER CONTROL FOR DATA TRANSFER WITHIN HARD DISK DURING IDLE PERIODS
FIELD OF THE INVENTION
This invention relates to the field of hard disk drive digital storage systems. More particularly, this invention relates to an improved high speed, rapid storing disk drive and cache storage system.
BACKGROUND OF THE INVENTION
In the field of digital information storage systems, the hard disk drive has become the staple mass storage medium for most personal and commercial computer systems. As hard disk storage capacity and central processing unit (CPU) clocking speeds continue to increase, many techniques have been developed to increase the data transfer rate between the CPU and the mass storage disk drive.
Much attention has been focused on design improvements directed at increasing data transfer rates between the host CPU and mass storage unit by interposing a cache buffer between the CPU and disk drive to store a limited amount of frequently used instructions or data. Such a scheme increases overall read and write efficiency by transferring this frequently used information from a high-speed memory buffer (cache) to the CPU, rather than fetching such information from the slower disk drive unit. Even when a caching design is employed, high-speed computers must pause while sequential or random information is written to the hard disk.
Write delays occur for a number of reasons, including delays associated with positioning of the write head over the specified encoding track on the disk (seek time), the time to rotate the disk to the specific location on the track (latency delay), and the time required to physically transfer the information to the disk. Disk drive utilization continues only after a "write complete" signal is received by the CPU confirming a successful transfer of the information to the hard disk. Thus, even the most powerful high-speed computers suffer from data transfer inefficiencies associated with the process of writing data to the hard disk.
Several prior art references, such as U.S. Pat. Nos. 4,635,194 and 4,523,275, disclose improved cache buffer designs which focus on reducing cache buffer and hard disk drive transactions in an effort to improve overall processing performance. These, and other similar designs, fail to address the CPU-to-disk transfer efficiency limitations associated with writing data to random, non-contiguous storage locations on the hard disk, resulting in a compounding of seek and latency time delays. Further, these references suggest no improvements to the hard disk drive unit, which suffers from seek and latency delays inherent in each data transfer, which dramatically reduce overall write efficiency.
Accordingly, a principal objective of the present invention is to provide a rapid storing hard disk drive system which provides high-performance write capabilities while being devoid of seek and latency delays during multiple write operations.
SUMMARY OF THE INVENTION
In accomplishing the objective set forth above, a reserved area of contiguous memory locations is provided on the hard disk, and is combined with a cache buffer to form an effective two-stage, fast access data cache system which optimizes hard disk performance by performing write operations during periods of disk drive inactivity. In accordance with a broad aspect of the invention, a hard disk drive storage unit, a controller for the disk drive, and a cache buffer operate together to provide rapid storing of digital information to consecutive locations in a reserved area on the hard disk, the information eventually being transferred to final storage locations elsewhere on the hard disk. An important feature of the invention is the storing of a plurality of digital information segments received from a host data processor in a cache buffer. The information segments awaiting eventual storage are accumulated in the cache buffer during periods of high disk drive utilization. By way of example, the cache buffer memory may have a storage capacity of 32K bytes, although the memory size may be increased or decreased depending on design requirements. Upon exceeding a predetermined cache buffer capacity limit, preferably seventy-five percent of total buffer capacity, the information segments residing in the cache buffer are transferred to a reserved area of contiguous memory on the hard disk. The reserved area is preferably located along the periphery of the hard disk.
A further aspect of the present invention involves the redistribution of information segments from the reserved area to final storage areas, located elsewhere on the disk, during periods of disk drive inactivity. Typically, such periods of inactivity are associated with user "think time," or periods during which the workstation awaits user interaction. The transfer of data from the reserved area to final storage locations occurs in the background during these periods of inactivity, unbeknownst to the user. Typically, twenty seconds may be required to transfer eight megabytes of data from the reserved area to final storage locations during these idle periods. Upon completion of this transfer, the cache buffer and reserved area are available to accept the next series of information segments when hard disk activity resumes.
Unlike traditional storage device designs, which focus attention on periods of high demand and remain idle during periods of no demand, the present invention optimizes hard disk performance by taking full advantage of idle periods by performing data storage operations in the background. Thus, the disk drive continues to respond to subsequent read and write requests during periods of high disk drive demand.
In accordance with another aspect of the invention, the transfer of information segments between the cache buffer and the reserved area is managed by an indirection table which may be included in the disk drive controller electronics. Each information segment to be written to the hard disk includes a final storage area address which indicates the eventual storage location on the hard disk assigned to that segment. Upon being received by the cache buffer, an indirect or temporary address is assigned to each information segment. When the segment is transferred from the cache buffer to the reserved area of the hard disk, the indirection table is updated to reflect the segment's current location. The indirection table is again updated when the segment is eventually transferred out of the reserved area and to its final storage area address on the hard disk.
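By way of illustration only (not part of the patent disclosure; all names are hypothetical), the indirection-table bookkeeping described above can be sketched as a mapping from each segment's final disk address to its current indirect location:

```python
# Illustrative sketch of the indirection table described above.
# Each entry maps a segment's final disk address to its current
# indirect location: first the cache buffer, then the reserved area.

class IndirectionTable:
    def __init__(self):
        self.entries = {}  # final_address -> ("cache" | "reserved", indirect_address)

    def segment_received(self, final_address, cache_slot):
        # Segment arrives in the cache buffer; record its indirect cache address.
        self.entries[final_address] = ("cache", cache_slot)

    def moved_to_reserved(self, final_address, reserved_address):
        # Cache-to-reserved-area transfer; update to the reserved-area address.
        self.entries[final_address] = ("reserved", reserved_address)

    def moved_to_final(self, final_address):
        # Reserved area to final storage; the entry is no longer needed.
        del self.entries[final_address]

    def locate(self, final_address):
        # Read/modify requests check here first, avoiding a final-area seek.
        return self.entries.get(final_address)

table = IndirectionTable()
table.segment_received(final_address=9000, cache_slot=3)
table.moved_to_reserved(final_address=9000, reserved_address=17)
print(table.locate(9000))   # segment is currently in the reserved area
table.moved_to_final(9000)
print(table.locate(9000))   # no entry: segment now resides at its final address
```

This mirrors the read/modify shortcut of the invention: a hit in the table means the segment is still in the fast cache buffer or reserved area, so the final storage area need not be accessed.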
Yet another aspect of the present invention involves the capability to read and modify information segments resident in either the cache buffer or the reserved area of the hard disk. Because the indirection table maintains the indirect address of each information segment located in the cache buffer or the reserved area, the final storage area of the hard disk need not be searched and accessed, thereby avoiding appreciable time delays. Since the desired information to be read or modified is completely resident in either the cache buffer or the relatively small high-speed reserved area, the average access time associated with read and modify operations is substantially reduced.
Further, because the cache buffer is comprised of high-speed random access memory (RAM), writing to and reading from the cache buffer can be accomplished at speeds comparable to the clocking speed of the CPU. In contrast, writing to a standard mass storage disk having no such reserved area would require significantly more time, as much as an order of magnitude.
Another important aspect of the invention involves the reserved area of rapid storing/rapid access memory located on the hard disk. The reserved area typically comprises between five and fifteen percent of the total hard disk capacity. Typically, ten megabytes of memory may be dedicated for reserved area processing. However, substantially greater capacity may be required for reserved area processing depending on the particular application and design requirements. Thus, as a specific example, a ten percent reserved area for a 1.5 gigabyte, 3 1/2 inch drive would be about 150 megabytes. Further, the reserved area is preferably located along the periphery of the hard disk, which is often the most reliable and readily accessible portion, and may be comprised of multiple tracks, although a single track may be appropriate in some applications.
In accordance with a collateral aspect of the present invention, information segments may be written to, and accessed from, the reserved area in such a way as to simulate the operation of a first-in-first-out (FIFO) memory stack. Two reserved area pointers may be employed to manage the transfer of data between the cache buffer and the reserved area. A pointer is a register in memory which contains the address of a particular segment of data. Thus, the address in the pointer register literally "points" to the location in the reserved area where the data in question resides. In this configuration, the first information segment written to the reserved area is given priority over subsequently written segments when segments are transferred from the reserved area to the cache buffer, hence the term first-in-first-out.
One pointer (the "beginning" pointer) contains the address of this first segment transferred to the reserved area. The other pointer (the "ending" pointer) contains the address of the last, or most recently transferred, segment. In accordance with this illustrative embodiment, all transfer operations out of the reserved area begin at the location indicated by the beginning pointer address. All transfer operations into the reserved area begin at the address immediately following the ending pointer address.
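The two-pointer FIFO arrangement described above can be sketched as follows (an illustrative sketch only, not part of the patent disclosure; here the "end" variable holds the slot immediately following the last segment written, i.e. where the next transfer in begins, and the contiguous address range is treated circularly):

```python
# Illustrative sketch of the two-pointer FIFO reserved area described
# above.  "begin" points at the oldest segment (the next to leave);
# "end" points at the slot immediately following the newest segment,
# where the next incoming segment is written.

class FifoReservedArea:
    def __init__(self, size):
        self.slots = [None] * size
        self.begin = 0   # address of the first (oldest) segment
        self.end = 0     # address following the last segment written
        self.count = 0

    def write(self, segment):
        if self.count == len(self.slots):
            raise MemoryError("reserved area saturated")
        self.slots[self.end] = segment
        self.end = (self.end + 1) % len(self.slots)   # advance ending pointer
        self.count += 1

    def transfer_out(self):
        # The oldest segment has priority: first in, first out.
        segment = self.slots[self.begin]
        self.begin = (self.begin + 1) % len(self.slots)  # advance beginning pointer
        self.count -= 1
        return segment

area = FifoReservedArea(size=4)
for s in ("seg-A", "seg-B", "seg-C"):
    area.write(s)
print(area.transfer_out())   # "seg-A": the first segment written leaves first
```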
Alternatively, information segments may be written to, and accessed from, the reserved area in such a way as to simulate the operation of a last-in-first-out (LIFO) memory stack. As individual information segments are written to the reserved area, a reserved area pointer ("ending" pointer) is set to the address of the last segment written to the reserved area. Upon transferring segments out of the reserved area, the last segment written into the reserved area is given priority over previously stored segments, and is the first segment transferred to the cache buffer. The pointer is then set to the address of the next most recently stored segment in the reserved area.

Another feature of the preferred embodiment of the present invention is the use of non-volatile solid state memory comprising at least the cache buffer, indirection table, and reserved area pointers. The non-volatile memory is comprised of the combination of random access memory (RAM), preferably static RAM (SRAM), and a lithium battery cell. Thus, no external power is required to retain data residing in the non-volatile memory, which would otherwise be susceptible to power transients and outages resulting in corruption or total loss of the stored information.
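The single-pointer LIFO alternative described above may likewise be sketched (illustrative only, not part of the patent disclosure; segment names are hypothetical):

```python
# Illustrative sketch of the single-pointer LIFO alternative described
# above: one "ending" pointer tracks the most recently written segment,
# which is the first one transferred back out.

class LifoReservedArea:
    def __init__(self, size):
        self.slots = [None] * size
        self.ending = -1   # address of the last segment written; -1 when empty

    def write(self, segment):
        self.ending += 1                 # pointer set to the newest segment
        self.slots[self.ending] = segment

    def transfer_out(self):
        # The most recently stored segment has priority: last in, first out.
        segment = self.slots[self.ending]
        self.ending -= 1   # pointer falls back to the next most recent segment
        return segment

stack = LifoReservedArea(size=4)
stack.write("seg-A")
stack.write("seg-B")
print(stack.transfer_out())   # "seg-B": the last segment written leaves first
```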
Yet another aspect of the invention involves the generation of a "write complete" signal informing the host processor that the information segment was properly transferred to the cache buffer. Upon the successful transfer of each information segment from the host processor to the cache buffer, the host processor is informed that the transfer was completed. Transferring data to the cache buffer for eventual storage, rather than the hard disk, avoids the processing delays inherent in traditional approaches. Traditional storage devices generate a write complete signal only after each information segment is transferred to the hard disk. Consequently, seek time and latency delays are incurred for each transfer of data to the hard disk.
In accordance with another aspect of a preferred embodiment, during periods of peak hard disk demand, the transfer of information segments from the cache buffer to the reserved area continues until the reserved area is filled. When a predetermined reserved area capacity limit has been reached, the transfer operation is temporarily suspended and a "write complete" signal is sent to the host processor only after the reserved area contents, or a portion thereof, have been redistributed to the appropriate final storage area locations. By a judicious selection of cache buffer and reserved area size, saturation of the reserved area can be minimized, and occurs, if at all, only during periods of prolonged peak hard disk utilization. As this invention is readily adaptable for use in interactive environments (i.e., workstations and networks), such occurrences of saturation should be very infrequent.

Another feature of the invention may include a procedure for storing sequentially received information segments, as contrasted to randomly received segments, in the final storage area of the hard disk. A sequence of information segments is sequentially ordered in the cache buffer after being received. Rather than being transferred to the reserved area to await future distribution in the final storage area, the sequentially ordered information segments are written to consecutive locations in the final storage area without any lost revolutions of the hard disk. By writing to contiguous address locations in the final storage area, write efficiency is greatly increased when compared to the conventional method of storing fragments of data in random storage locations on the hard disk.
Other objects, features, and advantages of the invention will become apparent from a consideration of the following detailed description and from the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a perspective view of a Winchester or hard disk drive with the upper cover removed;
FIG. 2 is a graphical depiction in block diagram form of the components comprising the rapid storing architecture;
FIG. 3 is a depiction of the storage and addressing process in which data is transferred between the disk controller and the physical hard disk; and
FIG. 4 is a flow chart setting forth the successive steps accomplished in the rapid storing disk drive architecture.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
Referring more particularly to the drawings, FIG. 1 is a schematic showing of a Winchester or conventional disk drive storage unit 8 with the cover removed. The storage unit of FIG. 1 includes a plurality of storage disks 10 arranged in a stack which rotate in unison about a common spindle at relatively high speeds. The read/write apparatus includes several magnetic read/write heads 9 individually attached to corresponding suspension arms 7 which move in unison with respect to the stack of hard disks 10 as the unitary positioner 11 rotates.
FIG. 2 is a block diagram of the high-speed rapid storing disk drive system. Hard disk 10 is comprised of at least a reserved area 12 of contiguous storage addresses and a final storage area 13. In a preferred embodiment, the reserved area 12 typically comprises between five and fifteen percent of total hard disk 10 storage capacity, although this percentage may vary depending on the specific system application.
Digital information segments sent from host central processing unit (CPU) 28 are transmitted along a SCSI (Small Computer Systems Interface) data bus 26 and received by disk controller 14, which controls the transmission of information segments between host CPU 28 and hard disk 10. Disk controller 14 is comprised of at least indirection table 18, cache memory buffer 16, and reserved area ending pointer 20. In an alternative embodiment, a reserved area beginning pointer 22 operates in concert with reserved area ending pointer 20 to coordinate information exchanged between reserved area 12 of hard disk 10 and disk controller 14.
One aspect of the present invention involves the use of non-volatile solid state memory 24 comprising at least indirection table 18, cache memory buffer 16, reserved area ending pointer 20, and reserved area beginning pointer 22 included within disk controller 14. The non-volatile memory portion 24 of disk controller 14 is preferably comprised of static random access memory (SRAM). A lithium battery (not shown) is provided to supply sufficient power to ensure that non-volatile memory contents are not lost should standard system power be interrupted. Unlike conventional controllers which employ volatile cache buffer memory, whereby all information is lost upon power interruption, the use of non-volatile memory ensures complete information retention should supply power be interrupted.
Turning now to FIG. 3, a diagrammatic representation of the addressing and storing processes of the present invention is provided. Information segments sent from host CPU 28 are received by cache memory buffer 16. Cache memory buffer 16 is configured to accept a plurality of information segments sent from host CPU 28 for eventual storage on hard disk 10.
Indirection table 18 coordinates addressing and storing of information segments in reserved area 12 and final storage area 13 of hard disk 10. Indirection table 18 is comprised of at least three address registers: final disk address register 34, indirect cache register 30, and indirect reserved area register 32. Each information segment received by cache memory buffer 16 includes a final disk address which represents the specific destination address in the final disk area 13 unique for each received information segment. The final disk address for each information segment is maintained in final address register 34.
A corresponding indirect cache address for each information segment is maintained in indirect cache register 30, while said information segment data resides in cache memory buffer 16. As each new information segment is received by cache memory buffer 16, the final disk address associated with that information segment is loaded into final address register 34. A corresponding indirect cache address is generated and maintained in indirect cache register 30 identifying the location of the information segment data residing in cache memory buffer 16. Additional information segments are received by cache memory buffer 16 from host CPU 28 until a predetermined capacity limit has been exceeded. In a preferred embodiment, the cache memory buffer 16 capacity limit is seventy-five percent of total cache memory buffer 16 storage capacity.
Upon exceeding the predetermined cache memory buffer 16 capacity limit, the plurality of information segments accumulated in cache memory buffer 16 are transferred to reserved area 12 of hard disk 10. Indirection table 18 is updated to reflect the transfer of each information segment from cache memory buffer 16 to reserved area 12. An indirect reserved area address is generated and maintained in indirect reserved area register 32 for each information segment transferred to reserved area 12 on hard disk 10. Transfer of information segments from cache memory buffer 16 to reserved area 12 continues until cache memory buffer 16 is empty. However, in the event that host CPU 28 sends new information segments to cache memory buffer 16 prior to completing the entire transfer, said transfer operation is temporarily suspended or interrupted. Priority is given to acceptance of new information segments from host CPU 28 by cache memory buffer 16 in this situation. Transfer of information segments from cache memory buffer 16 to reserved area 12 resumes after the new information segments are received. In an alternative embodiment, transferring of information segments from cache memory buffer 16 to reserved area 12 occurs simultaneously or concurrently with accepting of new information segments by cache memory buffer 16 from host CPU 28.
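The capacity-triggered transfer described above can be sketched as follows (an illustrative sketch only, not part of the patent disclosure; the capacity of eight segments and the segment names are hypothetical, with the preferred seventy-five percent threshold retained):

```python
# Illustrative sketch of the cache-buffer flush policy described above:
# segments accumulate in the cache buffer until the predetermined limit
# (75% of capacity in the preferred embodiment) is exceeded, at which
# point the accumulated segments move contiguously to the reserved area.

CACHE_CAPACITY = 8                       # hypothetical; the patent suggests e.g. 32K bytes
FLUSH_THRESHOLD = int(CACHE_CAPACITY * 0.75)

cache = []
reserved_area = []

def host_write(segment):
    # A "write complete" signal could be returned to the host here,
    # as soon as the segment is safely in the (non-volatile) cache.
    cache.append(segment)
    if len(cache) > FLUSH_THRESHOLD:
        flush_to_reserved()

def flush_to_reserved():
    # Contiguous transfer: the accumulated cache contents move in order.
    reserved_area.extend(cache)
    cache.clear()

for i in range(7):
    host_write(f"segment-{i}")
print(len(cache), len(reserved_area))   # the flush ran once the limit was exceeded
```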
This unique combination of cache memory buffer 16 and reserved area 12 overcomes deficiencies inherent in prior designs by providing means to continuously accept information segments from host CPU 28 for eventual storage on hard disk 10 without the transfer delays associated with writing each information segment to hard disk 10 prior to accepting additional information segments. The inherent inefficiencies of writing information to the hard disk, namely seek time and latency delays, associated with prior art designs are thus avoided and overcome.
In accordance with one embodiment of the present invention, randomly received information segments (unrelated non-sequential data) are accepted by cache memory buffer 16 and sequentially ordered in said cache memory buffer 16 in the order received. Upon being transferred from cache memory buffer 16 to reserved area 12, the plurality of ordered information segments are written to contiguous memory addresses in reserved area 12 thereby preserving the ordering scheme. Reserved area address register 32 contains the corresponding addresses of each transferred information segment residing in reserved area 12.
In another embodiment, sequentially received information segments (successive data blocks of related data) are accepted by cache memory buffer 16 and sequentially ordered in cache memory buffer 16 in the order of acceptance. Rather than transferring the sequentially received information segments to reserved area 12, the successive data blocks are transferred directly to contiguous final disk area addresses in the final disk area 13 without any lost revolutions of hard disk 10. This embodiment of the present invention overcomes seek time and latency deficiencies of prior art designs by transferring a plurality of sequentially received, successive data blocks to contiguous final address locations 13 on hard disk 10, rather than individually transferring each information segment to non-contiguous hard disk 10 memory locations.
Yet another embodiment of the present invention involves the transfer of information segments residing in reserved area 12 to final disk area locations 13 on hard disk 10. This transfer operation occurs during periods in which host CPU 28 discontinues sending new information segments to cache memory buffer 16. Typically, such periods of inactivity are associated with user "think time," or periods during which the workstation awaits user interaction. Upon sensing such a period of inactivity, preferably defined as a duration of more than one second in which no new information segments are received by cache memory buffer 16, transfer of information segments from reserved area 12 to final storage locations 13 occurs in the background, unbeknownst to the user.
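The one-second inactivity threshold and background redistribution described above may be sketched as follows (illustrative only, not part of the patent disclosure; the addresses, segment names, and timestamps are hypothetical):

```python
# Illustrative sketch of idle-period redistribution: more than one
# second with no new segments (the patent's suggested threshold)
# triggers the reserved-area-to-final-storage transfer.

IDLE_THRESHOLD_S = 1.0

def is_idle(last_segment_time, now):
    # True when no segment has arrived for longer than the threshold.
    return now - last_segment_time > IDLE_THRESHOLD_S

# indirect reserved-area address -> (segment data, final disk address)
reserved = {17: ("seg-A", 9000), 18: ("seg-B", 4200)}
final_area = {}

def redistribute():
    # Background transfer from the reserved area to final storage,
    # "unbeknownst to the user".
    while reserved:
        indirect, (data, final_addr) = reserved.popitem()
        final_area[final_addr] = data

if is_idle(last_segment_time=10.0, now=11.5):   # 1.5 s of inactivity
    redistribute()
print(sorted(final_area))   # both segments now sit at their final addresses
```

In a real controller the inactivity check would run continuously and the transfer would be suspended the moment a new host write arrives, as the patent describes.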
In a preferred embodiment, information segments are transferred out of reserved area 12 and into cache memory buffer 16 prior to being transferred to final storage locations 13. Alternatively, the transfer of information segments from reserved area 12 to final storage locations 13 is accomplished without the intermediate transfer step involving cache memory buffer 16.
The transfer operation continues until all information segments residing in reserved area 12 are transferred to predetermined final disk area addresses 13. In the event that new information segments are sent from host CPU 28 to cache memory buffer 16 during this transfer operation, said transfer operation is temporarily suspended or interrupted. In an alternative embodiment, acceptance of new information segments by cache memory 16 occurs simultaneously or concurrently with the transfer of reserved area 12 information segments to final disk area address locations 13.
In another embodiment of the present invention, a reserved area beginning pointer 22 and reserved area ending pointer 20 operate in concert to manage addressing duties associated with information segments transferred between reserved area 12 and cache memory buffer 16. Reserved area beginning pointer 22 contains the indirect reserved area address of the first information segment transferred to, and residing in, reserved area 12. Reserved area ending pointer 20 contains the indirect address of the last information segment transferred to, and residing in, reserved area 12. Use of both beginning pointer 22 and ending pointer 20 provides an efficient method of addressing and storing information segments in reserved area 12.
Information segments may be written to, and accessed from, reserved area 12 in such a way as to simulate the operation of a first-in-first-out (FIFO) memory stack. In this configuration, two reserved area pointers are used to efficiently manage data transfers into and out of reserved area 12. Reserved area beginning pointer 22 points to the address of the first information segment written to reserved area 12. Reserved area ending pointer 20 points to the address of the last information segment written to reserved area 12. When additional information segments are written to reserved area 12, the additional data is written to contiguous address locations immediately following the ending address contained in reserved area ending pointer 20. Ending pointer 20 advances to the new ending address upon completion of the transfer of the new data segment. With respect to the transfer of information segments out of reserved area 12, the transfer begins with reserved area 12 address locations starting at the address contained in reserved area beginning pointer 22. Beginning pointer 22 is advanced to the next successive information segment awaiting transfer. Information segments are sequentially transferred from reserved area 12 until reserved area 12 is empty.
In accordance with an alternative embodiment, information segments may be written to, and transferred from, reserved area 12 in such a way as to simulate the operation of a last-in-first-out (LIFO) memory stack. In this configuration, a single ending pointer 20 is used to manage data transfers into and out of reserved area 12. As individual information segments are written to reserved area 12, ending pointer 20 contains the address of the last, or most recent, segment written to reserved area 12. As additional segments are written to reserved area 12, ending pointer 20 advances to contain the address of the latest segment written to reserved area 12. As information segments are transferred out of reserved area 12, said transfer begins with the most recently stored data segment and continues with the next most recently stored segment. The transfer operation continues until reserved area 12 is empty. When each information segment is transferred out of reserved area 12, ending pointer 20 is updated to contain the address of the next most recently stored segment.

In yet another embodiment of the present invention, information segments residing in reserved area 12 are transferred to cache memory buffer 16 prior to being stored in final disk area addresses 13 on hard disk 10. This transfer operation occurs during periods of inactivity during which host CPU 28 discontinues sending information segments to cache memory buffer 16. As sequentially ordered information segments residing in reserved area 12 are transferred to cache memory buffer 16, the predetermined order is maintained within cache memory buffer 16, said transfer being reflected in indirection table 18. Indirection table 18 is updated to reflect the transfer of each information segment from reserved area 12 to cache memory buffer 16. Indirect cache register 30 is updated to contain the indirect address of each information segment now residing in cache memory buffer 16.
Final address register 34 contains the final disk area 13 address of each information segment awaiting transfer from cache memory buffer 16 to final disk area 13. The final disk address of each information segment transferred to final disk area 13 is removed from final address register 34 upon completion of said transfer.
Turning now to FIG. 4, a logic flow diagram is provided to illustrate the important operations associated with the present invention. A host CPU initially sends a "write" command for a specific information segment, as indicated at 40. The information segment includes an address portion and a data portion which is transferred to the cache buffer, as at 42. A look-up to the indirection table is performed to determine if the information segment address is currently maintained in the indirection table, indicating an earlier version of the particular information segment, as at 44. If said address is located in the indirection table, the indirection address is updated to point to the current information segment data loaded in the cache buffer, as at 46. After transferring the information segment to the cache buffer, as at 42, or updating the indirection table to point to the current information segment data, as at 46, a "write complete" signal is transmitted to host CPU, as at 48. Information segments accumulate in the cache memory buffer until a predetermined capacity limit is exceeded, as at 50. The capacity limit is preferably set to seventy-five percent of total cache memory buffer storage capacity. When the predetermined limit is exceeded, the information segments stored in the cache memory buffer are transferred to the reserved area on the hard disk, as at 52. Concurrently, the indirection table is updated, as at 54, to reflect the current location of each information segment in the reserved area of the hard disk. During periods in which the host CPU discontinues sending new information segments to the cache memory buffer, such idle periods preferably being greater than one second in duration, information segments stored in the cache memory buffer are transferred to the reserved area on the hard disk, as at 58. The indirection table is updated, as at 60, to reflect the current address of each information segment in the reserved area of the hard disk. 
The reserved area beginning pointer is set to the address location of the first information segment transferred to the reserved area, as at 60. Starting with the first information segment transferred to the reserved area, the information segments in the reserved area are transferred to the cache memory buffer, as at 62. The information segments stored in the cache memory buffer are then transferred to final address locations within the final disk area using the "elevator seek" process, a known method for efficiently storing sequentially related data, as at 64. The indirection table is updated to reflect the completed transfer of information segments from the cache memory buffer to the final disk area, as at 66. The beginning reserved area pointer is updated to point to the reserved area address which will be occupied by the next information segment transferred from the cache memory buffer to the reserved area, as at 68.
Alternatively, in the event that the transfer of reserved area information segments to the final disk area is interrupted, the beginning reserved area pointer will point to the next information segment in the reserved area to be transferred to the final disk area, as at 68. The transfer of information segments from the reserved area to the final disk area occurs in the background, unbeknownst to the user, during periods of workstation inactivity. Utilization of this idle time constitutes a novel and unique aspect of the present invention which overcomes the inherent seek time and latency delays associated with prior art designs.

It is to be understood that the foregoing description of the accompanying drawings shall relate to preferred and illustrated embodiments of the invention. Other embodiments may be utilized without departing from the spirit and scope of the invention. Thus, by way of example and not of limitation, the reserved area portion of the hard disk may exceed fifteen percent or twenty percent of total hard disk storage capacity depending on the specific application of the present invention. Also, portions of the hard disk other than the outer periphery may be designated as the reserved area. Further, sequentially received information segments or successive data blocks may be written to the reserved area in the same way as randomly received information segments. Also, transfer of information segments from the reserved area to the final disk area may be accomplished without passing through the cache memory buffer. Further, organization of information segments written to the reserved area can be accomplished by means other than those simulating a LIFO or FIFO memory stack methodology. Moreover, the information segments written to the reserved area may be randomly organized rather than sequentially organized. Also, the number of pointers employed to manage the addressing and storing duties may vary from one to several.
In addition, the information segments may include optical data which are written to and accessed from an optical storage disk. Accordingly, it is to be understood that the detailed description and drawings set forth hereinabove are for illustrative purposes only and do not constitute a limitation on the scope of the invention.
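The pointer behavior described above — a beginning reserved area pointer that marks the next segment awaiting transfer, advances as idle-time flushing proceeds, and simply resumes from where it stopped if a flush is interrupted — can be modeled with a minimal sketch. All names and structures here (`ReservedArea`, `stage`, `flush_idle`) are illustrative assumptions, not terms from the patent, and the lists stand in for actual disk storage.

```python
class ReservedArea:
    """Toy model of the reserved-area bookkeeping described above."""

    def __init__(self):
        self.slots = []        # segments staged in the reserved area
        self.begin = 0         # beginning pointer: next segment to flush
        self.final_area = []   # stand-in for the final disk area

    def stage(self, segment):
        """Write a segment to the next consecutive reserved-area slot."""
        self.slots.append(segment)

    def flush_idle(self, max_segments):
        """Transfer up to max_segments to the final area during idle time.

        If the idle period ends early, the begin pointer is left at the
        next untransferred segment, so a later call resumes correctly.
        """
        while self.begin < len(self.slots) and max_segments > 0:
            self.final_area.append(self.slots[self.begin])
            self.begin += 1
            max_segments -= 1


ra = ReservedArea()
for s in ["a", "b", "c"]:
    ra.stage(s)
ra.flush_idle(2)   # idle window ends after two transfers
ra.flush_idle(10)  # next idle period resumes at segment "c"
```

After both idle periods the final area holds all three segments in order, and the begin pointer sits just past the last staged slot, ready for new writes.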

Claims

What is claimed is:
1. A high speed, rapid storing hard disk drive system, comprising:
a hard disk drive storage unit, said hard disk including a reserved storage area and a final storage area;
a controller for controlling the operation of said hard disk to store digital information into said reserved and final storage areas respectively;
a storage buffer for receiving a plurality of digital information segments from a host processor;
means for storing sequentially received information segments in said final storage area without any lost revolutions of said hard disk;
means for transferring randomly received information segments to said reserved storage area, said transfer occurring when said storage buffer is filled to a predetermined capacity; and
means for transferring said random segments in said reserved storage area to said final storage area during idle periods in which no said information segments are received by said storage buffer.
2. A high speed, rapid storing hard disk drive system as claimed in Claim 1, wherein at least said storage buffer is comprised of non-volatile solid state memory.
3. A high speed, rapid storing hard disk drive system as claimed in Claim 1, wherein said storage buffer includes means for storing an indirect address for each said received random segment, said indirect address indicating the storage location of said random segment in said reserved area.
4. A high speed, rapid storing hard disk drive system as claimed in Claim 3, wherein said storage buffer includes means to update said indirect address of each said random segment upon transfer between said storage buffer and said reserved area, said updated indirect address indicating the current location of said random segment.
5. A high speed, rapid storing hard disk drive system as claimed in Claim 1, wherein a storage complete signal is sent to said host processor upon each said information segment being received by said storage buffer.
6. A high speed, rapid storing hard disk drive system as claimed in Claim 1, wherein said predetermined capacity is seventy-five percent of total storage buffer capacity.
7. A high speed, rapid storing hard disk drive system as claimed in Claim 1, wherein said means for storing said sequential information segments include means for ordering said sequential segments into successive data blocks in said storage buffer and transferring said data blocks to contiguous storage locations in said final storage area.
8. A high speed, rapid storing hard disk drive system as claimed in Claim 1, wherein said storage buffer includes means for modifying said information segments.
9. A high speed, rapid storing hard disk drive system as claimed in Claim 1, wherein said storage buffer includes means for determining that said random segments reside in said reserved area and for transferring located random segments to said storage buffer.
10. A high speed, rapid storing hard disk drive system as claimed in Claim 1, wherein said host processor idle periods are at least one second in duration.
11. A high speed, rapid storing hard disk drive system as claimed in Claim 1, wherein said reserved storage area comprises at least five percent of said total hard disk capacity which is located near the periphery of said hard disk.
12. A hard disk drive data cache system comprising:
a hard disk drive storage unit, said hard disk including a reserved storage area and a final storage area;
a controller for controlling said hard disk to store data in said reserved and final storage areas respectively;
means for receiving a plurality of information segments received from a host processor;
means for transferring a plurality of said received information segments to said reserved storage area; and
means for transferring said information segments from said reserved storage area to said final storage area.
13. A hard disk drive data cache system as claimed in Claim 12, wherein said information segments are transferred to successive storage locations in said reserved storage area, said transfer occurring when said receiving means exceeds a predetermined capacity limit.
14. A hard disk drive data cache system as claimed in Claim 12, wherein said reserved storage area comprises at least five percent of said total hard disk capacity which may be located near the periphery of said hard disk.
15. A hard disk drive data cache system as claimed in Claim 12, wherein said receiving means include means for storing the final storage area address of each said information segment and an indirect address assigned to said information segment, said indirect address indicating the location of said information segment in said reserved area.
16. A data cache system as claimed in Claim 12, wherein at least said receiving means is comprised of non-volatile solid state memory.
17. A data cache system as claimed in Claim 12, whereby the transfer of said information segments to said final storage area occurs during periods in which no said information segments are received by said receiving means.
18. A method for storing digital information in a storage system including a hard disk drive storage unit, said hard disk including a reserved storage area and a final storage area, a master controller for coupling a host data processor to said hard disk drive unit, and a storage buffer for receiving digital information segments from said host data processor for eventual storage in a final storage area on said hard disk, including the steps of:
receiving a plurality of information segments in said storage buffer from said host data processor;
determining that said storage buffer has exceeded a predetermined capacity limit;
transferring said information segments from said storage buffer to consecutive storage locations in said reserved area upon exceeding said predetermined capacity limit; and
transferring said information segments from said reserved area to said final storage area during periods in which said host data processor discontinues sending subsequent information segments to said storage buffer.
19. A method as defined in Claim 18, including the additional step of storing in said storage buffer an indirect address assigned to each said information segment transferred to said reserved area, said indirect address indicating the location of said information segment in said reserved area.
20. A method as defined in Claim 19, including the additional step of updating said indirect address for each said information segment transferred between said reserved area and said storage buffer.
21. A method as defined in Claim 18, including the additional step of transferring sequentially received information segments to consecutive storage locations in said final storage area without any lost revolutions of said hard disk.
22. A high speed, rapid storing hard disk drive system comprising:
a hard disk drive for storing digital information on magnetic disk;
a controller for said hard disk drive;
means for receiving input digital information segments intended for eventual storage in various spaced locations on said disk; and
means for storing said input digital information segments in consecutive order in a predetermined reserved area of said disk.
23. A high speed, rapid storing hard disk drive system as claimed in Claim 22, wherein said information segments are stored in said reserved area when said receiving means accumulates a predetermined volume of input information segments.
24. A high speed, rapid storing hard disk drive system as claimed in Claim 22 wherein said eventual storage of information segments on said disk occurs during periods in which no input information segments are received.
25. A high speed, rapid storing hard disk drive system as claimed in Claim 22, wherein at least said receiving means is comprised of non-volatile solid state memory.
26. A high speed, rapid storing hard disk drive system as claimed in Claim 22, wherein said reserved area is comprised of contiguous storage locations.
27. A hard disk drive system comprising:
a hard storage disk;
a controller for controlling said hard disk to store data in said disk;
means for receiving a plurality of data;
means for preserving data in said receiving means; and
means for transferring data to a reserved storage area on said hard disk.
28. A hard disk drive system as claimed in Claim 27, wherein said data preserving means include non-volatile solid state memory.
29. A hard disk drive system as claimed in Claim 27, including means for storing a sequence of received data to consecutive storage locations in a final storage area on said hard disk without any lost revolutions of said hard disk.
30. A hard disk drive system as claimed in Claim 27, wherein said reserved storage area is comprised of contiguous storage locations on said hard drive.
PCT/US1994/002980 1993-03-18 1994-03-18 Buffer control for data transfer within hard disk during idle periods WO1994022134A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US3422793A 1993-03-18 1993-03-18
US034,227 1993-03-18

Publications (1)

Publication Number Publication Date
WO1994022134A1 true WO1994022134A1 (en) 1994-09-29

Family

Family ID: 21875088

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1994/002980 WO1994022134A1 (en) 1993-03-18 1994-03-18 Buffer control for data transfer within hard disk during idle periods

Country Status (1)

Country Link
WO (1) WO1994022134A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4593354A (en) * 1983-02-18 1986-06-03 Tokyo Shibaura Denki Kabushiki Kaisha Disk cache system
US4792917A (en) * 1985-12-28 1988-12-20 Hitachi, Ltd. Control apparatus for simultaneous data transfer
US4870565A (en) * 1982-12-01 1989-09-26 Hitachi, Ltd. Parallel transfer type director means

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998025199A1 (en) * 1996-12-02 1998-06-11 Gateway 2000, Inc. Method and apparatus for adding to the reserve area of a disk drive
US5966732A (en) * 1996-12-02 1999-10-12 Gateway 2000, Inc. Method and apparatus for adding to the reserve area of a disk drive
EP1030305A2 (en) * 1999-02-15 2000-08-23 Mitsubishi Denki Kabushiki Kaisha Hierarchical data storage system and data caching method
EP1030305A3 (en) * 1999-02-15 2002-09-18 Mitsubishi Denki Kabushiki Kaisha Hierarchical data storage system and data caching method
US6925526B2 (en) * 2002-10-31 2005-08-02 International Business Machines Corporation Method and apparatus for servicing mixed block size data access operations in a disk drive data storage device

Similar Documents

Publication Publication Date Title
EP0080875B1 (en) Data storage system for a host computer
US5140683A (en) Method for dispatching work requests in a data storage hierarchy
US4972364A (en) Memory disk accessing apparatus
JP3898782B2 (en) Information recording / reproducing device
JP3183993B2 (en) Disk control system
US4974197A (en) Batching data objects for recording on optical disks with maximum object count
JPH09259033A (en) Buffer write method
KR20040010517A (en) Disk Controller Configured to Perform Out of Order Execution of Write OperationsS
JP2003518313A (en) Buffer management system for managing the transfer of data to and from the disk drive buffer
JPS619722A (en) Apparatus for rearranging page with track in disc memory
US5696931A (en) Disc drive controller with apparatus and method for automatic transfer of cache data
US5136692A (en) Memory disk buffer manager
KR20060017816A (en) Method and device for transferring data between a main memory and a storage device
US10628045B2 (en) Internal data transfer management in a hybrid data storage device
JP3566319B2 (en) Information storage device
WO1994022134A1 (en) Buffer control for data transfer within hard disk during idle periods
US10459658B2 (en) Hybrid data storage device with embedded command queuing
US6209057B1 (en) Storage device having data buffer
EP0278425B1 (en) Data processing system and method with management of a mass storage buffer
WO1984002016A1 (en) Dynamic addressing for variable track length cache memory
EP0278471B1 (en) Data processing method and system for accessing rotating storage means
JPH04311216A (en) External storage controller
JPS59172186A (en) Cache memory control system
JPH073661B2 (en) Information processing system and control method thereof
JPS61273650A (en) Magnetic disk controlling device

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): CA JP

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: CA