US20190220215A1 - Data storage device and data storage control method - Google Patents
- Publication number
- US20190220215A1 (U.S. application Ser. No. 16/366,304)
- Authority
- US
- United States
- Prior art keywords
- storage unit
- data
- host
- write
- queue
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/0647—Migration mechanisms
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0866—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
- G06F12/0871—Allocation or management of cache space
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0614—Improving the reliability of storage systems
- G06F3/0619—Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0683—Plurality of storage devices
- G06F3/0685—Hybrid storage combining heterogeneous device types, e.g. hierarchical storage, hybrid arrays
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B5/00—Recording by magnetisation or demagnetisation of a record carrier; Reproducing by magnetic means; Record carriers therefor
- G11B5/48—Disposition or mounting of heads or head supports relative to record carriers ; arrangements of heads, e.g. for scanning the record carrier to increase the relative speed
- G11B5/54—Disposition or mounting of heads or head supports relative to record carriers ; arrangements of heads, e.g. for scanning the record carrier to increase the relative speed with provision for moving the head into or out of its operative position or across tracks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/22—Employing cache memory using specific memory technology
- G06F2212/222—Non-volatile memory
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/31—Providing disk cache in a specific location of a storage system
- G06F2212/313—In storage device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/50—Control mechanisms for virtual memory, cache or TLB
- G06F2212/502—Control mechanisms for virtual memory, cache or TLB using adaptive policy
Definitions
- Embodiments described herein relate generally to a data storage device and a data storage control method.
- A hybrid drive that uses a semiconductor storage medium such as a NAND-type flash memory as a cache for a storage device such as a hard disk drive (HDD) is known.
- FIG. 1 is a block diagram illustrating an overall configuration of a data storage device according to a first embodiment
- FIGS. 2A to 2G are block diagrams illustrating a queuing method of the data storage device according to the first embodiment
- FIG. 3 is a flowchart illustrating the queuing method of the data storage device according to the first embodiment
- FIG. 4A is a diagram illustrating a relationship between management information and data designated by a write command in a data storage device according to a second embodiment
- FIG. 4B is a diagram illustrating storage locations designated by the LBAs of the management information and the data of FIG. 4A ;
- FIG. 5 is a flowchart illustrating a queuing method of the data storage device according to the second embodiment
- FIG. 6 is a diagram illustrating a method of updating a cache management table in a data storage device according to a third embodiment
- FIG. 7 is a diagram illustrating a method of updating a cache management table in a data storage device according to a fourth embodiment
- FIGS. 8A and 8C are diagrams illustrating a method of transferring read data from a magnetic storage unit in a data storage device according to a fifth embodiment
- FIGS. 8B and 8D are diagrams illustrating a method of updating a cache management table corresponding to the processes of FIGS. 8A and 8C ;
- FIG. 9A is a diagram illustrating a method of writing data stored in a magnetic storage unit to a semiconductor storage unit in a data storage device according to a sixth embodiment
- FIG. 9B is a diagram illustrating a method of updating a cache management table corresponding to the process of FIG. 9A .
- a data storage device includes a first storage unit, a second storage unit, a first queue, a second queue, and a distributor.
- the second storage unit is used as a cache of the first storage unit and has a lower write transfer rate and a faster response time than the first storage unit.
- the first queue corresponds to the first storage unit.
- the second queue corresponds to the second storage unit.
- the distributor distributes a write command received presently from a host to one of the first and second queues in which the number of write commands registered presently is smaller.
- FIG. 1 is a block diagram illustrating an overall configuration of a data storage device according to a first embodiment.
- a data storage device 21 includes a magnetic storage unit 21 A, a semiconductor storage unit 21 B, a host interface 12 , a system controller 13 , and a buffer 14 .
- the semiconductor storage unit 21 B has a lower write transfer rate and a faster response time than the magnetic storage unit 21 A.
- the write transfer rate of the magnetic storage unit 21 A can be set to 100 MB/sec and the response time (seek time) can be set to 2 msec to 30 msec.
- the write transfer rate of the semiconductor storage unit 21 B can be set to 40 MB/sec and the response time can be set to 300 ⁇ sec to 500 ⁇ sec.
- a plurality of magnetic disks 2 and 3 is provided in the magnetic storage unit 21 A, disk surfaces M 0 and M 1 are provided on both surfaces of a magnetic disk 2 , and disk surfaces M 2 and M 3 are provided on both surfaces of the magnetic disk 3 .
- the magnetic disks 2 and 3 are integrally supported by a spindle 11 .
- magnetic heads H 0 to H 3 are provided to the respective disk surfaces M 0 to M 3 , and the magnetic heads H 0 to H 3 are disposed so as to face the disk surfaces M 0 to M 3 , respectively.
- the magnetic heads H 0 to H 3 are held on the disk surfaces M 0 to M 3 by arms A 0 to A 3 , respectively.
- the arms A 0 to A 3 can allow the magnetic heads H 0 to H 3 to slide within a horizontal surface.
- a voice coil motor 4 that drives the arms A 0 to A 3 is provided in the magnetic storage unit 21 A, and a spindle motor 10 that rotates the magnetic disks 2 and 3 through the spindle 11 is provided.
- the magnetic disks 2 and 3 , the magnetic heads H 0 to H 3 , the arms A 0 to A 3 , the voice coil motor 4 , the spindle motor 10 , and the spindle 11 are accommodated in a case 1 .
- a magnetic recording controller 5 is provided in the magnetic storage unit 21 A
- a head controller 6 , a power controller 7 , a read-write channel 8 , and a hard disk controller 9 are provided in the magnetic recording controller 5 .
- a write current controller 6 A and a readback signal detector 6 B are provided in the head controller 6 .
- a spindle motor controller 7 A and a voice coil motor controller 7 B are provided in the power controller 7 .
- the head controller 6 amplifies signals during recording and reading.
- the write current controller 6 A controls a write current flowing in the magnetic heads H 0 to H 3 .
- the readback signal detector 6 B detects signals detected by the magnetic heads H 0 to H 3 .
- the power controller 7 drives the voice coil motor 4 and the spindle motor 10 .
- the spindle motor controller 7 A controls rotation of the spindle motor 10 .
- the voice coil motor controller 7 B controls driving of the voice coil motor 4 .
- the read-write channel 8 converts signals read by the magnetic heads H 0 to H 3 into a data format handled by a host 17 and converts data output from the host 17 into a signal format recorded by the magnetic heads H 0 to H 3 . Examples of such format conversion include DA conversion and encoding.
- the read-write channel 8 decodes signals read by the magnetic heads H 0 to H 3 and modulates data codes output from the host 17 .
- the hard disk controller 9 can control recording and reading based on a command from the system controller 13 and exchange data between the system controller 13 and the read-write channel 8 .
- a NAND controller 15 and a NAND memory 16 are provided in the semiconductor storage unit 21 B.
- the NAND memory 16 caches data written to the magnetic disks 2 and 3 .
- the NAND controller 15 can control the NAND memory 16 . Examples of the control of the NAND memory 16 include control of reading and writing of the NAND memory 16 , block selection, error correction, and the like.
- the host interface 12 can receive a write command and a read command from the host 17 and output read data read from the magnetic disks 2 and 3 or the NAND memory 16 to the host 17 .
- the host interface 12 is connected to the host 17 .
- the host 17 may be a personal computer that outputs a write command and a read command to the data storage device 21 , or may be an external interface.
- the system controller 13 can send a command for reading and writing data from and to the magnetic disks 2 and 3 to the hard disk controller 9 and send a command for reading and writing data from and to the NAND memory 16 to the NAND controller 15 .
- the system controller 13 , the host interface 12 , the read-write channel 8 , the NAND controller 15 , and a CPU (not illustrated) can be configured as a system on-chip (SoC), for example.
- the process of the system controller 13 can be controlled by firmware executed by a CPU (not illustrated).
- a cache manager 13 A and a distributor 13 B can be provided in the system controller 13 .
- the cache manager 13 A can manage a cache management table 14 C.
- the distributor 13 B can distribute a write command received from the host 17 to a queue 14 A or 14 B.
- the buffer 14 can transfer read data read from the NAND memory 16 to the system controller 13 and receive write data written to the NAND memory 16 from the system controller 13 .
- the buffer 14 may be DRAM or SRAM.
- the queues 14 A and 14 B and the cache management table 14 C can be provided in the buffer 14 .
- the queue 14 A is provided so as to correspond to the magnetic storage unit 21 A and can hold a job for the magnetic storage unit 21 A.
- the queue 14 B is provided so as to correspond to the semiconductor storage unit 21 B and can hold a job for the semiconductor storage unit 21 B. As the job, a write command received from the host 17 can be held.
- the cache management table 14 C can register a correspondence between a logical block address LBA of data stored in the magnetic storage unit 21 A or the semiconductor storage unit 21 B and a storage address FPB of the semiconductor storage unit 21 B and register a dirty flag and the number of accesses for each LBA.
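The cache management table 14 C described above (LBA-to-FPB correspondence plus a dirty flag and an access count per LBA) can be sketched as a simple in-memory structure. The class and field names below are illustrative, not taken from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CacheEntry:
    """One row of a cache management table like 14C (names are illustrative)."""
    lba: int                   # logical block address of the data
    fpb: Optional[int] = None  # storage address in the semiconductor unit; None if HDD-only
    dirty: int = 0             # 1: written to the cache, not yet to the magnetic unit
    accesses: int = 0          # number of host accesses observed for this LBA

class CacheManagementTable:
    def __init__(self):
        self.entries = {}      # keyed by LBA

    def register(self, lba, fpb=None, dirty=0):
        self.entries[lba] = CacheEntry(lba=lba, fpb=fpb, dirty=dirty)

    def record_access(self, lba):
        if lba in self.entries:
            self.entries[lba].accesses += 1
```

The later embodiments (FIGS. 6 to 9B) can all be read as updates to rows of such a table.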
- When data is read from and written to the magnetic disks 2 and 3 , the magnetic disks 2 and 3 are rotated by the spindle motor 10 , signals are read from the disk surfaces M 0 to M 3 by the magnetic heads H 0 to H 3 , respectively, and the signals are detected by the readback signal detector 6 B.
- the signals detected by the readback signal detector 6 B are converted into data by the read-write channel 8 and are sent to the hard disk controller 9 .
- the hard disk controller 9 performs tracking control of the magnetic heads H 0 to H 3 based on a burst pattern included in the signals detected by the readback signal detector 6 B. Further, the present positions of the magnetic heads H 0 to H 3 are calculated based on sector/cylinder information included in the signals detected by the readback signal detector 6 B, and seek control is performed so that the magnetic heads H 0 to H 3 approach target positions.
- When data is written using the NAND memory 16 as a write cache, the system controller 13 temporarily stores write data supplied from the host 17 in the buffer 14 . Moreover, the NAND controller 15 transfers the write data stored in the buffer 14 to the NAND memory 16 and writes it to the NAND memory 16 . Alternatively, the system controller 13 may exchange the write data supplied from the host 17 with the NAND controller 15 without temporarily storing it in the buffer 14 , and the NAND controller 15 may write the write data to the NAND memory 16 .
- When data is read using the NAND memory 16 as a read cache, the NAND controller 15 reads the read data from the NAND memory 16 and temporarily stores it in the buffer 14 . Moreover, the system controller 13 transfers the read data stored in the buffer 14 to the host 17 . Alternatively, the NAND controller 15 may exchange the read data read from the NAND memory 16 with the system controller 13 without temporarily storing it in the buffer 14 , and the system controller 13 may transfer the read data to the host 17 .
- When a write command is received from the host 17 , the system controller 13 determines whether the write data designated by the write command will be cached in the NAND memory 16 .
- If so, the system controller 13 instructs the NAND controller 15 to record the write data in the NAND memory 16 .
- Otherwise, the system controller 13 instructs the hard disk controller 9 to record the write data on the magnetic disks 2 and 3 .
- When a read command is received from the host 17 , the system controller 13 determines whether the read data designated by the read command is cached in the NAND memory 16 .
- If so, the system controller 13 instructs the NAND controller 15 to read the read data from the NAND memory 16 .
- Otherwise, the system controller 13 instructs the hard disk controller 9 to read the read data from the magnetic disks 2 and 3 .
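The dispatch decision described in the bullets above can be sketched as follows. This is a minimal illustration, not the patent's firmware; the function names and callback shapes are assumptions for the sketch.

```python
def handle_read(lba, nand_cache, read_nand, read_disk):
    """Serve a read from the NAND cache when the LBA is cached there,
    otherwise fall back to the magnetic disks."""
    if lba in nand_cache:
        return read_nand(lba)
    return read_disk(lba)

def handle_write(lba, data, should_cache, write_nand, write_disk):
    """Route a write to the NAND cache or to the magnetic disks,
    depending on the caching decision made by the controller."""
    if should_cache(lba, data):
        write_nand(lba, data)
    else:
        write_disk(lba, data)
```

With dictionaries standing in for the two storage units, `handle_read` prefers the cached copy and `handle_write` honors whatever caching policy is supplied.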
- the distributor 13 B can distribute a write command received presently from the host 17 to one of the queues 14 A and 14 B in which the number of write commands registered presently is smaller.
- FIGS. 2A to 2G are block diagrams illustrating a queuing method of the data storage device according to the first embodiment.
- a write command WM 1 is sent from the host 17 to the data storage device 21 . It is also assumed that X 1 is included in the write command WM 1 as an LBA and L 1 is included as the length of data specified by the write command WM 1 . In this case, if the queues 14 A and 14 B are empty, the distributor 13 B distributes the write command WM 1 to the queue 14 A.
- a write command WM 2 is sent from the host 17 to the data storage device 21 . It is also assumed that X 2 is included in the write command WM 2 as an LBA and L 2 is included as the length of data specified by the write command WM 2 . In this case, the write command WM 1 is registered in the queue 14 A. Thus, since the number of write commands registered presently in the queue 14 B is smaller than that of the queue 14 A, the distributor 13 B distributes the write command WM 2 to the queue 14 B.
- a write command WM 3 is sent from the host 17 to the data storage device 21 . It is also assumed that X 3 is included in the write command WM 3 as an LBA and L 3 is included as the length of data specified by the write command WM 3 . In this case, since the write command WM 1 is registered in the queue 14 A and the write command WM 2 is registered in the queue 14 B, the distributor 13 B distributes the write command WM 3 to the queue 14 A.
- a write command WM 4 is sent from the host 17 to the data storage device 21 . It is also assumed that X 4 is included in the write command WM 4 as an LBA and L 4 is included as the length of data specified by the write command WM 4 .
- the write commands WM 1 and WM 3 are registered in the queue 14 A, and the write command WM 2 is registered in the queue 14 B.
- the distributor 13 B distributes the write command WM 4 to the queue 14 B.
- As illustrated in FIG. 2E , when the data storage device 21 sends a request for the data WC 2 designated by the write command WM 2 to the host 17 , the data WC 2 is sent from the host 17 to the data storage device 21 . Moreover, as illustrated in FIG. 2F , the data WC 2 is stored in the semiconductor storage unit 21 B and the write command WM 2 is deleted from the queue 14 B .
- a write command WM 5 is sent from the host 17 to the data storage device 21 . It is also assumed that X 5 is included in the write command WM 5 as an LBA and L 5 is included as the length of data specified by the write command WM 5 .
- the write commands WM 1 and WM 3 are registered in the queue 14 A, and the write command WM 4 is registered in the queue 14 B.
- the distributor 13 B distributes the write command WM 5 to the queue 14 B.
- FIG. 3 is a flowchart illustrating a queuing method of the data storage device according to the first embodiment.
- the distributor 13 B determines whether the number of jobs piled in the queue 14 A of the magnetic storage unit 21 A is equal to or smaller than the number of jobs piled in the queue 14 B of the semiconductor storage unit 21 B (S 1 ). Moreover, when the number of jobs piled in the queue 14 A of the magnetic storage unit 21 A is equal to or smaller than the number of jobs piled in the queue 14 B of the semiconductor storage unit 21 B (Yes in S 1 ), the distributor 13 B distributes a job to the queue 14 A of the magnetic storage unit 21 A (S 2 ).
- the distributor 13 B distributes a job to the queue 14 B of the semiconductor storage unit 21 B (S 3 ).
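The flowchart steps S 1 to S 3 and the walkthrough of FIGS. 2A to 2G can be sketched as follows. This is an illustrative model, assuming simple FIFO queues; the command names WM1 to WM5 are taken from the figures.

```python
from collections import deque

def distribute(write_cmd, hdd_queue, nand_queue):
    """Distribute a write command to the queue that currently holds fewer
    commands; ties go to the magnetic storage unit's queue (steps S1-S3)."""
    if len(hdd_queue) <= len(nand_queue):
        hdd_queue.append(write_cmd)   # S2: queue 14A (magnetic storage unit)
    else:
        nand_queue.append(write_cmd)  # S3: queue 14B (semiconductor storage unit)

# Replaying WM1..WM4 from FIGS. 2A-2D:
hdd_q, nand_q = deque(), deque()
for wm in ["WM1", "WM2", "WM3", "WM4"]:
    distribute(wm, hdd_q, nand_q)
# hdd_q now holds WM1 and WM3; nand_q holds WM2 and WM4, as in FIG. 2D
```

After WM 2 completes and is deleted from the queue (FIG. 2F ), the next command WM 5 goes to the now-shorter queue 14 B , matching FIG. 2G .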
- For a method of writing data to the magnetic storage unit 21 A only after performing write-caching to the semiconductor storage unit 21 B , the response speed to an access request from the host 17 is 40 MB/sec. That is, this response speed is limited to the write transfer rate of the semiconductor storage unit 21 B .
- In the first embodiment, by distributing write commands across both queues, the response speed to an access request from the host 17 can be increased to 140 MB/sec. That is, this response speed corresponds to the sum of the write transfer rate of the magnetic storage unit 21 A and the write transfer rate of the semiconductor storage unit 21 B (100 MB/sec+40 MB/sec).
- According to the first embodiment, it is possible to reduce the waiting time for performing write-caching to the semiconductor storage unit 21 B and to improve the response speed to an access request from the host 17 , as compared to a method of writing data to the magnetic storage unit 21 A only after performing write-caching to the semiconductor storage unit 21 B .
- the distributor 13 B may distribute a write command having a smaller data length to the queue 14 B of the semiconductor storage unit 21 B in preference to a write command having a larger data length. For example, a write command having a data length equal to or smaller than a predetermined value may be distributed to the queue 14 B , and a write command having a data length exceeding the predetermined value may be distributed to the queue 14 A .
- In the magnetic storage unit 21 A , this improves the efficiency of sequential writes and reduces the seek time.
- Thus, it is possible to improve the response speed to an access request from the host 17 .
- In the semiconductor storage unit 21 B , this prevents a large volume of data from being stored and thus prevents an increase in the required capacity of the semiconductor storage unit 21 B .
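The length-based variant of the distributor can be sketched as below. The command shape (name, length) and the threshold of 64 blocks are illustrative choices; the patent only speaks of "a predetermined value."

```python
from collections import deque

def distribute_by_length(cmd, hdd_queue, nand_queue, threshold=64):
    """Length-based variant: a write command whose data length is at or
    below the threshold goes to the semiconductor unit's queue 14B; a
    longer one goes to the magnetic unit's queue 14A."""
    name, length = cmd
    if length <= threshold:
        nand_queue.append(cmd)   # short write: cache-friendly
    else:
        hdd_queue.append(cmd)    # long write: sequential-write-friendly
```

Routing long writes to the magnetic unit keeps large sequential transfers off the cache, which is what limits the required capacity of the semiconductor storage unit.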
- FIG. 4A is a diagram illustrating a relationship between management information and data designated by a write command in the data storage device according to the second embodiment
- FIG. 4B is a diagram illustrating storage locations designated by the LBAs of the management information and the data of FIG. 4A .
- a write command can designate management information J 1 or a data body J 2 .
- the management information J 1 includes a storage location of the data body J 2 .
- the management information J 1 generally has a smaller data length than the data body J 2 .
- In FIG. 4B , it is assumed that values 0 to 5000 are allocated to the magnetic disk 2 as LBAs.
- the management information J 1 is stored in the location corresponding to the LBAs 2400 to 2600.
- the data body J 2 is stored in the locations corresponding to the LBAs 0 to 2399 and 2601 to 5000.
- FIG. 5 is a flowchart illustrating a queuing method of the data storage device according to the second embodiment.
- According to the second embodiment, it is possible to reduce the seek time in the magnetic storage unit 21 A and to efficiently write the data body J 2 to the magnetic storage unit 21 A . Thus, it is possible to improve the response speed to an access request from the host 17 .
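One plausible reading of the second embodiment is that the small, frequently rewritten management information J 1 is queued for the semiconductor unit while the data body J 2 is queued for the magnetic unit, which is what reduces seeks on the disk. The sketch below assumes that reading; the LBA range 2400 to 2600 is the example from FIG. 4B .

```python
def is_management_info(lba, mgmt_range=(2400, 2600)):
    """True when the LBA falls in the range holding management information
    J1; 2400-2600 is the example range from FIG. 4B."""
    lo, hi = mgmt_range
    return lo <= lba <= hi

def route_write(lba, hdd_queue, nand_queue):
    """Queue management information for the semiconductor unit and the
    data body for the magnetic unit (an assumed policy, for illustration)."""
    (nand_queue if is_management_info(lba) else hdd_queue).append(lba)
```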
- the cache manager 13 A can measure the number of accesses from the host 17 to data that is stored in the magnetic storage unit 21 A without being stored in the semiconductor storage unit 21 B , and register the number of accesses in the cache management table 14 C . Moreover, the data storage device 21 may write data stored in the magnetic storage unit 21 A to the semiconductor storage unit 21 B based on the number of accesses registered in the cache management table 14 C . For example, the data stored in the magnetic storage unit 21 A can be written to the semiconductor storage unit 21 B when the number of accesses is 2 or more.
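The access-count rule above can be sketched as a small helper; the threshold of 2 is the example value from the text, and the dictionary stands in for the access-count column of the cache management table 14 C .

```python
def record_disk_access(access_counts, lba, threshold=2):
    """Count a host access to data held only in the magnetic storage unit
    and report whether the data should now be copied to the semiconductor
    unit (True once the count reaches the threshold)."""
    access_counts[lba] = access_counts.get(lba, 0) + 1
    return access_counts[lba] >= threshold
```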
- FIG. 6 is a diagram illustrating a method of updating a cache management table in a data storage device according to a third embodiment.
- In FIG. 6 , it is assumed that data has been written to the magnetic storage unit 21 A based on a write command WM 1 from the host 17 of FIG. 1 in which an LBA of X 1 is included.
- In this case, in the cache management table 14 C , X 1 is registered in the LBA and the FPB is not designated.
- Moreover, “0” is set to the dirty flag, and “0” is registered in the number of accesses.
- the dirty flag “0” indicates that data has been written to the magnetic storage unit 21 A regardless of whether data has been written to the semiconductor storage unit 21 B.
- When the data stored in the magnetic storage unit 21 A without being stored in the semiconductor storage unit 21 B is read by the host 17 , the data can be written to the semiconductor storage unit 21 B , whereby the response speed to a subsequent read command for the data from the host 17 can be improved.
- the cache manager 13 A may measure the number of accesses, in response to read commands from the host 17 , to data stored in the semiconductor storage unit 21 B , and reset the number of accesses to the data when a write command for the data is received from the host 17 .
- FIG. 7 is a diagram illustrating a method of updating a cache management table in a data storage device according to a fourth embodiment.
- In FIG. 7 , it is assumed that data has been written to the semiconductor storage unit 21 B based on a write command WM 2 from the host 17 of FIG. 1 in which an LBA of X 2 is included.
- an FPB of Y 2 is allocated so as to correspond to the LBA of X 2
- X 2 is registered in the LBA
- Y 2 is registered in the FPB in the cache management table 14 C.
- “1” is set to a dirty flag, and “0” is registered in the number of accesses.
- the dirty flag “1” indicates that data has been written to the semiconductor storage unit 21 B without being written to the magnetic storage unit 21 A.
- It is then assumed that a write command in which an LBA of X 2 is included is issued from the host 17 , that is, X 2 is designated by the write command as the LBA.
- In this case, the number of accesses at the LBA of X 2 is reset to “0,” and the FPB is updated to an address that is newly allocated.
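The FIG. 7 update on a host write can be sketched as below. The table rows are plain dictionaries standing in for entries of the cache management table 14 C , and the allocator is an assumed stand-in for NAND address allocation.

```python
from itertools import count

_fpb_allocator = count(0x100)  # stand-in for the NAND address allocator

def handle_cached_write(table, lba):
    """On a host write to an LBA: reset its access count, allocate a fresh
    FPB, and set the dirty flag to 1 (written to the cache but not yet to
    the magnetic storage unit), as in the FIG. 7 update."""
    entry = table.setdefault(lba, {"fpb": None, "dirty": 0, "accesses": 0})
    entry["accesses"] = 0           # reset on write, per the fourth embodiment
    entry["fpb"] = next(_fpb_allocator)
    entry["dirty"] = 1
```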
- NAND flash memories have a property to deteriorate as the number of rewrites increases.
- a larger number of accesses indicated in the cache management table 14 C serves as an indicator that many effective cache hits have occurred as the NAND memory used as a cache deteriorates.
- the data storage device 21 may write data stored in the magnetic storage unit 21 A to the semiconductor storage unit 21 B and send the data to the host 17 when a read command from the host 17 is received and the length of data designated by the read command is equal to or smaller than a predetermined value. Moreover, the data storage device 21 may send the data stored in the magnetic storage unit 21 A to the host 17 without writing the data to the semiconductor storage unit 21 B when the length of data designated by the read command from the host 17 exceeds the predetermined value.
- FIGS. 8A and 8C are diagrams illustrating a method of transferring read data from a magnetic storage unit in a data storage device according to a fifth embodiment
- FIGS. 8B and 8D are diagrams illustrating a method of updating a cache management table corresponding to the processes of FIGS. 8A and 8C .
- In FIGS. 8A and 8B , it is assumed that data DA having a length exceeding a predetermined value is stored in the magnetic storage unit 21 A without being stored in the semiconductor storage unit 21 B .
- the LBA of the data DA is X 6
- In the cache management table 14 C , X 6 is registered in the LBA, and the FPB is not designated.
- “0” is set to the dirty flag, and “0” is registered in the number of accesses.
- When a read command designating X 6 as the LBA is received from the host 17 , the data DA stored in the magnetic storage unit 21 A is sent to the host 17 without being written to the semiconductor storage unit 21 B .
- Moreover, the number of accesses is incremented by “1.”
- In FIGS. 8C and 8D , it is assumed that data DB having a length equal to or smaller than a predetermined value is stored in the magnetic storage unit 21 A without being stored in the semiconductor storage unit 21 B .
- the LBA of the data DB is X 7
- In the cache management table 14 C , X 7 is registered in the LBA, and the FPB is not designated.
- “0” is set to the dirty flag, and “0” is registered in the number of accesses.
- When a read command designating X 7 as the LBA is received from the host 17 , the data DB stored in the magnetic storage unit 21 A is written to an area of the semiconductor storage unit 21 B whose FPB is Y 7 and is sent to the host 17 .
- Moreover, the number of accesses is incremented by “1.”
- the FPB of Y 7 is allocated so as to correspond to the LBA of X 7
- In the cache management table 14 C , the FPB of Y 7 is registered so as to correspond to the LBA of X 7 .
- According to the fifth embodiment, when a read command is received from the host 17 , data having a length equal to or smaller than a predetermined value is also written to the semiconductor storage unit 21 B , whereas data having a length exceeding the predetermined value is not written to the semiconductor storage unit 21 B .
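The fifth embodiment's read path (FIGS. 8A to 8D ) can be sketched as a single decision; the dictionary stands in for the semiconductor storage unit, and the threshold value is whatever "predetermined value" the device uses.

```python
def serve_disk_read(data, length, threshold, nand_cache, lba):
    """Data read from the magnetic unit is also written into the NAND
    cache only when its length is at or below the threshold (as with data
    DB); longer data (as with data DA) bypasses the cache. The data is
    always sent to the host."""
    if length <= threshold:
        nand_cache[lba] = data  # short read: cache it
    return data                 # in both cases, return it to the host
```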
- the data storage device 21 may write data stored in the magnetic storage unit 21 A to the semiconductor storage unit 21 B when no access has been made from the host 17 for a predetermined period or more and the number of accesses is a predetermined value or more.
- FIG. 9A is a diagram illustrating a method of writing data stored in a magnetic storage unit to a semiconductor storage unit in a data storage device according to a sixth embodiment
- FIG. 9B is a diagram illustrating a method of updating a cache management table corresponding to the process of FIG. 9A .
- In FIGS. 9A and 9B , it is assumed that data DC is stored in the magnetic storage unit 21 A without being stored in the semiconductor storage unit 21 B .
- the LBA of the data DC is X 8
- In the cache management table 14 C , X 8 is registered in the LBA and the FPB is not designated.
- “0” is set to the dirty flag.
- When the data DC is read five times based on read commands from the host 17 , the number of accesses in the cache management table 14 C is incremented to “5.”
- the data DC stored in the magnetic storage unit 21 A is written to the semiconductor storage unit 21 B when no access has been made from the host 17 for a predetermined period or more and the number of accesses is equal to or larger than a predetermined value (for example, 5).
- the FPB of Y 8 is allocated so as to correspond to the LBA of X 8
- In the cache management table 14 C , the FPB of Y 8 is registered so as to correspond to the LBA of X 8 .
- the data DC stored in the magnetic storage unit 21 A is written to the semiconductor storage unit 21 B when no access has been made from the host 17 for a predetermined period or more and the number of accesses is equal to or larger than a predetermined value. By doing so, it is possible to improve the response speed to an access request from the host 17 while effectively utilizing the vacant period of the data storage device 21 .
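The sixth embodiment's promotion condition is a simple conjunction, sketched below. The 60-second idle period is an illustrative value (the patent only says "a predetermined period or more"); 5 is the example access threshold from the text.

```python
def should_promote(idle_time, accesses, idle_period=60.0, min_accesses=5):
    """Promote HDD-only data to the NAND cache only when the host has been
    idle for at least idle_period seconds AND the data's access count has
    reached min_accesses, so the copy uses the device's vacant period."""
    return idle_time >= idle_period and accesses >= min_accesses
```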
- the distributor 13 B may distribute the write command preferentially to the queue 14 B of the semiconductor storage unit 21 B.
- the seventh embodiment it is possible to allow data having a higher access expectation value to be efficiently cached to the semiconductor storage unit 21 B and to improve the response speed to an access request from the host 17 .
Abstract
According to one embodiment, a data storage device includes a first storage unit, a second storage unit, a first queue, a second queue, and a distributor. The second storage unit is used as a cache of the first storage unit and has a lower write transfer rate and a faster response time than the first storage unit. The first queue corresponds to the first storage unit. The second queue corresponds to the second storage unit. The distributor distributes a write command received presently from a host to one of the first and second queues in which the number of write commands registered presently is smaller.
Description
- This application is a continuation of U.S. application Ser. No. 15/223,347, filed Jul. 29, 2016, which is a divisional of U.S. application Ser. No. 14/021,032, filed Sep. 9, 2013. U.S. application Ser. No. 14/021,032 is based upon and claims the benefit of priority from Japanese Patent Application No. 2013-118505, filed on Jun. 5, 2013; the entire contents of each of which are incorporated herein by reference.
- Embodiments described herein relate generally to a data storage device and a data storage control method.
- In order to realize both large data capacity and fast data access, a hybrid drive that uses a semiconductor storage medium such as a NAND-type flash memory as a cache of a storage device such as a hard disk drive (HDD) is known.
- FIG. 1 is a block diagram illustrating an overall configuration of a data storage device according to a first embodiment;
- FIGS. 2A to 2G are block diagrams illustrating a queuing method of the data storage device according to the first embodiment;
- FIG. 3 is a flowchart illustrating the queuing method of the data storage device according to the first embodiment;
- FIG. 4A is a diagram illustrating a relationship between management information and data designated by a write command in a data storage device according to a second embodiment, and FIG. 4B is a diagram illustrating storage locations designated by the LBAs of the management information and the data of FIG. 4A;
- FIG. 5 is a flowchart illustrating a queuing method of the data storage device according to the second embodiment;
- FIG. 6 is a diagram illustrating a method of updating a cache management table in a data storage device according to a third embodiment;
- FIG. 7 is a diagram illustrating a method of updating a cache management table in a data storage device according to a fourth embodiment;
- FIGS. 8A and 8C are diagrams illustrating a method of transferring read data from a magnetic storage unit in a data storage device according to a fifth embodiment, and FIGS. 8B and 8D are diagrams illustrating a method of updating a cache management table corresponding to the processes of FIGS. 8A and 8C; and
- FIG. 9A is a diagram illustrating a method of writing data stored in a magnetic storage unit to a semiconductor storage unit in a data storage device according to a sixth embodiment, and FIG. 9B is a diagram illustrating a method of updating a cache management table corresponding to the process of FIG. 9A.
- In general, according to one embodiment, a data storage device includes a first storage unit, a second storage unit, a first queue, a second queue, and a distributor. The second storage unit is used as a cache of the first storage unit and has a lower write transfer rate and a faster response time than the first storage unit. The first queue corresponds to the first storage unit. The second queue corresponds to the second storage unit. The distributor distributes a write command received presently from a host to one of the first and second queues in which the number of write commands registered presently is smaller.
- Hereinafter, a data storage device according to embodiments will be described in detail with reference to the accompanying drawings. It should be noted that the present invention is not limited to these embodiments.
-
FIG. 1 is a block diagram illustrating an overall configuration of a data storage device according to a first embodiment. - In
FIG. 1, a data storage device 21 includes a magnetic storage unit 21A, a semiconductor storage unit 21B, a host interface 12, a system controller 13, and a buffer 14. The semiconductor storage unit 21B has a lower write transfer rate and a faster response time than the magnetic storage unit 21A. For example, the write transfer rate of the magnetic storage unit 21A can be set to 100 MB/sec and the response time (seek time) can be set to 2 msec to 30 msec. The write transfer rate of the semiconductor storage unit 21B can be set to 40 MB/sec and the response time can be set to 300 μsec to 500 μsec.
- A plurality of
magnetic disks 2 and 3 are provided in the magnetic storage unit 21A. Disk surfaces M0 and M1 are provided on both surfaces of the magnetic disk 2, and disk surfaces M2 and M3 are provided on both surfaces of the magnetic disk 3. The magnetic disks 2 and 3 are fixed to a spindle 11.
- Moreover, in the
magnetic storage unit 21A, magnetic heads H0 to H3 are provided to the respective disk surfaces M0 to M3, and the magnetic heads H0 to H3 are disposed so as to face the disk surfaces M0 to M3, respectively. Here, the magnetic heads H0 to H3 are held on the disk surfaces M0 to M3 by arms A0 to A3, respectively. The arms A0 to A3 can allow the magnetic heads H0 to H3 to slide within a horizontal surface. - A
voice coil motor 4 that drives the arms A0 to A3 is provided in the magnetic storage unit 21A, and a spindle motor 10 that rotates the magnetic disks 2 and 3 via the spindle 11 is provided. The magnetic disks 2 and 3, the voice coil motor 4, the spindle motor 10, and the spindle 11 are accommodated in a case 1.
- In addition, a
magnetic recording controller 5 is provided in the magnetic storage unit 21A. A head controller 6, a power controller 7, a read-write channel 8, and a hard disk controller 9 are provided in the magnetic recording controller 5. A write current controller 6A and a readback signal detector 6B are provided in the head controller 6. A spindle motor controller 7A and a voice coil motor controller 7B are provided in the power controller 7.
- The
head controller 6 amplifies signals during recording and reading. The write current controller 6A controls a write current flowing in the magnetic heads H0 to H3. The readback signal detector 6B detects signals detected by the magnetic heads H0 to H3. The power controller 7 drives the voice coil motor 4 and the spindle motor 10. The spindle motor controller 7A controls rotation of the spindle motor 10. The voice coil motor controller 7B controls driving of the voice coil motor 4. The read-write channel 8 converts signals read by the magnetic heads H0 to H3 into a data format handled by a host 17 and converts data output from the host 17 into a signal format recorded by the magnetic heads H0 to H3. Examples of such format conversion include DA conversion and encoding. Moreover, the read-write channel 8 decodes signals read by the magnetic heads H0 to H3 and modulates data codes output from the host 17. The hard disk controller 9 can control recording and reading based on a command from the system controller 13 and exchange data between the system controller 13 and the read-write channel 8.
- A
NAND controller 15 and a NAND memory 16 are provided in the semiconductor storage unit 21B. The NAND memory 16 caches data written to the magnetic disks 2 and 3. The NAND controller 15 can control the NAND memory 16. Examples of the control of the NAND memory 16 include control of reading and writing of the NAND memory 16, block selection, error correction, and the like.
- The
host interface 12 can receive a write command and a read command from the host 17 and output read data read from the magnetic disks 2 and 3 or the NAND memory 16 to the host 17. The host interface 12 is connected to the host 17. The host 17 may be a personal computer that outputs a write command and a read command to the data storage device 21 and may be an external interface.
- The
system controller 13 can send a command for reading and writing data from and to the magnetic disks 2 and 3 to the hard disk controller 9 and send a command for reading and writing data from and to the NAND memory 16 to the NAND controller 15. The system controller 13, the host interface 12, the read-write channel 8, the NAND controller 15, and a CPU (not illustrated) can be configured as a system on-chip (SoC), for example. The process of the system controller 13 can be controlled by firmware executed by a CPU (not illustrated). A cache manager 13A and a distributor 13B can be provided in the system controller 13. The cache manager 13A can manage a cache management table 14C. The distributor 13B can distribute a write command received from the host 17 to a queue 14A or a queue 14B.
- The
buffer 14 can transfer read data read from the NAND memory 16 to the system controller 13 and receive write data written to the NAND memory 16 from the system controller 13. The buffer 14 may be DRAM or SRAM. The queues 14A and 14B are provided in the buffer 14. The queue 14A is provided so as to correspond to the magnetic storage unit 21A and can hold a job for the magnetic storage unit 21A. The queue 14B is provided so as to correspond to the semiconductor storage unit 21B and can hold a job for the semiconductor storage unit 21B. As the job, a write command received from the host 17 can be held. The cache management table 14C can register a correspondence between a logical block address LBA of data stored in the magnetic storage unit 21A or the semiconductor storage unit 21B and a storage address FPB of the semiconductor storage unit 21B and register a dirty flag and the number of accesses for each LBA.
- When data is read and written from and to the
magnetic disks 2 and 3, the magnetic disks 2 and 3 are rotated by the spindle motor 10, and signals are read from the disk surfaces M0 to M3 by the magnetic heads H0 to H3, respectively, and are detected by the readback signal detector 6B. The signals detected by the readback signal detector 6B are converted into data by the read-write channel 8 and are sent to the hard disk controller 9. Moreover, the hard disk controller 9 performs tracking control of the magnetic heads H0 to H3 based on a burst pattern included in the signals detected by the readback signal detector 6B. Further, the present positions of the magnetic heads H0 to H3 are calculated based on sector/cylinder information included in the signals detected by the readback signal detector 6B, and seek control is performed so that the magnetic heads H0 to H3 approach target positions.
- On the other hand, when data is written using the
NAND memory 16 as a write cache, the system controller 13 temporarily stores write data supplied from the host 17 in the buffer 14. Moreover, the NAND controller 15 transfers write data stored in the buffer 14 to the NAND memory 16 and writes the write data to the NAND memory 16. When data is written to the NAND memory 16 as a write cache, the system controller 13 may exchange the write data supplied from the host 17 with the NAND controller 15 without temporarily storing the same in the buffer 14, and the NAND controller 15 may write the write data to the NAND memory 16.
- In addition, when data is read using the
NAND memory 16 as a read cache, the NAND controller 15 reads read data from the NAND memory 16 and temporarily stores the same in the buffer 14. Moreover, the system controller 13 transfers the read data stored in the buffer 14 to the host 17. When data is read using the NAND memory 16 as a read cache, the NAND controller 15 may exchange the read data read from the NAND memory 16 with the system controller 13 without temporarily storing the same in the buffer 14, and the system controller 13 may transfer the read data to the host 17.
- Here, upon receiving a write command from the
host 17, thesystem controller 13 determines whether write data designated by the write command will be cached in theNAND memory 16. When the write data is to be cached in theNAND memory 16, thesystem controller 13 instructs theNAND controller 15 to record the write data in theNAND memory 16. On the other hand, when the write data is not to be cached in theNAND memory 16, thesystem controller 13 instructs thehard disk controller 9 to record the write data on themagnetic disks - On the other hand, upon receiving a read command from the
host 17, thesystem controller 13 determines whether read data designated by the read command is cached in theNAND memory 16. When the read data is cached in theNAND memory 16, thesystem controller 13 instructs theNAND controller 15 to read the read data from theNAND memory 16. On the other hand, when the read data is not cached in theNAND memory 16, thesystem controller 13 instructs thehard disk controller 9 to read the read data from themagnetic disks - Here, the
distributor 13B can distribute a write command received presently from the host 17 to one of the queues 14A and 14B in which the number of write commands registered presently is smaller.
-
FIGS. 2A to 2G are block diagrams illustrating a queuing method of the data storage device according to the first embodiment. - In
FIG. 2A, it is assumed that a write command WM1 is sent from the host 17 to the data storage device 21. It is also assumed that X1 is included in the write command WM1 as an LBA and L1 is included as the length of data specified by the write command WM1. In this case, if the queues 14A and 14B are empty, the distributor 13B distributes the write command WM1 to the queue 14A.
- Next, in
FIG. 2B, it is assumed that a write command WM2 is sent from the host 17 to the data storage device 21. It is also assumed that X2 is included in the write command WM2 as an LBA and L2 is included as the length of data specified by the write command WM2. In this case, the write command WM1 is registered in the queue 14A. Thus, since the number of write commands registered presently in the queue 14B is smaller than that of the queue 14A, the distributor 13B distributes the write command WM2 to the queue 14B.
- Next, in
FIG. 2C, it is assumed that a write command WM3 is sent from the host 17 to the data storage device 21. It is also assumed that X3 is included in the write command WM3 as an LBA and L3 is included as the length of data specified by the write command WM3. In this case, since the write command WM1 is registered in the queue 14A and the write command WM2 is registered in the queue 14B, the distributor 13B distributes the write command WM3 to the queue 14A.
- Next, in
FIG. 2D, it is assumed that a write command WM4 is sent from the host 17 to the data storage device 21. It is also assumed that X4 is included in the write command WM4 as an LBA and L4 is included as the length of data specified by the write command WM4. In this case, the write commands WM1 and WM3 are registered in the queue 14A, and the write command WM2 is registered in the queue 14B. Thus, since the number of write commands registered presently in the queue 14B is smaller than that of the queue 14A, the distributor 13B distributes the write command WM4 to the queue 14B.
- Next, in
FIG. 2E, when the data storage device 21 sends a request for data WC2 designated by the write command WM2 to the host 17, the data WC2 is sent from the host 17 to the data storage device 21. Moreover, as illustrated in FIG. 2F, the data WC2 is stored in the semiconductor storage unit 21B and the write command WM2 is deleted from the queue 14B.
- Next, in
FIG. 2G, it is assumed that a write command WM5 is sent from the host 17 to the data storage device 21. It is also assumed that X5 is included in the write command WM5 as an LBA and L5 is included as the length of data specified by the write command WM5. In this case, the write commands WM1 and WM3 are registered in the queue 14A, and the write command WM4 is registered in the queue 14B. Thus, since the number of write commands registered presently in the queue 14B is smaller than that of the queue 14A, the distributor 13B distributes the write command WM5 to the queue 14B.
-
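The selection rule walked through in FIGS. 2A to 2G can be modeled in a few lines of Python. This is an illustrative sketch only, not the firmware of the system controller 13; the class and attribute names are assumptions:

```python
from collections import deque

class Distributor:
    """Model of the distributor 13B: each write command goes to whichever
    queue currently holds fewer write commands, and the queue 14A of the
    magnetic storage unit 21A wins ties."""

    def __init__(self):
        self.queue_14a = deque()  # jobs for the magnetic storage unit 21A
        self.queue_14b = deque()  # jobs for the semiconductor storage unit 21B

    def distribute(self, write_command):
        if len(self.queue_14a) <= len(self.queue_14b):
            self.queue_14a.append(write_command)
            return "14A"
        self.queue_14b.append(write_command)
        return "14B"

d = Distributor()
placements = [d.distribute(wm) for wm in ("WM1", "WM2", "WM3", "WM4")]
d.queue_14b.remove("WM2")  # FIG. 2F: data WC2 stored, WM2 deleted from queue 14B
placements.append(d.distribute("WM5"))
```

Replaying the five write commands reproduces the placements of FIGS. 2A to 2G: WM1 and WM3 land in the queue 14A, while WM2, WM4, and WM5 land in the queue 14B.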
FIG. 3 is a flowchart illustrating a queuing method of the data storage device according to the first embodiment. - In
FIG. 3, when a write command is sent from the host 17, the distributor 13B determines whether the number of jobs piled in the queue 14A of the magnetic storage unit 21A is equal to or smaller than the number of jobs piled in the queue 14B of the semiconductor storage unit 21B (S1). Moreover, when the number of jobs piled in the queue 14A of the magnetic storage unit 21A is equal to or smaller than the number of jobs piled in the queue 14B of the semiconductor storage unit 21B (Yes in S1), the distributor 13B distributes a job to the queue 14A of the magnetic storage unit 21A (S2). On the other hand, when the number of jobs piled in the queue 14A of the magnetic storage unit 21A exceeds the number of jobs piled in the queue 14B of the semiconductor storage unit 21B (No in S1), the distributor 13B distributes a job to the queue 14B of the semiconductor storage unit 21B (S3).
- Here, by distributing the write command received presently from the
host 17 to one of the queues 14A and 14B in which the number of write commands registered presently is smaller, it is possible to reduce the waiting time for performing write-caching to the semiconductor storage unit 21B. Thus, it is possible to improve the response speed to an access request from the host 17 as compared to a method of writing data to the magnetic storage unit 21A after performing write-caching to the semiconductor storage unit 21B. For example, when the write transfer rate of the magnetic storage unit 21A is 100 MB/sec and the write transfer rate of the semiconductor storage unit 21B is 40 MB/sec, the response speed to an access request from the host 17 is 40 MB/sec for the method of writing data to the magnetic storage unit 21A after performing write-caching to the semiconductor storage unit 21B. That is, this response speed corresponds to the write transfer rate of the semiconductor storage unit 21B. In contrast, for the method of distributing the write command received presently from the host 17 to one of the queues 14A and 14B in which the number of write commands registered presently is smaller, the response speed to an access request from the host 17 can be increased to 140 MB/sec. That is, this response speed corresponds to the sum of the write transfer rate of the magnetic storage unit 21A and the write transfer rate of the semiconductor storage unit 21B.
- According to the first embodiment, it is possible to reduce the waiting time for performing write-caching to the
semiconductor storage unit 21B and to improve the response speed to an access request from the host 17 as compared to a method of writing data to the magnetic storage unit 21A after performing write-caching to the semiconductor storage unit 21B.
- In
FIG. 1, the distributor 13B may distribute a write command having a smaller data length to the queue 14B of the semiconductor storage unit 21B in preference to a write command having a larger data length. For example, a write command having a data length equal to or smaller than a predetermined value may be distributed to the queue 14B, and a write command having a data length exceeding the predetermined value may be distributed to the queue 14A.
- In this manner, in the
magnetic storage unit 21A, it is possible to improve the efficiency of sequential writes and to reduce the seek time. Thus, it is possible to improve the response speed to an access request from the host 17. On the other hand, in the semiconductor storage unit 21B, it is possible to prevent a large volume of data from being stored and to prevent an increase in the capacity of the semiconductor storage unit 21B.
-
FIG. 4A is a diagram illustrating a relationship between management information and data designated by a write command in the data storage device according to the second embodiment, and FIG. 4B is a diagram illustrating storage locations designated by the LBAs of the management information and the data of FIG. 4A.
- In
FIG. 4A, a write command can designate management information J1 or a data body J2. Here, the management information J1 includes a storage location of the data body J2. Moreover, the management information J1 generally has a smaller data length than the data body J2.
- On the other hand, in
FIG. 4B, it is assumed that values 0 to 5000 are allocated to the magnetic disk 2 as LBAs. Here, for example, in the magnetic disk 2, it is assumed that the management information J1 is stored in the location corresponding to the LBAs 2400 to 2600, and the data body J2 is stored in the locations corresponding to the LBAs 0 to 2399 and 2601 to 5000.
- By distributing a write command having a smaller data length to the
queue 14B of the semiconductor storage unit 21B in preference to a write command having a larger data length, it is possible to increase the frequency with which the data body J2 is stored in the magnetic storage unit 21A while increasing the frequency with which the management information J1 is stored in the semiconductor storage unit 21B. Thus, it is possible to efficiently read the management information J1 from the semiconductor storage unit 21B, to increase the frequency with which the magnetic storage unit 21A accesses the data body J2 only, and to decrease the frequency with which the magnetic storage unit 21A alternately accesses the management information J1 and the data body J2. As a result, it is possible to reduce the seek time in the magnetic storage unit 21A and to efficiently write the data body J2 to the magnetic storage unit 21A. Therefore, it is possible to improve the response speed to an access request from the host 17.
-
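As a sketch, this length-based preference can sit in front of the two queues as a simple filter. The threshold below is an assumed example; the embodiment only requires some predetermined value:

```python
SMALL_WRITE_THRESHOLD = 64  # assumed "predetermined value," e.g. in sectors

def choose_queue_by_length(data_length):
    """Short writes (e.g. management information J1) are steered to the
    queue 14B of the semiconductor storage unit 21B; long writes (e.g. the
    data body J2) are steered to the queue 14A of the magnetic storage
    unit 21A."""
    return "14B" if data_length <= SMALL_WRITE_THRESHOLD else "14A"
```

With the layout of FIG. 4B, a short write to the management-information region (LBAs 2400 to 2600) would then tend to land in the semiconductor storage unit 21B, while long data-body writes stream sequentially to the magnetic storage unit 21A.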
FIG. 5 is a flowchart illustrating a queuing method of the data storage device according to the second embodiment. - In
FIG. 5, when a write command is received from the host 17, it is determined whether the number of jobs piled in the queue 14B of the semiconductor storage unit 21B is equal to or larger than X (X is a positive integer) (S11). When it is determined in S11 that the number of jobs piled in the queue 14B of the semiconductor storage unit 21B is equal to or larger than X (Yes in S11), it is determined whether the number of jobs piled in the queue 14A of the magnetic storage unit 21A is equal to or larger than Y (Y is a positive integer) (S12). When it is determined in S12 that the number of jobs piled in the queue 14A of the magnetic storage unit 21A is equal to or larger than Y (Yes in S12), the flow returns to S11. On the other hand, when it is determined in S12 that the number of jobs piled in the queue 14A of the magnetic storage unit 21A is not equal to or larger than Y (No in S12), a job is distributed to the queue 14A of the magnetic storage unit 21A (S13).
- On the other hand, when it is determined in S11 that the number of jobs piled in the
queue 14B of the semiconductor storage unit 21B is not equal to or larger than X (No in S11), it is determined whether the number of jobs piled in the queue 14A of the magnetic storage unit 21A is equal to or larger than Y (S14). When it is determined in S14 that the number of jobs piled in the queue 14A of the magnetic storage unit 21A is equal to or larger than Y (Yes in S14), a job is distributed to the queue 14B of the semiconductor storage unit 21B.
- On the other hand, when it is determined in S14 that the number of jobs piled in the
queue 14A of the magnetic storage unit 21A is not equal to or larger than Y (No in S14), it is determined whether the length of data designated by the write command is equal to or smaller than Z (Z is a positive integer) (S15). When it is determined in S15 that the length of data designated by the write command is equal to or smaller than Z (Yes in S15), a job is distributed to the queue 14B of the semiconductor storage unit 21B (S16). On the other hand, when it is determined in S15 that the length of data designated by the write command is not equal to or smaller than Z (No in S15), a job is distributed to the queue 14A of the magnetic storage unit 21A (S13).
- According to the second embodiment, it is possible to reduce the seek time in the
magnetic storage unit 21A and to efficiently write the data body J2 to the magnetic storage unit 21A. Thus, it is possible to improve the response speed to an access request from the host 17.
- In
FIG. 1, the cache manager 13A can measure the number of accesses from the host 17 to data stored in the magnetic storage unit 21A without being stored in the semiconductor storage unit 21B and register the number of accesses in the cache management table 14C. Moreover, the data storage device 21 may write data stored in the magnetic storage unit 21A to the semiconductor storage unit 21B based on the number of accesses registered in the cache management table 14C. For example, the data stored in the magnetic storage unit 21A can be written to the semiconductor storage unit 21B when the number of accesses is 2 or more.
-
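A minimal sketch of this read-count promotion, using a dictionary-backed stand-in for the cache management table 14C (all names and the threshold of 2 are illustrative):

```python
PROMOTE_THRESHOLD = 2  # example from the text: cache after 2 or more accesses

class CacheTable:
    """Stand-in for the cache management table 14C: each LBA entry holds an
    FPB (or None), a dirty flag, and the number of accesses."""

    def __init__(self):
        self.entries = {}

    def record_magnetic_write(self, lba):
        # Data written only to the magnetic storage unit 21A: no FPB yet.
        self.entries[lba] = {"fpb": None, "dirty": 0, "accesses": 0}

    def record_host_read(self, lba, allocate_fpb):
        # Count the access; once the count reaches the threshold, the read
        # data is also written to the semiconductor storage unit 21B and
        # the newly allocated FPB is registered for the LBA.
        entry = self.entries[lba]
        entry["accesses"] += 1
        if entry["fpb"] is None and entry["accesses"] >= PROMOTE_THRESHOLD:
            entry["fpb"] = allocate_fpb()
        return entry

table = CacheTable()
table.record_magnetic_write("X1")           # write command WM1 with LBA X1
table.record_host_read("X1", lambda: "Y1")  # 1st read: count 1, no FPB yet
table.record_host_read("X1", lambda: "Y1")  # 2nd read: FPB Y1 registered
```

This replays the FIG. 6 sequence: after the second read of LBA X1, the FPB Y1 is registered and the data is held in the cache.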
FIG. 6 is a diagram illustrating a method of updating a cache management table in a data storage device according to a third embodiment. - In
FIG. 6, it is assumed that data has been written to the magnetic storage unit 21A based on a write command WM1 from the host 17 of FIG. 1 in which an LBA of X1 is included. In this case, in the cache management table 14C, X1 is registered in the LBA and the FPB is not designated. Moreover, in the cache management table 14C, "0" is set to a dirty flag, and "0" is registered in the number of accesses. The dirty flag "0" indicates that data has been written to the magnetic storage unit 21A regardless of whether data has been written to the semiconductor storage unit 21B.
- Next, it is assumed that data has been read from the
magnetic storage unit 21A twice based on a read command from the host 17 in which an LBA of X1 is included. In this case, in the cache management table 14C, the number of accesses is incremented by "2." Moreover, when data is read twice from the magnetic storage unit 21A, the read data is written to the semiconductor storage unit 21B. In this case, if an FPB of Y1 is allocated so as to correspond to the LBA of X1, the FPB of Y1 is registered in the cache management table 14C so as to correspond to the LBA of X1.
- According to the third embodiment, when data stored in the
magnetic storage unit 21A without being stored in the semiconductor storage unit 21B is read in response to a read command from the host 17, the data is written to the semiconductor storage unit 21B, whereby the response speed to a subsequent read command for the data from the host 17 can be improved.
- In
FIG. 1, the cache manager 13A may measure the number of accesses, in response to read commands from the host 17, to data stored in the semiconductor storage unit 21B and reset the number of accesses to the data according to a write command from the host 17.
-
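A sketch of this count-and-reset rule, using a dictionary as a stand-in for the cache management table 14C (all names, including the reallocated FPB "Y2b", are illustrative):

```python
table_14c = {}  # LBA -> [FPB, dirty flag, number of accesses]

def on_host_write(lba, new_fpb):
    # A write to the semiconductor storage unit 21B sets the dirty flag to
    # "1", registers a newly allocated FPB, and resets the access count.
    table_14c[lba] = [new_fpb, 1, 0]

def on_host_read(lba):
    # A read of cached data increments the number of accesses for the LBA.
    table_14c[lba][2] += 1

on_host_write("X2", "Y2")    # write command WM2 with LBA X2
for _ in range(5):
    on_host_read("X2")       # five reads: number of accesses becomes 5
on_host_write("X2", "Y2b")   # rewrite: count reset to 0, FPB reallocated
```

The final write mirrors FIG. 7: the entry for the LBA of X2 ends with a fresh FPB, the dirty flag "1", and the access count back at "0".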
FIG. 7 is a diagram illustrating a method of updating a cache management table in a data storage device according to a fourth embodiment. - In
FIG. 7, it is assumed that data has been written to the semiconductor storage unit 21B based on a write command WM2 from the host 17 of FIG. 1 in which an LBA of X2 is included. In this case, if an FPB of Y2 is allocated so as to correspond to the LBA of X2, X2 is registered in the LBA and Y2 is registered in the FPB in the cache management table 14C. Moreover, in the cache management table 14C, "1" is set to a dirty flag, and "0" is registered in the number of accesses. The dirty flag "1" indicates that data has been written to the semiconductor storage unit 21B without being written to the magnetic storage unit 21A.
- Next, it is assumed that data in which an LBA of X2 is included is read from the
semiconductor storage unit 21B five times based on read commands from the host 17. In this case, in the cache management table 14C, the number of accesses is incremented by "5." By registering the number of accesses of the semiconductor storage unit 21B in the cache management table 14C, it is possible to allow data having a larger number of accesses to be preferentially left in the semiconductor storage unit 21B and to improve the response speed to an access request from the host 17.
- Next, it is assumed that a write command in which an LBA of X2 is included is issued from the
host 17, and X2 is designated by the write command as the LBA. In this case, in the cache management table 14C, the number of accesses at the LBA of X2 is reset to “0,” and the FPB is updated to an address that is newly allocated. - NAND flash memories have a property to deteriorate as the number of rewrites increases. According to the fourth embodiment, a larger number of accesses indicated in the cache management table 14C serve as an indicator that many effective hits have occurred with progress of deterioration in the NAND memory used as a cache. Thus, by preferentially recording data having a larger number of accesses to the cache while resetting the number of accesses to the LBA of X2 included in the write command issued from the
host 17 to “0,” it is possible to improve the hit rate while extending the effective working time of the cache that uses NAND flash memories. - In
FIG. 1, the data storage device 21 may write data stored in the magnetic storage unit 21A to the semiconductor storage unit 21B and send the data to the host 17 when a read command from the host 17 is received and the length of data designated by the read command is equal to or smaller than a predetermined value. Moreover, the data storage device 21 may send the data stored in the magnetic storage unit 21A to the host 17 without writing the data to the semiconductor storage unit 21B when the length of data designated by the read command from the host 17 exceeds the predetermined value.
-
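This read path can be sketched as one function; the callbacks and the threshold are assumptions standing in for the hard disk controller 9, the NAND controller 15, and the host interface 12:

```python
READ_CACHE_THRESHOLD = 128  # assumed "predetermined value" for the data length

def serve_read(lba, length, read_magnetic, write_nand, send_to_host):
    """Short data (like data DB) is copied into the semiconductor storage
    unit 21B on its way to the host 17; long data (like data DA) bypasses
    the cache and goes straight from the magnetic storage unit 21A."""
    data = read_magnetic(lba, length)
    cached = length <= READ_CACHE_THRESHOLD
    if cached:
        write_nand(lba, data)
    send_to_host(data)
    return cached
```

A short read of LBA X7 would thus be cached (FPB Y7 registered), while a long read of LBA X6 is only sent to the host, as in FIGS. 8A to 8D.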
FIGS. 8A and 8C are diagrams illustrating a method of transferring read data from a magnetic storage unit in a data storage device according to a fifth embodiment, and FIGS. 8B and 8D are diagrams illustrating a method of updating a cache management table corresponding to the processes of FIGS. 8A and 8C.
- In
FIGS. 8A and 8B, it is assumed that data DA having a length exceeding a predetermined value is stored in the magnetic storage unit 21A without being stored in the semiconductor storage unit 21B. In this case, if the LBA of the data DA is X6, in the cache management table 14C, X6 is registered in the LBA, and the FPB is not designated. Moreover, in the cache management table 14C, "0" is set to the dirty flag, and "0" is registered in the number of accesses.
- If X6 is designated as the LBA by a read command when the read command is received from the
host 17, the data DA stored in themagnetic storage unit 21A is sent to thehost 17 without being written to thesemiconductor storage unit 21B. In this case, in the cache management table 14C, the number of accesses is incremented by only “1.” - On the other hand, in
FIGS. 8C and 8D, it is assumed that data DB having a length equal to or smaller than a predetermined value is stored in the magnetic storage unit 21A without being stored in the semiconductor storage unit 21B. In this case, if the LBA of the data DB is X7, in the cache management table 14C, X7 is registered in the LBA, and the FPB is not designated. Moreover, in the cache management table 14C, "0" is set to the dirty flag, and "0" is registered in the number of accesses.
- Moreover, if X7 is designated as the LBA by a read command when the read command is received from the
host 17, the data DB stored in themagnetic storage unit 21A is written to an area of thesemiconductor storage unit 21B in which the FPB includes Y7 and is sent to thehost 17. In this case, in the cache management table 14C, the number of accesses is incremented by only “1.” Moreover, if the FPB of Y7 is allocated so as to correspond to the LBA of X7, in the cache management table 14C, the FPB of Y7 is registered so as to correspond to the LBA of X7. - According to the fifth embodiment, when a read command is received from the
host 17, data having a length equal to or smaller than a predetermined value is written to thesemiconductor storage unit 21B, and data having a length exceeding the predetermined value is not written to thesemiconductor storage unit 21B. By doing so, it is possible to improve the frequency in which data having a large length is read from themagnetic storage unit 21A and data having a small length is read from thesemiconductor storage unit 21B. Thus, it is possible to effectively utilize the higher write transfer rate of themagnetic storage unit 21A than thesemiconductor storage unit 21B while reducing the seek time of themagnetic storage unit 21A and to improve the response speed to an access request from thehost 17. - In
FIG. 1, the data storage device 21 may write data stored in the magnetic storage unit 21A to the semiconductor storage unit 21B when no access has been made from the host 17 for a predetermined period or more and the number of accesses is a predetermined value or more. -
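A minimal sketch of this idle-time write-back policy follows. The function and variable names, the table layout, the toy FPB allocator, and the concrete timing and threshold values are assumptions for illustration, not taken from the patent.

```python
import time

ACCESS_THRESHOLD = 5   # the "predetermined value" for the number of accesses (assumed)
IDLE_PERIOD = 1.0      # the "predetermined period" of host inactivity, in seconds (assumed)


def write_back_if_idle(table, magnetic, semiconductor, last_host_access, now=None):
    """Copy frequently read, not-yet-cached data from the magnetic storage
    unit to the semiconductor storage unit once the host has been idle
    long enough; return the LBAs that were copied."""
    now = time.monotonic() if now is None else now
    if now - last_host_access < IDLE_PERIOD:
        return []                                  # host recently active: do nothing
    copied = []
    for lba, entry in table.items():
        if entry["fpb"] is None and entry["accesses"] >= ACCESS_THRESHOLD:
            fpb = len(semiconductor)               # toy FPB allocator
            semiconductor[fpb] = magnetic[lba]     # write the data to the semiconductor unit
            entry["fpb"] = fpb                     # register the new FPB against the LBA
            copied.append(lba)
    return copied
```

In the later sixth-embodiment example's terms, an entry read five times with no FPB yet registered would be copied during the idle window and its newly allocated FPB recorded in the table.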
FIG. 9A is a diagram illustrating a method of writing data stored in a magnetic storage unit to a semiconductor storage unit in a data storage device according to a sixth embodiment, and FIG. 9B is a diagram illustrating a method of updating a cache management table corresponding to the process of FIG. 9A. - In
FIGS. 9A and 9B, it is assumed that data DC is stored in the magnetic storage unit 21A without being stored in the semiconductor storage unit 21B. In this case, if the LBA of the data DC is X8, then in the cache management table 14C, X8 is registered as the LBA and no FPB is designated. Moreover, in the cache management table 14C, the dirty flag is set to “0”. Moreover, if the data DC is read five times based on read commands from the host 17, the number of accesses in the cache management table 14C is incremented to “5”. - After that, the data DC stored in the
magnetic storage unit 21A is written to the semiconductor storage unit 21B when no access has been made from the host 17 for a predetermined period or more and the number of accesses is equal to or larger than a predetermined value (for example, 5). In this case, when the FPB of Y8 is allocated so as to correspond to the LBA of X8, the FPB of Y8 is registered in the cache management table 14C so as to correspond to the LBA of X8. - According to the sixth embodiment, the data DC stored in the
magnetic storage unit 21A is written to the semiconductor storage unit 21B when no access has been made from the host 17 for a predetermined period or more and the number of accesses is equal to or larger than a predetermined value. By doing so, it is possible to improve the response speed to an access request from the host 17 while effectively utilizing the idle period of the data storage device 21. - In
FIG. 1, when the host 17 notifies the device that an expectation value (priority level) of accesses to data designated by a write command is high, the distributor 13B may distribute the write command preferentially to the queue 14B of the semiconductor storage unit 21B. - In this manner, according to the seventh embodiment, it is possible to efficiently cache data having a higher access expectation value in the semiconductor storage unit 21B and to improve the response speed to an access request from the host 17. - While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.
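As a concrete illustration of the fifth embodiment's length-threshold read caching described earlier, here is a minimal sketch. The class, function, and variable names, the toy FPB allocator, and the threshold value are assumptions for illustration, not the patented implementation.

```python
LENGTH_THRESHOLD = 4096  # the "predetermined value" for data length, in bytes (assumed)


class CacheEntry:
    """One row of a cache management table: LBA, FPB, dirty flag, access count."""
    def __init__(self, lba):
        self.lba = lba
        self.fpb = None      # block in the semiconductor storage unit, if cached
        self.dirty = 0
        self.accesses = 0


def handle_read(lba, table, magnetic, semiconductor):
    """Serve a read: short data is also written through to the semiconductor
    unit (and its FPB registered); long data is sent to the host directly."""
    entry = table.setdefault(lba, CacheEntry(lba))
    data = magnetic[lba]                 # read from the magnetic storage unit
    entry.accesses += 1                  # number of accesses incremented by 1
    if entry.fpb is None and len(data) <= LENGTH_THRESHOLD:
        fpb = len(semiconductor)         # toy FPB allocator
        semiconductor[fpb] = data        # cache the short data in the semiconductor unit
        entry.fpb = fpb                  # register the FPB against the LBA
    return data                          # data sent to the host in either case
```

With this sketch, a short read (DB at X7) lands in both the semiconductor unit and the host's reply, while a long read (DA at X6) bypasses the cache entirely.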
Claims (6)
1. A data storage device comprising:
a first storage unit;
a second storage unit that has a lower write transfer rate and a faster response time than the first storage unit; and
a controller configured to:
measure a number of accesses, in response to a read command from a host, to first data stored in the first storage unit, and
control to write the first data stored in the first storage unit to the second storage unit in a case where the measured number of accesses is a certain number or more, wherein
the controller controls to write second data to the first storage unit in a case where a write command to overwrite the first data with the second data is received from the host.
2. The data storage device according to claim 1, wherein
in a case where the read command is received from the host when the first data is stored in the second storage unit, the controller sends the first data from the second storage unit to the host.
3. The data storage device according to claim 1, wherein
the read command and the write command include a same logical address, and
the controller records the number of accesses in association with the logical address, and resets the number of accesses according to the write command.
4. The data storage device according to claim 1, wherein
after writing the second data to the first storage unit, the controller measures a number of accesses to the second data stored in the first storage unit, and controls to write the second data stored in the first storage unit to the second storage unit in a case where the measured number of accesses to the second data is the certain number or more.
5. The data storage device according to claim 1, wherein the controller writes the first data stored in the first storage unit to the second storage unit in a case where no access has been made from the host for a certain period or more.
6. The data storage device according to claim 1, wherein
in a case where the read command is received from the host, the controller writes the first data stored in the first storage unit to the second storage unit and sends the first data to the host when the length of the first data is a predetermined value or smaller, and sends the first data stored in the first storage unit to the host without writing the first data to the second storage unit when the length of the first data exceeds the predetermined value.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/366,304 US20190220215A1 (en) | 2013-06-05 | 2019-03-27 | Data storage device and data storage control method |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2013-118505 | 2013-06-05 | ||
JP2013118505A JP2014235677A (en) | 2013-06-05 | 2013-06-05 | Data storage device and data storage control method |
US14/021,032 US20140365737A1 (en) | 2013-06-05 | 2013-09-09 | Data storage device and data storage control method |
US15/223,347 US10268415B2 (en) | 2013-06-05 | 2016-07-29 | Data storage device including a first storage unit and a second storage unit and data storage control method thereof |
US16/366,304 US20190220215A1 (en) | 2013-06-05 | 2019-03-27 | Data storage device and data storage control method |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/223,347 Continuation US10268415B2 (en) | 2013-06-05 | 2016-07-29 | Data storage device including a first storage unit and a second storage unit and data storage control method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
US20190220215A1 true US20190220215A1 (en) | 2019-07-18 |
Family
ID=52006497
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/021,032 Abandoned US20140365737A1 (en) | 2013-06-05 | 2013-09-09 | Data storage device and data storage control method |
US15/223,347 Active 2034-06-09 US10268415B2 (en) | 2013-06-05 | 2016-07-29 | Data storage device including a first storage unit and a second storage unit and data storage control method thereof |
US16/366,304 Abandoned US20190220215A1 (en) | 2013-06-05 | 2019-03-27 | Data storage device and data storage control method |
Country Status (2)
Country | Link |
---|---|
US (3) | US20140365737A1 (en) |
JP (1) | JP2014235677A (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109933293B (en) * | 2019-03-25 | 2022-06-07 | 深圳忆联信息系统有限公司 | Data writing method and device based on SpiFlash and computer equipment |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070005889A1 (en) * | 2005-06-29 | 2007-01-04 | Matthews Jeanna N | Method, device, and system to avoid flushing the contents of a cache by not inserting data from large requests |
US20110153931A1 (en) * | 2009-12-22 | 2011-06-23 | International Business Machines Corporation | Hybrid storage subsystem with mixed placement of file contents |
US20150149709A1 (en) * | 2013-11-27 | 2015-05-28 | Alibaba Group Holding Limited | Hybrid storage |
US20150277761A1 (en) * | 2014-03-26 | 2015-10-01 | Seagate Technology Llc | Storage media performance management |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH06195265A (en) * | 1992-12-25 | 1994-07-15 | Matsushita Electric Ind Co Ltd | Operation of cache memory |
JPH11224164A (en) * | 1998-02-06 | 1999-08-17 | Hitachi Ltd | Magnetic disk sub system |
US9104315B2 (en) | 2005-02-04 | 2015-08-11 | Sandisk Technologies Inc. | Systems and methods for a mass data storage system having a file-based interface to a host and a non-file-based interface to secondary storage |
JP2009181314A (en) | 2008-01-30 | 2009-08-13 | Toshiba Corp | Information recording device and control method thereof |
JP2010176305A (en) * | 2009-01-28 | 2010-08-12 | Toshiba Corp | Information processing apparatus and data storage device |
US20120166749A1 (en) * | 2009-09-08 | 2012-06-28 | International Business Machines Corporation | Data management in solid-state storage devices and tiered storage systems |
JP2011209973A (en) * | 2010-03-30 | 2011-10-20 | Hitachi Ltd | Disk array configuration program, computer and computer system |
US8954669B2 (en) * | 2010-07-07 | 2015-02-10 | Nexenta System, Inc | Method and system for heterogeneous data volume |
US8775720B1 (en) * | 2010-08-31 | 2014-07-08 | Western Digital Technologies, Inc. | Hybrid drive balancing execution times for non-volatile semiconductor memory and disk |
JP2012221038A (en) * | 2011-04-05 | 2012-11-12 | Toshiba Corp | Memory system |
KR20130070178A (en) * | 2011-12-19 | 2013-06-27 | 한국전자통신연구원 | Hybrid storage device and operating method thereof |
WO2014002126A1 (en) * | 2012-06-25 | 2014-01-03 | Hitachi, Ltd. | Computer system and method of controlling i/o with respect to storage apparatus |
- 2013-06-05 JP JP2013118505A patent/JP2014235677A/en active Pending
- 2013-09-09 US US14/021,032 patent/US20140365737A1/en not_active Abandoned
- 2016-07-29 US US15/223,347 patent/US10268415B2/en active Active
- 2019-03-27 US US16/366,304 patent/US20190220215A1/en not_active Abandoned
Also Published As
Publication number | Publication date |
---|---|
US20160335026A1 (en) | 2016-11-17 |
US20140365737A1 (en) | 2014-12-11 |
US10268415B2 (en) | 2019-04-23 |
JP2014235677A (en) | 2014-12-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8819375B1 (en) | Method for selective defragmentation in a data storage device | |
CN103135940B (en) | Implementing enhanced fragmented stream handling in a shingled disk drive | |
US10614852B2 (en) | Data-center drive with split-actuator that increases read/write performance via data striping | |
JP4282733B1 (en) | Disk storage device and data writing method | |
US7853761B2 (en) | Classifying write commands into groups based on cumulated flush time | |
KR101674015B1 (en) | Data storage medium access method, data storage device and recording medium thereof | |
US10152236B2 (en) | Hybrid data storage device with partitioned local memory | |
US20100079904A1 (en) | Storage control method, storage control unit and storage apparatus | |
US10802739B1 (en) | Data storage device configuration for accessing data in physical realms | |
US20050289262A1 (en) | Disk drive system on chip with integrated buffer memory and support for host memory access | |
US20160378357A1 (en) | Hybrid storage device and method for operating the same | |
US8862856B2 (en) | Implementing remapping command with indirection update for indirected storage | |
US7913029B2 (en) | Information recording apparatus and control method thereof | |
US9727265B2 (en) | Disk device and control method that controls amount of data stored in buffer | |
US8345370B2 (en) | Magnetic disk drive and refresh method for the same | |
JP2015082240A (en) | Storage device, cache controller, and method for writing data in nonvolatile storage medium | |
US8736994B2 (en) | Disk storage apparatus and write control method | |
JP2017068634A (en) | Storage device and method | |
US20190220215A1 (en) | Data storage device and data storage control method | |
US7898757B2 (en) | Hard disk drive with divided data sectors and hard disk drive controller for controlling the same | |
US20110022774A1 (en) | Cache memory control method, and information storage device comprising cache memory | |
US9588898B1 (en) | Fullness control for media-based cache operating in a steady state | |
JP2012521032A (en) | SSD controller and operation method of SSD controller | |
US20170371554A1 (en) | Internal Data Transfer Management in a Hybrid Data Storage Device | |
US9658964B2 (en) | Tiered data storage system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |