US20160041762A1 - Memory system, host device and information processing system - Google Patents
- Publication number: US20160041762A1 (application US14/817,625)
- Authority: United States (US)
- Prior art keywords: ssd, power, extensive, host device, host
- Legal status: Abandoned (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F3/0688—Non-volatile semiconductor memory arrays
- G06F3/0604—Improving or facilitating administration, e.g. storage management
- G06F1/3275—Power saving in memory, e.g. RAM, cache
- G06F11/1068—Adding special bits or symbols to the coded information in sector programmable memories, e.g. flash disk
- G06F3/0619—Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
- G06F3/0653—Monitoring storage devices or systems
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
- G06F3/0679—Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
- G11C29/52—Protection of memory contents; detection of errors in memory contents
- H03M13/2906—Combining two or more codes or code structures, e.g. concatenated codes, using block codes
- G06F12/0246—Memory management in non-volatile memory in block erasable memory, e.g. flash memory
- G06F2212/214—Solid state disk
- G11C16/0483—Electrically programmable read-only memories comprising cells having several storage transistors connected in series
- G11C16/24—Bit-line control circuits
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- Embodiments described herein relate generally to a memory system, a host device and an information processing system.
- FIG. 1 is a block diagram illustrating an outline of the embodiments.
- FIG. 2 is a perspective view illustrating an information processing system in FIG. 1 .
- FIG. 3 is a block diagram illustrating an information processing system according to a first embodiment.
- FIG. 4 is a diagram illustrating a relationship between time and the number of empty blocks as to garbage collection of an information processing system according to a second embodiment.
- FIG. 5 is a block diagram illustrating an information processing system according to a third embodiment.
- FIG. 6 is a diagram schematically illustrating blocks according to an eighth embodiment and blocks of a comparative example.
- FIG. 7 is a block diagram illustrating an information processing system according to a ninth embodiment.
- FIG. 8 is a block diagram showing a general structure of an information processing system according to the tenth embodiment.
- FIG. 9 is a block diagram showing a detailed structure of the information processing system according to the tenth embodiment.
- FIG. 10 is a table showing a table T1 according to the tenth embodiment.
- FIG. 11 is a graph showing an example of theoretical power/performance characteristics of the information processing system according to the tenth embodiment.
- FIG. 12 is a flowchart showing a power distribution determination process according to the tenth embodiment.
- FIG. 13 is a graph showing an example of actual power/performance characteristics of the information processing system according to the tenth embodiment.
- FIG. 14 is a table showing the updated table T1.
- FIG. 15 is a view showing power distribution to be changed.
- FIG. 16 is a view showing changed power distribution.
- FIG. 17 is a view schematically showing a storage architecture according to the tenth embodiment.
- FIG. 18 is a view schematically showing a storage architecture according to a comparative example.
- FIG. 19 is a block diagram showing a detailed structure of an information processing system according to the eleventh embodiment.
- FIG. 20 is a flowchart showing a power distribution determination process according to the eleventh embodiment.
- FIG. 21 is a block diagram showing a general structure of an information processing system according to the twelfth embodiment.
- FIG. 22 is a block diagram showing a general structure of an information processing system according to the thirteenth embodiment.
- FIG. 23 is a table showing a table T3 according to modified example 1.
- FIG. 24 is a perspective view showing an example of the exterior of the information processing system according to the first to thirteenth embodiments and modified example 1.
- a memory system includes a nonvolatile memory and a controller which controls the nonvolatile memory.
- the controller notifies, to the outside, an extensive signal which indicates a predetermined state of the nonvolatile memory or the controller.
- the drawings are merely examples and may differ from the actually realized embodiments in terms of, for example, the relationship between thickness and planar dimensions and the ratio of the thicknesses of layers. Further, in the drawings, the relationship or ratio of dimensions may differ from figure to figure.
- a solid-state drive (SSD) is given as an example as a memory system 10 .
- an information processing system 100 includes a plurality of SSDs 10 and a host device 20 .
- Each of the plurality of SSDs 10 includes a NAND flash memory (NAND memory) 11 and an SSD controller 12 .
- the NAND memory 11 is a nonvolatile memory physically including a plurality of chips (for example, five chips), although not shown.
- Each NAND memory 11 is constituted by a plurality of physical blocks, each having a plurality of memory cells arranged at the intersections of word lines and bit lines.
- data is erased collectively per physical block. That is, the physical block is a unit of data erasure. Data write and data read are performed per page (word line) in each block.
- the SSD controller (memory controller) 12 controls the whole operation of the SSD 10 .
- the SSD controller 12 controls access (data read, data write, data delete, etc.) to the NAND memory 11 in accordance with an instruction (request or command COM) from the host device 20 .
- the host device 20 transmits, for example, a read command COMR and an address ADD to each SSD 10 .
- a control unit (for example, a CPU, processor or MPU), which is not shown, of the host device 20 receives from the SSD 10 read data DATA corresponding to a request of the read command COMR.
- the control unit of the host device 20 issues to the SSD 10 an extensive command eCOM, which is for deliberately (intentionally) detecting various states of the SSD 10 (for example, a state of a bad block of the NAND memory 11 ) and is defined differently from the above-mentioned read command COMR and a write command COMW. The signal is not limited to the command eCOM and may be a different extensive (or extended) predetermined signal (information, request, instruction, etc.).
- the SSD controller 12 of the SSD 10 returns its own state (of the SSD 10 ) to the host device 20 as an extensive status signal ReS, based on the received extensive command eCOM. The signal is not limited to the status signal ReS and may be a different extensive (or extended) predetermined signal (information, return, response, etc.).
- the host device 20 can detect various states of the SSD 10 based on the returned extensive status signal ReS. This enables the host device 20 to improve the detected state of the SSD 10 as necessary.
- the above-mentioned extensive command eCOM and extensive status signal ReS may be transmitted in any order. That is, it is possible to firstly transmit an extensive predetermined signal from the SSD 10 to the host device 20 and secondly transmit the extensive predetermined signal from the host device 20 to the SSD 10 .
- the SSD 10 as shown is, for example, a relatively small module, with outside dimensions of, for example, approximately 120 mm × 130 mm. Note that the size and dimensions of the SSD 10 are not limited thereto and may be appropriately modified. Also, the SSD 10 can be used by being mounted in the server-type host device 20 in, for example, a data center or a cloud computing system operated by a company (enterprise). Therefore, the SSD 10 may be an enterprise SSD (eSSD).
- the host device 20 includes a plurality of connectors (for example, slots) 30 which are opened upward, for example.
- Each connector 30 is, for example, a serial attached SCSI (SAS) connector.
- This SAS connector enables the host device 20 and each SSD 10 to perform high-speed communication with each other by means of a dual port at 6 Gbps.
- each connector 30 may not be limited thereto but may be, for example, PCI express (PCIe) or NVM express (NVMe).
- the shape of each SSD 10 of the present embodiment is a 2.5-inch small form factor (SFF).
- the SSD 10 is not limited to enterprise use.
- the SSD 10 is certainly applicable as a storage medium of consumer electronic devices such as notebook computers and tablet devices.
- the first embodiment relates to an example of setting of reducing the time of error correction when performing data read from the SSD 10 .
- the following gives an example where error correction is performed by means of a BCH code at the time of data read from the SSD 10 .
- the host device 20 issues to the SSD 10 an extensive command of error correction (not shown).
- An acceptable latency time (acceptable latency) of error correction is added to the command as an attribute.
- the processing time may be designated qualitatively such as “as soon as possible” or quantitatively.
- the SSD 10 switches a switch SW1 according to the received attribute signal and transmits read data to the host device 20 .
- the SSD 10 switches the switch SW1 to the upper column side (fast decoder [weak decoder]) of the NAND memory 11 in FIG. 3 .
- Read data RDF is then transmitted to the host device 20 from the fast decoder, where the amount of error correction is relatively small.
- the SSD 10 switches the switch SW1 to the lower column side (strong decoder [slow decoder]) of the NAND memory 11 .
- Error correction is performed by means of a BCH code more intensively for read data RDS on the lower side (strong decoder [slow decoder]) of the NAND memory 11 , where the amount of error correction is relatively large and intensive error correction is required.
- the read data RDS is then transmitted to the host device 20 in a similar manner.
- If an error cannot be corrected, the SSD 10 returns the error to the host device 20 .
- With the structure and operation of the first embodiment, it is possible to perform data read based on a latency time accepted by the host device 20 . It is therefore possible to reduce the read time spent on error correction when performing a read operation. In other words, it is possible to make a setting in a read operation so that, when an error occurs, the error is returned without spending more time on correction than necessary.
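The latency-driven decoder selection described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names, latency figures, and correction capacities are all assumptions; the patent only specifies that an acceptable-latency attribute on the extensive read command steers the switch SW1 between a fast/weak and a strong/slow BCH decoder path.

```python
# Hypothetical latency figures for the two decoder paths (illustrative only).
FAST_DECODER_LATENCY_US = 100     # weak decoder: quick, corrects few errors
STRONG_DECODER_LATENCY_US = 1000  # strong decoder: slow, corrects many errors

def select_decoder(acceptable_latency_us):
    """Decide which side the switch SW1 should take for this read."""
    if acceptable_latency_us < STRONG_DECODER_LATENCY_US:
        return "fast"    # latency budget too tight for the strong decoder
    return "strong"

def read_with_latency(data, error_count, acceptable_latency_us):
    """Return (data, error): data on success, an error string on failure."""
    decoder = select_decoder(acceptable_latency_us)
    if decoder == "fast":
        if error_count <= 1:           # assumed capacity of the weak decoder
            return data, None
        # Error returned early rather than spending time beyond the budget.
        return None, "uncorrectable within latency budget"
    if error_count <= 8:               # assumed capacity of the strong decoder
        return data, None
    return None, "uncorrectable"
```

With a short acceptable latency the read either succeeds quickly or fails fast, which matches the embodiment's goal of not spending more time on correction than the host allows.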
- the second embodiment relates to an example of garbage collection (GC).
- the host device 20 issues to the SSD 10 an extensive command of garbage collection (not shown).
- the SSD 10 returns to the host device 20 a state of garbage collection as an extensive status signal (not shown), based on an extensive command of garbage collection.
- the host device 20 performs control to make the SSD 10 perform garbage collection and secures the number of empty blocks at idle time, etc., of the host device 20 , based on the received extensive status signal.
- the SSD 10 autonomously stops garbage collection in order to perform data write and autonomously resumes garbage collection after completing data write.
- the host device 20 issues a command to cause the SSD 10 to perform garbage collection and increases the number of empty blocks at idle time, based on the received extensive status signal.
- the host device 20 stops garbage collection to perform a data write operation to the SSD 10 .
- the host device 20 resumes garbage collection after completing a data write operation. Note that between times t1 and t2, the number of secured empty blocks decreases in response to the data write operation.
- the SSD 10 ends garbage collection.
- With the second embodiment, it is possible to perform garbage collection in advance to increase the number of empty blocks, securing them at free time such as idle time. Therefore, when the SSD 10 is busy, garbage collection (GC) is less likely to occur and the average response time can be reduced.
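The start/stop/resume sequence around times t1 and t2 (FIG. 4) can be modeled with a toy simulation. The class and method names are hypothetical; the point is only that GC performed at idle time converts reclaimable blocks into empty blocks before writes arrive.

```python
class SSDModel:
    """Toy model: GC turns one reclaimable block into an empty block per tick."""

    def __init__(self, empty_blocks, reclaimable_blocks):
        self.empty_blocks = empty_blocks
        self.reclaimable = reclaimable_blocks
        self.gc_running = False

    def start_gc(self):
        self.gc_running = True

    def stop_gc(self):
        self.gc_running = False

    def tick(self):
        if self.gc_running and self.reclaimable > 0:
            self.reclaimable -= 1
            self.empty_blocks += 1

    def write_block(self):
        self.empty_blocks -= 1

ssd = SSDModel(empty_blocks=2, reclaimable_blocks=5)
ssd.start_gc()           # host is idle: build up empty blocks in advance
for _ in range(3):
    ssd.tick()
ssd.stop_gc()            # time t1: host stops GC to perform a data write
ssd.write_block()        # secured empty blocks decrease during the write
ssd.start_gc()           # time t2: write complete, GC resumes
ssd.tick()
```

Because three blocks were secured before the write, the write at t1 consumes an already-prepared empty block instead of triggering GC while the drive is busy.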
- the third embodiment relates to an example of controlling a data write operation, i.e., an example of controlling the kinds of a NAND memory, which is a write destination, according to an attribute of write data.
- the NAND memory 11 of the third embodiment includes plural kinds of cells: single-level cell (SLC) 111 , multi-level cell (MLC) 112 , triple-level cell (TLC) 113 and quad-level cell (QLC) 114 .
- the SSD controller 12 of the third embodiment includes a control unit 121 which writes data separately for the above-mentioned kinds ( 111 to 114 ) of the NAND memory 11 .
- the host device 20 firstly issues to the SSD 10 an extensive write command to which an attribute such as data update frequency is added. Secondly, the SSD 10 returns an extensive status signal (not shown) to the host device 20 in response to the extensive write command.
- the write control unit 121 of the SSD 10 writes write data in the above-mentioned kinds ( 111 to 114 ) of the NAND memory 11 , based on the above-mentioned received extensive write command.
- when the write data is frequently rewritten data such as metadata, the write control unit 121 writes the data to the SLC 111 , based on the received extensive write command.
- when the write data is infrequently rewritten data such as user data, the write control unit 121 writes the data to the MLC 112 , the TLC 113 or the QLC 114 , based on the received extensive write command.
- the host device 20 issues an extensive write command with an attribute such as data update frequency. It is thereby possible to change the kinds of the NAND memory 11 to be used as necessary according to data attribute and to improve data write efficiency.
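The routing of write data by the update-frequency attribute can be sketched as a simple threshold map. The numeric thresholds here are invented for illustration; the patent states only that frequently rewritten data (metadata) goes to SLC and infrequently rewritten data (user data) goes to MLC/TLC/QLC.

```python
def select_cell_type(update_frequency_per_day):
    """Map the extensive write command's frequency attribute to a NAND kind.

    Thresholds are hypothetical: hotter data goes to cell types that endure
    more program/erase cycles (SLC), colder data to denser cells (QLC).
    """
    if update_frequency_per_day >= 100:
        return "SLC"    # hot data, e.g. metadata
    if update_frequency_per_day >= 10:
        return "MLC"
    if update_frequency_per_day >= 1:
        return "TLC"
    return "QLC"        # cold user data
```

The write control unit 121 would consult such a policy when choosing among the kinds 111 to 114 of the NAND memory 11.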
- the fourth embodiment relates to an example of distributing power to the SSD 10 .
- the host device 20 issues to the SSD 10 an extensive command about consumption power.
- the SSD 10 returns to the host device 20 information (achievement and prediction), which indicates the correspondence relationship between consumption power and performance, as an extensive status signal based on the extensive command about the consumption power.
- the host device 20 determines distribution (budget) of consumption power to each SSD 10 in view of the performance of each SSD 10 within the acceptable range of the total consumption power of the plurality of the SSDs 10 , and notifies each SSD 10 of the determined result.
- With the fourth embodiment, it is possible to distribute power that remains in one of the SSDs 10 to another SSD 10 based on an attribute of the SSD 10 . It is therefore possible to use redundant power within the acceptable range of the total consumption power and to improve the overall performance of the plurality of SSDs 10 .
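One way to realize the power budgeting described above is a proportional split over the performance-per-watt figures each SSD reports in its extensive status signal. The proportional policy is an assumption; the patent leaves the distribution rule to the host device.

```python
def distribute_power(total_budget_w, perf_per_watt):
    """Split a total power budget across SSDs (hypothetical policy).

    perf_per_watt: {ssd_id: performance gained per watt}, as might be derived
    from the achievement/prediction information in the extensive status signal.
    Drives that convert power into performance more effectively receive a
    proportionally larger share, and the sum never exceeds the budget.
    """
    total = sum(perf_per_watt.values())
    return {ssd: total_budget_w * p / total for ssd, p in perf_per_watt.items()}

# Example: ssd0 delivers 3x the performance per watt of ssd1.
budgets = distribute_power(100.0, {"ssd0": 3.0, "ssd1": 1.0})
```

Here the host would notify ssd0 of a 75 W budget and ssd1 of 25 W, staying within the 100 W envelope for the pair.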
- the fifth embodiment relates to an example of dividing the SSDs 10 into necessary groups (namespaces [partitions]) to perform control.
- the host device 20 issues to the SSD 10 an extensive command of predetermined grouping (not shown).
- the SSD 10 returns to the host device 20 an extensive status signal (not shown) that indicates its own state (SSD 10 ), in response to the extensive command of the predetermined grouping.
- the host device 20 divides the SSDs 10 into predetermined groups (namespaces) and performs control necessary for each of the groups, based on the received extensive status signal.
- the host device 20 performs the following controls:
- With the fifth embodiment, it is possible to divide the SSDs 10 into groups as necessary and to improve performance.
- the sixth embodiment relates to an example of allocating the physical blocks of the NAND memory 11 by each attribute.
- the host device 20 adds an attribute corresponding to file data, etc., and allocates the physical blocks of the NAND memory 11 by attribute.
- the seventh embodiment relates to an example of providing advantageous information, etc.
- advantageous information for the host device 20 , such as the write-amplification factor (WAF), information on used blocks and information on empty blocks, is transmitted from the SSD 10 to the host device 20 on a regular basis.
- the host device 20 performs necessary control based on the transmitted advantageous information.
- the eighth embodiment relates to an example of providing NAND block boundary information.
- the SSD 10 shows the host device 20 information indicating “how many more times of writing would fill the NAND blocks.” Based on the information shown by the SSD 10 , the host device 20 can perform data write per physical block of the NAND memory 11 until it reaches an appropriate state. It is therefore possible to reduce garbage collection.
- the host device 20 cannot recognize the write state of the NAND blocks in the comparative example. Therefore, data having different file names, etc., is written together into the physical blocks of the NAND memory. This increases the WAF caused by garbage collection, etc.
- the host device 20 recognizes the write state of the NAND blocks in the eighth embodiment. Therefore, it is possible to write to the physical blocks of the NAND memory 11 separately according to predetermined information such as file name. This reduces the WAF caused by garbage collection, etc.
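The use of block-boundary information can be sketched as follows. The class, the page count, and the "open block" framing are assumptions for illustration; the patent says only that the SSD exposes how many more writes would fill a NAND block, and that the host uses this to avoid splitting related data across blocks.

```python
PAGES_PER_BLOCK = 256   # assumed block geometry

class OpenBlock:
    """Toy model of the physical block currently being filled."""

    def __init__(self):
        self.pages_written = 0

    def remaining_pages(self):
        """The boundary information the SSD would show the host."""
        return PAGES_PER_BLOCK - self.pages_written

    def write_pages(self, n):
        self.pages_written += n

blk = OpenBlock()
blk.write_pages(200)
# Before writing a 100-page file, the host checks the boundary information.
# The file does not fit in the open block, so rather than splitting it
# (as in the comparative example, which raises WAF), the host would start
# the file in a fresh block.
fits = blk.remaining_pages() >= 100
```

Keeping one file's data inside whole physical blocks means a later deletion invalidates entire blocks at once, which is why garbage collection and WAF decrease.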
- the ninth embodiment relates to an example of dynamic resizing of the SSD 10 .
- the portions that substantially overlap with the above-mentioned embodiments will not be described.
- the host device 20 designates a place (address) of data in a logical block address (LBA) when performing read and write of the data to the SSD 10 .
- the SSD 10 manages mapping from an LBA to a physical block address (PBA) in a lookup table (LUT) 123 .
- An LBA being used is mapped to a used block in the LUT.
- the SSD controller 12 of the SSD 10 of the ninth embodiment includes a bad block examination unit 121 , a storage capacity information reception unit 122 and the lookup table (LUT) 123 .
- the bad block examination unit 121 receives an extensive bad-block command from the host device 20 , responds thereto, and returns its bad-block state to the host device 20 as an extensive status signal ReS9. Further, the bad block examination unit 121 notifies the storage capacity information reception unit 122 of the signal.
- the bad block examination unit 121 can adopt either of two means of notifying the increase in the number of bad blocks from the SSD 10 to the host device 20 .
- the first means is to add the number of bad blocks to the statistic information of the SSD 10 . If this information is read by, for example, polling from the host device 20 on a regular basis, it is possible to make an indirect notification.
- the second means is to make a direct notification from the SSD 10 to the host device 20 by means of a callback mechanism.
- the protocol of the interface of the SSD 10 is extended so as to issue a notification.
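The first (indirect, polling-based) means can be sketched as follows. The field and function names are hypothetical; the patent specifies only that the bad-block count is added to the SSD's statistic information and that the host reads it on a regular basis.

```python
class SSDStats:
    """Toy stand-in for the SSD 10's statistic information."""

    def __init__(self):
        self.bad_blocks = 0

def poll_bad_blocks(stats, last_seen):
    """One polling round on the host side.

    Returns (current_count, increased): the host compares the count read
    from the statistic information against the count from the previous poll,
    so an increase is noticed indirectly, without a callback from the SSD.
    """
    return stats.bad_blocks, stats.bad_blocks > last_seen

stats = SSDStats()
count, grew = poll_bad_blocks(stats, last_seen=0)   # nothing to report yet
stats.bad_blocks = 3                                # SSD records new bad blocks
count, grew = poll_bad_blocks(stats, last_seen=0)   # next poll sees the increase
```

The second means (a callback) would instead have the SSD push the notification over the extended interface protocol, trading polling latency for protocol complexity.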
- the storage capacity information reception unit 122 receives the above-mentioned signal from the bad block examination unit 121 and control from the host device 20 , and updates information of the LUT 123 .
- the host device 20 of the ninth embodiment includes a bad block information reception unit 211 , a storage capacity determination unit 212 and a use capacity reduction unit 213 .
- the bad block information reception unit 211 receives the above-mentioned extensive status signal ReS9 from the SSD 10 and transmits bad block information to the storage capacity determination unit 212 .
- the storage capacity determination unit 212 receives the bad block information from the bad block information reception unit 211 , determines storage capacity, and notifies the use capacity reduction unit 213 and the storage capacity information reception unit 122 of the SSD 10 . In other words, the storage capacity determination unit 212 notifies the SSD 10 of the decrease in use capacity in accordance with the determination of use capacity.
- the storage capacity determination unit 212 can adopt any of three means of notifying the decrease in use capacity from the host device 20 to the SSD 10 .
- the first means is to create in the SSD 10 a new command of setting the maximum value (user capacity) of an LBA and to issue this command from the host device 20 .
- the mapping of the LBA exceeding the maximum value can be released in the LUT 123 on the side of the SSD 10 .
- the second means is to use a TRIM or UNMAP command. By means of these commands, it is possible to notify an LBA that is not used by the host device 20 . On receipt of the command, the mapping of the LBA of the LUT 123 can be released on the side of the SSD 10 .
- the third means is to extend a bad sector designation command (WRITE_UNCORRECTABLE_EXT). After this command is issued, the SSD 10 returns an error when the designated LBA is subsequently read or written. In addition, the command is extended so as to release the mapping of the LBA.
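The effect of the second means (TRIM/UNMAP) on the lookup table can be illustrated with a toy LUT. Real TRIM/UNMAP commands carry LBA ranges and the LUT 123 is far more elaborate; this sketch only shows the mapping release that makes blocks reclaimable.

```python
# Toy LUT 123: LBA -> physical block address (PBA).
lut = {0: 100, 1: 101, 2: 102, 3: 103}

def trim(lut, lbas):
    """Release the LBA->PBA mappings for LBAs the host no longer uses.

    Once unmapped, the corresponding physical blocks no longer hold valid
    data from the SSD's point of view and can become empty blocks.
    """
    for lba in lbas:
        lut.pop(lba, None)   # ignore LBAs that were never mapped

# Host notifies the SSD that LBAs 2 and 3 are unused.
trim(lut, [2, 3])
```

The first means (a new max-LBA command) would achieve the same release for every LBA above the new user capacity, and the third means additionally makes subsequent accesses to the LBA return an error.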
- the use capacity reduction unit 213 reduces capacity to be used in accordance with the storage capacity received from the storage capacity determination unit 212 .
- the use capacity reduction unit 213 can adopt either of two means of reducing use capacity of the host device 20 on receipt of the above-mentioned notification from the storage capacity determination unit 212 .
- the first means is to reduce use capacity by deleting data that can be deleted, such as cache data (stored data duplicated in another SSD [storage device] 10 or an HDD).
- the second means is to transfer data from one of the SSD 10 to another SSD 10 having allowance to reduce use capacity of the former SSD 10 , when a combination of the plurality of SSDs 10 is used as in a logical volume manager (LVM).
- With the ninth embodiment, it is possible to obtain at least the advantageous effect of prolonging the life of the SSD 10 by reducing the capacity used by the host device 20 when the number of bad blocks of the NAND memory 11 increases, in addition to the above-mentioned outline and the effects of the other embodiments.
- An SSD is a storage device electrically connected to a host device (for example, a computer) and performs data reads and writes in response to requests from the host device.
- a NAND memory is used as a nonvolatile memory and is managed per block.
- The blocks of a NAND memory are classified into three types: bad blocks that cannot be used due to, for example, a manufacturing defect or wear; used blocks that store data written from the host; and empty blocks that are not in use. When the number of bad blocks and used blocks increases, the number of empty blocks decreases accordingly.
- the storage capacity (use capacity) of a storage device used by a host device is limited by user capacity.
- The user capacity is the capacity remaining after the allowance (over-provisioning) capacity is subtracted from the capacity (physical capacity) corresponding to all the blocks of an SSD.
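The relationship between physical capacity, over-provisioning and user capacity, and the way bad blocks consume the allowance, can be illustrated with made-up figures:

```python
# User capacity = physical capacity - over-provisioning allowance.
# As bad blocks accumulate they consume the allowance, shrinking the
# pool of empty blocks. All figures are illustrative.
def empty_blocks(total, bad, used):
    return total - bad - used

total_blocks = 1024                            # physical capacity (blocks)
over_provisioning = 100                        # allowance (blocks)
user_capacity = total_blocks - over_provisioning

print(user_capacity)                                  # -> 924
print(empty_blocks(total_blocks, bad=10, used=900))   # -> 114
print(empty_blocks(total_blocks, bad=80, used=900))   # -> 44
```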
- The SSD 10 includes the bad block detection unit 121 , which receives an extensive command concerning bad blocks and notifies the host 20 of the increase in the number of bad blocks as the information ReS 9 .
- the host device 20 which has received the notification, reduces use capacity by means of the use capacity reduction unit 213 so as not to cause any problem even when use capacity decreases. Further, the reduced use capacity is notified from the host 20 to the SSD 10 by means of the storage capacity determination unit 212 .
- the SSD 10 then reduces the number of used blocks by means of the storage capacity information reception unit 122 .
- The host device 20 reduces use capacity as necessary and the SSD 10 can secure the empty blocks accordingly. It is thereby possible to prolong the life of the SSD 10 and to prevent a reduction in the speed of response of the SSD 10 .
- a TRIM and/or UNMAP command may be a means for notifying the decrease of use capacity from a host to an SSD. However, they are used when an application program on the host device 20 individually deletes data. That is, they are irrelevant to the increase in number of bad blocks in the NAND memory 11 of the SSD 10 .
- FIG. 8 represents power paths by broken lines and signal paths by solid lines.
- The information processing system 100 of the tenth embodiment is driven by power Pmax supplied from a power supply unit 50 , and executes processes and requests (for example, requests to write data) from external devices 220 which access the information processing system 100 from the outside 200 via a network 210 .
- the information processing system 100 comprises SSD 0 to SSDn- 1 (n is a positive integer), which are storage devices 10 , and a host 20 which controls the storage devices 10 .
- The storage devices 10 are not limited to SSDs and may be, for example, hard disk drives (HDDs) or other storage devices and memories. The detailed structure of the storage devices 10 and the host 20 will be described later.
- the power supply unit 50 converts external power supplied from an external power source VC to the predetermined power Pmax.
- the converted power Pmax is almost equally divided into power components P 0 to Pn- 1 to be supplied to the storage devices 10 , respectively.
- The total power Pmax supplied to the information processing system 100 is predetermined and its value is substantially constant. Therefore, the sum total of power components P 0 to Pn- 1 supplied to SSD 0 to SSDn- 1 , respectively, is not greater than the power Pmax supplied from the power supply unit 50 , that is, P 0 +P 1 + . . . +Pn- 1 ≤Pmax . . . (I).
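The almost-equal division of the fixed budget and the constraint that the distributed components never exceed Pmax can be sketched as follows (figures are illustrative; the patent specifies no concrete values):

```python
# Even division of a fixed power budget Pmax into per-SSD components,
# with the invariant P0 + P1 + ... + Pn-1 <= Pmax. Figures illustrative.
def divide_power(p_max, n):
    components = [p_max / n] * n               # almost equal division
    assert sum(components) <= p_max + 1e-9     # budget never exceeded
    return components

components = divide_power(250.0, 10)
print(components[0])                           # -> 25.0
```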
- The external devices 220 access the information processing system 100 from the outside 200 of the information processing system 100 via the network 210 , and perform a predetermined process or make a predetermined request (for example, data reading, data writing, data erasing, etc.) to the accessed information processing system 100 .
- The network 210 may be either wired or wireless.
- The information processing system 100 of the tenth embodiment changes the power components distributed to the storage devices 10 and optimizes them (P 0 to Pn- 1 → P 0 ′′ to Pn- 1 ′′) in accordance with the load on the storage devices 10 (SSD 0 to SSDn- 1 ). With such a structure, the information processing system 100 of the tenth embodiment can improve the efficiency of the system. The effect and advantage will be described later in detail.
- the information processing system 100 comprises SSD 0 to SSDn- 1 , which are the storage devices 10 , and the host 20 which controls the storage devices 10 .
- Each of SSD 0 to SSD 9 , which are the storage devices (storage units) 10 , comprises a NAND flash memory (hereinafter referred to as a “NAND memory”) 11 , a memory controller 12 and a power conversion unit 13 .
- the NAND memory 11 is a nonvolatile semiconductor memory which comprises blocks (physical blocks) and stores data in each block. Each block comprises memory cells positioned at intersections of word lines and bit lines. Each memory cell comprises a control gate and a floating gate and stores data in a nonvolatile manner by the presence or absence of electrons injected into the floating gate.
- the word lines are commonly connected to the control gates of the memory cells.
- a page exists in each word line. Data reading and writing operations are performed per page. Therefore, a page is a unit of data reading and writing. Data is erased per block. Therefore, a block is a unit of data erasing.
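The read/write-per-page and erase-per-block granularity can be modeled in a few lines; the page count and the API are illustrative only, not from the patent:

```python
# Toy NAND block: data is written per page but erased only per block.
# Page count and API are illustrative, not from the patent.
PAGES_PER_BLOCK = 4

class ToyBlock:
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK

    def program(self, page_no, data):
        # A programmed page cannot be overwritten without a prior erase.
        if self.pages[page_no] is not None:
            raise ValueError("page already programmed; erase the block first")
        self.pages[page_no] = data

    def erase(self):
        # Erase clears the whole block, never a single page.
        self.pages = [None] * PAGES_PER_BLOCK

blk = ToyBlock()
blk.program(0, b"hello")
blk.erase()                   # required before rewriting page 0
blk.program(0, b"world")
print(blk.pages[0])           # -> b'world'
```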
- The NAND memory 11 of the tenth embodiment may be a multi-level cell (MLC) memory capable of storing multibit data in a memory cell and/or a single-level cell (SLC) memory capable of storing one-bit data in a memory cell.
- the memory controller 12 controls the operation of the whole of the storage device 10 in accordance with a request from the host 20 .
- the memory controller 12 writes write data to a predetermined address of the NAND memory 11 in accordance with a write command which is a request to write data from the host 20 .
- the memory controller 12 of the tenth embodiment further receives an extended command eCOM transmitted from the host 20 to confirm minimum power required for the operation of each of SSD 0 to SSD 9 .
- the extended command eCOM is a signal transmitted on purpose to detect various states of the storage device 10 (for example, a state of power consumption of the storage device 10 in this case), and is defined as a signal different from the above-described write command, etc.
- the extended command eCOM is not limited to a command eCOM and may be any extended predetermined signal (information, request, instruction, etc.).
- the memory controller 12 of each of SSD 0 to SSD 9 transmits a status signal ReS (P 0 ′ to P 9 ′) indicative of the minimum power required for the operation in reply to the received request eCOM.
- the signal transmitted in reply is not limited to the status signal ReS and may be any extended predetermined signal (information, request, instruction, etc.).
- the memory controller 12 of each of SSD 0 to SSD 9 controls the power conversion unit 13 to operate based on the changed power component (P 0 ′′ to P 9 ′′) notified by the host 20 . The operation will be described later in detail.
- the power conversion unit 13 converts the power component (P 0 to P 9 ) supplied from the power supply unit 50 under the control of the memory controller 12 .
- the storage device 10 performs a predetermined operation in accordance with the power supplied from the power conversion unit 13 .
- each memory controller 12 may comprise an address mapping (address translation) table indicative of a correspondence relationship between logical addresses managed by the host 20 and physical addresses managed by the storage device 10 .
- the host 20 controls each storage device 10 in accordance with a request from the external devices 220 which access from the outside via the network 210 .
- the host 20 comprises a data position management unit 221 , a power distribution determination unit 223 and a central processing unit (CPU) 222 .
- the data position management unit 221 manages, for example, position information of write data stored in the storage devices 10 under the control of the CPU 222 .
- the data position management unit 221 comprises a table (first table) T 1 .
- Table T 1 indicates at least a power/performance characteristic of each of SSD 0 to SSD 9 as described later.
- the power distribution determination unit 223 determines power to be distributed to each of SSD 0 to SSD 9 under the control of the CPU 222 . More specifically, the power distribution determination unit 223 determines power components P 0 ′′ to P 9 ′′ to be redistributed to SSD 0 to SSD 9 , respectively, based on the corrected characteristics PP 0 ′ to PP 9 ′ of the storage devices 10 transmitted from the CPU 222 . The CPU 222 is notified of the determined power components P 0 ′′ to P 9 ′′.
- the CPU 222 controls the data position management unit 221 and the power distribution determination unit 223 and controls the operation of the whole of the host 20 .
- The host 20 is not limited to the above-described structure.
- the host 20 may comprise an interface to communicate with the storage devices 10 , etc.
- FIG. 10 is a table showing table T 1 of the tenth embodiment.
- SSD 0 to SSD 9 which are the storage devices 10 , are associated with theoretical power/performance characteristics (electrical characteristics) PP 0 to PP 9 , respectively, in table T 1 .
- Each of power/performance characteristics PP 0 to PP 9 is shown as a typical characteristic based on the assumption that the performance varies depending on the amount of supplied power.
- FIG. 11 shows a power/performance characteristic PP 0 of SSD 0 .
- the performance increases from the origin 0 proportionately with the supplied power in theory. More specifically, when the supplied power is power component P 0 , SSD 0 can deliver performance S 0 proportionately with power component P 0 .
- a proportionality coefficient of the performance decreases when the supplied power increases to some degree. For example, when the supplied power exceeds power component P 0 , the proportionality coefficient of the performance decreases. This is because, for example, the amount of heat produced in the controller 12 increases when the supplied power increases to some degree.
- “The performance (performance index)” may include all operations and functions performed by the NAND memory 11 depending on the supplied power.
- The performance of the NAND memory 11 may include data writing, data reading, data erasing, garbage collection (compaction), inputs/outputs per second (IOPS), megabytes per second (MB/s), etc.
- IOPS is the number of input/output operations (for example, data writes) that can be performed on the NAND memory 11 per second.
- MB/s is the communication speed between the host 20 and the NAND memory 11 .
- Power/performance characteristics PP 1 to PP 9 of the other SSD 1 to SSD 9 are the same as PP 0 .
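A piecewise-linear reading of the FIG. 11 curve, with a proportional region up to the knee at power component P0 and a smaller proportionality coefficient beyond it, might look like this (all coefficients are made up):

```python
# Piecewise-linear reading of the FIG. 11 curve: performance grows in
# proportion to supplied power up to a knee (power component P0), then
# with a smaller proportionality coefficient (e.g. because controller
# heat increases). All coefficients are made up.
def performance(power, knee=25.0, k1=2.0, k2=0.5):
    if power <= knee:
        return k1 * power                      # proportional region
    return k1 * knee + k2 * (power - knee)     # reduced slope past the knee

print(performance(25.0))    # -> 50.0  (performance S0 at power component P0)
print(performance(35.0))    # -> 55.0  (diminishing returns)
```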
- a distribution power determination process of the information processing system 100 of the tenth embodiment is described with reference to FIG. 12 .
- the description below is based on the assumption that a specified SSD 5 is intensively accessed by the external devices 220 and the CPU 222 of the host 20 determines that the larger load (larger power) is necessary for SSD 5 .
- In step S 11 , the CPU 222 of the host 20 transmits an extended command (first command) eCOM to confirm the minimum power required for the operation of each of SSD 0 to SSD 9 .
- In step S 12 , the memory controller 12 of each storage device 10 transmits a status signal ReS (P 0 ′ to P 9 ′) indicative of the minimum power required for the operation in reply to the received request eCOM.
- the memory controller 12 of SSD 0 first detects the minimum power component P 0 ′ required for the operation of the NAND memory 11 of SSD 0 based on the relationship between the performance and power component P 0 supplied to the NAND memory 11 , in accordance with the received request eCOM.
- the memory controller 12 of SSD 0 transmits the detected minimum power component P 0 ′ to the host 20 as a status signal ReS (P 0 ′).
- In step S 13 , the CPU 222 of the host 20 corrects the power/performance characteristic of each SSD based on the transmitted status signal ReS (P 0 ′ to P 9 ′). More specifically, for example, the power distribution determination unit 223 of the host 20 increases the initial value of characteristic PP 0 from the origin 0 to P 0 ′ based on the status signal ReS (P 0 ′) indicative of the minimum power required for the operation of SSD 0 , as shown in FIG. 13 . The power distribution determination unit 223 further corrects characteristic PP 0 by performing a parallel translation of characteristic PP 0 and thereby calculates an actual characteristic PP 0 ′.
- By calculating characteristic PP 0 ′, the minimum power required for driving the components other than the NAND memory 11 (for example, the memory controller 12 and the other peripheral circuits) can be taken into account.
- the characteristic can be calculated with more precision based on the actual status of each storage device 10 .
- the other characteristics PP 1 ′ to PP 9 ′ are also calculated in the same manner as PP 0 ′.
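The correction of step S13 amounts to a parallel translation of the theoretical characteristic by the reported minimum operating power. A sketch with illustrative numbers:

```python
# The correction of step S13 as a parallel translation: shift the
# theoretical characteristic right by the reported minimum operating
# power P0', so that power at or below P0' yields no performance.
# Slope and minimum are illustrative.
def corrected(power, p_min=5.0, k=2.0):
    if power <= p_min:
        return 0.0                     # below the minimum, no operation
    return k * (power - p_min)         # theoretical slope, translated

print(corrected(5.0))     # -> 0.0
print(corrected(25.0))    # -> 40.0
```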
- In step S 14 , the CPU 222 of the host 20 stores the corrected power/performance characteristics PP 0 ′ to PP 9 ′ of SSD 0 to SSD 9 in table T 1 and thereby updates table T 1 .
- the CPU 222 stores calculated allowable power components P 0 ′′ to P 4 ′′ and P 6 ′′ to P 9 ′′ and changed power components P 5 ′′ in table T 1 .
- In step S 15 , the power distribution determination unit 223 of the host 20 calculates allowable power components P 0 ′′ to P 4 ′′ and P 6 ′′ to P 9 ′′ to be distributed to the SSDs other than SSD 5 under a load, i.e., SSD 0 to SSD 4 and SSD 6 to SSD 9 , based on the corrected power/performance characteristics PP 0 ′ to PP 9 ′. More specifically, as shown in FIG. 13 , the power distribution determination unit 223 calculates the suppressible power component (surplus power component) P 0 ′′ from the currently supplied power component P 0 based on the corrected characteristic PP 0 ′. “The allowable power (suppressible power, surplus power)” may be any power as long as the NAND memory 11 can continuously operate. The other allowable power components P 1 ′′ to P 9 ′′ are calculated in the same manner as allowable power component P 0 ′′.
- In step S 16 , the power distribution determination unit 223 of the host 20 calculates power component P 5 ′′ to be supplied to SSD 5 under a load, from the calculated allowable power components P 0 ′′ to P 4 ′′ and P 6 ′′ to P 9 ′′. More specifically, as shown in FIG. 13 , the power distribution determination unit 223 first calculates differences ΔP 0 to ΔP 4 and ΔP 6 to ΔP 9 between the currently-distributed power components P 0 to P 4 , P 6 to P 9 and the calculated suppressible power components P 0 ′′ to P 4 ′′, P 6 ′′ to P 9 ′′, respectively.
- The power distribution determination unit 223 then adds the calculated difference power components ΔP 0 to ΔP 4 and ΔP 6 to ΔP 9 to power component P 5 assigned to SSD 5 .
- In step S 17 , SSD 0 to SSD 9 are notified of the changed power components P 0 ′′ to P 9 ′′ calculated by the host 20 .
- In step S 18 , SSD 0 to SSD 9 operate based on the notified changed power components P 0 ′′ to P 9 ′′. More specifically, the power conversion units 13 of SSD 0 to SSD 9 convert power components P 0 to P 9 supplied from the power supply unit 50 into the changed power components P 0 ′′ to P 9 ′′ under the control of the memory controllers 12 .
- the specified SSD 5 operates based on power component P 5 ′′ which is larger than the previous power component P 5 .
- the other SSD 0 to SSD 4 and SSD 6 to SSD 9 operate based on power components P 0 ′′ to P 4 ′′ and P 6 ′′ to P 9 ′′ which have been obtained by subtracting the suppressible power from the previous power components P 0 to P 4 and P 6 to P 9 and are lower than the previous power components P 0 to P 4 and P 6 to P 9 .
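Steps S15 to S17 can be condensed into one redistribution routine: throttle the non-loaded SSDs down to their suppressible components and add the freed differences (ΔP) to the loaded SSD, keeping the total within Pmax. Values below are illustrative:

```python
# Condensed sketch of steps S15-S17: throttle non-loaded SSDs down to
# their suppressible components and add the freed differences (delta-P)
# to the loaded SSD. The total power never changes. Values illustrative.
def redistribute(current, suppressible, loaded_idx):
    new = list(suppressible)           # entry at loaded_idx is overwritten below
    freed = sum(current[i] - suppressible[i]
                for i in range(len(current)) if i != loaded_idx)
    new[loaded_idx] = current[loaded_idx] + freed
    assert abs(sum(new) - sum(current)) < 1e-9   # total power Pmax preserved
    return new

current = [25.0] * 10                  # P0..P9, almost evenly distributed
suppressible = [20.0] * 10             # Pi'' reported for non-loaded SSDs
changed = redistribute(current, suppressible, loaded_idx=5)
print(changed[5])                      # -> 70.0  (P5'' = 25 + 9 * 5)
print(changed[0])                      # -> 20.0
```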
- Each storage device 10 transmits a status signal ReS (P 0 ′ to P 9 ′) indicative of the minimum power required for the operation to the host 20 in reply (S 12 in FIG. 12 ).
- The host 20 corrects the power/performance characteristics of the SSDs based on the status signals ReS (P 0 ′ to P 9 ′) and calculates the changed power components P 0 ′′ to P 9 ′′ from the corrected characteristics PP 0 ′ to PP 9 ′ (S 13 to S 16 in FIG. 12 ). After that, the storage devices 10 operate based on the calculated changed power components P 0 ′′ to P 9 ′′.
- the efficiency of the whole information processing system 100 can be improved by intensively injecting allocatable power to SSD 5 under a load to improve the processing performance of SSD 5 .
- SSD 0 to SSD 9 operate based on power components P 0 to P 9 almost evenly distributed under the control of the host 20 as shown in FIG. 15 .
- the total amount of power Pmax supplied to the information processing system 100 is predetermined as expressed by expression (I)
- This is based on the premise that the performance of the storage devices 10 varies depending on the power consumption as shown in FIG. 11 and FIG. 13 .
- SSD 5 is intensively accessed and is required to perform a large amount of processes.
- the power is changed to increase the power supplied to SSD 5 .
- SSD 5 which requires the larger power, can thereby operate based on the larger power component P 5 ′′.
- the other SSD 0 to SSD 4 and SSD 6 to SSD 9 can continuously operate based on power components P 0 ′′ to P 4 ′′ and P 6 ′′ to P 9 ′′ obtained by subtracting the suppressible power.
- The processing capability of the storage devices 10 can be made substantially hierarchical based on the amount of supplied power, as shown in FIG. 17 , even if the system is constituted by one type of storage device 10 . More specifically, for data that must be frequently accessed (in this case, data stored in SSD 5 ), the supplied power is increased and the processing ability and speed are improved.
- the information processing system 100 of the tenth embodiment has an advantage that an arbitrary storage device 10 can be used as a high-speed layer (higher layer) and the efficiency of the whole system can be improved.
- a comparative example has a hierarchical structure constituted by several types of storage devices as shown in FIG. 18 .
- a high-speed interface SSD is used as a high-speed layer (higher layer).
- a low-speed interface SSD or a high-speed HDD is used as a medium-speed layer (medium layer).
- a low-speed HDD is used as a low-speed layer (lower layer).
- In the hierarchical storage architecture of the comparative example, however, the physical devices and interfaces differ from layer to layer. Therefore, it is impossible to increase the speed of a specified storage device. In addition, even if data that must be frequently accessed is stored in the high-speed layer (higher layer), accesses do not necessarily center only on the data stored in the higher layer. As described above, the information processing system of the comparative example has a disadvantage in that the efficiency of the whole system is hardly improved by forming the hierarchical structure.
- the eleventh embodiment relates to a case where each storage device 10 determines its own performance. In the description below, the description overlapping the tenth embodiment is omitted.
- the information processing system 100 of the eleventh embodiment is different from that of the tenth embodiment in that the NAND memory 11 comprises a table T 2 and each storage device 10 comprises a self-performance determination unit 14 .
- In the table (second table) T 2 of the NAND memory 11 , an actual characteristic (PP 0 ′ to PP 9 ′) of the storage device 10 is stored.
- actual characteristic PP 0 ′ of SSD 0 is stored in table T 2 of SSD 0 .
- Table T 2 is updated by the memory controller 12 at arbitrary intervals. The storage location of table T 2 is not limited to the NAND memory 11 .
- the self-performance determination unit 14 determines the performance of the storage device 10 under the control of the memory controller 12 and notifies the memory controller 12 of a result of the determination. For example, when receiving a command eCOM, the self-performance determination unit 14 of SSD 0 refers to table T 2 and determines the minimum power component P 0 ′ required for the operation of SSD 0 based on the actual characteristic PP 0 ′. The self-performance determination unit 14 of SSD 0 further notifies the memory controller 12 of the determined power component P 0 ′.
- a distribution power determination process of the information processing system 100 of the eleventh embodiment having the above-described structure is described with reference to FIG. 20 .
- the description below is based on the assumption that a specified SSD 5 is intensively accessed by the external devices 220 and the CPU 222 of the host 20 determines that the larger load (larger power) is necessary for SSD 5 , as an example.
- In step S 21 , the CPU 222 of the host 20 transmits an extended command eCOM to each storage device 10 to detect the minimum power required for the operation of each SSD.
- In step S 22 , in response to the command eCOM, the self-performance determination unit 14 of each storage device 10 refers to table T 2 and determines the minimum power component (P 0 ′ to P 9 ′) required for the operation based on the actual characteristic (PP 0 ′ to PP 9 ′) stored in table T 2 .
- In step S 23 , the self-performance determination unit 14 of each storage device 10 refers to table T 2 and calculates the performance (S 0 ′ to S 9 ′) expected from the calculated power component (P 0 ′ to P 9 ′) based on the characteristic (PP 0 ′ to PP 9 ′).
- In step S 24 , the memory controller 12 of each storage device 10 transmits the calculated power component (P 0 ′ to P 9 ′) and the expected performance (S 0 ′ to S 9 ′) to the host 20 as a status signal ReS.
- In step S 25 , the power distribution determination unit 223 of the host 20 determines allowable power components P 0 ′′ to P 4 ′′ and P 6 ′′ to P 9 ′′ and power component P 5 ′′ to be supplied to SSD 5 under a load, based on the received status signals ReS (P 0 ′ to P 9 ′ and S 0 ′ to S 9 ′).
- In step S 26 , the CPU 222 of the host 20 notifies the storage devices 10 of the determined power components P 0 ′′ to P 9 ′′.
- In step S 27 , the storage devices 10 operate based on power components P 0 ′′ to P 9 ′′ notified by the host 20 .
- each storage device 10 may determine its own performance and power consumption.
- the twelfth embodiment relates to a case where the host notifies each storage device 10 of required performance.
- In the description below, the description overlapping the above-described embodiments is omitted.
- The information processing system 100 of the twelfth embodiment is different from those of the tenth and eleventh embodiments in that the host 20 further notifies each storage device 10 of required performance (S 0 ′′ to Sn- 1 ′′).
- performance S 0 ′′ is performance expected from the calculated power component P 0 ′′ based on characteristic PP 0 ′.
- the power distribution determination unit 223 of the host 20 calculates power components P 0 ′′ to P 9 ′′ based on characteristics PP 0 ′ to PP 9 ′.
- the power distribution determination unit 223 calculates performances S 0 ′′ to S 9 ′′ expected from the calculated power components P 0 ′′ to P 9 ′′ based on the characteristics PP 0 ′ to PP 9 ′.
- the storage devices 10 are notified of the calculated performances S 0 ′′ to S 9 ′′ together with power components P 0 ′′ to P 9 ′′.
- the host 20 may notify the storage devices 10 of the calculated performances S 0 ′′ to S 9 ′′ instead of power components P 0 ′′ to P 9 ′′.
- the performances S 0 ′′ to S 9 ′′ may be calculated by the storage devices 10 instead of the host 20 .
- the storage devices 10 can be directly controlled based on the required performances S 0 ′′ to S 9 ′′. Therefore, each required performance can be achieved more directly.
- the thirteenth embodiment relates to a case where the total amount of supplied power Pmax is variable. In the description below, the description overlapping the above-described embodiments is omitted.
- An information processing system 100 A of the thirteenth embodiment is different from the first to twelfth embodiments in that the maximum value of the total power Pmax supplied to the information processing system 100 A can be varied by a control signal CS 50 notified to a power supply unit 50 A by the host 20 .
- Power supply unit 50 A also supplies power to an information processing system 100 B different from information processing system 100 A.
- the CPU 222 of the host 20 transmits a control signal CS 50 to power supply unit 50 A to increase the maximum value of power Pmax.
- power supply unit 50 A increases the maximum value of power Pmax and supplies information processing system 100 A with the increased power under the control of the host 20 .
- the thirteenth embodiment has an advantage that the efficiency of the system can be further improved.
- the information processing system 100 is not limited to the first to thirteenth embodiments and may be changed as appropriate as described below.
- the power consumption of the storage devices 10 is not necessarily determined by using the power/performance characteristics.
- A table (third table) T 3 , in which logs (operation histories) of SSD 0 to SSD 9 constituting the storage devices 10 are recorded, may be provided.
- In table T 3 , the power supplied to each of SSD 0 to SSD 9 constituting the storage devices 10 and the performance achieved by that power are recorded.
- (S 01 , P 01 ), (S 02 , P 02 ), . . . are recorded as a log of SSD 0 .
- Logs of the other SSD 1 to SSD 9 are recorded in the same manner.
- the host 20 or the storage device 10 may determine predetermined power and performance from the characteristic by referring to table T 3 . Of course, both the characteristics and the logs may be used.
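One way to use the logs of table T 3 is to fit a characteristic to the recorded (performance, power) pairs, for example with a least-squares line through the origin. The patent prescribes no particular fitting method; this is an illustrative sketch with made-up data:

```python
# Fitting a power/performance characteristic to the (performance, power)
# pairs logged in table T3, e.g. (S01, P01), (S02, P02), ... A simple
# least-squares line through the origin; the patent prescribes no
# particular fitting method, and the data below is made up.
def fit_slope(log):
    # slope a minimizing sum((p - a * s) ** 2) over logged (s, p) pairs
    num = sum(s * p for s, p in log)
    den = sum(s * s for s, _ in log)
    return num / den

log = [(10.0, 5.0), (20.0, 10.0), (40.0, 20.0)]   # (performance, power)
a = fit_slope(log)
print(a)           # -> 0.5
print(a * 30.0)    # -> 15.0 (power predicted for performance 30)
```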
- The first to third tables T 1 to T 3 are described as an example, but the form is not limited to a table form.
- a predetermined formula, function and the like may be used.
- the means for distributing power is not limited to supplying a specified storage device with surplus allowable power subtracted from the total power Pmax, and may be changed as necessary.
- the host 20 may distribute power to the storage devices 10 based on the status of all the storage devices 10 such that a specified process at a specified time is completed first.
- The power consumed by the storage devices 10 depends not only on the performance and the operation status of the storage devices 10 but also on, for example, the environment (temperature, etc.) of the storage devices 10 . Therefore, the temperature and the amount of heat of the storage devices 10 may also be detected as an index of the performance of the storage devices 10 .
- the information processing system 100 comprises the storage devices 10 and the host 20 which controls the storage devices 10 .
- SSDs are described as an example of the storage devices 10 .
- the storage devices 10 can be attached to the host 20 in a data center and a cloud computing system of an enterprise.
- The storage devices 10 can be accessed from an external device 220 , such as an external server, via the network 210 under the control of the host 20 . Therefore, SSD 0 to SSD 9 may be enterprise SSDs (eSSDs).
- SSD 0 to SSD 9 are not limited to enterprise use.
- SSD 0 to SSD 9 can of course also be applied as storage media of consumer electronic devices such as notebook computers and tablets.
Abstract
Description
- This application claims the benefit of U.S. Provisional Application No. 62/035,243, filed Aug. 8, 2014, the entire contents of which are incorporated herein by reference.
- Embodiments described herein relate generally to a memory system, a host device and an information processing system.
- There is a solid-state drive (SSD) including a nonvolatile semiconductor memory as a storage medium and having the same interface as a hard disk drive (HDD).
- FIG. 1 is a block diagram illustrating an outline of the embodiments.
- FIG. 2 is a perspective view illustrating an information processing system in FIG. 1 .
- FIG. 3 is a block diagram illustrating an information processing system according to a first embodiment.
- FIG. 4 is a diagram illustrating a relationship between time and the number of empty blocks as to garbage collection of an information processing system according to a second embodiment.
- FIG. 5 is a block diagram illustrating an information processing system according to a third embodiment.
- FIG. 6 is a diagram schematically illustrating blocks according to an eighth embodiment and blocks of a comparative example.
- FIG. 7 is a block diagram illustrating an information processing system according to a ninth embodiment.
- FIG. 8 is a block diagram showing a general structure of an information processing system according to the tenth embodiment.
- FIG. 9 is a block diagram showing a detailed structure of the information processing system according to the tenth embodiment.
- FIG. 10 is a table showing a table T1 according to the tenth embodiment.
- FIG. 11 is a graph showing an example of theoretical power/performance characteristics of the information processing system according to the tenth embodiment.
- FIG. 12 is a flowchart showing a power distribution determination process according to the tenth embodiment.
- FIG. 13 is a graph showing an example of actual power/performance characteristics of the information processing system according to the tenth embodiment.
- FIG. 14 is a table showing the updated table T1.
- FIG. 15 is a view showing power distribution to be changed.
- FIG. 16 is a view showing changed power distribution.
- FIG. 17 is a view schematically showing a storage architecture according to the tenth embodiment.
- FIG. 18 is a view schematically showing a storage architecture according to a comparative example.
- FIG. 19 is a block diagram showing a detailed structure of an information processing system according to the eleventh embodiment.
- FIG. 20 is a flowchart showing a power distribution determination process according to the eleventh embodiment.
- FIG. 21 is a block diagram showing a general structure of an information processing system according to the twelfth embodiment.
- FIG. 22 is a block diagram showing a general structure of an information processing system according to the thirteenth embodiment.
- FIG. 23 is a table showing a table T3 according to modified example 1.
- FIG. 24 is a perspective view showing an example of the exterior of the information processing system according to the first to thirteenth embodiments and modified example 1.
- Various embodiments will be described hereinafter with reference to the accompanying drawings.
- In general, according to one embodiment, a memory system includes a nonvolatile memory and a controller which controls the nonvolatile memory. The controller notifies to an outside an extensive signal which indicates a predetermined state of the nonvolatile memory or the controller.
- In this specification, some components are expressed by two or more terms. These terms are merely examples and these components may be expressed by another or other terms. In addition, other components which are not expressed by two or more terms may be expressed by another or other terms.
- Also, the drawings are merely examples, and may differ from when the embodiments are actually realized in terms of, for example, the relationship between thickness and planar dimension and the ratio of thickness of layers. Further, in the drawings, the relationship or ratio of dimensions may be different from figure to figure.
- (Outline)
- To begin with, the outline of the embodiments will be briefly described with reference to
FIG. 1, before describing each embodiment. A solid-state drive (SSD) is given as an example of a memory system 10. - As shown, an
information processing system 100 includes a plurality ofSSDs 10 and ahost device 20. - Each of the plurality of
SSDs 10 includes a NAND flash memory (NAND memory) 11 and anSSD controller 12. - The
NAND memory 11 is a nonvolatile memory physically including a plurality of chips (for example, five chips), although not shown. The NAND memory 11 is constituted by a plurality of physical blocks, each having a plurality of memory cells arranged at the intersections of word lines and bit lines. In the NAND memory 11, data is erased collectively per physical block. That is, the physical block is a unit of data erasure. Data write and data read are performed per page (word line) in each block. - The SSD controller (memory controller) 12 controls the whole operation of the
SSD 10. For example, theSSD controller 12 controls access (data read, data write, data delete, etc.) to theNAND memory 11 in accordance with an instruction (request or command COM) from thehost device 20. - The
host device 20 transmits, for example, a read command COMR and an address ADD to each SSD 10. A control unit (for example, a CPU, processor or MPU), which is not shown, of the host device 20 receives from the SSD 10 read data DATA corresponding to a request of the read command COMR. - In addition to the above-mentioned structure and operation, the following is performed in the embodiments.
- Firstly, the control unit of the
host device 20 issues to the SSD 10 an extensive command eCOM, which is for deliberately (intentionally) detecting various states (for example, a state of a bad block of the NAND memory 11) of the SSD 10 and is defined differently from the above-mentioned read command COMR and a write command COMW. The signal is not limited to the command eCOM and may be a different extensive (or extended) predetermined signal (information, request, instruction, etc.). - Secondly, the
SSD controller 12 of the SSD 10 returns its own state (that of the SSD 10) to the host device 20 as an extensive status signal ReS, based on the received extensive command eCOM. The signal is not limited to the status signal ReS and may be a different extensive (or extended) predetermined signal (information, return, response, etc.). - Therefore, the
host device 20 can detect various states of theSSD 10 based on the returned extensive status signal ReS. This enables thehost device 20 to improve the detected state of theSSD 10 as necessary. - Note that the above-mentioned extensive command eCOM and extensive status signal ReS may be transmitted in any order. That is, it is possible to firstly transmit an extensive predetermined signal from the
SSD 10 to thehost device 20 and secondly transmit the extensive predetermined signal from thehost device 20 to theSSD 10. - (Exterior)
- Next, an exterior of the
information processing system 100 will be briefly described with reference toFIG. 2 , before describing each embodiment. - The
SSD 10 as shown is, for example, a relatively small module, and has an outside dimension of, for example, approximately 120 mm×130 mm. Note that the size and dimension of theSSD 10 may not be limited thereto but may be appropriately modified to various ones. Also, theSSD 10 can be used by being mounted to the server-like host device 20 in, for example, a data center or a cloud computing system operated by a company (enterprise). Therefore, theSSD 10 may be an enterprise SSD (eSSD). - The
host device 20 includes a plurality of connectors (for example, slots) 30 which are opened upward, for example. Each connector 30 is, for example, a serial attached SCSI (SAS) connector. This SAS connector enables the host device 20 and each SSD 10 to perform high-speed communication with each other by means of a dual port at 6 Gbps. Note that each connector 30 is not limited thereto and may be, for example, PCI Express (PCIe) or NVM Express (NVMe). - Also, the plurality of
SSDs 10 are mounted to the connectors 30 of the host device 20, respectively, to be supported side by side with each other in a posture of standing in a substantially vertical direction. According to such a structure, it is possible to compactly mount the plurality of SSDs 10 and to reduce the host device 20 in size. Further, the shape of each SSD 10 of the present embodiment is the 2.5-inch small form factor (SFF). Such a shape makes the SSD 10 compatible with an enterprise HDD (eHDD) in shape and realizes easy system compatibility with an eHDD. - Note that the
SSD 10 is not limited to enterprise use. For example, the SSD 10 is certainly applicable as a storage medium of consumer electronic devices such as notebook portable computers and tablet devices. - Subsequently, each embodiment will be described in detail.
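Before turning to the individual embodiments, the extensive command (eCOM) and extensive status signal (ReS) exchange outlined above can be sketched as follows. This is a minimal illustrative sketch; the class names, state fields and values are assumptions for illustration and are not part of any real SSD interface.

```python
# Hypothetical sketch of the eCOM/ReS exchange between the host device 20
# and the SSD 10 described in the outline. All names are illustrative only.

class SsdController:
    """SSD controller 12: answers an extensive command with its own state."""
    def __init__(self):
        # Example internal states a controller might expose (assumed values).
        self.state = {"bad_blocks": 3, "gc": "idle", "power_w": 4.5}

    def handle_extensive_command(self, ecom_kind):
        # Return an extensive status signal ReS for the requested state.
        return {"kind": ecom_kind, "status": self.state.get(ecom_kind)}

class HostDevice:
    """Host device 20: issues an eCOM and reads back the returned ReS."""
    def detect_state(self, ssd, ecom_kind):
        res = ssd.handle_extensive_command(ecom_kind)  # extensive status ReS
        return res["status"]

host = HostDevice()
ssd = SsdController()
print(host.detect_state(ssd, "bad_blocks"))  # -> 3
```

As noted above, either side may transmit its extensive predetermined signal first; the sketch only shows the host-initiated direction.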
- To begin with, the first embodiment will be described with reference to
FIG. 3 . The first embodiment relates to an example of setting of reducing the time of error correction when performing data read from theSSD 10. The following gives an example where error correction is performed by means of a BCH code at the time of data read from theSSD 10. - The
host device 20 issues to theSSD 10 an extensive command of error correction (not shown). An acceptable latency time (acceptable latency) of error correction is added to the command as an attribute. The processing time may be designated qualitatively such as “as soon as possible” or quantitatively. - Then, the
SSD 10 switches a switch SW1 by the received attribute signal and transmits read data to thehost device 20. - More specifically, when the acceptable latency time is short, the
SSD 10 switches the switch SW1 to the upper column side (fast decoder [weak decoder]) of theNAND memory 11 inFIG. 3 . Read data RDF is then transmitted to thehost device 20 from the fast decoder, where the amount of error correction is relatively small. - On the other hand, when the acceptable latency time is long, the
SSD 10 switches the switch SW1 to the lower column side (strong decoder [slow decoder]) of theNAND memory 11. Error correction is performed by means of a BCH code more intensively for read data RDS on the lower side (strong decoder [slow decoder]) of theNAND memory 11, where the amount of error correction is relatively large and intensive error correction is required. The read data RDS is then transmitted to thehost device 20 in a similar manner. - If an error cannot be corrected, the
SSD 10 returns the error to thehost device 20. - As described above, according to the structure and operation of the first embodiment, it is possible to perform data read based on a latency time accepted by the
host device 20. It is therefore possible to reduce the read time of error correction when performing a read operation. In other words, it is possible to make a setting in a read operation so that an error is returned without spending more time on correction than necessary when an error occurs. - Note that it is desirable that when an error occurs in one of the
SSDs 10, read is performed from another SSD 10, etc., because the host device 20 makes a predetermined setting in advance, such as making data redundant across the plurality of SSDs 10. - Next, the second embodiment will be described with reference to
FIG. 4 . The second embodiment relates to an example of garbage collection (GC). - To begin with, the
host device 20 issues to theSSD 10 an extensive command of garbage collection (not shown). Next, theSSD 10 returns to the host device 20 a state of garbage collection as an extensive status signal (not shown), based on an extensive command of garbage collection. - Then, the
host device 20 performs control to make theSSD 10 perform garbage collection and secures the number of empty blocks at idle time, etc., of thehost device 20, based on the received extensive status signal. - Note that when a data write operation is performed from the
host device 20 during garbage collection, theSSD 10 autonomously stops garbage collection in order to perform data write and autonomously resumes garbage collection after completing data write. - However, when the number of empty blocks decreases markedly to be below the minimum amount necessary for data write, garbage collection is performed in advance.
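The stop-and-resume behavior just described can be pictured as a small state machine. The following is an illustrative sketch under assumed names; the one-block-per-step granularity and the block counts are arbitrary choices for illustration, not part of the embodiment.

```python
# Illustrative sketch of the second embodiment's GC control: GC runs while
# the host is idle, stops autonomously when a write arrives, and resumes
# after the write completes.

class GarbageCollector:
    def __init__(self, empty_blocks, target):
        self.empty_blocks = empty_blocks  # current number of empty blocks
        self.target = target              # number to secure at idle time
        self.running = False

    def host_idle(self):                  # t0: GC is started at idle time
        if self.empty_blocks < self.target:
            self.running = True

    def host_write(self):                 # t1/t3: a write stops GC and
        self.running = False              # consumes one empty block
        self.empty_blocks -= 1

    def write_done(self):                 # t2/t4: GC resumes after the write
        if self.empty_blocks < self.target:
            self.running = True

    def gc_step(self):                    # one GC step frees one block
        if self.running:
            self.empty_blocks += 1
            if self.empty_blocks >= self.target:
                self.running = False      # t5: enough empty blocks secured

gc = GarbageCollector(empty_blocks=2, target=4)
gc.host_idle(); gc.gc_step()              # idle time: 2 -> 3 empty blocks
gc.host_write(); gc.write_done()          # write consumes one: back to 2
gc.gc_step(); gc.gc_step()                # resume GC: 2 -> 4, then stop
print(gc.empty_blocks, gc.running)        # -> 4 False
```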
- For example, as shown in
FIG. 4 , at time t0, thehost device 20 issues a command to cause theSSD 10 to perform garbage collection and increases the number of empty blocks at idle time, based on the received extensive status signal. - Next, at time t1, when the idle state ends, the
host device 20 stops garbage collection to perform a data write operation to theSSD 10. - Then, at time t2, the
host device 20 resumes garbage collection after completing a data write operation. Note that between times t1 and t2, the number of secured empty blocks decreases in response to the data write operation. - Subsequently, at time t3, when the
host device 20 resumes the data write operation to theSSD 10, theSSD 10 stops garbage collection that has been performed. - Thereafter, at time t4, when a data write operation ends, the
SSD 10 resumes garbage collection that has been stopped. - After that, at time t5, when the sufficient number of empty blocks is secured, the
SSD 10 ends garbage collection. - As described above, according to the structure and operation of the second embodiment, it is possible to perform garbage collection to increase the number of empty blocks in advance, and secure it at free time such as idle time. Therefore, when the
SSD 10 is busy, garbage collection (GC) is less likely to occur and the average response time can be reduced. - Next, the third embodiment will be described with reference to
FIG. 5 . The third embodiment relates to an example of controlling a data write operation, i.e., an example of controlling the kinds of a NAND memory, which is a write destination, according to an attribute of write data. - As shown, the
NAND memory 11 of the third embodiment includes plural kinds of single-level cell (SLC) 111, multi-level cell (MLC) 112, triple-level cell (TLC) 113 and quad-level cell (QLC) 114. - Also, the
SSD controller 12 of the third embodiment includes acontrol unit 121 which writes data separately for the above-mentioned kinds (111 to 114) of theNAND memory 11. - In the above-mentioned structure, the
host device 20 firstly issues to the SSD 10 an extensive write command to which an attribute such as data update frequency is added. Secondly, the SSD 10 returns an extensive status signal (not shown) to the host device 20 in response to the extensive write command. - Then, the
write control unit 121 of theSSD 10 writes write data in the above-mentioned kinds (111 to 114) of theNAND memory 11, based on the above-mentioned received extensive write command. - For example, when write data is meta data, etc., (for example, time stamp data) which is outside a file, the
write control unit 121 writes the data to theSLC 111 based on the received extensive write command. This is because the meta data, etc., is rewritten frequently. - Also, when write data is user data, etc., which is inside a file, the
write control unit 121 writes the write data to theMLC 112, theTLC 113 and theQLC 114, based on the received extensive write command. This is because the user data, etc., is rewritten infrequently. - As described above, in the third embodiment, the
host device 20 issues an extensive write command with an attribute such as data update frequency. It is thereby possible to change the kinds of theNAND memory 11 to be used as necessary according to data attribute and to improve data write efficiency. - Next, the fourth embodiment will be described. The fourth embodiment relates to an example of distributing power to the
SSD 10. - To begin with, the
host device 20 issues to theSSD 10 an extensive command about consumption power. - Next, the
SSD 10 returns to the host device 20 information (actual results and predictions), which indicates the correspondence relationship between consumption power and performance, as an extensive status signal based on the extensive command about the consumption power. - Then, based on the received extensive status signal, the
host device 20 determines distribution (budget) of consumption power to eachSSD 10 in view of the performance of eachSSD 10 within the acceptable range of the total consumption power of the plurality of theSSDs 10, and notifies eachSSD 10 of the determined result. - As described above, according to the fourth embodiment, it is possible to distribute power that remains in one of the
SSDs 10 to anotherSSD 10 based on an attribute of theSSD 10. It is therefore possible to use redundant power within the acceptable range of the total consumption power and to improve the whole performance of the plurality ofSSDs 10. - Next, the fifth embodiment will be described. The fifth embodiment relates to an example of dividing the
SSDs 10 into necessary groups (namespaces [partitions]) to perform control. - To begin with, the
host device 20 issues to theSSD 10 an extensive command of predetermined grouping (not shown). Next, theSSD 10 returns to thehost device 20 an extensive status signal (not shown) that indicates its own state (SSD 10), in response to the extensive command of the predetermined grouping. - The
host device 20 divides the SSDs 10 into predetermined groups (namespaces) and performs control necessary for each of the groups, based on the received extensive status signal. - For example, the
host device 20 performs the following controls: - 1) To perform the garbage collection (GC) control described in the second embodiment for each of the predetermined groups. In other words, the
host device 20 performs control for theSSD 10 so that garbage collection (GC) in one of the groups does not affect the performance of another group; - 2) To set, for each group, a management unit of a lookup table (LUT) of a flash translation layer (FTL);
- 3) To set, for each group, the presence or absence and the amount of thin-provisioning (showing larger capacity than the capacity of the
NAND memory 11 to a user); - 4) To set, for each group, the presence or absence and the amount of over-provisioning (showing smaller capacity than the capacity of the
NAND memory 11 to a user); - 5) To set, for each group, control as to the kinds of the
NAND memory 11 described in the above-mentioned third embodiment; and - 6) To divide groups into hot data (frequently-updated data) and cold data (infrequently-updated data) and to allocate the blocks of the
NAND memory 11 for each group. By thus grouping in advance, it is possible to perform wear leveling for each group and to reduce the frequency of the garbage collection. - As described above, according to the fifth embodiment, it is possible to divide the SSDs 10 into groups as necessary and to improve performance.
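The per-group controls listed above, in particular items 5) and 6), can be sketched with a small per-namespace settings table and a per-group block allocator. The field names and values below are assumptions made for illustration only.

```python
# Illustrative per-group (namespace) settings combining control 5) (kind of
# NAND memory per group) and control 6) (hot/cold data separation). The
# fields and values are assumed for this sketch, not defined by the patent.
namespaces = {
    "ns0": {"data": "hot",  "nand_kind": "SLC", "lut_unit_kib": 4},
    "ns1": {"data": "cold", "nand_kind": "QLC", "lut_unit_kib": 64},
}

def allocate_block(ns, free_blocks, allocation):
    # Physical blocks are allocated per group, so hot and cold data never
    # share a block; deleting a whole group then frees whole blocks, which
    # keeps wear leveling per group and reduces garbage collection.
    block = free_blocks.pop()
    allocation.setdefault(ns, []).append(block)
    return block

free_blocks = [0, 1, 2, 3]
allocation = {}
allocate_block("ns0", free_blocks, allocation)   # hot data block
allocate_block("ns1", free_blocks, allocation)   # cold data block
allocate_block("ns0", free_blocks, allocation)   # more hot data
print(allocation)                                # -> {'ns0': [3, 1], 'ns1': [2]}
```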
- Next, the sixth embodiment will be described. The sixth embodiment relates to an example of allocating the physical blocks of the
NAND memory 11 by each attribute. - As to the control of an extensive command and status signal, a detailed description is omitted as being substantially the same as above.
- In the sixth embodiment, the
host device 20, etc., adds an attribute corresponding to file data, etc., and allocates physical blocks of the NAND memory 11 by each attribute. Under the above-mentioned control, since the whole of a physical block of the NAND memory 11 becomes empty when the data having the same attribute is deleted simultaneously, it is possible to reduce garbage collection (GC). - Next, the seventh embodiment will be described. The seventh embodiment relates to an example of providing advantageous information, etc.
- As to the control of an extensive command and status signal, a detailed description is omitted as being substantially the same as above.
- In the seventh embodiment, advantageous information for the host device 20, such as the write-amplification factor (WAF), information of used blocks and information of empty blocks, is transmitted from the SSD 10 to the host device 20 on a regular basis. The host device 20 performs necessary control based on the transmitted advantageous information. - Next, the eighth embodiment will be described with reference to
FIG. 6 . The eighth embodiment relates to an example of providing NAND block boundary information. - As to the control of an extensive command and status signal, a detailed description is omitted as being substantially the same as above.
- In the eighth embodiment, the
SSD 10 shows the host device 20 information indicating "how many more times of writing would fill the NAND blocks." Based on the information shown by the SSD 10, the host device 20 can perform data write per physical block of the NAND memory 11 until it reaches an appropriate state. It is therefore possible to reduce garbage collection. - For example, as shown in
FIG. 6 (a), the host device 20 cannot recognize the write state of the NAND blocks in the comparative example. Therefore, data having different file names, etc., is written together into the physical blocks of the NAND memory. This increases the WAF caused by garbage collection, etc. - On the other hand, as shown in
FIG. 6 (b), thehost device 20 recognizes the write state of the NAND blocks in the eighth embodiment. Therefore, it is possible to write to the physical blocks of theNAND memory 11 separately for predetermined information such as file name. This reduces WAF caused by garbage collection, etc. - Next, the ninth embodiment will be described. The ninth embodiment relates to an example of dynamic resizing of the
SSD 10. The portions that substantially overlap with the above-mentioned embodiments will not be described. - [9-1] Structure and Operation
- To begin with, the structure and operation of the ninth embodiment will be described with reference to
FIG. 7 . - In the
information processing system 100, the host device 20 designates a place (address) of data by a logical block address (LBA) when performing read and write of the data to the SSD 10. On the other hand, the SSD 10 manages mapping from an LBA to a physical block address (PBA) in a lookup table (LUT) 123. The
- [9-1-1]
SSD 10 - As shown, the
SSD controller 12 of theSSD 10 of the ninth embodiment includes a badblock examination unit 121, a storage capacityinformation reception unit 122 and the lookup table (LUT) 123. - The bad
block examination unit 121 receives an extensive command of a bad block from the host device 20, responds thereto, and returns its state to the host device 20 as an extensive status signal ReS9 of a bad block. Further, the bad block examination unit 121 notifies the signal to the storage capacity information reception unit 122. - The bad
block examination unit 121 can adopt two means of notifying the increase in number of bad blocks from theSSD 10 to thehost device 20. - The first means is to add the number of bad blocks to the statistic information of the
SSD 10. If this information is read by, for example, polling from thehost device 20 on a regular basis, it is possible to make an indirect notification. - The second means is to make a direct notification from the
SSD 10 to thehost device 20 by means of a callback mechanism. In detail, when the number of bad blocks increases to the fixed predetermined number or the number defined by thehost device 20, the protocol of the interface of theSSD 10 is extended so as to issue a notification. - The storage capacity
information reception unit 122 receives the above-mentioned signal from the badblock examination unit 121 and control from thehost device 20, and updates information of theLUT 123. - The correspondence relationship of an LBA and a PBA is mapped on the LUT (logical physical address conversion table) 123.
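The two notification means described above (statistics read by host polling, and a direct callback when the bad-block count reaches a fixed or host-defined number) can be sketched as follows. The class name, the threshold value and the callback shape are assumptions for illustration.

```python
# Sketch of the two bad-block notification means of the ninth embodiment:
# (1) statistic information the host polls on a regular basis, and
# (2) a direct notification fired when the count reaches a threshold.

class BadBlockExaminationUnit:
    def __init__(self, threshold, callback):
        self.bad_blocks = 0
        self.threshold = threshold    # fixed, or defined by the host device
        self.callback = callback      # assumed callback-mechanism hook

    def statistics(self):
        # First means: the host reads this by polling (indirect notification).
        return {"bad_blocks": self.bad_blocks}

    def mark_bad_block(self):
        # Second means: direct notification once the threshold is reached.
        self.bad_blocks += 1
        if self.bad_blocks >= self.threshold:
            self.callback(self.bad_blocks)

notifications = []
unit = BadBlockExaminationUnit(threshold=3, callback=notifications.append)
for _ in range(4):
    unit.mark_bad_block()
print(unit.statistics(), notifications)  # -> {'bad_blocks': 4} [3, 4]
```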
- [9-1-2]
Host Device 20 - The
host device 20 of the ninth embodiment includes a bad blockinformation reception unit 211, a storagecapacity determination unit 212 and a usecapacity reduction unit 213. - The bad block
information reception unit 211 receives the above-mentioned extensive status signal ReS9 from theSSD 10 and transmits bad block information to the storagecapacity determination unit 212. - The storage
capacity determination unit 212 receives the bad block information from the bad blockinformation reception unit 211, determines storage capacity, and notifies the usecapacity reduction unit 213 and the storage capacityinformation reception unit 122 of theSSD 10. In other words, the storagecapacity determination unit 212 notifies theSSD 10 of the decrease in use capacity in accordance with the determination of use capacity. - The storage
capacity determination unit 212 can adopt any of three means of notifying the decrease in use capacity from thehost device 20 to theSSD 10. - The first means is to create in the SSD 10 a new command of setting the maximum value (user capacity) of an LBA and to issue this command from the
host device 20. On receipt of the command, the mapping of the LBA exceeding the maximum value can be released in the LUT 123 on the side of the SSD 10. - The second means is to use a TRIM or UNMAP command. By means of these commands, it is possible to notify an LBA that is not used by the
host device 20. On receipt of the command, the mapping of the LBA of theLUT 123 can be released on the side of theSSD 10. - The third means is to extend a bad sector designation command (WRITE_UNCORRECTABLE_EXT). This command, when reading and writing the designated LBA thereafter, causes the
SSD 10 to return an error. In addition, extension is made so as to release the mapping of the LBA. - The use
capacity reduction unit 213 reduces capacity to be used in accordance with the storage capacity received from the storagecapacity determination unit 212. - The use
capacity reduction unit 213 can adopt either of two means of reducing use capacity of thehost device 20 on receipt of the above-mentioned notification from the storagecapacity determination unit 212. - The first means is to reduce use capacity by deleting data that can be deleted such as cache data (stored data in which the same data overlaps in another SSD [storage device] 10 and an HDD).
- The second means is to transfer data from one of the
SSD 10 to anotherSSD 10 having allowance to reduce use capacity of theformer SSD 10, when a combination of the plurality ofSSDs 10 is used as in a logical volume manager (LVM). - [9-2] Advantageous Effect
- As described above, according to the ninth embodiment, it is possible to obtain at least an advantageous effect of prolonging the life of the
SSD 10 by reducing capacity used for thehost 20, when the number of bad blocks of theNAND memory 11 increases, in addition to the above-mentioned outline and the effects of the embodiments. - In the following, the comparative example and the ninth embodiment will be described.
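The capacity-reduction path of the ninth embodiment can be summarized with a minimal lookup-table sketch: releasing mappings above a new maximum LBA corresponds to the first means (setting the user capacity), and unmapping an individual LBA corresponds to the TRIM/UNMAP means. The class and method names are assumptions made for illustration.

```python
# Minimal sketch of LUT 123 and the release of LBA mappings used by the
# capacity-reduction means of the ninth embodiment. Names are illustrative.

class LookupTable:
    def __init__(self):
        self.lba_to_pba = {}

    def write(self, lba, pba):
        self.lba_to_pba[lba] = pba        # a used LBA maps to a used block

    def set_max_lba(self, max_lba):
        # First means: a new command sets the maximum LBA (user capacity);
        # mappings of LBAs exceeding the maximum value are released.
        for lba in [l for l in self.lba_to_pba if l > max_lba]:
            del self.lba_to_pba[lba]

    def unmap(self, lba):
        # Second means: TRIM/UNMAP releases the mapping of an unused LBA.
        self.lba_to_pba.pop(lba, None)

lut = LookupTable()
for lba, pba in [(0, 7), (1, 3), (9, 5)]:
    lut.write(lba, pba)
lut.set_max_lba(4)                         # releases LBA 9
lut.unmap(1)                               # releases LBA 1
print(sorted(lut.lba_to_pba.items()))      # -> [(0, 7)]
```

Each released mapping returns a block toward the empty-block pool, which is how reducing use capacity secures empty blocks and prolongs the life of the SSD 10.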
- As described above, an SSD is a storage device electrically connected to a host device (for example, a computer) to perform read and write of data from the host device.
- In an SSD, a NAND memory is used as a nonvolatile memory and is managed per block. The block of a NAND memory is classified into three types of blocks including a bad block that cannot be used due to, for example, manufacturing defect or life, a used block that stores data written from a host, and an empty block that is not used. When the number of bad blocks and used blocks increases, the number of empty blocks decreases accordingly.
- The storage capacity (use capacity) of a storage device used by a host device is limited by user capacity. The user capacity is the remaining capacity in which the capacity of allowance (over-provisioning) is subtracted from the capacity (physical capacity) corresponding to all the blocks of an SSD.
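The capacity relationships stated above can be written out directly: user capacity is the physical capacity minus the over-provisioning allowance, and the empty blocks are what remain after subtracting bad and used blocks. The block counts and block size below are arbitrary example values, not figures from the embodiment.

```python
# Worked example of the capacity arithmetic described above.
# All numbers are arbitrary illustrative values.

BLOCK_SIZE_MIB = 4
total_blocks = 1000                 # all blocks of the SSD
over_provisioning_blocks = 70       # allowance not shown to the user

physical_capacity = total_blocks * BLOCK_SIZE_MIB
user_capacity = (total_blocks - over_provisioning_blocks) * BLOCK_SIZE_MIB

# Block bookkeeping: empty blocks shrink as bad and used blocks grow.
bad_blocks = 50
used_blocks = 800
empty_blocks = total_blocks - bad_blocks - used_blocks

print(physical_capacity, user_capacity, empty_blocks)  # -> 4000 3720 150
```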
- In an SSD, when the number of empty blocks decreases, the above-mentioned WAF (write-amplification factor) increases and the life and response speed decrease markedly.
- When the number of empty blocks decreases because the capacity used by a host device increases and the number of used blocks increases, the host device recognizes its use capacity. It is therefore possible to manage the balance between the decrease of life, response speed, etc., and use capacity.
- However, when the number of bad blocks increases and the number of empty blocks decreases, a host device cannot recognize it. Therefore, when the number of bad blocks increases, there is a tendency that the number of empty blocks decreases while a host device does not recognize it and the life and response speed of an SSD decreases accordingly.
- It is possible to curb this tendency by increasing the above-mentioned over-provisioning and reducing user capacity. However, if user capacity is small from the beginning, user's convenience and marketability are reduced.
- In comparison with the above-mentioned comparative example, according to the ninth embodiment, at least the bad
block detection unit 121, which receives an extensive command of a bad block and notifies the increase in number of bad blocks from theSSD 10 to thehost 20 as the information ReS9, is included. Thehost device 20, which has received the notification, reduces use capacity by means of the usecapacity reduction unit 213 so as not to cause any problem even when use capacity decreases. Further, the reduced use capacity is notified from thehost 20 to theSSD 10 by means of the storagecapacity determination unit 212. TheSSD 10 then reduces the number of used blocks by means of the storage capacityinformation reception unit 122. - According to such a structure and operation, the
host device 20 reduces use capacity as necessary and the SSD 10 can secure the empty blocks accordingly. It is thereby possible to prolong the life of the SSD 10 and to prevent the reduction of the speed of response to the SSD 10. - As described above, it is obvious in the comparative example that at least the bad
block detection unit 121, which notifies the increase in the number of bad blocks from an SSD to a host, and the use capacity reduction unit 213, by which a host reduces use capacity on receipt of the notification, are not included. Note that a TRIM and/or UNMAP command may be a means for notifying the decrease of use capacity from a host to an SSD. However, these commands are used when an application program on the host device 20 individually deletes data. That is, they are irrelevant to the increase in the number of bad blocks in the NAND memory 11 of the SSD 10.
- [10-1-1. General Structure]
- A general structure including an
information processing system 100 of the tenth embodiment is described with reference to FIG. 8. FIG. 8 represents power paths by broken lines and signal paths by solid lines. - As shown in
FIG. 8 , theinformation processing system 100 of the tenth embodiment is driven by power Pmax supplied from apower supply unit 50, and executes a process and request (for example, a request to write data, etc.) ofexternal devices 220 which access theinformation processing system 100 from the outside 200 via anetwork 210. - The
information processing system 100 comprises SSD0 to SSDn-1 (n is a positive integer), which are storage devices 10, and a host 20 which controls the storage devices 10. Solid-state drives (SSDs) are described as an example of the storage devices 10. The storage devices 10 are not limited to SSDs and may be, for example, hard disk drives (HDDs) or other storage devices and memories. The detailed structure of the storage devices 10 and the host 20 will be described later. - The
power supply unit 50 converts external power supplied from an external power source VC to the predetermined power Pmax. The converted power Pmax is almost equally divided into power components P0 to Pn-1 to be supplied to thestorage devices 10, respectively. In the tenth embodiment, the total power Pmax supplied to theinformation processing system 100 is predetermined and the value is substantially constant. Therefore, the value of power Pmax supplied from thepower supply unit 50 is not greater than the sum total of power components P0 to Pn-1 supplied to SSD0 to SSDn-1, respectively, that is - (I) Pmax≦ΣPi,
- where i=0, 1, 2, . . . , n-1.
- The
external devices 220 access the information processing system 100 from the outside 200 of the information processing system 100 via the network 210, and perform a predetermined process or make a predetermined request (for example, data reading, data writing, data erasing, etc.) to the accessed information processing system 100. The network 210 may be either wired or wireless. - In the above structure, the
information processing system 100 of the tenth embodiment changes power components to be distributed to thestorage devices 10 and optimizes the power components (P0 to Pn-1→P0″ to Pn-1″) in accordance with a load on the storage devices 10 (SSD0 to SSDn-1). According to such a structure, theinformation processing system 100 of the tenth embodiment can improve efficiency of the system. The effect and advantage will be described later in detail. - [10-1-2. Information Processing System]
- The detailed structure of the
information processing system 100 of the tenth embodiment is described with reference toFIG. 9 . As described above, theinformation processing system 100 comprises SSD0 to SSDn-1, which are thestorage devices 10, and thehost 20 which controls thestorage devices 10. In the description below, theinformation processing system 100 comprises ten SSDs, i.e., SSD0 to SSD9 (n=10), as an example. - [Storage]
- Each of SSD0 to SSD9, which are the storage (storage units) 10, comprises a NAND flash memory (hereinafter referred to as a “NAND memory”) 11, a
memory controller 12 and apower conversion unit 13. - The
NAND memory 11 is a nonvolatile semiconductor memory which comprises blocks (physical blocks) and stores data in each block. Each block comprises memory cells positioned at intersections of word lines and bit lines. Each memory cell comprises a control gate and a floating gate and stores data in a nonvolatile manner by the presence or absence of electrons injected into the floating gate. The word lines are commonly connected to the control gates of the memory cells. A page exists in each word line. Data reading and writing operations are performed per page. Therefore, a page is a unit of data reading and writing. Data is erased per block. Therefore, a block is a unit of data erasing. TheNAND memory 11 of the tenth embodiment may be multi-level cell (MLC) capable of storing multibit data in a memory cell and/or single-level cell (SLC) capable of storing one-bit data in a memory cell MC. - The
memory controller 12 controls the operation of the whole of thestorage device 10 in accordance with a request from thehost 20. For example, thememory controller 12 writes write data to a predetermined address of theNAND memory 11 in accordance with a write command which is a request to write data from thehost 20. Thememory controller 12 of the tenth embodiment further receives an extended command eCOM transmitted from thehost 20 to confirm minimum power required for the operation of each of SSD0 to SSD9. The extended command eCOM is a signal transmitted on purpose to detect various states of the storage device 10 (for example, a state of power consumption of thestorage device 10 in this case), and is defined as a signal different from the above-described write command, etc. The extended command eCOM is not limited to a command eCOM and may be any extended predetermined signal (information, request, instruction, etc.). - The
memory controller 12 of each of SSD0 to SSD9 transmits a status signal ReS (P0′ to P9′) indicative of the minimum power required for the operation in reply to the received request eCOM. The signal transmitted in reply is not limited to the status signal ReS and may be any extended predetermined signal (information, request, instruction, etc.). - The
memory controller 12 of each of SSD0 to SSD9 controls the power conversion unit 13 to operate based on the changed power component (P0″ to P9″) notified by the host 20. The operation will be described later in detail. - The
power conversion unit 13 converts the power component (P0 to P9) supplied from the power supply unit 50 under the control of the memory controller 12. The storage device 10 performs a predetermined operation in accordance with the power supplied from the power conversion unit 13. - Of course, the
storage devices 10 are not limited to the above-described structure. For example, each memory controller 12 may comprise an address mapping (address translation) table indicative of a correspondence relationship between logical addresses managed by the host 20 and physical addresses managed by the storage device 10. There is no fixed order as to which of the extended command eCOM and the extended status signal ReS should be transmitted first. That is, the extended predetermined signal may first be transmitted from the storage device 10 to the host 20, and then the extended predetermined signal may be transmitted from the host 20 to the storage device 10. - [Host]
- The
host 20 controls each storage device 10 in accordance with requests from the external devices 220 which access from the outside via the network 210. The host 20 comprises a data position management unit 221, a power distribution determination unit 223 and a central processing unit (CPU) 222. - The data
position management unit 221 manages, for example, position information of write data stored in the storage devices 10 under the control of the CPU 222. The data position management unit 221 comprises a table (first table) T1. Table T1 indicates at least a power/performance characteristic of each of SSD0 to SSD9, as described later. - The power
distribution determination unit 223 determines the power to be distributed to each of SSD0 to SSD9 under the control of the CPU 222. More specifically, the power distribution determination unit 223 determines power components P0″ to P9″ to be redistributed to SSD0 to SSD9, respectively, based on the corrected characteristics PP0′ to PP9′ of the storage devices 10 transmitted from the CPU 222. The CPU 222 is notified of the determined power components P0″ to P9″. - The
CPU 222 controls the data position management unit 221 and the power distribution determination unit 223 and controls the operation of the whole of the host 20. - Of course, the
host 20 is not limited to the above-described structure. For example, the host 20 may comprise an interface to communicate with the storage devices 10, etc. - [10-1-3. Table T1]
- Table T1 of the tenth embodiment is described in detail with reference to
FIG. 10 and FIG. 11. FIG. 10 is a table showing table T1 of the tenth embodiment. - As shown in
FIG. 10, SSD0 to SSD9, which are the storage devices 10, are associated with theoretical power/performance characteristics (electrical characteristics) PP0 to PP9, respectively, in table T1. Each of power/performance characteristics PP0 to PP9 is shown as a typical characteristic based on the assumption that the performance varies depending on the amount of supplied power. - For example,
FIG. 11 shows a power/performance characteristic PP0 of SSD0. As shown in FIG. 11, in characteristic PP0 the performance in theory increases from the origin 0 in proportion to the supplied power. More specifically, when the supplied power is power component P0, SSD0 can deliver performance S0 in proportion to power component P0. However, the proportionality coefficient of the performance decreases when the supplied power increases beyond a certain point. For example, when the supplied power exceeds power component P0, the proportionality coefficient of the performance decreases. This is because, for example, the amount of heat produced in the controller 12 increases when the supplied power increases to some degree. - “The performance (performance index)” may include all operations and functions performed by the
NAND memory 11 depending on the supplied power. For example, the performance of the NAND memory 11 may include data writing, data reading, data erasing, garbage collection (compaction), input/output operations per second (IOPS), megabytes per second (MB/s), etc. IOPS is the number of times data can be written to the NAND memory 11 per second. MB/s is the communication speed between the host 20 and the NAND memory 11. Power/performance characteristics PP1 to PP9 of the other SSD1 to SSD9 are the same as PP0.
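The piecewise shape of characteristic PP0 described above (performance proportional to power up to P0, then a smaller proportionality coefficient) can be modeled as a short sketch. The slopes and the knee value below are illustrative assumptions, not figures from the embodiment.

```python
def theoretical_performance(power, knee, slope, reduced_slope):
    """Theoretical power/performance curve: performance grows in proportion
    to the supplied power up to the knee (power component P0), then with a
    smaller coefficient, reflecting the extra heat produced in the controller
    at higher supplied power."""
    if power <= knee:
        return slope * power
    return slope * knee + reduced_slope * (power - knee)

# With an assumed knee at 10 power units, the slope halves past the knee.
s_at_knee = theoretical_performance(10.0, 10.0, 2.0, 0.5)   # 20.0
s_past_knee = theoretical_performance(14.0, 10.0, 2.0, 0.5)  # 22.0
```

The same function shape would be stored per SSD in table T1, one (knee, slope) pair for each of PP0 to PP9.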
- Next, the operation of the
information processing system 100 of the tenth embodiment having the above structure is described. - [10-2-1. Distribution Power Determination Process]
- A distribution power determination process of the
information processing system 100 of the tenth embodiment is described with reference to FIG. 12. As an example, the description below is based on the assumption that a specified SSD5 is intensively accessed by the external devices 220 and the CPU 222 of the host 20 determines that a larger load (larger power) is necessary for SSD5. - First, in step S11, the
CPU 222 of the host 20 transmits an extended command (first command) eCOM to confirm the minimum power required for the operation of each of SSD0 to SSD9. - In step S12, the
memory controller 12 of each storage device 10 transmits a status signal ReS (P0′ to P9′) indicative of the minimum power required for the operation in reply to the received request eCOM. For example, the memory controller 12 of SSD0 first detects the minimum power component P0′ required for the operation of the NAND memory 11 of SSD0, based on the relationship between the performance and power component P0 supplied to the NAND memory 11, in accordance with the received request eCOM. Next, the memory controller 12 of SSD0 transmits the detected minimum power component P0′ to the host 20 as a status signal ReS (P0′). - In step S13, the
CPU 222 of the host 20 corrects the power/performance characteristic of each SSD based on the transmitted status signals ReS (P0′ to P9′). More specifically, for example, the power distribution determination unit 223 of the host 20 increases the initial value of characteristic PP0 from the origin 0 to P0′ based on the status signal ReS (P0′) indicative of the minimum power required for the operation of SSD0, as shown in FIG. 13. The power distribution determination unit 223 further corrects characteristic PP0 by performing a parallel translation of characteristic PP0 and thereby calculates an actual characteristic PP0′. As described above, the minimum power required for driving components other than the NAND memory 11, for example, the memory controller 12 and the other peripheral circuits, can be taken into account by calculating characteristic PP0′. As a result, the characteristic can be calculated with more precision based on the actual status of each storage device 10. The other characteristics PP1′ to PP9′ are also calculated in the same manner as PP0′. - In step S14, as shown in
FIG. 14, the CPU 222 of the host 20 stores the corrected power/performance characteristics PP0′ to PP9′ of SSD0 to SSD9 in table T1 and thereby updates table T1. In the following steps S15 and S16, too, the CPU 222 stores the calculated allowable power components P0″ to P4″ and P6″ to P9″ and the changed power component P5″ in table T1. - In step S15, the power
distribution determination unit 223 of the host 20 calculates allowable power components P0″ to P4″ and P6″ to P9″ to be distributed to the SSDs other than SSD5 under a load, i.e., SSD0 to SSD4 and SSD6 to SSD9, based on the corrected power/performance characteristics PP0′ to PP9′. More specifically, as shown in FIG. 13, the power distribution determination unit 223 calculates the suppressible power component (surplus power component) P0″ from the currently supplied power component P0 based on the corrected characteristic PP0′. “The allowable power (suppressible power, surplus power)” may be any power as long as the NAND memory 11 can continuously operate. The other allowable power components P1″ to P9″ are calculated in the same manner as allowable power component P0″. - In step S16, the power
distribution determination unit 223 of the host 20 calculates power component P5″ changed to be supplied to SSD5 under a load, from the calculated allowable power components P0″ to P4″ and P6″ to P9″. More specifically, as shown in FIG. 13, the power distribution determination unit 223 first calculates the differences ΔP0 to ΔP4 and ΔP6 to ΔP9 between the currently-distributed power components P0 to P4, P6 to P9 and the calculated suppressible power components P0″ to P4″, P6″ to P9″, respectively. Next, the power distribution determination unit 223 adds the calculated difference power components ΔP0 to ΔP4 and ΔP6 to ΔP9 to power component P5 assigned to SSD5. As a result, the power distribution determination unit 223 calculates power component P5″ (=P5+ΔP0+ΔP1+ΔP2+ΔP3+ΔP4+ΔP6+ΔP7+ΔP8+ΔP9) as the power component changed to be supplied to SSD5. - In step S17, SSD0 to SSD9 are notified of the changed power components P0″ to P9″ calculated by the
host 20. - In step S18, SSD0 to SSD9 operate based on the notified changed power components P0″ to P9″. More specifically, the
power conversion units 13 of SSD0 to SSD9 convert power components P0 to P9 supplied from the power supply unit 50 into power components P0″ to P9″ notified by the memory controllers 12. - As a result, the specified SSD5 operates based on power component P5″, which is larger than the previous power component P5. The other SSD0 to SSD4 and SSD6 to SSD9 operate based on power components P0″ to P4″ and P6″ to P9″, which have been obtained by subtracting the suppressible power from the previous power components P0 to P4 and P6 to P9 and are lower than the previous power components P0 to P4 and P6 to P9.
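The correction and redistribution steps of the process above can be sketched in a few lines. The function and variable names, the linear curve shape, and the numeric values are illustrative assumptions only; the embodiment defines the steps, not this code.

```python
def corrected_performance(power, p_min, slope):
    """Actual characteristic PPi': the theoretical linear curve after a
    parallel translation, so that no performance is delivered below the
    minimum operating power Pi' reported in the status signal ReS (step S13)."""
    return 0.0 if power <= p_min else slope * (power - p_min)

def redistribute(current, surplus, loaded):
    """Steps S15-S16: every SSD except the loaded one is throttled down to
    its suppressible power Pi''; the freed differences (the delta-Pi values)
    are added to the loaded SSD's power component."""
    changed = list(surplus)
    freed = sum(current[i] - surplus[i]
                for i in range(len(current)) if i != loaded)
    changed[loaded] = current[loaded] + freed
    return changed

# Ten SSDs currently receive 10 units each; each unloaded SSD can run on 8,
# so SSD5 gains the nine freed differences of 2 units each.
changed = redistribute([10.0] * 10, [8.0] * 10, loaded=5)
```

Note that the total distributed power is preserved, which matches the premise that the overall supply Pmax is fixed in the tenth embodiment.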
- [10-3. Advantageous Effect]
- As described above, according to the structure and operation of the
information processing system 100 of the tenth embodiment, at least the same effects as mentioned above and the following effect (1) can be achieved.
- For example, if the
host 20 determines that a larger load (larger power) is necessary for a specified SSD5, the host 20 transmits an extended command eCOM to ascertain the status and characteristic (in this case, the minimum power) of each of SSD0 to SSD9 (S11 in FIG. 12). Next, when receiving the command eCOM, each storage device 10 transmits a status signal ReS (P0′ to P9′) indicative of the minimum power required for the operation to the host 20 in reply (S12 in FIG. 12). The host 20 corrects the power/performance characteristics of the SSDs based on the status signals ReS (P0′ to P9′) and calculates the changed power components P0″ to P9″ from the corrected characteristics PP0′ to PP9′ (S13 to S16 in FIG. 12). After that, the storage devices 10 operate based on the calculated changed power components P0″ to P9″. - According to the above-described structure and operation, the efficiency of the whole
information processing system 100 can be improved by intensively injecting allocatable power into SSD5 under a load to improve the processing performance of SSD5. - For example, before the power is changed, SSD0 to SSD9 operate based on power components P0 to P9 almost evenly distributed under the control of the
host 20, as shown in FIG. 15. If the total amount of power Pmax supplied to the information processing system 100 is predetermined as expressed by expression (I), it is not necessarily preferable to evenly distribute power components P0 to P9 to SSD0 to SSD9, assuming that the maximum performance should be provided by the limited power Pmax. This is based on the premise that the performance of the storage devices 10 varies depending on the power consumption, as shown in FIG. 11 and FIG. 13. For example, when a group of servers, which are the external devices 220, accesses the same SSD5 as described above and stores and refers to data of an application, etc., SSD5 is intensively accessed and is required to perform a large amount of processing. - Therefore, as shown in
FIG. 16, the power is changed to increase the power supplied to SSD5. SSD5, which requires the larger power, can thereby operate based on the larger power component P5″. The other SSD0 to SSD4 and SSD6 to SSD9 can continuously operate based on power components P0″ to P4″ and P6″ to P9″ obtained by subtracting the suppressible power. - As a result, according to the tenth embodiment, the processing capability of the
storage devices 10 can be made substantially hierarchical based on the supplied amount of power, as shown in FIG. 17, even if the system is constituted by one type of storage device 10. More specifically, with respect to data required to be frequently accessed (in this case, data stored in SSD5), the supplied power is increased and the processing ability and speed are improved. As described above, the information processing system 100 of the tenth embodiment has an advantage that an arbitrary storage device 10 can be used as a high-speed layer (higher layer) and the efficiency of the whole system can be improved.
FIG. 18 . For example, a high-speed interface SSD is used as a high-speed layer (higher layer). For example, a low-speed interface SSD or a high-speed HDD is used as a medium-speed layer (medium layer). For example, a low-speed HDD is used as a low-speed layer (lower layer). - In the hierarchical storage architecture as in the comparative example, however, physical device and interface are different depending on layer. Therefore, it is impossible to increase the speed of a specified storage device. In addition, even if data required to be frequently accessed is stored in the high-speed layer (higher layer), accesses do not necessarily center on only the data stored in the higher layer. As described above, the information processing system of the comparative example has a disadvantage that the efficiency of the whole system is hardly improved after forming the hierarchical structure.
- Next, the eleventh embodiment is described with reference to
FIG. 19 and FIG. 20. The eleventh embodiment relates to a case where each storage device 10 determines its own performance. In the description below, the description overlapping the tenth embodiment is omitted.
- [Information Processing System]
- The detailed structure of the
information processing system 100 of the eleventh embodiment is described with reference to FIG. 19. As shown in FIG. 19, the information processing system 100 of the eleventh embodiment is different from that of the tenth embodiment in that the NAND memory 11 comprises a table T2 and each storage device 10 comprises a self-performance determination unit 14. - In table (second table) T2 of the
NAND memory 11, an actual characteristic (PP0′ to PP9′) of the storage device 10 is stored. For example, actual characteristic PP0′ of SSD0 is stored in table T2 of SSD0. Table T2 is updated by the memory controller 12 at arbitrary intervals. The storage location of table T2 is not limited to the NAND memory 11. - The self-
performance determination unit 14 determines the performance of the storage device 10 under the control of the memory controller 12 and notifies the memory controller 12 of the result of the determination. For example, when receiving a command eCOM, the self-performance determination unit 14 of SSD0 refers to table T2 and determines the minimum power component P0′ required for the operation of SSD0 based on the actual characteristic PP0′. The self-performance determination unit 14 of SSD0 further notifies the memory controller 12 of the determined power component P0′.
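The device-side lookup performed by the self-performance determination unit 14 could be sketched as follows. The dictionary layout for table T2 and the method names are assumptions for illustration; the embodiment only specifies that the unit refers to table T2, determines P′, and reports the result.

```python
class SelfPerformanceDeterminationUnit:
    """On an extended command eCOM: look up the actual characteristic PP'
    stored in table T2, determine the minimum operating power P', compute
    the performance expected at that power, and return both so that the
    memory controller can build the status signal ReS."""

    def __init__(self, table_t2):
        # table_t2: {"p_min": minimum power P', "curve": actual characteristic PP'}
        self.table_t2 = table_t2

    def handle_ecom(self):
        p_min = self.table_t2["p_min"]            # determine P' from table T2
        expected = self.table_t2["curve"](p_min)  # performance S' expected at P'
        return {"power": p_min, "performance": expected}

# Assumed characteristic: 2.0 performance units per power unit.
unit = SelfPerformanceDeterminationUnit({"p_min": 3.0, "curve": lambda p: 2.0 * p})
reply = unit.handle_ecom()
```

Keeping the determination on the device side is what distinguishes the eleventh embodiment: the host only aggregates the replies instead of modeling every SSD itself.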
- [Operation]
- [Distribution Power Determination Process]
- A distribution power determination process of the
information processing system 100 of the eleventh embodiment having the above-described structure is described with reference to FIG. 20. The description below is based on the assumption that a specified SSD5 is intensively accessed by the external devices 220 and the CPU 222 of the host 20 determines that a larger load (larger power) is necessary for SSD5, as an example. - In step S21, the
CPU 222 of the host 20 transmits an extended command eCOM to each storage device 10 to detect the minimum power required for the operation of each SSD. - In step S22, in response to the command eCOM, the self-
performance determination unit 14 of each storage device 10 refers to table T2 and determines the minimum power component (P0′ to P9′) required for the operation based on the actual characteristic (PP0′ to PP9′) stored in table T2. - In step S23, the self-
performance determination unit 14 of each storage device 10 refers to table T2 and calculates the performance (S0′ to S9′) expected from the calculated power component (P0′ to P9′) based on the characteristic (PP0′ to PP9′). - In step S24, the
memory controller 12 of each storage device 10 transmits the calculated power component (P0′ to P9′) and the expected performance (S0′ to S9′) to the host 20 as a status signal ReS. - In step S25, the power
distribution determination unit 223 of the host 20 determines allowable power components P0″ to P4″ and P6″ to P9″ and power component P5″ changed to be supplied to SSD5 under a load, based on the received status signals ReS (P0′ to P9′ and S0′ to S9′). - In step S26, the
CPU 222 of the host 20 notifies the storage devices 10 of the determined power components P0″ to P9″. - In step S27, the
storage devices 10 operate based on power components P0″ to P9″ notified by the host 20.
- [Advantageous Effects]
- As described above, according to the structure and operation of the
information processing system 100 of the eleventh embodiment, at least the same effect as the above-described effect (1) can be achieved. As described in the eleventh embodiment, each storage device 10 may determine its own performance and power consumption. - Next, the twelfth embodiment is described with reference to
FIG. 21. The twelfth embodiment relates to a case where the host notifies each storage device 10 of the required performance. In the description below, the description overlapping the above-described embodiments is omitted.
- As shown in
FIG. 21, the information processing system 100 of the twelfth embodiment is different from the tenth and eleventh embodiments in that the host 20 further notifies each storage device 10 of the required performance (S0″ to Sn-1″). For example, as shown in FIG. 13, performance S0″ is the performance expected from the calculated power component P0″ based on characteristic PP0′. - More specifically, in steps S14 and S15, the power
distribution determination unit 223 of the host 20 calculates power components P0″ to P9″ based on characteristics PP0′ to PP9′. Next, the power distribution determination unit 223 calculates the performances S0″ to S9″ expected from the calculated power components P0″ to P9″ based on the characteristics PP0′ to PP9′. The storage devices 10 are notified of the calculated performances S0″ to S9″ together with power components P0″ to P9″. - The
host 20 may notify the storage devices 10 of the calculated performances S0″ to S9″ instead of power components P0″ to P9″. The performances S0″ to S9″ may be calculated by the storage devices 10 instead of the host 20.
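The host-side computation of the performance targets S0″ to S9″ from the changed power components amounts to evaluating each corrected characteristic at the new power. A minimal sketch, with the curve representation assumed for illustration:

```python
def performance_targets(changed_power, corrected_curves):
    """Twelfth-embodiment sketch: given the changed power components
    P0''..P9'' and the corrected characteristics PP0'..PP9' (one callable
    per SSD), compute the required performances S0''..S9'' that the host
    notifies together with (or instead of) the power components."""
    return [curve(p) for p, curve in zip(changed_power, corrected_curves)]

# Two assumed SSD characteristics with different slopes.
targets = performance_targets([2.0, 3.0], [lambda p: 2.0 * p, lambda p: 3.0 * p])
```

Notifying performance rather than power lets each storage device meet the target in its own way, which is the "more direct" control the embodiment describes.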
- [Advantageous Effects]
- As described above, according to the structure and operation of the
information processing system 100 of the twelfth embodiment, at least the same effect as the above-described effect (1) can be achieved. In addition, according to the twelfth embodiment, the storage devices 10 can be directly controlled based on the required performances S0″ to S9″. Therefore, each required performance can be achieved more directly. - Next, the thirteenth embodiment is described with reference to
FIG. 22 . The thirteenth embodiment relates to a case where the total amount of supplied power Pmax is variable. In the description below, the description overlapping the above-described embodiments is omitted. - [Structure and Operation]
- As shown in
FIG. 22, an information processing system 100A of the thirteenth embodiment is different from the first to twelfth embodiments in that the maximum value of the total power Pmax supplied to the information processing system 100A can be varied by a control signal CS50 notified to a power supply unit 50A by the host 20. - For example, it is assumed that
power supply unit 50A also supplies power to an information processing system 100B different from information processing system 100A. In such a case, when the operation of information processing system 100B is stopped, there is a surplus of the power Pmax supplied from power supply unit 50A. Therefore, when detecting the surplus power, the CPU 222 of the host 20 transmits a control signal CS50 to power supply unit 50A to increase the maximum value of power Pmax. When receiving the control signal CS50, power supply unit 50A increases the maximum value of power Pmax and supplies information processing system 100A with the increased power under the control of the host 20.
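This Pmax adjustment can be condensed into one decision, sketched below. The function name, the way the sibling system's state is represented, and the numeric values are all assumptions for illustration:

```python
def adjusted_pmax(base_pmax, sibling_running, sibling_draw):
    """Thirteenth-embodiment sketch: when the sibling system supplied by the
    same power supply unit 50A stops, the host can raise the maximum total
    power Pmax by the freed amount via control signal CS50; otherwise the
    original maximum applies."""
    return base_pmax if sibling_running else base_pmax + sibling_draw

pmax_shared = adjusted_pmax(100.0, sibling_running=True, sibling_draw=40.0)
pmax_freed = adjusted_pmax(100.0, sibling_running=False, sibling_draw=40.0)
```

The enlarged Pmax then feeds back into the distribution process of the earlier embodiments, so the loaded SSD can receive extra power without throttling the others as hard.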
- [Advantageous Effects]
- As described above, according to the structure and operation of the
information processing system 100A of the thirteenth embodiment, at least the same effect as the above-described effect (1) can be achieved. In addition, according to the thirteenth embodiment, the maximum value of the total power Pmax supplied to information processing system 100A can be changed, and the value of power Pmax can be increased by the control signal CS50 notified to the power supply unit 50A by the host 20. Therefore, the thirteenth embodiment has an advantage that the efficiency of the system can be further improved. - The
information processing system 100 is not limited to the first to thirteenth embodiments and may be changed as appropriate as described below. - [Structure and Operation]
- The power consumption of the
storage devices 10 is not necessarily determined by using the power/performance characteristics. For example, as shown in FIG. 23, a table (third table) T3 in which logs (operation histories) of SSD0 to SSD9 constituting the storage devices 10 are recorded may be provided. In table T3, the power supplied to each of SSD0 to SSD9 constituting the storage devices and the performance achieved by that power are recorded. For example, (S01, P01), (S02, P02), . . . are recorded as a log of SSD0. Logs of the other SSD1 to SSD9 are recorded in the same manner. The host 20 or the storage device 10 may determine predetermined power and performance from the characteristic by referring to table T3. Of course, both the characteristics and the logs may be used.
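Determining power and performance from the logged pairs instead of an analytic characteristic could be done by interpolating between neighboring entries of table T3. The log format below, a list of (performance, power) tuples per SSD, is an assumption for illustration:

```python
def performance_from_log(log, power):
    """Estimate the performance expected at a given power by linearly
    interpolating between the logged (performance, power) pairs of table T3."""
    pairs = sorted(log, key=lambda sp: sp[1])  # order entries by power
    for (s0, p0), (s1, p1) in zip(pairs, pairs[1:]):
        if p0 <= power <= p1:
            t = (power - p0) / (p1 - p0)
            return s0 + t * (s1 - s0)
    raise ValueError("power outside the logged range")

# Assumed log of SSD0: (S01, P01), (S02, P02), (S03, P03).
ssd0_log = [(10.0, 1.0), (20.0, 2.0), (24.0, 3.0)]
estimate = performance_from_log(ssd0_log, 1.5)
```

Interpolating over real history naturally captures device aging and workload effects that a fixed theoretical curve would miss, which is presumably why the modified example offers logs as an alternative or complement to the characteristics.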
- The means for distributing power is not limited to supplying a specified storage device with surplus allowable power subtracted from the total power Pmax, and may be changed as necessary. For example, the
host 20 may distribute power to the storage devices 10 based on the status of all the storage devices 10 such that a specified process at a specified time is completed first. - The power consumed by the
storage devices 10 is changed not only by the performance and the operation status of the storage devices 10 but also by, for example, the environment (temperature, etc.) of the storage devices 10. Therefore, the temperature and the amount of heat of the storage devices 10 may also be detected as an index of the performance of the storage devices 10.
- An example of the exterior of the information processing system which can be applied to the first to thirteenth embodiments and the modified example with reference to
FIG. 24 . - As shown in
FIG. 24, the information processing system 100 comprises the storage devices 10 and the host 20 which controls the storage devices 10. SSDs are described as an example of the storage devices 10. - For example, the
storage devices 10 can be attached to the host 20 in a data center or a cloud computing system of an enterprise. The storage devices 10 can access an external device 220 such as an external server via the network 210 under the control of the host 20. Therefore, SSD0 to SSD9 may be enterprise SSDs (eSSDs).
- While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the inventions. Indeed, the novel embodiments described herein may be embodied in a variety of other forms; furthermore, various omissions, substitutions and changes in the form of the embodiments described herein may be made without departing from the spirit of the inventions. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the inventions.
Claims (9)
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/817,625 US20160041762A1 (en) | 2014-08-08 | 2015-08-04 | Memory system, host device and information processing system |
PCT/IB2015/056002 WO2016020886A1 (en) | 2014-08-08 | 2015-08-07 | Memory system, host device and information processing system |
US15/632,450 US10866733B2 (en) | 2014-08-08 | 2017-06-26 | Memory system, host device and information processing system for error correction processing |
US17/078,547 US11704019B2 (en) | 2014-08-08 | 2020-10-23 | Memory system, host device and information processing system for error correction processing |
US18/204,854 US20230305701A1 (en) | 2014-08-08 | 2023-06-01 | Memory system, host device and information processing system for error correction processing |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201462035243P | 2014-08-08 | 2014-08-08 | |
US14/817,625 US20160041762A1 (en) | 2014-08-08 | 2015-08-04 | Memory system, host device and information processing system |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/632,450 Continuation US10866733B2 (en) | 2014-08-08 | 2017-06-26 | Memory system, host device and information processing system for error correction processing |
Publications (1)
Publication Number | Publication Date |
---|---|
US20160041762A1 true US20160041762A1 (en) | 2016-02-11 |
Family
ID=55263244
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/817,625 Abandoned US20160041762A1 (en) | 2014-08-08 | 2015-08-04 | Memory system, host device and information processing system |
US15/632,450 Active 2036-01-08 US10866733B2 (en) | 2014-08-08 | 2017-06-26 | Memory system, host device and information processing system for error correction processing |
US17/078,547 Active 2035-08-29 US11704019B2 (en) | 2014-08-08 | 2020-10-23 | Memory system, host device and information processing system for error correction processing |
US18/204,854 Pending US20230305701A1 (en) | 2014-08-08 | 2023-06-01 | Memory system, host device and information processing system for error correction processing |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/632,450 Active 2036-01-08 US10866733B2 (en) | 2014-08-08 | 2017-06-26 | Memory system, host device and information processing system for error correction processing |
US17/078,547 Active 2035-08-29 US11704019B2 (en) | 2014-08-08 | 2020-10-23 | Memory system, host device and information processing system for error correction processing |
US18/204,854 Pending US20230305701A1 (en) | 2014-08-08 | 2023-06-01 | Memory system, host device and information processing system for error correction processing |
Country Status (2)
Country | Link |
---|---|
US (4) | US20160041762A1 (en) |
WO (1) | WO2016020886A1 (en) |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160188220A1 (en) * | 2014-12-24 | 2016-06-30 | Kabushiki Kaisha Toshiba | Memory system and information processing system |
WO2017195928A1 (en) * | 2016-05-13 | 2017-11-16 | 주식회사 맴레이 | Flash-based storage device and computing device comprising same |
JP2018101322A (en) * | 2016-12-21 | 2018-06-28 | 日本電気株式会社 | Storage array device, power consumption adjusting method, and power consumption adjusting program |
US10042405B2 (en) * | 2015-10-22 | 2018-08-07 | Qualcomm Incorporated | Adjusting source voltage based on stored information |
US10049047B1 (en) | 2017-03-10 | 2018-08-14 | Toshiba Memory Corporation | Multibit NAND media using pseudo-SLC caching technique |
US10095626B2 (en) | 2017-03-10 | 2018-10-09 | Toshiba Memory Corporation | Multibit NAND media using pseudo-SLC caching technique |
US10198061B2 (en) * | 2015-09-01 | 2019-02-05 | Toshiba Memory Corporation | Storage and storage system |
US10275172B2 (en) | 2016-07-27 | 2019-04-30 | Samsung Electronics Co., Ltd. | Solid state drive devices and methods of operating thereof |
US10346039B2 (en) | 2015-04-21 | 2019-07-09 | Toshiba Memory Corporation | Memory system |
US10387353B2 (en) | 2016-07-26 | 2019-08-20 | Samsung Electronics Co., Ltd. | System architecture for supporting active pass-through board for multi-mode NMVE over fabrics devices |
US10411024B2 (en) | 2017-02-28 | 2019-09-10 | Toshiba Memory Corporation | Memory system and method for controlling nonvolatile memory |
KR20200047244A (en) * | 2018-10-24 | 2020-05-07 | 삼성전자주식회사 | Semiconductor memory device, control unit, and memory system |
US10762023B2 (en) | 2016-07-26 | 2020-09-01 | Samsung Electronics Co., Ltd. | System architecture for supporting active pass-through board for multi-mode NMVe over fabrics devices |
US20210073121A1 (en) * | 2019-09-09 | 2021-03-11 | Micron Technology, Inc. | Dynamically adjusted garbage collection workload |
US20210357320A1 (en) * | 2018-09-05 | 2021-11-18 | SK Hynix Inc. | Memory controller, memory system and operating method of memory device |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10248333B1 (en) * | 2017-02-07 | 2019-04-02 | Crossbar, Inc. | Write distribution techniques for two-terminal memory wear leveling |
US10409714B1 (en) | 2017-02-09 | 2019-09-10 | Crossbar, Inc. | Logical to physical translation for two-terminal memory |
US20240004573A1 (en) * | 2022-06-29 | 2024-01-04 | Western Digital Technologies, Inc. | Performance indicator on a data storage device |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140218767A1 (en) * | 2013-02-01 | 2014-08-07 | Canon Kabushiki Kaisha | Image forming apparatus, memory management method for image forming apparatus, and program |
Family Cites Families (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7373559B2 (en) | 2003-09-11 | 2008-05-13 | Copan Systems, Inc. | Method and system for proactive drive replacement for high availability storage systems |
JP2008508632A (en) * | 2004-08-02 | 2008-03-21 | Koninklijke Philips Electronics N.V. | Data storage and playback device |
JP2007193449A (en) * | 2006-01-17 | 2007-08-02 | Toshiba Corp | Information recorder, and control method therefor |
US7900118B2 (en) * | 2007-02-12 | 2011-03-01 | Phison Electronics Corp. | Flash memory system and method for controlling the same |
JP5288899B2 (en) | 2008-06-20 | 2013-09-11 | 株式会社日立製作所 | Storage apparatus for estimating power consumption and power estimation method for storage apparatus |
JP4838832B2 (en) | 2008-08-29 | 2011-12-14 | 富士通株式会社 | Storage system control method, storage system, and storage apparatus |
WO2011016081A1 (en) | 2009-08-04 | 2011-02-10 | Hitachi, Ltd. | Storage system, control method thereof, and program to adjust power consumption per workload |
US8443263B2 (en) | 2009-12-30 | 2013-05-14 | Sandisk Technologies Inc. | Method and controller for performing a copy-back operation |
JP5017407B2 (en) | 2010-03-24 | 2012-09-05 | 株式会社東芝 | Semiconductor memory device |
JP2012008651A (en) * | 2010-06-22 | 2012-01-12 | Toshiba Corp | Semiconductor memory device, its control method, and information processor |
US8738994B2 (en) | 2011-04-25 | 2014-05-27 | Samsung Electronics Co., Ltd. | Memory controller, memory system, and operating method |
EP2702491A4 (en) | 2011-04-26 | 2015-02-25 | Lsi Corp | Variable over-provisioning for non-volatile storage |
JP5779147B2 (en) | 2012-07-06 | 2015-09-16 | 株式会社東芝 | Memory system |
US8959263B2 (en) * | 2013-01-08 | 2015-02-17 | Apple Inc. | Maintaining I/O priority and I/O sorting |
US20140229655A1 (en) * | 2013-02-08 | 2014-08-14 | Seagate Technology Llc | Storing Error Correction Code (ECC) Data In a Multi-Tier Memory Structure |
US20140351515A1 (en) * | 2013-05-21 | 2014-11-27 | International Business Machines Corporation | Providing data attributes to a storage manager to use to select a storage tier to use for a data set management operation |
JP6160294B2 (en) * | 2013-06-24 | 2017-07-12 | 富士通株式会社 | Storage system, storage apparatus, and storage system control method |
US10198061B2 (en) | 2015-09-01 | 2019-02-05 | Toshiba Memory Corporation | Storage and storage system |
- 2015
  - 2015-08-04 US US14/817,625 patent/US20160041762A1/en not_active Abandoned
  - 2015-08-07 WO PCT/IB2015/056002 patent/WO2016020886A1/en active Application Filing
- 2017
  - 2017-06-26 US US15/632,450 patent/US10866733B2/en active Active
- 2020
  - 2020-10-23 US US17/078,547 patent/US11704019B2/en active Active
- 2023
  - 2023-06-01 US US18/204,854 patent/US20230305701A1/en active Pending
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10402097B2 (en) | 2014-12-24 | 2019-09-03 | Toshiba Memory Corporation | Memory system and information processing system utilizing space information |
US9857984B2 (en) * | 2014-12-24 | 2018-01-02 | Toshiba Memory Corporation | Memory system with garbage collection |
US20160188220A1 (en) * | 2014-12-24 | 2016-06-30 | Kabushiki Kaisha Toshiba | Memory system and information processing system |
US10095410B2 (en) | 2014-12-24 | 2018-10-09 | Toshiba Memory Corporation | Memory system with garbage collection |
US10346039B2 (en) | 2015-04-21 | 2019-07-09 | Toshiba Memory Corporation | Memory system |
US10824217B2 (en) | 2015-09-01 | 2020-11-03 | Toshiba Memory Corporation | Storage and storage system |
US10198061B2 (en) * | 2015-09-01 | 2019-02-05 | Toshiba Memory Corporation | Storage and storage system |
US10042405B2 (en) * | 2015-10-22 | 2018-08-07 | Qualcomm Incorporated | Adjusting source voltage based on stored information |
WO2017195928A1 (en) * | 2016-05-13 | 2017-11-16 | 주식회사 맴레이 | Flash-based storage device and computing device comprising same |
US11487691B2 (en) | 2016-07-26 | 2022-11-01 | Samsung Electronics Co., Ltd. | System architecture for supporting active pass-through board for multi-mode NMVe over fabrics devices |
US10762023B2 (en) | 2016-07-26 | 2020-09-01 | Samsung Electronics Co., Ltd. | System architecture for supporting active pass-through board for multi-mode NMVe over fabrics devices |
US10387353B2 (en) | 2016-07-26 | 2019-08-20 | Samsung Electronics Co., Ltd. | System architecture for supporting active pass-through board for multi-mode NMVE over fabrics devices |
US10275172B2 (en) | 2016-07-27 | 2019-04-30 | Samsung Electronics Co., Ltd. | Solid state drive devices and methods of operating thereof |
JP2018101322A (en) * | 2016-12-21 | 2018-06-28 | 日本電気株式会社 | Storage array device, power consumption adjusting method, and power consumption adjusting program |
US10411024B2 (en) | 2017-02-28 | 2019-09-10 | Toshiba Memory Corporation | Memory system and method for controlling nonvolatile memory |
US10095626B2 (en) | 2017-03-10 | 2018-10-09 | Toshiba Memory Corporation | Multibit NAND media using pseudo-SLC caching technique |
US10049047B1 (en) | 2017-03-10 | 2018-08-14 | Toshiba Memory Corporation | Multibit NAND media using pseudo-SLC caching technique |
US20210357320A1 (en) * | 2018-09-05 | 2021-11-18 | SK Hynix Inc. | Memory controller, memory system and operating method of memory device |
US11775427B2 (en) | 2018-09-05 | 2023-10-03 | SK Hynix Inc. | Memory controller, memory system and operating method of memory device |
US11797437B2 (en) * | 2018-09-05 | 2023-10-24 | SK Hynix Inc. | Memory controller, memory system and operating method of memory device |
KR20200047244A (en) * | 2018-10-24 | 2020-05-07 | Samsung Electronics Co., Ltd. | Semiconductor memory device, control unit, and memory system |
KR102629457B1 (en) | 2018-10-24 | 2024-01-26 | Samsung Electronics Co., Ltd. | Semiconductor memory device, control unit, and memory system |
US20210073121A1 (en) * | 2019-09-09 | 2021-03-11 | Micron Technology, Inc. | Dynamically adjusted garbage collection workload |
US11550711B2 (en) * | 2019-09-09 | 2023-01-10 | Micron Technology, Inc. | Dynamically adjusted garbage collection workload |
Also Published As
Publication number | Publication date |
---|---|
US10866733B2 (en) | 2020-12-15 |
US11704019B2 (en) | 2023-07-18 |
US20170293525A1 (en) | 2017-10-12 |
US20230305701A1 (en) | 2023-09-28 |
US20210042033A1 (en) | 2021-02-11 |
WO2016020886A1 (en) | 2016-02-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11704019B2 (en) | Memory system, host device and information processing system for error correction processing | |
US10824217B2 (en) | Storage and storage system | |
CN110321247B (en) | Memory system | |
US9934151B2 (en) | System and method for dynamic optimization for burst and sustained performance in solid state drives | |
US8700881B2 (en) | Controller, data storage device and data storage system having the controller, and data processing method | |
US20170336990A1 (en) | Multi-tier scheme for logical storage management | |
TWI418980B (en) | Memory controller, method for formatting a number of memory arrays and a solid state drive in a memory system, and a solid state memory system | |
US20140281173A1 (en) | Nonvolatile memory system, system including the same, and method of adaptively adjusting user storage region in the same | |
WO2014184941A1 (en) | Storage device | |
WO2014141411A1 (en) | Storage system and method for controlling storage system | |
KR20210111527A (en) | Apparatus and method for performing garbage collection in a memory system | |
US11687262B2 (en) | Memory system and method of operating the same | |
CN114730300B (en) | Enhanced file system support for zone namespace memory | |
US11150819B2 (en) | Controller for allocating memory blocks, operation method of the controller, and memory system including the controller | |
US20150113305A1 (en) | Data storage device | |
US20200293221A1 (en) | Storage device and computing device including storage device | |
KR20210000877A (en) | Apparatus and method for improving input/output throughput of memory system | |
US11543987B2 (en) | Storage system and method for retention-based zone determination | |
US20210248078A1 (en) | Flash memory persistent cache techniques | |
US9946476B2 (en) | Memory management method, memory control circuit unit and memory storage apparatus | |
US10942848B2 (en) | Apparatus and method for checking valid data in memory system | |
US20220391318A1 (en) | Storage device and operating method thereof | |
US9727453B2 (en) | Multi-level table deltas | |
TW201606778A (en) | Memory system, host device and information processing system | |
US9158678B2 (en) | Memory address management system and method |
Legal Events
Date | Code | Title | Description
---|---|---|---
| AS | Assignment | Owner name: KABUSHIKI KAISHA TOSHIBA, JAPAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KANNO, SHINICHI;NISHIMURA, HIROSHI;YOSHIDA, HIDEKI;AND OTHERS;REEL/FRAME:036347/0366. Effective date: 20150804 |
| AS | Assignment | Owner name: TOSHIBA MEMORY CORPORATION, JAPAN. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KABUSHIKI KAISHA TOSHIBA;REEL/FRAME:043529/0709. Effective date: 20170628 |
| STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |