WO2014038073A1 - Storage device system - Google Patents
Storage device system
- Publication number
- WO2014038073A1 (PCT/JP2012/072961)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- control circuit
- write
- data
- write data
- storage
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0602—Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
- G06F3/0614—Improving the reliability of storage systems
- G06F3/0616—Improving the reliability of storage systems in relation to life time, e.g. increasing Mean Time Between Failures [MTBF]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0646—Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
- G06F3/0652—Erasing, e.g. deleting, data cleaning, moving of data to a wastebasket
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0655—Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
- G06F3/0659—Command handling arrangements, e.g. command buffers, queues, command scheduling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0668—Interfaces specially adapted for storage systems adopting a particular infrastructure
- G06F3/0671—In-line storage system
- G06F3/0683—Plurality of storage devices
- G06F3/0688—Non-volatile semiconductor memory arrays
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/08—Error detection or correction by redundancy in data representation, e.g. by using checking codes
- G06F11/10—Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
- G06F11/1076—Parity data used in redundant arrays of independent storages, e.g. in RAID systems
- G06F11/108—Parity data distribution in semiconductor storages, e.g. in SSD
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/06—Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
- G06F3/0601—Interfaces specially adapted for storage systems
- G06F3/0628—Interfaces specially adapted for storage systems making use of a particular technique
- G06F3/0653—Monitoring storage devices or systems
Definitions
- The present invention relates to a storage device system, and in particular to a control technique for an external storage device (storage) such as an SSD (Solid State Drive).
- NAND flash memory has an upper limit on the number of erase operations, and its data write size differs greatly from its data erase size.
- Patent Document 1 discloses a technology that realizes hierarchical capacity virtualization and reduces the number of erasures across the entire storage system. Specifically, it shows a technique for performing so-called static wear leveling, in which erase counts are leveled by appropriately moving data that has already been written.
- Patent Document 2 discloses a management method for a storage system using a flash memory.
- Patent Document 3 discloses a control technique at the time of power-off of a storage system.
- Patent Document 4 discloses an SSD control technique.
- Patent Document 5 discloses a data transfer technique between an SSD and an HDD.
- The present inventors examined methods for controlling a NAND flash memory used in storage such as an SSD (Solid State Drive) or a memory card.
- In a NAND flash memory, the minimum data unit at the time of erasing is, for example, 1 Mbyte, while the minimum data unit at the time of writing is, for example, 4 Kbytes.
- That is, in order to write 4 Kbytes of data, it is necessary to secure an erased memory area of 1 Mbyte. To secure this 1-Mbyte erased memory area, an operation called garbage collection is required inside the SSD.
- the SSD first reads the currently effective data from the already written 1 Mbyte nonvolatile memory areas A and B, collects these data, and writes them to the RAM. Next, the nonvolatile memory areas A and B are erased. Finally, the data written in the RAM is collectively written in the nonvolatile memory area A.
- the 1-Mbyte nonvolatile memory area B becomes an erased memory area, and new data can be written to the nonvolatile memory area B.
- However, this garbage collection operation moves data from one nonvolatile memory area to another nonvolatile memory area within the SSD.
- As a result, a data size larger than the write data size requested by the host controller is written inside the SSD. For this reason, the reliability and life of the SSD may be reduced.
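The garbage collection sequence described above can be sketched as follows. This is a minimal illustration in Python (with `None` marking invalid pages and the two 1-Mbyte areas modeled as short page lists), not the actual firmware logic of any SSD:

```python
# Sketch of the garbage collection sequence: read the valid pages of areas
# A and B into RAM, erase both areas, then write the collected pages back
# to area A, leaving area B fully erased for new data.

ERASED = None

def garbage_collect(area_a, area_b):
    """Compact the valid pages of A and B into A; return (A, B, pages moved)."""
    ram = [p for p in area_a + area_b if p is not None]  # 1. read valid data to RAM
    area_a = [ERASED] * len(area_a)                      # 2. erase area A
    area_b = [ERASED] * len(area_b)                      #    and area B
    for i, page in enumerate(ram):                       # 3. write back collectively to A
        area_a[i] = page
    moved = len(ram)  # pages rewritten beyond what the host originally requested
    return area_a, area_b, moved

a = ["a0", None, "a2", None]   # None marks invalid (stale) pages
b = [None, "b1", None, None]
a, b, moved = garbage_collect(a, b)
print(a, b, moved)  # ['a0', 'a2', 'b1', None] [None, None, None, None] 3
```

The `moved` count makes the write-amplification point concrete: three pages were rewritten internally even though the host issued no write request at all.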
- Moreover, a storage controller that controls many SSDs cannot observe the garbage collection or wear leveling that each SSD performs independently, and therefore cannot grasp the actual write data amount, including this internally increased amount. For this reason, the reliability and life of the storage system (storage device system) may be reduced.
- the storage device system includes a plurality of memory modules and a first control circuit that controls the plurality of memory modules.
- Each of the plurality of memory modules includes a plurality of nonvolatile memories and a second control circuit for controlling them.
- The second control circuit grasps the second write data amount actually written to the plurality of nonvolatile memories, and notifies the first control circuit of this second write data amount as appropriate.
- The first control circuit grasps, for each of the plurality of memory modules, a first write data amount associated with the write commands already issued to that memory module, and calculates, for each memory module, a first ratio between the first write data amount and the second write data amount. The first control circuit then selects, from the plurality of memory modules, the memory module to which the next write command is issued, reflecting this calculation result.
- FIG. 1 is a block diagram showing a schematic configuration example of an information processing system to which the storage device system according to Embodiment 1 of the present invention is applied.
- FIG. 2 is a block diagram illustrating a detailed configuration example of a storage controller in FIG. 1.
- FIG. 3 is a flowchart showing an example of a write operation performed by a storage module control circuit in FIGS. 1 and 2.
- FIG. 4A is a supplementary diagram of FIG. 3, showing an example of contents held in the table (MGTBL) held by the storage module control circuit.
- FIG. 4B is a supplementary diagram of FIG. 3, showing an example of other contents held in the table (MGTBL) held by the storage module control circuit.
- FIG. 5A is a diagram illustrating an example of contents held in the table (STGTBL) held by the storage module control circuit of FIG. 2.
- FIG. 5B is a diagram showing an example of other contents held in the table (STGTBL) held by the storage module control circuit of FIG. 2.
- FIG. 5C is a diagram showing an example of still other contents held in the table (STGTBL) held by the storage module control circuit of FIG. 2.
- FIG. 5D is a diagram showing an example of still other contents held in the table (STGTBL) held by the storage module control circuit of FIG. 2.
- It is a block diagram illustrating a detailed configuration example of a storage module in FIG. 1.
- It is a flowchart showing an example of the wear leveling method performed within that storage module.
- It is a flowchart showing an example of a read operation performed by a storage module control circuit and a storage control circuit in FIG. 1.
- FIG. 10 is a flowchart showing an example of a write operation performed by the storage module control circuit in FIGS. 1 and 2 in the storage system according to Embodiment 2 of the present invention.
- FIG. 11 is a supplementary diagram of FIG. 10, illustrating an example of contents held in a table (MGTBL) held by the storage module control circuit.
- FIG. 11 is a supplementary diagram of FIG. 10, illustrating an example of another held content of a table (MGTBL) held by the storage module control circuit.
- FIG. 12 is a flowchart showing an example of a write operation and a garbage collection management operation performed by the storage module control circuit in FIGS. 1 and 2 in the storage system according to Embodiment 3 of the present invention.
- FIG. 13 is a supplementary diagram of FIG. 12, showing an example of contents held in a table (MGTBL) held by the storage module control circuit of FIG. 2.
- FIG. 13 is a supplementary diagram of FIG. 12, illustrating an example of other held contents of the table (MGTBL) held by the storage module control circuit of FIG. 2.
- FIG. 13 is a supplementary diagram of FIG. 12, and is a diagram illustrating an example of still another held content of a table (MGTBL) possessed by the storage module control circuit of FIG. 2.
- FIG. 13 is a supplementary diagram of FIG. 12, showing an example of contents held in a table (GETBL) held by the storage module control circuit of FIG. 2.
- FIG. 13 is a supplementary diagram of FIG. 12, showing an example of other held contents of a table (GETBL) held by the storage module control circuit of FIG. 2.
- FIG. 13 is a supplementary diagram of FIG. 12, and is a diagram showing an example of still another held content of a table (GETBL) held by the storage module control circuit of FIG. 2.
- FIG. 15 is a flowchart showing an example of a write operation and an erase operation performed by the storage module control circuit in FIGS. 1 and 2 in the storage device system according to Embodiment 4 of the present invention.
- FIG. 16 is a supplementary diagram of FIG. 15.
- FIG. 16 is a supplementary diagram of FIG. 15, illustrating an example of contents held in a table (MGTBL) held by the storage module control circuit of FIG. 2.
- FIG. 16 is a supplementary diagram of FIG. 15, illustrating an example of other held contents of the table (MGTBL) held by the storage module control circuit of FIG. 2.
- FIG. 16 is a supplementary diagram of FIG. 15, and is a diagram showing an example of still another held content of a table (MGTBL) held by the storage module control circuit of FIG. 2.
- FIG. 17 is a flowchart illustrating an example of a write operation and a garbage collection management operation performed by the storage module control circuit in FIGS. 1 and 2 in the storage device system according to the fifth embodiment of the present invention.
- FIG. 18 is a supplementary diagram of FIG. 17.
- FIG. 18 is a supplementary diagram of FIG. 17, illustrating an example of contents held in a table (MGTBL) held by the storage module control circuit of FIG. 2.
- FIG. 18 is a supplementary diagram of FIG. 17, illustrating an example of other held contents of a table (MGTBL) held by the storage module control circuit of FIG. 2.
- FIG. 18 is a supplementary diagram of FIG. 17, and is a diagram showing an example of still another held content of a table (MGTBL) held by the storage module control circuit of FIG. 2.
- FIG. 11 is an explanatory diagram showing an example of a read operation performed by the storage module control circuit in FIGS. 1 and 2 in the storage device system according to the fifth embodiment of the present invention.
- It is an explanatory diagram showing an example of data retention characteristics of a storage module in a storage system according to Embodiment 2 of the present invention.
- It is a diagram showing an example in which the data retention characteristics are defined.
- FIG. 9 is a block diagram showing a detailed configuration example of a storage module in FIG. 1 in a storage device system according to Embodiment 6 of the present invention.
- FIG. 8 is a supplementary diagram for schematically explaining the flow of FIG. 7.
- FIG. 9 is a supplementary diagram for schematically explaining the flow of FIG. 8.
- In the following embodiments, the constituent elements are not necessarily indispensable unless otherwise specified or clearly indispensable in principle.
- Likewise, when referring to the shapes, positional relationships, and the like of the components, substantially similar or analogous shapes and the like are included, unless otherwise specified or clearly excluded in principle. The same applies to the numbers and the like above (including counts, numerical values, quantities, ranges, etc.).
- FIG. 1 is a block diagram showing a schematic configuration example of an information processing system to which the storage device system according to Embodiment 1 of the present invention is applied.
- the information processing system shown in FIG. 1 includes information processing devices SRV0 to SRVm and a storage system (storage device system) STRGSYS.
- the STRGSYS includes a plurality of storage modules (memory modules) STG0 to STGn + 4 and a storage controller STRGCONT that controls the STG0 to STGn + 4 in response to requests from the SRV0 to SRVm.
- the information processing devices SRV0 to SRVm and the storage controller STRGCONT are connected by an interface signal H2S_IF, and the storage controller STRGCONT and the storage modules STG0 to STGn + 4 are connected by an interface signal H2D_IF.
- As the interface signal method, for example, a serial interface signal method, a parallel interface signal method, an optical interface signal method, or the like can be used.
- Typical interface standards that can be used include SCSI (Small Computer System Interface), SATA (Serial Advanced Technology Attachment), SAS (Serial Attached SCSI), and FC (Fibre Channel).
- the information processing apparatuses SRV0 to SRVm are server apparatuses, for example, and are apparatuses that execute various applications on various OSs.
- Each of SRV0 to SRVm includes a processor unit CPU having a plurality of processor cores CPUCR0 to CPUCRk, a random access memory RAM, a backplane interface circuit BIF, and a storage system interface circuit STIF.
- the BIF is a circuit for performing communication between the SRV0 to SRVm via the backplane BP.
- The STIF issues various requests (write request (WQ), read request (RQ), etc.) to the storage system (storage device system) STRGSYS using the interface signal H2S_IF.
- Storage modules (memory modules) STG0 to STGn + 4 store data, applications, OSs, and the like necessary for the information processing apparatuses SRV0 to SRVm.
- STG0 to STGn+4 are not particularly limited, but correspond to, for example, SSDs (Solid State Drives).
- Each of STG0 to STGn + 4 has the same configuration.
- STG0 will be described as a representative example.
- Each includes nonvolatile memories NVM0 to NVM7, a random access memory RAMst, and a storage control circuit STCT0 that controls access to these memories.
- As the nonvolatile memories NVM0 to NVM7, a NAND flash memory, a NOR flash memory, a phase-change memory, a resistance-change memory, a magnetic memory, a ferroelectric memory, and the like can be applied.
- the information processing apparatuses SRV0 to SRVm issue a read request (RQ) of a necessary program or data to the storage controller STRGCONT when executing an application, for example.
- SRV0 to SRVm issue a write request (write command) (WQ) to the storage controller STRGCONT in order to store their execution results and data.
- The read request (RQ) includes a logical address (LAD), a data read command (RD), a sector count (SEC), and the like; the write request (WQ) includes a logical address (LAD), a data write command (WRT), a sector count (SEC), write data (WDATA), and the like.
- the storage controller STRGCONT includes internal memories including cache memories CM0 to CM3 and random access memories RAM0 to RAM3, host control circuits HCTL0 and HCTL1, and storage module control circuits DKCTL0 and DKCTL1.
- the STRGCONT includes a storage system interface circuit STIF (00, 01,..., M0, m1) and a storage module interface circuit SIFC (00, 01,..., N0, n1).
- The host control circuits HCTL0 and HCTL1 mainly handle communication with the information processing devices SRV0 to SRVm (for example, accepting various requests (read request (RQ), write request (WQ), etc.) from SRV0 to SRVm and returning responses to SRV0 to SRVm).
- the interface circuit STIF performs protocol conversion according to the communication method of the interface signal H2S_IF.
- The storage module control circuits DKCTL0 and DKCTL1 are circuits that mainly handle communication such as access requests to the storage modules STG0 to STGn+4 in response to the various requests from SRV0 to SRVm received by HCTL0 and HCTL1.
- the interface circuit SIFC performs protocol conversion corresponding to the communication method of the interface signal H2D_IF.
- The host control circuits HCTL0 and HCTL1 are provided in two systems, and one (for example, HCTL1) serves as a spare for when the other (for example, HCTL0) fails.
- Similarly, the storage module control circuits DKCTL0 and DKCTL1 are each provided as a spare for the other, improving fault tolerance.
- In this configuration, five storage modules (memory modules) (here, STG0 to STG4) are connected to one SIFC (for example, SIFC00). This number should be determined appropriately in consideration of, for example, the RAID (Redundant Arrays of Inexpensive Disks) specification (for example, data can be divided across four STGs and the parity written to the remaining one).
- The host control circuits HCTL0 and HCTL1 receive read requests (RQ), write requests (write commands) (WQ), and the like from the information processing apparatuses SRV0 to SRVm via the storage system interface circuits STIF (00, 01, ..., m0, m1).
- On a cache hit for a read request, the host control circuits HCTL0 and HCTL1 read the data (RDATA) from CM0 to CM3 and transfer it to the information processing devices SRV0 to SRVm through the interface circuit STIF (interface signal H2S_IF).
- On a cache miss, HCTL0 and HCTL1 notify the storage module control circuits DKCTL0 and DKCTL1.
- DKCTL0 and DKCTL1 then issue read access requests (RREQ) to the storage modules (memory modules) STG0 to STGn+4 via the interface circuit SIFC (00, 01, ..., n0, n1) (interface signal H2D_IF). Thereafter, DKCTL0 and DKCTL1 transfer the data (RDATA) read from STG0 to STGn+4 to CM0 to CM3, and further transfer it to SRV0 to SRVm through HCTL0, HCTL1, and STIF (H2S_IF).
- When the host control circuits HCTL0 and HCTL1 receive a write request (WQ) from the information processing apparatuses SRV0 to SRVm, they first determine whether the logical address (LAD) included in the write request (WQ) matches any of the logical addresses (LAD) entered in the cache memories CM0 to CM3. If they match, that is, on a hit, HCTL0 and HCTL1 write the write data (WDATA) included in the write request (WQ) to CM0 to CM3.
- On a miss, HCTL0 and HCTL1 first evict the write data (WDT) for the least recently used logical address (LAD) among CM0 to CM3 to the random access memories RAM0 to RAM3, and then write the write data (WDATA) to CM0 to CM3. Thereafter, HCTL0 and HCTL1 notify DKCTL0 and DKCTL1, and in response, DKCTL0 and DKCTL1 issue write access requests (write commands) (WREQ) to the storage modules (memory modules) STG0 to STGn+4.
- In response to the write access request (WREQ), the write data (WDT) transferred to RAM0 to RAM3 is written (written back) to STG0 to STGn+4 via the interface circuit SIFC (00, 01, ..., n0, n1) (interface signal H2D_IF).
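The write path above (a hit updates the cache in place; a miss evicts the least recently used entry to RAM, from which the write-back later occurs via WREQ) can be sketched as follows. The class and member names are hypothetical illustrations of the described flow, not the controller's actual implementation:

```python
# Sketch of the write-request cache flow: HCTL writes WDATA into the cache
# on a hit; on a miss it first moves the oldest-used LAD's data (WDT) to a
# RAM write-back queue, which DKCTL later flushes to the storage modules.

from collections import OrderedDict

class WriteCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # LAD -> WDATA, least recently used first
        self.ram_writeback = []       # (LAD, WDT) pairs queued for WREQ to the STGs

    def write(self, lad, wdata):
        if lad in self.entries:                 # hit: overwrite in the cache
            self.entries[lad] = wdata
            self.entries.move_to_end(lad)
            return "hit"
        if len(self.entries) >= self.capacity:  # miss on a full cache:
            old_lad, wdt = self.entries.popitem(last=False)  # evict oldest LAD
            self.ram_writeback.append((old_lad, wdt))        # to RAM for write-back
        self.entries[lad] = wdata
        return "miss"

cm = WriteCache(capacity=2)
print(cm.write(100, "d0"))  # miss
print(cm.write(200, "d1"))  # miss
print(cm.write(100, "d2"))  # hit
print(cm.write(300, "d3"))  # miss; LAD 200 is evicted to RAM for write-back
print(cm.ram_writeback)     # [(200, 'd1')]
```

The `ram_writeback` queue corresponds to the data in RAM0 to RAM3 awaiting a write access request (WREQ) to STG0 to STGn+4.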
- FIG. 2 is a block diagram showing a detailed configuration example of the storage controller STRGCONT in FIG.
- FIG. 2 shows a detailed configuration example of the host control circuits HCTL0 and HCTL1 and the storage module control circuits DKCTL0 and DKCTL1 of the storage controller STRGCONT in FIG. 1.
- Each of HCTL0 and HCTL1 includes a cache control circuit CMCTL, a read control circuit HRDCTL, a write control circuit HWTCTL, and a diagnostic circuit HDIAG.
- the cache control circuit CMCTL performs hit / miss hit determination of the cache memories CM0 to CM3, access control of the CM0 to CM3, and the like.
- When receiving a read request (RQ) from the information processing devices (hosts) SRV0 to SRVm, the read control circuit HRDCTL, together with CMCTL, performs the various processes accompanying the read request (RQ) as described in FIG. 1.
- When the write control circuit HWTCTL accepts a write request (WQ) from SRV0 to SRVm, it performs, together with CMCTL, the various processes associated with the write request (WQ) as described in FIG. 1.
- The diagnostic circuit HDIAG has a function of testing whether its various functions are normal, in order to improve the fault tolerance described in FIG. 1. For example, when HDIAG in the control circuit HCTL0 detects an abnormality, it notifies the control circuit HCTL1, thereby deactivating the regular HCTL0 and activating the spare HCTL1.
- Each of the control circuits DKCTL0 and DKCTL1 includes a write control circuit WTCTL, a read control circuit RDCTL, a data erase control circuit ERSCTL, a garbage collection control circuit GCCTL, a diagnostic circuit DIAG, and three tables MGTBL, STGTBL, and GETBL.
- DIAG has a function of testing its own internal function as in the case of the diagnostic circuit HDIAG in HCTL0 and HCTL1, and the activation / deactivation of DKCTL0 and DKCTL1 is switched according to the result.
- The various tables MGTBL, STGTBL, and GETBL are held, for example, in the random access memories RAM0 to RAM3, and the control circuits DKCTL0 and DKCTL1 manage them.
- Wstg indicates the data size that each storage module (memory module) STG0 to STGn + 4 actually wrote to its own nonvolatile memory NVM0 to NVM7.
- Wh2d indicates the data size for which the control circuits DKCTL0 and DKCTL1 have already issued write access requests (write commands) (WREQ) to each of the storage modules STG0 to STGn+4.
- The data size Wh2d is equal to the size of the write data (WDT) transferred to the random access memories RAM0 to RAM3 when the cache memories CM0 to CM3 miss, and is consequently equal to the size of the write data (WDATA) included in the write requests (WQ) from the information processing devices SRV0 to SRVm.
- NtW indicates the data size of the write data (WDT) included in the next write access request (write command) (WREQ).
- Ret indicates the data retention time of each storage module STG0 to STGn + 4.
- Esz indicates the number of physical blocks in the erased state included in each of the storage modules STG0 to STGn + 4.
- the write control circuit WTCTL uses these tables as appropriate to issue a write access request (write command) (WREQ) to STG0 to STGn + 4.
- Similarly, using these tables as appropriate, the read control circuit RDCTL issues read access requests (RREQ), the data erase control circuit ERSCTL issues erase access requests, and the garbage collection control circuit GCCTL issues garbage collection requests.
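The per-module table fields described above (Wstg, Wh2d, NtW, Ret, Esz) might be modeled as follows. The structure is an illustrative sketch, assuming MGTBL keeps one such entry per storage module; the field names follow the text, but the class itself is not from the specification:

```python
# Sketch of a per-module MGTBL entry with the fields named in the text.

from dataclasses import dataclass

@dataclass
class MgtblEntry:
    wstg: int   # data size actually written inside the module (to NVM0-NVM7)
    wh2d: int   # data size of WREQs already issued by DKCTL to the module
    ntw: int    # data size of the write data in the next WREQ
    ret: float  # data retention time of the module
    esz: int    # number of erased physical blocks in the module

    def waf(self) -> float:
        """Write amplification: how much larger Wstg is than Wh2d."""
        return self.wstg / self.wh2d

e = MgtblEntry(wstg=40, wh2d=10, ntw=10, ret=1.0, esz=8)
print(e.waf())  # 4.0
```

The `waf()` ratio is the quantity the write control circuit WTCTL uses when comparing modules, as described for Embodiment 1 below.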
- The nonvolatile memories NVM0 to NVM7 used in the storage modules (memory modules) STG0 to STGn+4 have a limit on the number of rewrites, and when writes concentrate on a specific address in a specific nonvolatile memory, the life of STG0 to STGn+4 is shortened.
- Therefore, the storage control circuit STCT manages the number of writes to NVM0 to NVM7 in units of a certain data size, and performs writes to NVM0 to NVM7 so as to equalize the write counts.
- In doing so, the control circuit STCT may move data from one nonvolatile memory to another using, for example, the method called static wear leveling described in Patent Document 1 and the like. In this case, a size larger than the size of the write data (WDT) requested by the control circuits DKCTL0 and DKCTL1 is written to the nonvolatile memory.
- In addition, the control circuit STCT may perform a garbage collection operation. In this operation, the STCT, for example, first reads the currently valid data from the already written 1-Mbyte nonvolatile memory areas "A" and "B", collects these data, and writes them to the random access memory RAMst.
- Next, the nonvolatile memory areas "A" and "B" are erased. Finally, the data written to the RAMst is collectively written to the nonvolatile memory area "A". As a result, the 1-Mbyte nonvolatile memory area "B" becomes an erased memory area, and new data can be written to it. In this case, however, data is moved from one nonvolatile memory area to another, so a size larger than the size of the write data (WDT) requested by the control circuits DKCTL0 and DKCTL1 is written to the nonvolatile memory.
- Furthermore, each storage module STG0 to STGn+4 may write, together with the write data (WDT) requested by the control circuits DKCTL0 and DKCTL1, an error detection/correction code such as parity data generated for that write data. In this case as well, a size larger than the size of the requested write data (WDT) is written.
- As described above, the data size Wstg actually written to the nonvolatile memories NVM0 to NVM7 can become larger than the data size Wh2d of the write data (WDT) that the control circuits DKCTL0 and DKCTL1 requested of STG0 to STGn+4.
- How much the actually written data size Wstg increases relative to the data size Wh2d accompanying the write access requests (write commands) (WREQ) varies depending on, for example, the locality of the addresses in the write requests (WQ) (write access requests (WREQ)). Since each address is assigned to one of STG0 to STGn+4 as appropriate, the extent to which Wstg increases relative to Wh2d may differ greatly between STG0 to STGn+4.
- Therefore, the storage controller STRGCONT is provided with a function of predicting, based on the write data amount of the write access request (WREQ) currently being processed, the write data amount that will actually be written; selecting the storage module whose predicted write data amount is smallest; and issuing the write access request (WREQ) to it. In other words, it is provided with a function of performing dynamic wear leveling between storage modules.
- Unlike so-called static wear leveling, which performs leveling (wear leveling) by appropriately moving data that has already been written, dynamic wear leveling performs leveling by appropriately selecting the write destination at the time of writing data.
- FIG. 3 is a flowchart showing an example of a write operation performed by the storage module control circuit in FIGS. 1 and 2.
- FIGS. 4A and 4B are supplementary diagrams of FIG. 3, illustrating examples of contents held in the table (MGTBL) held by the storage module control circuit.
- FIGS. 5A, 5B, 5C, and 5D are diagrams showing examples of contents held in the table (STGTBL) held by the storage module control circuit of FIG. 2.
- ntW is the data size of the write data (WDT) included in the write access request (write command) (WREQ) that is the current processing target.
- Wh2d is the data size of the write data (WDT) accompanying the write access request (WREQ) already issued to the predetermined storage module by the control circuit DKCTL0.
- Wstg is the data size that each storage module STG0 to STG3 actually writes to its own nonvolatile memory.
- Since the control circuit STCT in each storage module STG in FIG. 1 actually writes the data to the nonvolatile memories NVM0 to NVM7, STCT can recognize the data size Wstg, and DKCTL0 can recognize it by acquiring it from STCT.
- the values of Wh2d and Wstg are updated cumulatively, for example, from the first use of the storage module until the end of its life.
- Note that STCT increases the value of Wstg even independently of write access requests (write commands) (WREQ) from DKCTL0 (for example, when a garbage collection operation (plus wear leveling operation) as described later in FIG. 8 is performed, or when a static wear leveling operation is performed).
- The static wear leveling operation is not particularly limited; one example is a method that reduces the difference between the number of writes at physical addresses whose data is valid and the number of writes at physical addresses whose data is invalid.
- A physical address whose data is valid remains valid unless a subsequent write command targets it, so its write count does not increase; a physical address whose data is invalid, on the other hand, becomes a write-destination candidate after being erased, so its write count increases. Consequently, the difference between the write counts of valid physical addresses and invalid physical addresses may grow.
- in the table (STGTBL), the logical address (LAD) included in the write access request (write command) (WREQ), the number (STG No) of the storage module STG0 to STG3 storing the data of that LAD, and valid information VLD indicating whether the LAD data is valid or invalid are stored.
- the logical address (LAD) included in the write access request (WREQ) is equal to, or is uniquely determined from, the logical address included in the write request (write command) (WQ) from the information processing devices (hosts) SRV0 to SRVm. If the valid information VLD is “1”, the LAD data is valid; if it is “0”, it is invalid.
- the control circuit DKCTL0 issues a notification request for the data sizes Wstg and Wh2d to the storage modules STG0 to STG3 through the interface signal H2D_IF as necessary (for example, immediately after turning on the power to the storage controller STRGCONT).
- in response, the storage modules STG0 to STG3 return the data sizes Wstg (Wstg0 (40), Wstg1 (15), Wstg2 (10), Wstg3 (20)) and Wh2d (Wh2d0 (10), Wh2d1 (5), Wh2d2 (5), Wh2d3 (10)) to DKCTL0 (FIG. 3: Step 1, FIG. 4A).
- taking Wstg0 (40) as an example, the number “40” in parentheses represents the corresponding information (in this case, the data size).
- DKCTL0 sets these data sizes Wstg and Wh2d in the table MGTBL and updates them (FIG. 3: Step 2, FIG. 4A).
- assume that DKCTL0 currently receives a write access request (write command) (WREQ [1]) including write data (WDT [1]) of data size ntW1 (here, 10) and a logical address (here, 123).
- in this case, the write control circuit WTCTL in DKCTL0 uses Wstg and Wh2d in MGTBL to obtain the write data size ratio WAF for each of STG0 to STG3, and in addition obtains the expected write data size eWd using ntW1; these values are set in MGTBL and updated (FIG. 3: Step 3, FIG. 4A).
- the write control circuit WTCTL in the control circuit DKCTL0 selects the storage module (here, STG2) in which the expected write data size eWd is the minimum value (Min.) (FIG. 3: Step 4, FIG. 4A).
- then, a write access request (write command) (WREQ [1]) is issued to the selected storage module STG2 (FIG. 4A).
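- the selection in Steps 3 and 4 can be sketched as follows. The concrete formulas are assumptions, not stated in the text: taking WAF = Wstg / Wh2d and eWd = Wstg + ntW × WAF reproduces the selections shown in FIGS. 4A and 4B (STG2 for WREQ [1], then STG3 for WREQ [2]).

```python
# Sketch of Steps 3-4 of FIG. 3. The formulas below are assumptions:
# WAF = Wstg / Wh2d (write data size ratio) and
# eWd = Wstg + ntW * WAF (expected write data size).
def select_storage_module(wstg, wh2d, ntw):
    """Return the index of the storage module with the minimum eWd."""
    ewd = [s + ntw * (s / h) for s, h in zip(wstg, wh2d)]
    return min(range(len(ewd)), key=ewd.__getitem__)

# FIG. 4A values: Wstg = 40/15/10/20, Wh2d = 10/5/5/10, ntW1 = 10
print(select_storage_module([40, 15, 10, 20], [10, 5, 5, 10], 10))  # -> 2 (STG2)
```

With the FIG. 4B values after WREQ [1] (Wstg2 = 30, Wh2d2 = 15), the same function selects STG3, matching the second selection in the text.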
- when the write operation is completed, the storage module STG2 notifies the control circuit DKCTL0 of the data size Wstg (Wstg2 (30)) (FIG. 3: Step 8, FIG. 4B), and the flow returns to Step 2.
- here, owing to the garbage collection operation or the static wear leveling operation performed along with the write, a data size of “20” is actually written for WREQ [1] (whose host data size is “10”); as a result, a total value of “30” is notified as Wstg2.
- the write control circuit WTCTL of the control circuit DKCTL0 sets and updates the notified data size Wstg (Wstg2 (30)) in the table MGTBL (FIG. 3: Step 2, FIG. 4B).
- assume that DKCTL0 currently receives a write access request (write command) (WREQ [2]) including write data (WDT [2]) of data size ntW2 (here, 10) and a logical address (here, 535).
- in this case, the WTCTL uses the data sizes Wstg and Wh2d in the MGTBL to obtain the write data size ratio WAF for each of the storage modules STG0 to STG3, and in addition obtains the expected write data size eWd using ntW2; these values are set in MGTBL and updated (FIG. 3: Step 3, FIG. 4B).
- the write control circuit WTCTL in the control circuit DKCTL0 selects the storage module (here, STG3) whose expected write data size eWd is the minimum value (Min.) (FIG. 3: Step 4, FIG. 4B).
- STG3 the storage module whose expected write data size eWd is the minimum value (Min.)
- Step 4B the minimum value
- then, a write access request (write command) (WREQ [2]) is issued to the selected storage module STG3 (FIG. 4B).
- along with this, Wh2d is updated from “10” to “20” for STG3 in MGTBL of FIG. 4B.
- in Step 6, in the storage module to which the write access request (WREQ) was issued (in this case, STG2 or STG3), the logical address LAD included in the write access request (WREQ) is converted into a physical address (PAD) of the nonvolatile memories NVM0 to NVM7, and the write data is written to the converted physical address (PAD).
- in Step 1, the control circuit DKCTL0 obtains the values of Wstg and Wh2d by issuing a notification request for the data sizes Wstg and Wh2d to the storage modules STG0 to STG3.
- however, the values of Wstg and Wh2d can also be updated as needed without depending on the notification request. For example, for Wh2d, DKCTL0 itself keeps track of the value and updates it as needed (Step 7); DKCTL0 saves the value of Wh2d to STG0 to STG3, for example, when the power is shut off, and reads it at the next power-on (Step 1). Also, for Wstg, STG0 to STG3 update their own values as needed and automatically transmit the value of Wstg to DKCTL0 when the write operation associated with the write access request (WREQ) is completed (Step 8), and DKCTL0 grasps the value of Wstg from this transmitted value (Step 2). In practice, STG0 to STG3 can hold a plurality of write access requests (WREQ) in an internal buffer, in which case, for example, the values of Wstg may be transmitted successively.
- as a modification, a flow is also possible in which, immediately before issuing the write access request (WREQ) in Step 5, the control circuit DKCTL0 issues a Wstg notification request to STG0 to STG3, STG0 to STG3 return the value of Wstg only when the notification request is received, and DKCTL0 then selects the storage module to which the WREQ is issued.
- it is also possible to make a flow in which STG0 to STG3 transmit the Wstg value to DKCTL0 not only on completion of a write accompanying a write access request (WREQ) but also on completion of a write accompanying a garbage collection operation or the like, and DKCTL0 sequentially updates the values in the table MGTBL in response.
- FIG. 6 is a block diagram illustrating a detailed configuration example of the storage module in FIG.
- the storage module STG shown in FIG. 6 is used as each of the storage modules STG0 to STGn+4 in FIG. 1.
- the storage module STG shown in FIG. 6 includes nonvolatile memories NVM0 to NVM7, a random access memory RAMst, a storage control circuit STCT0 that controls the NVM0 to NVM7 and RAMst, and a battery backup device BBU.
- NVM0 to NVM7 have, for example, the same configuration and performance.
- the RAMst is not particularly limited, but is a DRAM or the like, for example.
- the BBU has a large capacity inside and is a device for securing, for a certain period, a power source for saving the data in the RAMst to NVM0 to NVM7 when, for example, the power supply is unexpectedly shut off.
- immediately after the power is turned on, the storage module STG performs an initialization operation (so-called power-on reset) of the internal nonvolatile memories NVM0 to NVM7, the random access memory RAMst, and the control circuit STCT0. The STG also initializes the internal NVM0 to NVM7, RAMst, and STCT0 when it receives a reset signal from the control circuit DKCTL0.
- the STCT0 includes an interface circuit HOST_IF, buffers BUF0 to BUF3, an arbitration circuit ARB, an information processing circuit MNGER, a memory control circuit RAMC, and NVCT0 to NVCT7.
- the memory control circuit RAMC directly controls the random access memory RAMst, and the memory control circuits NVCT0 to NVCT7 directly control the nonvolatile memories NVM0 to NVM7, respectively.
- the random access memory RAMst holds an address conversion table (LPTBL), an erase count table (ERSTBL), a physical block table (PBKTBL), a physical address table (PADTBL), and various other information.
- the address conversion table (LPTBL) shows the correspondence between the logical address (LAD) and the physical addresses (PAD) of NVM0 to NVM7.
- in the erase count table (ERSTBL), the erase count for each physical block is shown.
- in the physical block table (PBKTBL), the state of each physical block (whether the physical block is in the erased state, partially written, or completely written) and the total number of invalid physical addresses (INVP) of each physical block are indicated.
- the physical address table (PADTBL) indicates whether the data of each physical address is valid or invalid, and whether each physical address is in the erased state.
- the physical block represents an erasing unit area, and each physical block is composed of a plurality of physical addresses serving as a writing unit area.
- various other information in the random access memory RAMst includes the number of physical blocks in the erased state (Esz) in the storage module STG, the data sizes Wstg, Wh2d, and the like.
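- as an illustrative sketch only (the field layout is hypothetical), the tables held in RAMst can be modeled as follows:

```python
# Illustrative model of the management tables in RAMst; the field
# layout is an assumption, only the table names come from the text.
from dataclasses import dataclass, field

@dataclass
class RamstTables:
    lptbl: dict = field(default_factory=dict)   # LPTBL: LAD -> PAD
    erstbl: dict = field(default_factory=dict)  # ERSTBL: physical block -> erase count
    pbktbl: dict = field(default_factory=dict)  # PBKTBL: block -> {"state": ..., "INVP": ...}
    padtbl: dict = field(default_factory=dict)  # PADTBL: PAD -> "valid"/"invalid"/"erased"
    esz: int = 0     # number of physical blocks in the erased state
    wstg: int = 0    # cumulative size actually written to NVM0-NVM7
    wh2d: int = 0    # cumulative size written on requests from DKCTL0

tables = RamstTables()
tables.lptbl[123] = 6            # e.g. LAD 123 currently mapped to PAD 6
tables.pbktbl[2] = {"state": "partially written", "INVP": 0}
```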
- FIG. 7 is a flowchart showing an example of a wear leveling method performed in the storage module of FIG.
- FIG. 7 shows an example of a write processing procedure performed in STG0 to STGn + 4 when a write access request (write command) (WREQ) is issued from the control circuit DKCTL0 in FIG. 1 to the storage modules STG0 to STGn + 4.
- An example of a wear leveling (i.e., dynamic wear leveling) method performed during writing is shown.
- the flow of FIG. 7 is mainly executed by the information processing circuit MNGER of FIG.
- the MNGER assigns one physical address (PAD) to each unit of 512-byte main data and 16-byte redundant data, and performs writing to the physical addresses (PAD) of the nonvolatile memories NVM0 to NVM7 in these units.
- FIG. 22 is a supplementary diagram for schematically explaining the flow of FIG. 7; when the flow of FIG. 7 is used, the operation proceeds as in the following example.
- in the example of FIG. 22, the physical block PBK [0] is composed of three physical addresses PAD [0] to PAD [2], and similarly the physical blocks PBK [1] and PBK [2] are each composed of three physical addresses. It is assumed that the erase counts of PBK [0], PBK [1], and PBK [2] are 10, 9, and 8, respectively, and that PBK [1] and PBK [2] are in the erased state (E). It is also assumed that write data WDT0, WDT1, and WDT2 of logical addresses LAD [0], LAD [1], and LAD [2] are written in PAD [0], PAD [1], and PAD [2], respectively, and that PAD [0], PAD [1], and PAD [2] (WDT0, WDT1, WDT2) are all valid “1”.
- assume that, in this state, a write access request (write command) (WREQ [3]) including the logical address LAD [0] and the write data WDT3 is input.
- in this case, the information processing circuit MNGER first changes the physical address PAD [0] corresponding to LAD [0] from valid “1” to invalid “0”, and then searches for a physical address to which the new WDT3 for the LAD [0] is to be written. At this time, the physical block having the smallest erase count value (PBK [2] in this case) is selected from among the physical blocks in the erased state or partially written. Then, WDT3 is written to the first physical address in the erased state in that physical block (PAD [6] in this case), and the PAD [6] is made valid “1”.
- when a further write for the same logical address subsequently occurs, MNGER similarly invalidates PAD [6] to “0”, selects a physical block (in this case, PBK [2]) in the same manner, writes the new write data WDT4 to the first physical address in the erased state in that physical block (PAD [7] in this example), and makes the PAD [7] valid “1”.
- the write access request (write command) (WREQ [3]) includes a logical address value (for example, LAD [0]), a data write command (WRT), and 512-byte write data (for example, WDT3).
- in the STCT0, the interface circuit HOST_IF in FIG. 6 extracts the clock information embedded in the write access request (WREQ [3]), converts the WREQ [3], which has been converted into serial data, into parallel data, stores it in the buffer BUF0, and transfers the information to the information processing circuit MNGER (Step 1).
- next, the information processing circuit MNGER reads, from the address conversion table (LPTBL), the current physical address value (for example, PAD [0]) stored at the entry of the logical address value (LAD [0]), together with the value of the valid flag PVLD corresponding to that physical address value (PAD [0]) (Step 2). Further, if the valid flag PVLD value is “1 (valid)”, it is set to “0 (invalid)”, and the address conversion table (LPTBL) and the physical address table (PADTBL) are updated accordingly (Step 3).
- next, the information processing circuit MNGER extracts the physical blocks in the erased state or partially written from the physical block table (PBKTBL) in the random access memory RAMst, and further selects, from the extracted physical blocks, the physical block with the smallest erase count value (for example, PBK [2]) by using the erase count table (ERSTBL) in RAMst. Then, from the physical addresses in the selected physical block to which data has not yet been written (in the erased state), MNGER selects a physical address (for example, PAD [6]) by using the physical address table (PADTBL) (Step 4).
- the information processing circuit MNGER writes 512-byte write data (WDT3) to the physical address (PAD [6]) (Step 5). Subsequently, MNGER updates the address translation table (LPTBL) and the physical address table (PADTBL) (Step 6). Further, MNGER recalculates the data size Wstg (and Wh2d) and stores it in the random access memory RAMst (Step 7). Finally, MNGER transmits the latest value of the data size Wstg to the control circuit DKCTL0 through the control circuit HOST_IF and the interface signal H2D_IF (Step 8).
- in this way, leveling is performed by sequentially writing data from the physical address having the smallest erase count value. Therefore, by using this together with the leveling between the storage modules STG as described in FIG. 3 and the like, further leveling of the entire storage system (storage device system) STRGSYS in FIG. 1 can be realized.
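- the dynamic wear leveling of FIG. 7 (invalidate the old physical address, then write to the first erased physical address of the candidate block with the smallest erase count) can be sketched as follows; the data structures are simplified assumptions, and the numbers in the test follow the FIG. 22 example:

```python
# Sketch of the FIG. 7 write flow (Steps 2-6), simplified: padtbl maps
# each PAD to "valid"/"invalid"/"erased", blocks maps block no. -> PAD list.
def wear_level_write(lad, lptbl, padtbl, erstbl, blocks):
    """Invalidate the old PAD of `lad` (Steps 2-3), write to the first
    erased PAD of the candidate block with the smallest erase count
    (Steps 4-5), and update the mapping (Step 6)."""
    old = lptbl.get(lad)
    if old is not None and padtbl[old] == "valid":
        padtbl[old] = "invalid"
    candidates = [b for b, pads in blocks.items()
                  if any(padtbl[p] == "erased" for p in pads)]
    blk = min(candidates, key=erstbl.__getitem__)
    pad = next(p for p in blocks[blk] if padtbl[p] == "erased")
    padtbl[pad] = "valid"
    lptbl[lad] = pad
    return pad
```

With the FIG. 22 state (erase counts 10/9/8, PBK [1] and PBK [2] erased), a write to LAD [0] lands on PAD [6] of PBK [2], and a further write to the same LAD lands on PAD [7], as in the text.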
- FIG. 8 is a flowchart showing an example of a garbage collection and wear leveling method performed in the storage module of FIG.
- FIG. 23 is a supplementary diagram for schematically explaining the flow of FIG. 8.
- the flow in FIG. 8 is mainly executed by the information processing circuit MNGER in FIG. If data is continuously written to the nonvolatile memories NVM0 to NVM7, the number of erased physical addresses (number of physical blocks) decreases. When the number of physical addresses (the number of physical blocks) in the erased state becomes 0, the storage module STG can no longer write. Therefore, a garbage collection operation is required to increase the number of erased physical addresses (the number of physical blocks). In the garbage collection operation, it is desirable to perform a wear leveling operation together. Therefore, it is beneficial to execute the flow of FIG.
- in FIG. 8, the information processing circuit MNGER in FIG. 6 first reads the number Esz of physical blocks in the erased state stored in the random access memory RAMst (Step 1). Next, Esz is compared with a predetermined threshold DERCth for the number of physical blocks in the erased state (Step 2). If Esz > DERCth, the process returns to Step 1; if Esz ≤ DERCth, the process proceeds to Step 3. In Step 3, MNGER reads the erase count table (ERSTBL), the physical block table (PBKTBL), and the physical address table (PADTBL) stored in RAMst.
- the number of times of erasing for each physical block is found from ERSTBL; the state of each physical block (erased, partially written, or all written) and the number of invalid physical addresses (INVP) for each physical block are found from PBKTBL; and whether each physical address is valid or invalid is found from PADTBL.
- next, the information processing circuit MNGER sequentially selects, from the completely written physical blocks in ascending order of the erase count value, physical blocks until the sum of their numbers of invalid physical addresses (INVP) becomes equal to or larger than the size of one physical block (Step 4).
- in the example of FIG. 23, the physical block PBK [0] includes three physical addresses PAD [0] to PAD [2], and the physical blocks PBK [1], PBK [2], and PBK [3] are composed of three physical addresses PAD [3] to PAD [5], PAD [6] to PAD [8], and PAD [9] to PAD [11], respectively. It is assumed that the erase counts of PBK [0], PBK [1], PBK [2], and PBK [3] are 7, 8, 9, and 10, respectively, and that PBK [3] is in the erased state (E). It is also assumed that write data WDT0 to WDT8 are written in PAD [0] to PAD [8], respectively, and that among them PAD [0], PAD [3], and PAD [6] (WDT0, WDT3, WDT6) are invalid “0”.
- in this case, the physical blocks PBK [0], PBK [1], and PBK [2] are selected in Step 4 of FIG. 8.
- the information processing circuit MNGER reads data of a valid physical address in the selected physical block and temporarily stores it in the random access memory RAMst.
- the information processing circuit MNGER writes the write data once stored in the random access memory RAMst back to the erased physical blocks in the nonvolatile memories NVM0 to NVM7.
- a garbage collection operation is realized.
- the writing back is performed by selecting in order from the physical block having the smallest erase count value among the physical blocks in the erased state.
- a wear leveling operation is realized in parallel with the garbage collection operation.
- in the example of FIG. 23, two physical blocks PBK [0] and PBK [1] are selected as write-back destinations in ascending order of erase count value, and the write data WDT1, WDT2, WDT4, WDT5, WDT7, and WDT8 held in the RAMst are written back to those physical blocks.
- as a result, one more physical block in the erased state (here, PBK [2]) is generated.
- finally, the information processing circuit MNGER updates the address conversion table (LPTBL), the erase count table (ERSTBL), the physical block table (PBKTBL), and the physical address table (PADTBL) in the random access memory RAMst.
- by the above, leveling of the number of erases in the storage module (memory module) STG can be realized together with the dynamic wear leveling operation as shown in FIG. 7.
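- the garbage collection plus wear leveling flow of FIG. 8 can be sketched as follows; representing each block as a list of (data, valid) entries or "E" markers is an assumption for illustration, and the test uses the FIG. 23 numbers:

```python
# Sketch of the FIG. 8 flow (Step 4 onward). Each block is a list of
# (data, valid) entries or "E" (erased) markers; this layout is assumed.
def garbage_collect(blocks, erstbl, block_size=3):
    # Step 4: select fully written blocks, ascending erase count, until
    # the summed invalid-address count INVP covers one whole block.
    written = sorted((b for b, pads in blocks.items()
                      if all(p != "E" for p in pads)),
                     key=erstbl.__getitem__)
    picked, invp = [], 0
    for b in written:
        picked.append(b)
        invp += sum(1 for _, valid in blocks[b] if not valid)
        if invp >= block_size:
            break
    # Save valid data to RAM, then erase the selected blocks.
    ram = [data for b in picked for data, valid in blocks[b] if valid]
    for b in picked:
        blocks[b] = ["E"] * block_size
        erstbl[b] += 1
    # Write back to erased blocks, smallest erase count first (wear leveling).
    for b in sorted((b for b, pads in blocks.items()
                     if all(p == "E" for p in pads)),
                    key=erstbl.__getitem__):
        while ram and "E" in blocks[b]:
            blocks[b][blocks[b].index("E")] = (ram.pop(0), True)
    return picked
```

On the FIG. 23 state this selects PBK [0] to PBK [2], writes the six valid data back into PBK [0] and PBK [1], and leaves PBK [2] and PBK [3] erased, i.e. one additional erased block, as described above.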
- however, this alone may still cause variations in the number of erasures between the storage modules STG.
- such variations can be handled by also using the leveling between storage modules described in FIG. 3, whereby leveling of the entire storage system (storage device system) STRGSYS can be realized.
- FIG. 9 is a flowchart showing an example of a read operation performed by the storage module control circuit and the storage control circuit in FIG. 1; it shows an operation example after DKCTL0 receives a read request (RQ) in response to a notification from HCTL0.
- the read control circuit RDCTL checks whether or not the read valid flag VLD is “1” (Step 2).
- if VLD is “0”, no valid data exists for the logical address value (LAD), and the RDCTL notifies the control circuit HCTL0 that an error has occurred (Step 10). If VLD is “1”, the RDCTL issues a read access request (RREQ) to the corresponding storage module (here, STG3) (Step 3).
- next, the interface circuit HOST_IF in the storage module STG3 extracts the clock information embedded in the read access request (RREQ) issued from the control circuit DKCTL0, converts the RREQ, which has been converted into serial data, into parallel data, and transfers the data to BUF0 and the information processing circuit MNGER (Step 4).
- next, in the MNGER, the logical address is converted into a physical address and the related information is read with reference to the address conversion table LPTBL.
- then, the MNGER inputs the converted address to the nonvolatile memories NVM0 to NVM7 through the arbitration circuit ARB and the memory control circuits NVCT0 to NVCT7, and the data (RDATA) stored in NVM0 to NVM7 is read (Step 7).
- the read data (RDATA) includes main data (DArea) and redundant data (RArea), and the redundant data (RArea) includes an ECC code (ECC). Therefore, the information processing circuit MNGER uses the ECC code (ECC) to check whether there is an error in the main data (DArea), corrects any error, and transmits the data (RDATA) to the control circuit DKCTL0 through the interface circuit HOST_IF (Step 8).
- DKCTL0 transfers the transmitted data (RDATA) to the cache memories CM0 to CM3, and further transmits them to the information processing devices SRV0 to SRVm through the control circuit HCTL0 and the interface signal H2S_IF (Step 9).
- as described above, by using the method of the first embodiment, the entire storage device system can realize leveling of the number of erasures (and, as a result, leveling of the number of writes), making it possible to improve reliability, extend the service life, and so on.
- in a nonvolatile memory, the data retention time (i.e., how long written data can be retained correctly) may decrease as the number of erases (or the number of writes) increases.
- the data retention time is not particularly limited, but is, for example, 10 years when the number of times of writing is small.
- the degree to which the data retention time depends on the number of erases (or the number of writes) varies depending on what kind of nonvolatile memory is used for the storage module. For example, it may differ depending on whether a flash memory or a phase change memory is used as the nonvolatile memory, or what memory cell structure is used even when a flash memory is used.
- FIG. 20A is an explanatory diagram showing an example of data retention characteristics of the storage module in the storage system according to Embodiment 2 of the present invention.
- FIG. 20A shows the dependency (function) between the physical block erase count (or write count) (nERS) and the data retention time Ret.
- RetSTG0 to RetSTG3 represent dependency relationships (functions) in the storage modules STG0 to STG3, respectively.
- each of the storage modules (memory modules) STG0 to STG3 holds in advance, in the nonvolatile memories NVM0 to NVM7, the dependency (function) between its own erase count (or write count) and data retention time as a mathematical expression, and immediately after power-on, the mathematical expression is transferred to the random access memory RAMst.
- the information processing circuit MNGER in the storage control circuit STCT obtains the maximum value of the number of erasures among the physical blocks in NVM0 to NVM7. Each time this maximum value changes, MNGER reads the mathematical expression from RAMst, calculates the data retention time Ret using the maximum erase count (nERS) as an argument, and stores the result in RAMst. Furthermore, each of STG0 to STG3 transfers the calculated data retention time Ret to the storage controller STRGCONT as necessary.
- FIG. 20B is a diagram showing an example in which the data retention characteristics of FIG. 20A are defined on a table.
- in FIG. 20A, the data retention time Ret is calculated by defining the dependency (function) in advance as a mathematical expression; instead, it is also possible to define in advance a table RetTBL as in FIG. 20B, which represents the mathematical expression discretely, and to calculate the data retention time Ret from it.
- FIG. 20B shows a table RetTBL held by one storage module STG. The RetTBL defines the dependency (function) between the physical block erase count (or write count) (nERS) and the data retention time Ret.
- each of the storage modules STG0 to STG3 holds in advance, in the nonvolatile memories NVM0 to NVM7, a table RetTBL corresponding to its own characteristics, and immediately after the power of STG0 to STG3 is turned on, the table RetTBL is transferred to the random access memory RAMst.
- in this case as well, the information processing circuit MNGER in the storage control circuit STCT obtains the maximum value of the number of erasures among the physical blocks in NVM0 to NVM7. Then, every time this maximum value changes, MNGER searches the table RetTBL and acquires the data retention time Ret. Furthermore, each of STG0 to STG3 transfers the acquired data retention time Ret to the storage controller STRGCONT as necessary.
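- a step-wise lookup of the data retention time Ret from a RetTBL-style table can be sketched as follows; the sample points below are invented for illustration, not values from FIG. 20B:

```python
# Step-wise lookup of Ret from a RetTBL-like table. The (nERS, Ret)
# sample points are hypothetical; a real module would hold its own
# characteristic curve in NVM and copy it to RAMst at power-on.
import bisect

RET_TBL = [(0, 10.0), (1000, 8.0), (5000, 5.0), (10000, 2.0)]  # (nERS, Ret in years)

def lookup_ret(ners):
    """Return Ret for the largest tabulated nERS not exceeding `ners`."""
    keys = [n for n, _ in RET_TBL]
    i = bisect.bisect_right(keys, ners) - 1
    return RET_TBL[max(i, 0)][1]
```

For example, any erase count between 1000 and 4999 maps to the 1000-erase entry, mimicking a discrete table search as described for RetTBL.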
- in this way, the data retention time Ret differs for each storage module, and further decreases, based on a predetermined dependency (function), every time the number of erases (or the number of writes) (nERS) increases.
- here, the data retention time Ret represents the remaining life of the corresponding storage module, and when it becomes smaller than a predetermined value, the life of the storage module is exhausted. Therefore, even if the number of times of erasure (or writing) is equalized between the storage modules using the method of the first embodiment, as can be seen from FIG. 20A, variations may still occur between the storage modules from the viewpoint of the data retention time Ret. As a result, the life of the entire storage device system may be shortened.
- a threshold value of data retention time (remaining life) Ret is provided, the threshold value is controlled as appropriate, and the data retention time Ret of each storage module is always kept above the threshold value.
- while satisfying this condition, the number of erasures between storage modules is leveled. For example, in FIG. 20A, suppose that the numbers of times of erasure (nERS) of the storage modules STG0 and STG3 are the same and that, in this state, the data retention time Ret of STG3 reaches the threshold value; in this case, it suffices to write/erase STG0 for the time being without performing writing/erasing on STG3.
- FIG. 10 is a flowchart showing an example of a write operation performed by the storage module control circuit in FIGS. 1 and 2 in the storage system according to the second embodiment of the present invention.
- FIG. 11A and FIG. 11B are supplementary diagrams of FIG. 10 and show an example of the contents held in the table MGTBL held by the storage module control circuit.
- the table MGTBL shown in FIGS. 11A and 11B holds, in addition to the data size ntW, the data sizes Wh2d and Wstg, the write data size ratio WAF, and the expected write data size eWd shown in FIG. 4A and the like, the data retention time Ret for each of STG0 to STG3.
- the control circuit DKCTL0 sends a notification request for the data retention time Ret and the data sizes Wstg and Wh2d to the storage modules STG0 to STG3 through the interface signal H2D_IF as necessary (for example, immediately after turning on the power to the storage controller STRGCONT). Issue.
- in response, each of the storage modules STG0 to STG3 returns its data retention time Ret (Ret0 (8), Ret1 (6), Ret2 (9), Ret3 (7)), together with the data sizes Wstg and Wh2d, to DKCTL0 (FIG. 10: Step 1).
- DKCTL0 sets and updates these data retention times Ret and data sizes Wstg and Wh2d in the table MGTBL (FIG. 10: Step 2, FIG. 11A).
- assume that DKCTL0 currently receives a write access request (write command) (WREQ [1]) including write data (WDT [1]) of data size ntW1 (here, 10) and a logical address (here, 123).
- in this case, the write control circuit WTCTL in DKCTL0 uses Wstg and Wh2d in MGTBL to obtain the write data size ratio WAF for each of STG0 to STG3, and in addition obtains the expected write data size eWd using ntW1; these values are set in MGTBL and updated (FIG. 10: Step 3, FIG. 11A).
- the write control circuit WTCTL compares the life information (dLife (here, 4.5)) of the storage system (storage device system) STRGSYS with the data retention times Ret (Ret0 (8), Ret1 (6), Ret2 (9), Ret3 (7)) of the storage modules STG0 to STG3. Then, the WTCTL selects the storage modules having a data retention time (remaining life) Ret equal to or longer than the storage system life information (dLife) (FIG. 10: Step 4, FIG. 11A).
- the write control circuit WTCTL further selects a storage module (here, STG2) in which the expected write data size eWd is the minimum value (Min.) From the storage modules selected in Step 4 (FIG. 10: Step 5, FIG. 11A).
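- Steps 4 and 5 of FIG. 10 (filter by Ret ≥ dLife, then take the minimum eWd) can be sketched as follows; the eWd values passed in are assumed to have been computed beforehand in Step 3:

```python
# Sketch of FIG. 10 Steps 4-5: keep only modules whose retention time
# Ret is at least dLife, then pick the minimum expected write size eWd.
# The eWd values used in the example are assumptions, not from the text.
def select_with_retention(ret, ewd, dlife):
    candidates = [i for i, r in enumerate(ret) if r >= dlife]  # Step 4
    if not candidates:
        return None  # no module clears the remaining-life threshold
    return min(candidates, key=ewd.__getitem__)                # Step 5

# FIG. 11A: Ret = 8/6/9/7 with dLife = 4.5; all modules pass Step 4,
# and the module with the minimum eWd (STG2) is selected in Step 5.
print(select_with_retention([8, 6, 9, 7], [80, 45, 30, 40], 4.5))  # -> 2 (STG2)
```

Raising dLife shrinks the candidate set, which is how a module nearing the end of its retention life is excluded from further writes.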
- if no storage module can be selected in Step 4, Steps 2 to 5 are executed again.
- then, a write access request (write command) (WREQ [1]) is issued to the selected storage module STG2 (FIG. 10: Step 6, FIG. 11A).
- when the write operation is completed, the storage module STG2 notifies the control circuit DKCTL0 of the data retention time Ret (8.9) and the data size Wstg (Wstg2 (30)) (FIG. 10: Step 9, FIG. 11B), and the flow returns to Step 2.
- here, owing to the garbage collection operation or the static wear leveling operation performed along with the write, a data size of “20” is actually written for WREQ [1]; as a result, a total value of “30” is notified as Wstg2. Further, as described in FIGS. 20A and 20B, the data retention time Ret (8.9) is calculated by the information processing circuit MNGER; according to the garbage collection operation, it is reduced from “9” in FIG. 11A to “8.9” in FIG. 11B.
- the write control circuit WTCTL sets and updates the notified data retention time Ret (8.9) and data size Wstg (Wstg2 (30)) in the table MGTBL (FIG. 10: Step 2, FIG. 11B).
- assume that DKCTL0 currently receives a write access request (write command) (WREQ [2]) including write data (WDT [2]) of data size ntW2 (here, 10) and a logical address (here, 535).
- in this case, the WTCTL uses the data sizes Wstg and Wh2d in the MGTBL to obtain the write data size ratio WAF for each of the storage modules STG0 to STG3, and in addition obtains the expected write data size eWd using ntW2; these values are set in MGTBL and updated (FIG. 10: Step 3, FIG. 11B).
- in the example described above, the life information (remaining life threshold value) dLife is not changed, but in practice it is variably controlled as appropriate.
- for example, based on the specifications of the storage system, the write control circuit WTCTL can set life information (remaining life threshold value) dLife that gradually decreases over time, reflecting the increase in the use period of the system (in other words, the decrease in the remaining life). In this case, the storage system can secure the minimum remaining life required in accordance with the usage period, making it possible to improve reliability and extend the life.
- alternatively, WTCTL may set dLife so that, for example, every time the data retention times Ret of most storage modules reach the life information (remaining life threshold) dLife, the value is gradually decreased. In this case, the data retention time Ret can be leveled between the storage modules, so that reliability can be improved and the life extended.
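- the adaptive threshold idea can be sketched as follows; the majority criterion and the decrement step are assumptions for illustration:

```python
# Sketch of an adaptively lowered dLife: once most modules' Ret values
# have fallen to the threshold, lower it so writes can resume on them.
# The 'majority' criterion and the decrement 'step' are assumed values.
def adjust_dlife(rets, dlife, step=0.5, majority=0.5):
    at_threshold = sum(1 for r in rets if r <= dlife)
    if at_threshold / len(rets) > majority:
        return dlife - step  # gradually decrease the threshold
    return dlife
```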
- the write control circuit WTCTL further selects a storage module (here, STG3) in which the expected write data size eWd is the minimum value (Min.) From the storage modules selected in Step 4 (FIG. 10: Step 5, FIG. 11B).
- if no storage module can be selected in Step 4, Steps 2 to 5 are executed again.
- then, a write access request (write command) (WREQ [2]) is issued to the selected storage module STG3 (FIG. 10: Step 6, FIG. 11B).
- along with this, Wh2d is updated from “10” to “20” for STG3 in MGTBL in FIG. 11B.
- in Step 7, in the storage module to which the write access request (WREQ) was issued (in this case, STG2 or STG3), the logical address LAD included in the write access request (WREQ) is converted into a physical address (PAD) of the nonvolatile memories NVM0 to NVM7, and the write data is written to the converted physical address (PAD).
- as described above, by using the method of the second embodiment, the entire storage device system can realize equalization of the number of times of erasure (and, as a result, equalization of the number of times of writing), making it possible to improve reliability, extend the service life, and so on. Furthermore, management of the data retention time (remaining life) makes it possible to further improve reliability, extend the life, and so on.
- in the third embodiment, the case will be described where the storage module control circuit DKCTL0 manages garbage collection in addition to executing the inter-storage-module wear leveling described in FIG. 3, FIG. 4A, and FIG. 4B of the first embodiment.
- FIG. 12 is a flowchart showing an example of a write operation and a garbage collection management operation performed by the storage module control circuit in FIGS. 1 and 2 in the storage device system according to the third embodiment of the present invention.
- FIGS. 13A, 13B, and 13C are supplementary diagrams of FIG. 12, illustrating an example of contents held in the table MGTBL held by the storage module control circuit DKCTL0. FIGS. 14A, 14B, and 14C are likewise supplementary diagrams of FIG. 12, illustrating an example of contents held in the table GETBL held by the storage module control circuit DKCTL0.
- the table MGTBL shown in FIGS. 13A, 13B, and 13C includes the data size ntW shown in FIG. 4 and the like, the data sizes Wh2d and Wstg for each storage module STG0 to STG3, the write data size ratio WAF, and the expected write data. In addition to the size eWd, the number of erased physical blocks Esz for each STG0 to STG3 is held.
- The table GETBL shown in FIGS. 14A, 14B, and 14C holds, for each storage module number (STG No) of STG0 to STG3, a garbage collection execution state GCv and an erase execution state ERSv.
- GCv of “1” means that a garbage collection operation is being executed, and GCv of “0” means that it is not.
- Similarly, ERSv of “1” means that an erase operation is being executed, and ERSv of “0” means that it is not.
- When necessary (for example, immediately after power-on of the storage controller STRGCONT), the control circuit DKCTL0 issues, to the storage modules STG0 to STG3 via the interface signal H2D_IF, a notification request for the erased-state physical block count Esz and the data sizes Wstg and Wh2d.
- In response, each of the storage modules STG0 to STG3 returns to DKCTL0 its erased-state physical block count Esz (Esz0 (90), Esz1 (100), Esz2 (140), Esz3 (120)) together with Wstg (Wstg0 (40), Wstg1 (15), Wstg2 (10), Wstg3 (20)) and Wh2d (Wh2d0 (10), Wh2d1 (5), Wh2d2 (5), Wh2d3 (10)).
- Next, the control circuit DKCTL0 issues a confirmation request for garbage collection and erase operation status to the storage modules STG0 to STG3 through the interface signal H2D_IF as necessary.
- In response, each of the storage modules STG0 to STG3 returns its garbage collection status Gst (Gst0 (0), Gst1 (0), Gst2 (0), Gst3 (0)) and erase status Est (Est0 (0), Est1 (0), Est2 (0), Est3 (0)) to DKCTL0 (FIG. 12: Step 1, FIG. 13A).
- Next, the control circuit DKCTL0 sets and updates the erased-state physical block count Esz and the data sizes Wstg and Wh2d in the table MGTBL. Furthermore, DKCTL0 sets these garbage collection statuses Gst and erase statuses Est into the garbage collection execution state GCv and the erase execution state ERSv of the table GETBL and updates them (FIG. 12: Step 2, FIG. 13A, FIG. 14A).
- Here, assume that DKCTL0 currently holds a write access request (write command) (WREQ[1]) including write data (WDT[1]) of data size ntW1 (here, 10) and a logical address (here, 123).
- Next, the write control circuit WTCTL in DKCTL0 uses Wstg and Wh2d in MGTBL to obtain the write data size ratio WAF for each of STG0 to STG3 and, in addition, obtains the expected write data size eWd using ntW1, then sets and updates these in MGTBL (FIG. 12: Step 3, FIG. 13A).
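The passage above does not spell out the formulas for WAF and eWd, but the numeric example in FIG. 13A is consistent with WAF = Wstg / Wh2d (the module's write amplification) and eWd = (Wh2d + ntW) × WAF. A minimal sketch under that assumption:

```python
def expected_write_size(wstg, wh2d, ntw):
    # Assumed model: write amplification WAF = Wstg / Wh2d, and the
    # expected write data size eWd if the next request of size ntw
    # went to this module is (Wh2d + ntw) scaled by WAF.
    waf = wstg / wh2d
    return (wh2d + ntw) * waf

# (Wstg, Wh2d) values from FIG. 13A, with ntW1 = 10:
modules = {"STG0": (40, 10), "STG1": (15, 5), "STG2": (10, 5), "STG3": (20, 10)}
ewd = {stg: expected_write_size(w, h, 10) for stg, (w, h) in modules.items()}
target = min(ewd, key=ewd.get)
# eWd: STG0=80, STG1=45, STG2=30, STG3=40, so STG2 has the minimum
```

With these numbers the minimum is STG2, which matches the module selected for WREQ[1] in Step 6.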
- Next, the garbage collection control circuit GCCTL reads the table GETBL (FIG. 14A) and selects the storage modules that are not currently executing a garbage collection operation or an erase operation (here, all of STG0 to STG3) as candidates (FIG. 12: Step 4, FIG. 13A). Subsequently, GCCTL compares a preset threshold ESZth of the erased-state physical block count (here, 91) with the erased-state physical block count Esz of each storage module selected in Step 4 (Esz0 (90), Esz1 (100), Esz2 (140), Esz3 (120)).
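Steps 4 and 5 amount to partitioning the idle modules by the threshold ESZth; a sketch with the FIG. 13A values (variable names are illustrative):

```python
ESZ_TH = 91                      # preset threshold of erased-state blocks
esz = {"STG0": 90, "STG1": 100, "STG2": 140, "STG3": 120}
busy = {s: False for s in esz}   # GCv/ERSv flags from GETBL; all idle here

idle = [s for s in esz if not busy[s]]
gc_targets = [s for s in idle if esz[s] < ESZ_TH]    # garbage collection
rw_targets = [s for s in idle if esz[s] >= ESZ_TH]   # write/read targets
# gc_targets -> ["STG0"]; rw_targets -> ["STG1", "STG2", "STG3"]
```

Only STG0 falls below the threshold, so it becomes the garbage collection target while STG1 to STG3 remain available for writes and reads.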
- Next, the garbage collection control circuit GCCTL issues a garbage collection request (GCrq) to the storage module STG0 selected as the garbage collection target in Step 5, and updates the table GETBL. That is, as shown in FIG. 14B, the value of the garbage collection execution state GCv corresponding to storage module number (STG No) “0” is set to “1”, indicating that the storage module STG0 is currently executing the garbage collection operation (FIG. 12: Step 11, FIG. 13A, FIG. 14B).
- The storage module STG0 that has received the garbage collection request (GCrq) executes garbage collection using the processing of Step 3 to Step 8 of FIG. 8. That is, in FIG. 8, the storage module itself determines whether to execute the garbage collection operation by the processing of Step 1 and Step 2, whereas in the flow of FIG. 12 this determination is made by the garbage collection control circuit GCCTL in the control circuit DKCTL0.
- DKCTL0 can issue a write access request (WREQ), a read access request (RREQ), and the like to a storage module that is not executing a garbage collection operation, thereby making it possible to improve the efficiency of write operations, read operations, and the like.
- Next, the write control circuit WTCTL selects, from among the storage modules selected as write and read targets in Step 5, the storage module whose expected write data size eWd is the minimum (Min.) (in this case, STG2) (FIG. 12: Step 6, FIG. 13A). Next, if there is a notification of the data size Wstg from the storage modules STG0 to STG3, the process returns to Step 2, and Steps 2 to 6 are executed again. On the other hand, if there is no notification of Wstg from STG0 to STG3, the process proceeds to Step 8 (FIG. 12: Step 7).
- In Step 8, a write access request (write command) (WREQ[1]) is issued to the storage module STG2 selected in Step 6 (FIG. 13A).
- When the writing of the write data (WDT[1]) is completed, the storage module STG2 notifies the control circuit DKCTL0 of the erased-state physical block count Esz (Esz2 (139)) and the data size Wstg (Wstg2 (30)) (FIG. 12: Step 10, FIG. 13B), and the process returns to Step 2. Here, a total value of “30” (the previous Wstg2 of “10” plus the “20” actually written inside the module) is notified as Wstg2.
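One reading of this Step 10 notification, consistent with the change from Wstg2 = 10 and Esz2 = 140 (FIG. 13A) to 30 and 139 (FIG. 13B), is that the module adds the internally written amount (host data times its WAF) to Wstg and decrements Esz by the physical blocks consumed; a sketch under that assumption:

```python
def after_write(wstg, esz, host_bytes, waf, blocks_used=1):
    # Assumed module-side bookkeeping reported back to DKCTL0 in Step 10:
    # Wstg grows by host_bytes * waf, Esz shrinks by the blocks consumed.
    return wstg + host_bytes * waf, esz - blocks_used

wstg2, esz2 = after_write(wstg=10, esz=140, host_bytes=10, waf=2)
# -> (30, 139), the values STG2 notifies to DKCTL0
```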
- Next, assume that DKCTL0 currently holds a write access request (write command) (WREQ[2]) including write data (WDT[2]) of data size ntW2 (here, 10) and a logical address (here, 535).
- Next, the WTCTL uses the data sizes Wstg and Wh2d in MGTBL to obtain the write data size ratio WAF for each of the storage modules STG0 to STG3 and, in addition, obtains the expected write data size eWd using ntW2, then sets and updates these in MGTBL (FIG. 12: Step 3, FIG. 13B).
- Subsequently, the GCCTL compares the preset threshold ESZth (here, 91) with the erased-state physical block count Esz of each storage module and selects the storage modules STG1 to STG3 as the storage modules to be written and read (FIG. 12: Step 5, FIG. 13B).
- Next, the write control circuit WTCTL selects, from among the storage modules selected as write and read targets in Step 5, the storage module whose expected write data size eWd is the minimum (Min.) (in this case, STG3) (FIG. 12: Step 6, FIG. 13B). Next, if there is a notification of the data size Wstg from the storage modules STG0 to STG3, the process returns to Step 2, and Steps 2 to 6 are executed again. On the other hand, if there is no notification of Wstg from STG0 to STG3, the process proceeds to Step 8 (FIG. 12: Step 7).
- In Step 8, a write access request (write command) (WREQ[2]) is issued to the storage module STG3 selected in Step 6 (FIG. 13B).
- In Step 10, after the write is completed, the process returns to Step 2.
- In Step 8, in the storage modules to which the write access request (WREQ) was issued (in this case, STG2 and STG3), the logical address LAD included in the write access request (WREQ) is converted into a physical address (PAD) of the nonvolatile memories NVM0 to NVM7, and the write data is written to the converted physical address (PAD).
- Next, the case will be described in which the storage module STG0 that received the garbage collection request (GCrq) in Step 11 completes the garbage collection operation after the write operation of the write data (WDT[2]) by the storage module STG3 has completed.
- the storage module STG0 transmits the number of physical blocks Esz (Esz0 (100)) and the data size Wstg (Wstg0 (70)) after the garbage collection operation to the control circuit DKCTL0 (FIG. 12: Step 12, FIG. 13C).
- the garbage collection control circuit GCCTL in the control circuit DKCTL0 updates the table GETBL upon completion of the garbage collection operation.
- That is, the value of the garbage collection execution state GCv corresponding to storage module number (STG No) “0” is returned to “0”, indicating that the storage module STG0 is not executing the garbage collection operation.
- the write control circuit WTCTL in the control circuit DKCTL0 sets and updates the number of physical blocks in the erased state Esz (Esz0 (100)) and the data size Wstg (Wstg0 (70)) in the table MGTBL (FIG. 12: Step 2, FIG. 13C).
- WTCTL uses the data sizes Wstg and Wh2d of the storage module STG0 to obtain the write data size ratio WAF and the expected write data size eWd of the storage module STG0, and sets and updates the table MGTBL (FIG. 12: Step 3, FIG. 13C).
- As described above, leveling of the number of erase operations (and, as a result, the number of write operations) can be realized across the entire storage device system, making it possible to improve reliability, extend the service life, and so on. Furthermore, by managing the garbage collection operation with the storage controller STRGCONT, it is possible to know which storage module is executing a garbage collection operation and which storage modules can be written to and read from, so that garbage collection operations and write and read operations can be executed in parallel. As a result, the storage system can be sped up while being leveled.
- In the fourth embodiment, the case will be described in which the storage module control circuit DKCTL0 performs the erase operation in addition to executing the write operation (inter-storage-module wear leveling) described with FIG. 3, FIG. 4A, and FIG. 4B.
- FIG. 15 is a flowchart showing an example of a write operation and an erase operation performed by the storage module control circuit in FIGS. 1 and 2 in the storage system according to the fourth embodiment of the present invention.
- FIGS. 16A, 16B, and 16C are supplementary diagrams of FIG. 15, showing an example of the contents held in the table MGTBL of the storage module control circuit DKCTL0 of FIG. 2.
- First, the control circuit DKCTL0 issues, to the storage modules STG0 to STG3 via the interface signal H2D_IF, a notification request for the erased-state physical block count Esz and the data sizes Wstg and Wh2d.
- In response, each of the storage modules STG0 to STG3 returns to DKCTL0 its erased-state physical block count Esz (Esz0 (90), Esz1 (100), Esz2 (140), Esz3 (120)) together with Wstg (Wstg0 (40), Wstg1 (15), Wstg2 (10), Wstg3 (20)) and Wh2d (Wh2d0 (10), Wh2d1 (5), Wh2d2 (5), Wh2d3 (10)).
- Next, the control circuit DKCTL0 issues a confirmation request for garbage collection and erase operation status to the storage modules STG0 to STG3 through the interface signal H2D_IF as necessary.
- In response, each of the storage modules STG0 to STG3 returns its garbage collection status Gst (Gst0 (0), Gst1 (0), Gst2 (0), Gst3 (0)) and erase status Est (Est0 (0), Est1 (0), Est2 (0), Est3 (0)) to DKCTL0 (FIG. 15: Step 1, FIG. 16A).
- Next, the control circuit DKCTL0 sets and updates the erased-state physical block count Esz and the data sizes Wstg and Wh2d in the table MGTBL. Furthermore, DKCTL0 sets these garbage collection statuses Gst and erase statuses Est into the garbage collection execution state GCv and the erase execution state ERSv of the table GETBL and updates them (FIG. 15: Step 2, FIG. 16A, FIG. 14A).
- Here, assume that DKCTL0 currently holds a write access request (write command) (WREQ[1]) including write data (WDT[1]) of data size ntW1 (here, 10) and a logical address (here, 123).
- Next, the write control circuit WTCTL in DKCTL0 uses Wstg and Wh2d in MGTBL to obtain the write data size ratio WAF for each of STG0 to STG3 and, in addition, obtains the expected write data size eWd using ntW1, then sets and updates these in MGTBL (FIG. 15: Step 3, FIG. 16A).
- In Step 4, the storage controller STRGCONT checks whether there is a data erase request (EQ) from the information processing devices SRV0 to SRVm; if there is a data erase request (EQ), Step 5 is executed, and if not, Step 6 is executed. That is, for example, assume that a data erase request (EQ) for erasing the data at logical addresses LAD “1000 to 2279” is input from SRV0 to SRVm to the STRGCONT interface circuits STIF00 to STIFm1. This data erase request (EQ) is notified to the data erasure control circuit ERSCTL of the control circuit DKCTL0 through the control circuit HCTL0 (FIG. 15: Step 4), and then Step 5 is executed.
- In Step 5, the data erasure control circuit ERSCTL determines, from the data erase request (EQ), the storage module that holds the logical addresses LAD to be erased (here, STG0) as the erase operation target, and a data erasure access request (ERSrq) is issued to it.
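The dispatch of the erase request can be pictured as a lookup of the affected logical-address range against an STGTBL-style mapping; the ranges below are hypothetical except for the LAD 1000 to 2279 example from the text:

```python
# Hypothetical LAD-range-to-module mapping in the spirit of STGTBL.
LAD_MAP = [(0, 999, "STG1"), (1000, 2279, "STG0"), (2280, 3999, "STG2")]

def erase_targets(lo, hi):
    # Modules holding any LAD in [lo, hi]; a data erasure access
    # request (ERSrq) would be issued to each of them.
    return [stg for a, b, stg in LAD_MAP if lo <= b and hi >= a]

targets = erase_targets(1000, 2279)   # -> ["STG0"]
```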
- In Step 6, the write control circuit WTCTL selects the storage module having the minimum (Min.) expected write data size eWd (in this case, STG2) from among the storage modules STG1 to STG3, that is, those other than the erase operation target of Step 5 (FIG. 15: Step 6, FIG. 16A).
- Next, if there is a notification of the data size Wstg from the storage modules STG0 to STG3, the process returns to Step 2, and Steps 2 to 4 and Step 6 are executed again. On the other hand, if there is no notification of Wstg from STG0 to STG3, the process proceeds to Step 8 (FIG. 15: Step 7).
- In Step 8, a write access request (write command) (WREQ[1]) is issued to the storage module STG2 selected in Step 6 (FIG. 16A).
- When the writing of the write data (WDT[1]) is completed, the storage module STG2 notifies the control circuit DKCTL0 of the erased-state physical block count Esz (Esz2 (139)) and the data size Wstg (Wstg2 (30)) (FIG. 15: Step 10, FIG. 16B), and the process returns to Step 2. Here, a total value of “30” (the previous Wstg2 of “10” plus the “20” actually written inside the module) is notified as Wstg2.
- Next, assume that DKCTL0 currently holds a write access request (write command) (WREQ[2]) including write data (WDT[2]) of data size ntW2 (here, 10) and a logical address (here, 535).
- Next, the WTCTL uses the data sizes Wstg and Wh2d in MGTBL to obtain the write data size ratio WAF for each of the storage modules STG0 to STG3 and, in addition, obtains the expected write data size eWd using ntW2, then sets and updates these in MGTBL (FIG. 15: Step 3, FIG. 16B).
- In Step 4, the storage controller STRGCONT checks whether there is a data erase request (EQ) from the information processing devices SRV0 to SRVm; if there is, Step 5 is executed, and if not, Step 6 is executed. In this case, since there is no data erase request (EQ), Step 6 is executed. Subsequently, the write control circuit WTCTL selects the storage module having the minimum (Min.) expected write data size eWd (in this case, STG3) from among the storage modules STG1 to STG3 other than the erase operation target of Step 5 (FIG. 15: Step 6, FIG. 16B).
- Next, if there is a notification of the data size Wstg from the storage modules STG0 to STG3, the process returns to Step 2, and Steps 2 to 4 and Step 6 are executed again. On the other hand, if there is no notification of Wstg from STG0 to STG3, the process proceeds to Step 8 (FIG. 15: Step 7).
- In Step 8, a write access request (write command) (WREQ[2]) is issued to the storage module STG3 selected in Step 6 (FIG. 16B).
- In Step 10, after the write is completed, the process returns to Step 2.
- In Step 8, in the storage modules to which the write access request (WREQ) was issued (in this case, STG2 and STG3), the logical address LAD included in the write access request (WREQ) is converted into a physical address (PAD) of the nonvolatile memories NVM0 to NVM7, and the write data is written to the converted physical address (PAD).
- Next, the case will be described in which the storage module STG0 that received the data erasure access request (ERSrq) in Step 11 completes the erase operation after the write operation of the write data (WDT[2]) by the storage module STG3 has completed.
- the storage module STG0 transmits the number of physical blocks Esz (Esz0 (100)) and the data size Wstg (Wstg0 (40)) after the erase operation to the control circuit DKCTL0 (FIG. 15: Step 12, FIG. 16C).
- Next, the data erasure control circuit ERSCTL in the control circuit DKCTL0 updates the table GETBL in response to the completion of the erase operation. That is, as shown in FIG. 14A, the value of the erase execution state ERSv corresponding to storage module number (STG No) “0” is returned to “0”, indicating that the storage module STG0 is not executing an erase operation.
- the write control circuit WTCTL in the control circuit DKCTL0 sets the number of erased physical blocks Esz (Esz0 (100)) and the data size Wstg (Wstg0 (40)) in the table MGTBL and updates them (FIG. 15: Step 2, FIG. 16C).
- WTCTL uses the data sizes Wstg and Wh2d of the storage module STG0 to determine the write data size ratio WAF and the expected write data size eWd of the storage module STG0, and sets and updates the table MGTBL (FIG. 15: Step 3, FIG. 16C).
- As described above, leveling of the number of erase operations (and, as a result, the number of write operations) can be realized across the entire storage device system, making it possible to improve reliability, extend the service life, and so on. Furthermore, by controlling the erase operation in the storage controller STRGCONT, it is possible to grasp which storage module is executing an erase operation and which storage modules can be written to and read from, so that erase operations and write and read operations can be executed in parallel. As a result, the storage system can be sped up while being leveled.
- the storage controller STRGCONT performs management of the garbage collection operation and control of the erasing operation for each storage module STG.
- For example, the storage control circuit STCT0 of FIG. 6 may manage the garbage collection operation and control the erase operation for each of the nonvolatile memories NVM0 to NVM7. That is, the random access memory RAMst of FIG. 6 may hold information (GCnvm) indicating which of NVM0 to NVM7 is executing a garbage collection operation and information (ERSnvm) indicating which of NVM0 to NVM7 is executing an erase operation.
- FIG. 17 is a flowchart showing an example of a write operation and a garbage collection management operation performed by the storage module control circuit in FIGS. 1 and 2 in the storage system according to the fifth embodiment of the present invention.
- FIGS. 18A, 18B, and 18C are supplementary diagrams of FIG. 17, illustrating an example of the contents held in the table MGTBL of the storage module control circuit DKCTL0 of FIG. 2.
- In Step 1a of FIG. 17, the control circuit DKCTL0 issues, as needed (for example, immediately after power-on of the storage controller STRGCONT), a notification request for the erased-state physical block count Esz and the data sizes Wstg and Wh2d to the storage modules STG0 to STG3 via the interface signal H2D_IF.
- In response, each of the storage modules STG0 to STG3 returns to DKCTL0 its erased-state physical block count Esz (Esz0 (90), Esz1 (100), Esz2 (140), Esz3 (120)) together with Wstg (Wstg0 (40), Wstg1 (15), Wstg2 (10), Wstg3 (20)) and Wh2d (Wh2d0 (10), Wh2d1 (5), Wh2d2 (5), Wh2d3 (10)) (FIG. 18A).
- In Step 1b of FIG. 17, the control circuit DKCTL0 issues a confirmation request for garbage collection and erase operation status to the storage modules STG0 to STG3 through the interface signal H2D_IF as necessary.
- In response, each of the storage modules STG0 to STG3 returns its garbage collection status Gst (Gst0 (0), Gst1 (0), Gst2 (0), Gst3 (0)) and erase status Est (Est0 (0), Est1 (0), Est2 (0), Est3 (0)) to DKCTL0 (FIG. 18A).
- In addition, the write control circuit WTCTL first divides the write data (WDT[A]) of data size ntW_A (here, 20) held in the random access memories RAM0 to RAM3. That is, the data is divided into write data (WDT[A1]) of data size ntW_A1 (here, 10) and write data (WDT[A2]) of data size ntW_A2 (here, 10).
- WTCTL generates parity data (PA12) of data size ntW_PA12 (here, 10) from the divided write data (WDT [A1]) and write data (WDT [A2]) (FIG. 18A).
- Then, WTCTL generates a write access request (WREQ[A1]) including WDT[A1] and a predetermined logical address (here, 223), a write access request (WREQ[A2]) including WDT[A2] and a predetermined logical address (here, 224), and a write access request (WREQ[PA12]) including the parity data PA12 and a predetermined logical address (here, 225).
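The split-plus-parity step can be sketched as follows; the text only says “parity data”, so the XOR parity used here (which allows either half to be rebuilt from the other half and the parity) is an assumption:

```python
def split_and_parity(data: bytes):
    # Split a payload (here 20 units, like ntW_A) into two halves
    # (WDT[A1], WDT[A2]) and compute XOR parity (PA12) over them.
    half = len(data) // 2
    a1, a2 = data[:half], data[half:]
    parity = bytes(x ^ y for x, y in zip(a1, a2))
    return a1, a2, parity

a1, a2, pa12 = split_and_parity(bytes(range(20)))
# Either half is recoverable: A1 == A2 XOR PA12
recovered_a1 = bytes(x ^ y for x, y in zip(a2, pa12))
```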
- control circuit DKCTL0 sets the number of erased physical blocks Esz and the data sizes Wstg and Wh2d associated with Step 1a in the table MGTBL and updates them. Furthermore, DKCTL0 sets and updates the garbage collection status Gst and the erase status Est associated with Step 1a in the garbage collection execution state GCv and the erase execution state ERSv of the table GETBL (FIG. 17: Step 2, FIG. 18A, FIG. 14A).
- Next, the garbage collection control circuit GCCTL reads the table GETBL (FIG. 14A) and selects the storage modules that are not currently executing a garbage collection operation or an erase operation (here, all of STG0 to STG3) as candidates (FIG. 17: Step 4, FIG. 18A). Subsequently, GCCTL compares a preset threshold ESZth of the erased-state physical block count (here, 91) with the erased-state physical block count Esz of each storage module selected in Step 4 (Esz0 (90), Esz1 (100), Esz2 (140), Esz3 (120)).
- Next, the garbage collection control circuit GCCTL issues a garbage collection request (GCrq) to the storage module STG0 selected as the garbage collection target in Step 5, and updates the table GETBL. That is, as shown in FIG. 14B, the value of the garbage collection execution state GCv corresponding to storage module number (STG No) “0” is set to “1”, indicating that the storage module STG0 is currently executing the garbage collection operation (FIG. 17: Step 11, FIG. 18A, FIG. 14B).
- The storage module STG0 that has received the garbage collection request (GCrq) executes garbage collection using the processing of Step 3 to Step 8 of FIG. 8. That is, in FIG. 8, the storage module itself determines whether or not to execute the garbage collection operation by the processing of Step 1 and Step 2, whereas in the flow of FIG. 17 this determination is made by the garbage collection control circuit GCCTL in the control circuit DKCTL0.
- DKCTL0 can issue a write access request (WREQ), a read access request (RREQ), and the like to a storage module that is not executing a garbage collection operation, thereby making it possible to improve the efficiency of write operations, read operations, and the like.
- Next, the write control circuit WTCTL selects three storage modules (in this case, STG2, STG3, and STG1) in ascending order of the expected write data size eWd from among the storage modules selected as write and read targets in Step 5 (FIG. 17: Step 6, FIG. 18A).
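Step 6 here generalizes the single-module choice of the earlier embodiments to the three smallest eWd values; with numbers consistent with FIG. 18A:

```python
# eWd per write/read-target module (values assumed to mirror FIG. 13A/18A)
ewd = {"STG1": 45, "STG2": 30, "STG3": 40}

# WDT[A1], WDT[A2], and PA12 go to the three modules in ascending eWd order
stripe_targets = sorted(ewd, key=ewd.get)   # -> ["STG2", "STG3", "STG1"]
```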
- Next, if there is a notification of the data size Wstg from the storage modules STG0 to STG3, the process returns to Step 2, and Steps 2 to 6 are executed again. On the other hand, if there is no notification of Wstg from STG0 to STG3, the process proceeds to Step 8 (FIG. 17: Step 7).
- In Step 8, the write control circuit WTCTL issues the write access requests (write commands) (WREQ[A1], WREQ[A2], WREQ[PA12]) described in Step 1a to the storage modules STG2, STG3, and STG1 selected in Step 6, respectively (FIG. 18B).
- Subsequently, the WTCTL updates the table STGTBL based on the correspondence between the logical addresses LAD (223, 224, 225) accompanying the respective write access requests and STG2, STG3, and STG1, and also updates the data size Wh2d in MGTBL (FIG. 17: Step 9, FIG. 18B). That is, in MGTBL, the Wh2d values for STG1, STG2, and STG3 are updated to “15”, “15”, and “20”, respectively.
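The Step 9 bookkeeping can be sketched as two table updates, using the FIG. 18A Wh2d values (5, 5, 10) as the starting point:

```python
stgtbl = {}                                   # LAD -> storage module
wh2d = {"STG1": 5, "STG2": 5, "STG3": 10}     # before Step 9 (FIG. 18A)

# Record where each stripe member was sent and add its host-side size.
for lad, stg, size in [(223, "STG2", 10), (224, "STG3", 10), (225, "STG1", 10)]:
    stgtbl[lad] = stg
    wh2d[stg] += size
# wh2d -> {"STG1": 15, "STG2": 15, "STG3": 20}, as in FIG. 18B
```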
- In Step 10, when the writing of the write data (WDT[A1]) is completed, the storage module STG2 notifies the control circuit DKCTL0 of the erased-state physical block count Esz (Esz2 (139)) and the data size Wstg (Wstg2 (30)). Further, when the writing of the write data (WDT[A2]) is completed, the storage module STG3 notifies DKCTL0 of Esz (Esz3 (119)) and Wstg (Wstg3 (40)), and when the writing of the parity data (PA12) is completed, the storage module STG1 notifies DKCTL0 of Esz (Esz1 (100)) and Wstg (Wstg1 (45)) (FIG. 18C).
- After Step 10, the process returns to Step 1b, and the write control circuit WTCTL performs write data division and parity data generation for the next write data (WDT[B]) of data size ntW_B (here, 20). That is, the data is divided into write data (WDT[B1]) of data size ntW_B1 (here, 10) and write data (WDT[B2]) of data size ntW_B2 (here, 10), and parity data (PB12) of data size ntW_PB12 (here, 10) is generated (FIG. 18C).
- the WTCTL generates a write access request (write command) including each of these write data and parity data.
- the information notified in Step 10 is reflected in MGTBL in Step 2 (FIG. 18C).
- the storage module STG0 that has received the garbage collection request (GCrq) at Step 11 completes the garbage collection operation.
- the STG0 transmits the erased physical block number Esz (Esz0 (100)) and the data size Wstg (Wstg0 (70)) after the garbage collection operation to the control circuit DKCTL0 (FIG. 17: Step 12, FIG. 18C).
- the garbage collection control circuit GCCTL in the control circuit DKCTL0 updates the table GETBL in response to the completion of the garbage collection operation.
- That is, the value of the garbage collection execution state GCv corresponding to storage module number (STG No) “0” is returned to “0”, indicating that the storage module STG0 is not executing the garbage collection operation.
- the write control circuit WTCTL in the control circuit DKCTL0 sets and updates the number of physical blocks in the erased state Esz (Esz0 (100)) and the data size Wstg (Wstg0 (70)) in the table MGTBL (FIG. 17: Step 2, FIG. 18C).
- FIG. 19 is an explanatory diagram showing an example of a read operation performed by the storage module control circuit in FIGS. 1 and 2 in the storage system according to Embodiment 5 of the present invention.
- Data A is composed of data A1 and A2; data A1 is stored in the storage module STG2, and data A2 is stored in the storage module STG3. Further, the parity data PA12 generated from the data A1 and A2 is stored in the storage module STG1.
- Data B is composed of data B1 and B2, data B1 is stored in the storage module STG1, and data B2 is stored in the storage module STG2. Further, the parity data PB12 generated from the data B1 and B2 is stored in the storage module STG0.
- Data C is composed of data C1 and C2; data C1 is stored in the storage module STG0, and data C2 is stored in the storage module STG1. Further, the parity data PC12 generated from the data C1 and C2 is stored in the storage module STG3.
- Data D is composed of data D1 and D2; data D1 is stored in the storage module STG2, and data D2 is stored in the storage module STG3. Further, the parity data PD12 generated from the data D1 and D2 is stored in the storage module STG0.
- Here, assume that a garbage collection request (GCrq) has been issued from the garbage collection control circuit GCCTL in the control circuit DKCTL0 to the storage module STG1, and that the read control circuit RDCTL performs a read of data B while STG1 is executing the garbage collection operation accordingly.
- The RDCTL can grasp from the table GETBL described above that the storage module STG1 is executing the garbage collection operation.
- Similarly, RDCTL can grasp from the corresponding logical addresses LAD in the table STGTBL described above that the data B1 and B2 are stored in the storage modules STG1 and STG2, respectively, and that the parity data PB12 is stored in the storage module STG0.
- the read control circuit RDCTL reads data B2 stored in the storage module STG2 and parity data PB12 stored in the storage module STG0 other than the storage module STG1 that is executing the garbage collection operation.
- Then, RDCTL restores data B1 using data B2 and the parity data PB12, and restores data B from the data B1 and B2.
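Assuming the same XOR parity as above (an assumption; the text only says “parity data”), this degraded read is a single XOR:

```python
def restore_b1(b2: bytes, pb12: bytes) -> bytes:
    # Rebuild B1 from B2 and the parity while STG1 is busy with garbage
    # collection; data B is then the concatenation B1 || B2.
    return bytes(x ^ y for x, y in zip(b2, pb12))

b1, b2 = bytes([1, 2, 3]), bytes([4, 5, 6])
pb12 = bytes(x ^ y for x, y in zip(b1, b2))    # parity written earlier
data_b = restore_b1(b2, pb12) + b2             # equals b1 + b2
```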
- As described above, by using the storage device system of the fifth embodiment, leveling of the number of erase operations (and, as a result, the number of write operations) can typically be realized across the entire storage device system, making it possible to improve reliability and extend the life.
- Furthermore, by managing the garbage collection operation in the storage controller STRGCONT, it is possible to grasp which storage module is executing a garbage collection operation and which storage modules can be written to and read from, so that garbage collection operations and write and read operations can be executed in parallel. As a result, the storage system can be sped up while being leveled. In addition, reliability can be further improved by realizing a RAID function in the storage controller STRGCONT (control circuit DKCTL).
- FIG. 21 is a block diagram showing a detailed configuration example of the storage module in FIG. 1 in the storage system according to Embodiment 6 of the present invention.
- The storage module (memory module) STG shown in FIG. 21 differs from that of FIG. 6 in that the random access memory RAMst of FIG. 6 is replaced with a nonvolatile memory NVMEMst and, accordingly, the battery backup device BBU of FIG. 6 is eliminated. Otherwise, the configuration is the same as that of FIG. 6.
- the non-volatile memory NVMEMst is a memory that can perform a write operation faster than the NAND flash memory and can be accessed in small units (for example, byte units).
- Specifically, for example, a phase change memory (PCM), a spin-transfer torque RAM (SPRAM), a magnetoresistive RAM (MRAM), a ferroelectric RAM (FRAM), or a resistance-change memory (ReRAM: Resistive RAM) can be used as NVMEMst.
- As described above, the storage controller can obtain, for each storage module, a predicted write data amount based on the amount of write data to be written next.
- The storage controller can then write the next data to the storage module having the smallest predicted write data amount.
- In addition, based on the amounts of write data actually written to the nonvolatile memories, as notified by the plurality of storage modules, the storage controller can obtain the above-mentioned predicted write data amount for those storage modules whose remaining life exceeds the remaining product life of the storage system, and can write the next data to the storage module having the smallest predicted write data amount among those target storage modules. As a result, the number of writes can be leveled among the plurality of storage modules with high efficiency while maintaining the product life of the storage system, and a highly reliable, long-life storage system can be realized.
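The selection policy described above can be sketched as a filter on remaining life followed by a minimum over the predicted write amount; all field names and numbers below are illustrative:

```python
def pick_write_target(modules, remaining_product_life):
    # Keep only modules whose remaining life exceeds the system's
    # remaining product life, then take the smallest predicted amount.
    ok = {s: m for s, m in modules.items()
          if m["remaining_life"] > remaining_product_life}
    return min(ok, key=lambda s: ok[s]["predicted_write"])

mods = {
    "STG0": {"predicted_write": 80, "remaining_life": 6.0},
    "STG1": {"predicted_write": 45, "remaining_life": 2.0},
    "STG2": {"predicted_write": 30, "remaining_life": 1.5},
    "STG3": {"predicted_write": 40, "remaining_life": 5.0},
}
target = pick_write_target(mods, remaining_product_life=3.0)  # -> "STG3"
```

STG2 has the smallest predicted write amount overall, but its remaining life is too short, so the long-lived STG3 is chosen instead.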
- the present invention made by the present inventor has been specifically described based on the embodiment.
- the present invention is not limited to the embodiment, and various modifications can be made without departing from the scope of the invention.
- the above-described embodiment has been described in detail for easy understanding of the present invention, and is not necessarily limited to one having all the configurations described.
- A part of the configuration of one embodiment can be replaced with the configuration of another embodiment, and the configuration of another embodiment can be added to the configuration of one embodiment.
- ARB: arbitration circuit; BBU: battery backup device; BIF: interface circuit; BP: backplane; BUF: buffer; CM: cache memory; CMCTL: cache control circuit; CPU: processor unit; CPUCR: processor core; DIAG: diagnostic circuit; DKCTL: control circuit; ERSCTL: data erase control circuit; ERSv: erase execution state; Est: erase status; Esz: number of erased-state physical blocks; GCCTL: garbage collection control circuit; GCv: garbage collection execution state; Gst: garbage collection status; H2D_IF: interface signal; H2S_IF: interface signal; HCTL: control circuit; HDIAG: diagnostic circuit; HOST_IF: interface circuit; HRDCTL: read control circuit; HWTCTL: write control circuit; LAD: logical address; MGTBL, STGTBL, GETBL: tables; MNGER: information processing circuit; NVM: nonvolatile memory; NVMEMst: nonvolatile memory; PAD: physical address; PBK: physical block; RAM, RAMst: random access memory; RAMC, NVCT0 to NVCT7: memory control circuits; RDCTL: read control circuit; Ret: data retention time (remaining life); RetTBL: table; SIFC: interface circuit; SRV: information processing device (host); STCT: control circuit; STG: storage module (memory module)
Description
《Overview of the Information Processing System》
FIG. 1 is a block diagram showing a schematic configuration example of an information processing system to which the storage device system according to the first embodiment of the present invention is applied. The information processing system shown in FIG. 1 includes information processing devices SRV0 to SRVm and a storage system (storage device system) STRGSYS. STRGSYS includes a plurality of storage modules (memory modules) STG0 to STGn+4 and a storage controller STRGCONT that controls STG0 to STGn+4 in response to requests from SRV0 to SRVm. The information processing devices SRV0 to SRVm and the storage controller STRGCONT are connected by an interface signal H2S_IF, and the storage controller STRGCONT and the storage modules STG0 to STGn+4 are connected by an interface signal H2D_IF.
In the following, for ease of explanation, the case where four storage modules (memory modules) STG0 to STG3 are provided in FIG. 1 is taken as an example, and the write operation performed by the storage module control circuit DKCTL0 will be described with reference to FIGS. 3 to 5. FIG. 3 is a flowchart showing an example of the write operation performed by the storage module control circuit in FIGS. 1 and 2. FIGS. 4A and 4B are supplementary diagrams of FIG. 3, showing an example of the contents held in the table (MGTBL) of the storage module control circuit. FIGS. 5A, 5B, 5C, and 5D are diagrams showing an example of the contents held in the table (STGTBL) of the storage module control circuit of FIG. 2.
FIG. 6 is a block diagram showing a detailed configuration example of the storage module in FIG. 1. This storage module STG is used as the storage modules STG0 to STGn+4 in FIG. 1. The storage module STG shown in FIG. 6 includes non-volatile memories NVM0 to NVM7, a random access memory RAMst, a storage control circuit STCT0 that controls NVM0 to NVM7 and RAMst, and a battery backup unit BBU. NVM0 to NVM7 have, for example, identical configurations and performance. RAMst is, although not particularly limited, a DRAM or the like, for example. BBU contains a large-capacitance element or the like and is a device for securing, for a certain period, the power needed to save the data in RAMst to NVM0 to NVM7 in the event of, for example, an unexpected power interruption.
FIG. 7 is a flow diagram showing an example of the wear-leveling method performed inside the storage module of FIG. 1. FIG. 7 shows an example of the write processing procedure performed inside STG0 to STGn+4 when a write access request (write command) (WREQ) is issued from the control circuit DKCTL0 of FIG. 1 to the storage modules STG0 to STGn+4, together with an example of the wear-leveling method (i.e., dynamic wear leveling) performed during the write. The flow of FIG. 7 is executed mainly by the information processing circuit MNGER of FIG. 6. Although not particularly limited, MNGER assigns one physical address (PAD) to each unit of 512 bytes of main data and 16 bytes of redundant data, and writes to that physical address (PAD) of the non-volatile memories NVM0 to NVM7.
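The per-module dynamic wear leveling can be sketched roughly as follows: on each write, the least-erased block among the erased physical blocks is chosen as the destination, and the previously mapped block becomes invalid. Only the 512-byte main / 16-byte redundant unit is taken from the text; the class below is a hypothetical model, not the MNGER implementation.

```python
UNIT = 512 + 16  # one physical address (PAD): 512 B main data + 16 B redundant data

class DynamicWearLeveler:
    def __init__(self, n_blocks):
        self.erase_count = [0] * n_blocks
        self.erased = set(range(n_blocks))  # physical blocks in the erased state
        self.l2p = {}                       # logical address LAD -> physical block PBK

    def write(self, lad):
        # dynamic wear leveling: of the erased blocks, use the least-erased one
        pbk = min(self.erased, key=lambda b: self.erase_count[b])
        self.erased.remove(pbk)
        old = self.l2p.get(lad)             # the old block now holds invalid data
        self.l2p[lad] = pbk
        return pbk, old
```

Invalidated old blocks are later reclaimed by the garbage collection described next.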
FIG. 8 is a flow diagram showing an example of the garbage collection and wear-leveling method performed inside the storage module of FIG. 1, and FIG. 23 is a supplementary diagram schematically explaining the flow of FIG. 8. The flow of FIG. 8 is executed mainly by the information processing circuit MNGER of FIG. 6. As data continues to be written to the non-volatile memories NVM0 to NVM7, the number of physical addresses (physical blocks) in the erased state decreases. Once this number reaches zero, the storage module STG can no longer perform any writes. A garbage collection operation is therefore needed to increase the number of erased physical addresses (physical blocks), and it is desirable to perform a wear-leveling operation together with it. Executing the flow of FIG. 8 is beneficial for this purpose.
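One garbage-collection step of this kind can be sketched as below: when the number of erased physical blocks falls to a threshold, the valid data of the two blocks holding the fewest valid pages is consolidated into the least-worn erased block (wear leveling applied during GC), and both sources are erased, gaining one erased block. This is a simplified model under the assumption that the consolidated data fits in a single block; the field names are hypothetical.

```python
def garbage_collect(blocks, threshold):
    # blocks: dicts with "valid" (list of valid pages), "erased" flag, "erase_count"
    erased = [b for b in blocks if b["erased"]]
    in_use = [b for b in blocks if not b["erased"]]
    if len(erased) > threshold or len(in_use) < 2:
        return False                      # enough erased blocks, or nothing to merge
    # victims: the two blocks with the fewest valid pages (least copying)
    v1, v2 = sorted(in_use, key=lambda b: len(b["valid"]))[:2]
    # destination: the least-worn erased block (wear leveling during GC)
    dest = min(erased, key=lambda b: b["erase_count"])
    dest["valid"], dest["erased"] = v1["valid"] + v2["valid"], False
    for v in (v1, v2):
        v["valid"], v["erased"] = [], True
        v["erase_count"] += 1
    return True                           # net gain: one more erased block
```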
As described above, the host control circuit HCTL0 in the storage controller STRGCONT accepts a read request (RQ) from the information processing apparatuses (hosts) SRV0 to SRVm and, when the corresponding data (RDATA) is not stored in the cache memories CM0 to CM3, notifies the storage-module control circuit DKCTL0. FIG. 9 is a flow diagram showing an example of the read operation performed by the storage-module control circuit and the storage control circuit in FIG. 1, covering the operation after DKCTL0 accepts the read request (RQ) in response to the notification from HCTL0.
In the second embodiment, as a modification of the inter-storage-module wear-leveling method described in the first embodiment with reference to FIGS. 3, 4A, 4B and so on, a method is described that uses the data retention time Ret in addition to the above-described predicted write data size eWd.
In non-volatile memories (particularly destructive-write memories such as flash memory), the data retention time (i.e., the period over which written data can be held correctly) may decrease as the erase count (or write count) increases. The data retention time is, although not particularly limited, about 10 years, for example, when the write count is small. How strongly the data retention time depends on the erase count (or write count) varies with the type of non-volatile memory used in the storage module. For example, it can differ depending on whether flash memory or phase-change memory is used as the non-volatile memory and, when flash memory is used, on the memory cell structure employed.
In the following, although not particularly limited, a case with four storage modules STG0 to STG3 is taken as an example, and the write operation executed by the control circuit DKCTL0 is described with reference to FIGS. 10 and 11 in addition to FIGS. 5A and 5B. FIG. 10 is a flowchart showing an example of the write operation performed by the storage-module control circuit of FIGS. 1 and 2 in the storage device system according to the second embodiment of the present invention. FIGS. 11A and 11B supplement FIG. 10 and show an example of the contents held in the table MGTBL of the storage-module control circuit. In addition to the data size ntW shown in FIG. 4 and elsewhere, and the data sizes Wh2d and Wstg, the write data size ratio WAF, and the predicted write data size eWd for each of the storage modules STG0 to STG3, the table MGTBL shown in FIGS. 11A and 11B holds the data retention time Ret for each of STG0 to STG3.
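Combining the two criteria held in MGTBL, the second-embodiment selection can be sketched as: first keep only the modules whose data retention time Ret clears a lifetime threshold (dLife), then pick the smallest predicted write data size eWd among them. The fallback when no module qualifies, the field names, and the direction of the WAF ratio are my assumptions.

```python
def select_with_retention(mgtbl, ntW, dLife):
    # candidates: modules whose remaining retention time Ret meets the threshold
    cands = [m for m in mgtbl if m["Ret"] >= dLife]
    if not cands:
        cands = mgtbl  # assumption: fall back to all modules if none qualify

    def eWd(m):
        waf = m["Wstg"] / m["Wh2d"] if m["Wh2d"] else 1.0
        return m["Wstg"] + waf * ntW

    # among the candidates, choose the smallest predicted write data size
    return min(cands, key=eWd)
```

Varying dLife (claim 6 describes the threshold as variably controlled) trades wear balance against keeping long-retention modules in rotation.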
In the third embodiment, a case is described in which the storage-module control circuit DKCTL0 manages garbage collection in addition to executing the inter-storage-module wear leveling described in the first embodiment with reference to FIGS. 3, 4A, 4B and so on.
In the following, although not particularly limited, a case with four storage modules STG0 to STG3 is taken as an example, and the write operation executed by the write control circuit WTCTL in the control circuit DKCTL0 shown in FIG. 2 and the garbage collection request operation executed by the garbage collection control circuit GCCTL are described with reference to FIGS. 12 to 14 in addition to FIGS. 5A and 5B. FIG. 12 is a flowchart showing an example of the write operation and the garbage collection management operation performed by the storage-module control circuit of FIGS. 1 and 2 in the storage device system according to the third embodiment of the present invention. FIGS. 13A, 13B and 13C supplement FIG. 12 and show an example of the contents held in the table MGTBL of the storage-module control circuit DKCTL0 of FIG. 2. FIGS. 14A, 14B and 14C supplement FIG. 12 and show an example of the contents held in the table GETBL of the storage-module control circuit DKCTL0 of FIG. 2.
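The division of labor that this embodiment and claim 10 describe can be sketched below, with hypothetical names and a hypothetical threshold value: DKCTL0 issues a garbage-collection command to any module whose erased-block count Esz has dropped to the second threshold, marks it as executing GC (GCv), and excludes such busy modules when choosing the next write target.

```python
GC_THRESHOLD = 8  # hypothetical value of the "second threshold" for Esz

def dispatch_write(modules, ntW, issue_gc):
    # start GC on modules that have run low on erased physical blocks
    for m in modules:
        if not m["gc_running"] and m["Esz"] <= GC_THRESHOLD:
            issue_gc(m)
            m["gc_running"] = True
    # choose the write target only among modules not executing GC
    idle = [m for m in modules if not m["gc_running"]] or modules

    def eWd(m):
        waf = m["Wstg"] / m["Wh2d"] if m["Wh2d"] else 1.0
        return m["Wstg"] + waf * ntW

    return min(idle, key=eWd)
```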
In the fourth embodiment, a case is described in which the storage-module control circuit DKCTL0 performs an erase operation in addition to the write operation (inter-storage-module wear leveling) described in the first embodiment with reference to FIGS. 3, 4A, 4B and so on.
In the following, although not particularly limited, a case with four storage modules STG0 to STG3 is taken as an example, and the write operation executed by the write control circuit WTCTL in the control circuit DKCTL0 shown in FIG. 2 and the erase request operation executed by the data erase control circuit ERSCTL are described with reference to FIGS. 15 and 16 in addition to FIGS. 5C, 5D, 14A and 14C. FIG. 15 is a flowchart showing an example of the write operation and the erase operation performed by the storage-module control circuit of FIGS. 1 and 2 in the storage device system according to the fourth embodiment of the present invention. FIGS. 16A, 16B and 16C supplement FIG. 15 and show an example of the contents held in the table MGTBL of the storage-module control circuit DKCTL0 of FIG. 2.
In the fifth embodiment, a case is described in which the storage system STRGSYS of FIG. 1 has a RAID function (e.g., RAID 5). On the premise of this RAID function, the storage-module control circuit DKCTL0 performs the write operation (inter-storage-module wear leveling), the garbage collection management and the like described in the third embodiment with reference to FIGS. 12, 13A, 13B, 13C and so on.
In the following, although not particularly limited, a case with four storage modules STG0 to STG3 is taken as an example. In this case, the write operation executed by the write control circuit WTCTL in the control circuit DKCTL0 shown in FIG. 2 and the garbage collection request operation executed by the garbage collection control circuit GCCTL are described with reference to FIGS. 17 and 18 in addition to FIGS. 14A and 14B. FIG. 17 is a flowchart showing an example of the write operation and the garbage collection management operation performed by the storage-module control circuit of FIGS. 1 and 2 in the storage device system according to the fifth embodiment of the present invention. FIGS. 18A, 18B and 18C supplement FIG. 17 and show an example of the contents held in the table MGTBL of the storage-module control circuit DKCTL0 of FIG. 2.
FIG. 19 is an explanatory diagram showing an example of the read operation performed by the storage-module control circuit of FIGS. 1 and 2 in the storage device system according to the fifth embodiment of the present invention. In the example of FIG. 19, data A consists of data A1 and A2; data A1 is stored in the storage module STG2 and data A2 in the storage module STG3. Parity data PA12 generated from data A1 and A2 is stored in the storage module STG1.
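The RAID 5 relationship here (PA12 generated from A1 and A2, and either half recoverable from the other plus the parity) rests on bytewise XOR parity. The sketch below uses made-up byte values purely for illustration:

```python
def xor_bytes(x, y):
    # RAID 5 parity is the bytewise XOR of the data stripes
    return bytes(a ^ b for a, b in zip(x, y))

a1 = b"\x01\x02\x03\x04"   # data A1, held on STG2 (values are illustrative)
a2 = b"\x10\x20\x30\x40"   # data A2, held on STG3
pa12 = xor_bytes(a1, a2)   # parity PA12, held on STG1

# normal read: data A is the combination of A1 and A2
A = a1 + a2
# degraded read (e.g. STG3 unreachable): rebuild A2 from A1 and PA12
rebuilt_a2 = xor_bytes(a1, pa12)
```

Because XOR is its own inverse, the same operation serves both parity generation and reconstruction.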
In the sixth embodiment, a modification of the storage module (memory module) STG shown in FIG. 6 of the first embodiment is described.
FIG. 21 is a block diagram showing a detailed configuration example of the storage module in FIG. 1 in the storage device system according to the sixth embodiment of the present invention. The storage module (memory module) STG shown in FIG. 21 differs from the configuration example of FIG. 6 in that the battery backup unit BBU of FIG. 6 is removed and the random access memory RAMst of FIG. 6 is replaced with a non-volatile memory NVMEMst. The rest of the configuration is the same as in FIG. 6, so a detailed description is omitted.
The representative effects obtained by each of the embodiments described above are summarized as follows.
ARB Arbitration circuit
BBU Battery backup unit
BIF Interface circuit
BP Backplane
BUF Buffer
CM Cache memory
CMCTL Cache control circuit
CPU Processor unit
CPUCR Processor core
DIAG Diagnostic circuit
DKCTL Control circuit
ERSCTL Data erase control circuit
ERSv Erase execution state
Est Erase status
Esz Number of physical blocks in the erased state
GCCTL Garbage collection control circuit
GCv Garbage collection execution state
Gst Garbage collection status
H2D_IF Interface signal
H2S_IF Interface signal
HCTL Control circuit
HDIAG Diagnostic circuit
HOST_IF Interface circuit
HRDCTL Read control circuit
HWTCTL Write control circuit
LAD Logical address
MGTBL, STGTBL, GETBL Tables
MNGER Information processing circuit
NVM Non-volatile memory
NVMEMst Non-volatile memory
PAD Physical address
PBK Physical block
RAM, RAMst Random access memory
RAMC, NVCT0 to NVCT7 Memory control circuit
RDCTL Read control circuit
Ret Data retention time (remaining lifetime)
RetTBL Table
SIFC Interface circuit
SRV Information processing apparatus (host)
STCT Control circuit
STG Storage module (memory module)
STIF Interface circuit
STRGCONT Storage controller
STRGSYS Storage system (storage device system)
VLD Valid information
WAF Write data size ratio
WDT Write data
WTCTL Write control circuit
Wstg, Wh2d, ntW Data size
dLife Lifetime information (remaining-lifetime threshold)
eWd Predicted write data size
Claims (15)
- A storage device system comprising: a plurality of memory modules; and a first control circuit that controls the plurality of memory modules, wherein each of the plurality of memory modules includes a plurality of non-volatile memories and a second control circuit that controls the plurality of non-volatile memories, the second control circuit tracks a second write data amount actually written to the plurality of non-volatile memories and reports the second write data amount to the first control circuit, and the first control circuit tracks, for each of the plurality of memory modules, a first write data amount associated with write commands already issued to the plurality of memory modules, calculates, for each of the plurality of memory modules, a first ratio that is the ratio between the first write data amount and the second write data amount, and, reflecting the calculation result, selects from among the plurality of memory modules the memory module to which a next write command is to be issued.
- The storage device system according to claim 1, wherein the first control circuit calculates, for each of the plurality of memory modules, a fourth write data amount that is the sum of the second write data amount and the product of the first ratio and a third write data amount associated with the next write command, and selects, based on the calculation result, the memory module to which the next write command is to be issued from among the plurality of memory modules.
- The storage device system according to claim 2, wherein the first control circuit selects the memory module having the smallest fourth write data amount among those calculated for the plurality of memory modules, and issues the next write command to the selected memory module.
- The storage device system according to claim 1, wherein the second control circuit further holds a dependence relation between the erase count or write count of the plurality of non-volatile memories and their remaining lifetime, and reports the remaining lifetime obtained from the dependence relation to the first control circuit, and the first control circuit further determines, reflecting the remaining lifetime reported by the second control circuit for each of the plurality of memory modules, candidate destinations for the next write command from among the plurality of memory modules, and selects, from among the candidates and reflecting the calculation result of the first ratio, the memory module to which the next write command is to be issued.
- The storage device system according to claim 4, wherein the first control circuit holds a first threshold of the remaining lifetime, and determines one or more memory modules whose remaining lifetime is equal to or greater than the first threshold as the candidate destinations for the next write command.
- The storage device system according to claim 5, wherein the first control circuit variably controls the first threshold.
- The storage device system according to claim 6, wherein the first control circuit calculates, for each of the plurality of memory modules, a fourth write data amount that is the sum of the second write data amount and the product of the first ratio and a third write data amount associated with the next write command, and selects, based on the calculation result, the memory module to which the next write command is to be issued from among the candidate destinations for the next write command.
- The storage device system according to claim 7, wherein the first control circuit selects the memory module having the smallest fourth write data amount among those calculated for the plurality of memory modules, and issues the next write command to the selected memory module.
- The storage device system according to claim 1, wherein the second control circuit executes wear leveling and garbage collection on the plurality of non-volatile memories.
- The storage device system according to claim 9, wherein the second control circuit further reports to the first control circuit the number of physical blocks in the erased state contained in the plurality of non-volatile memories, and the first control circuit further holds a second threshold of the number of physical blocks in the erased state, identifies the memory modules executing the garbage collection by issuing a garbage collection execution command to each memory module whose number of physical blocks in the erased state is equal to or less than the second threshold, and, when selecting the memory module to which the next write command is to be issued, selects from among the memory modules other than those executing the garbage collection.
- The storage device system according to claim 9, wherein the second control circuit reports the second write data amount to the first control circuit each time an actual write to the plurality of non-volatile memories performed in response to the write command from the first control circuit is completed.
- The storage device system according to claim 9, wherein the second control circuit reports the second write data amount to the first control circuit each time the wear leveling performed on the plurality of non-volatile memories is completed.
- The storage device system according to claim 9, wherein the second control circuit reports the second write data amount to the first control circuit each time the garbage collection performed on the plurality of non-volatile memories is completed.
- A storage device system comprising: a plurality of memory modules; and a first control circuit that has a first table, receives a first write command from a host, selects, based on the first table, the write destination for the data associated with the first write command from among the plurality of memory modules, and issues a second write command to the selected memory module, wherein each of the plurality of memory modules includes a plurality of non-volatile memories and a second control circuit that controls the plurality of non-volatile memories, the second control circuit performs, on the plurality of non-volatile memories, writes associated with the second write command and writes associated with wear leveling or garbage collection, tracks a second write data amount produced by the writes associated with the second write command and the writes associated with the wear leveling or the garbage collection, and reports the second write data amount to the first control circuit, the first table holds, for each of the plurality of memory modules, the second write data amount and a first write data amount associated with second write commands already issued, and the first control circuit calculates, based on the first table and for each of the plurality of memory modules, a first ratio between the first write data amount and the second write data amount, and, reflecting the calculation result, selects from among the plurality of memory modules the memory module to which a next second write command is to be issued.
- The storage device system according to claim 14, wherein the second control circuit further holds a dependence relation between the erase count or write count of the plurality of non-volatile memories and their remaining lifetime, and reports the remaining lifetime obtained from the dependence relation to the first control circuit, the first table further holds the remaining lifetime for each of the plurality of memory modules, and the first control circuit further determines, reflecting the remaining lifetime in the first table, candidate destinations for the next second write command from among the plurality of memory modules, and selects, from among the candidates and reflecting the calculation result of the first ratio, the memory module to which the next second write command is to be issued.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2014534130A JP5820078B2 (ja) | 2012-09-07 | 2012-09-07 | 記憶装置システム |
PCT/JP2012/072961 WO2014038073A1 (ja) | 2012-09-07 | 2012-09-07 | 記憶装置システム |
US14/423,384 US20150186056A1 (en) | 2012-09-07 | 2012-09-07 | Storage device system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2012/072961 WO2014038073A1 (ja) | 2012-09-07 | 2012-09-07 | 記憶装置システム |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014038073A1 true WO2014038073A1 (ja) | 2014-03-13 |
Family
ID=50236720
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/072961 WO2014038073A1 (ja) | 2012-09-07 | 2012-09-07 | 記憶装置システム |
Country Status (3)
Country | Link |
---|---|
US (1) | US20150186056A1 (ja) |
JP (1) | JP5820078B2 (ja) |
WO (1) | WO2014038073A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015087651A1 (ja) * | 2013-12-12 | 2015-06-18 | 株式会社フィックスターズ | メモリの使用可能期間を延ばすための装置、プログラム、記録媒体および方法 |
US10237159B2 (en) | 2016-06-16 | 2019-03-19 | Hitachi, Ltd. | Computer system and method of controlling computer system |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9383926B2 (en) * | 2014-05-27 | 2016-07-05 | Kabushiki Kaisha Toshiba | Host-controlled garbage collection |
WO2019062231A1 (zh) * | 2017-09-27 | 2019-04-04 | 北京忆恒创源科技有限公司 | 垃圾回收方法及其存储设备 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007013372A1 (ja) * | 2005-07-29 | 2007-02-01 | Matsushita Electric Industrial Co., Ltd. | メモリコントローラ、不揮発性記憶装置、不揮発性記憶システム及び不揮発性メモリのアドレス管理方法 |
JP2007265265A (ja) * | 2006-03-29 | 2007-10-11 | Hitachi Ltd | フラッシュメモリを用いた記憶装置、その消去回数平準化方法、及び消去回数平準化プログラム |
JP2010015222A (ja) * | 2008-07-01 | 2010-01-21 | Panasonic Corp | メモリカード |
JP2011003111A (ja) * | 2009-06-22 | 2011-01-06 | Hitachi Ltd | フラッシュメモリを用いたストレージシステムの管理方法及び計算機 |
JP2012118587A (ja) * | 2010-11-29 | 2012-06-21 | Canon Inc | 管理装置及びその制御方法、並びにプログラム |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7568075B2 (en) * | 2005-09-22 | 2009-07-28 | Hitachi, Ltd. | Apparatus, system and method for making endurance of storage media |
JP4933861B2 (ja) * | 2005-09-22 | 2012-05-16 | 株式会社日立製作所 | ストレージ制御装置、データ管理システムおよびデータ管理方法 |
KR100909902B1 (ko) * | 2007-04-27 | 2009-07-30 | 삼성전자주식회사 | 플래쉬 메모리 장치 및 플래쉬 메모리 시스템 |
US9195588B2 (en) * | 2010-11-02 | 2015-11-24 | Hewlett-Packard Development Company, L.P. | Solid-state disk (SSD) management |
US8918595B2 (en) * | 2011-04-28 | 2014-12-23 | Seagate Technology Llc | Enforcing system intentions during memory scheduling |
US9405670B2 (en) * | 2011-06-09 | 2016-08-02 | Tsinghua University | Wear leveling method and apparatus |
US20140032820A1 (en) * | 2012-07-25 | 2014-01-30 | Akinori Harasawa | Data storage apparatus, memory control method and electronic device with data storage apparatus |
-
2012
- 2012-09-07 WO PCT/JP2012/072961 patent/WO2014038073A1/ja active Application Filing
- 2012-09-07 US US14/423,384 patent/US20150186056A1/en not_active Abandoned
- 2012-09-07 JP JP2014534130A patent/JP5820078B2/ja not_active Expired - Fee Related
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007013372A1 (ja) * | 2005-07-29 | 2007-02-01 | Matsushita Electric Industrial Co., Ltd. | メモリコントローラ、不揮発性記憶装置、不揮発性記憶システム及び不揮発性メモリのアドレス管理方法 |
JP2007265265A (ja) * | 2006-03-29 | 2007-10-11 | Hitachi Ltd | フラッシュメモリを用いた記憶装置、その消去回数平準化方法、及び消去回数平準化プログラム |
JP2010015222A (ja) * | 2008-07-01 | 2010-01-21 | Panasonic Corp | メモリカード |
JP2011003111A (ja) * | 2009-06-22 | 2011-01-06 | Hitachi Ltd | フラッシュメモリを用いたストレージシステムの管理方法及び計算機 |
JP2012118587A (ja) * | 2010-11-29 | 2012-06-21 | Canon Inc | 管理装置及びその制御方法、並びにプログラム |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015087651A1 (ja) * | 2013-12-12 | 2015-06-18 | 株式会社フィックスターズ | メモリの使用可能期間を延ばすための装置、プログラム、記録媒体および方法 |
US10237159B2 (en) | 2016-06-16 | 2019-03-19 | Hitachi, Ltd. | Computer system and method of controlling computer system |
Also Published As
Publication number | Publication date |
---|---|
JP5820078B2 (ja) | 2015-11-24 |
JPWO2014038073A1 (ja) | 2016-08-08 |
US20150186056A1 (en) | 2015-07-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10459808B2 (en) | Data storage system employing a hot spare to store and service accesses to data having lower associated wear | |
US9552290B2 (en) | Partial R-block recycling | |
US9569118B2 (en) | Promoting consistent response times in a data storage system having multiple data retrieval mechanisms | |
US9785575B2 (en) | Optimizing thin provisioning in a data storage system through selective use of multiple grain sizes | |
US8850114B2 (en) | Storage array controller for flash-based storage devices | |
US9367469B2 (en) | Storage system and cache control method | |
US9304685B2 (en) | Storage array system and non-transitory recording medium storing control program | |
JP6062060B2 (ja) | ストレージ装置、ストレージシステム、及びストレージ装置制御方法 | |
JP2021089733A (ja) | ストレージ装置、及び該ストレージ装置の動作方法 | |
WO2016172235A1 (en) | Method and system for limiting write command execution | |
KR102649131B1 (ko) | 메모리 시스템 내 대용량 데이터 저장이 가능한 블록에서의 유효 데이터 체크 방법 및 장치 | |
CN113126907B (zh) | 用于存储器装置的异步电力损失恢复 | |
US11157402B2 (en) | Apparatus and method for managing valid data in memory system | |
KR20140113211A (ko) | 비휘발성 메모리 시스템, 이를 포함하는 시스템 및 상기 비휘발성 메모리 시스템의 적응적 사용자 저장 영역 조절 방법 | |
KR20210057193A (ko) | 소계 기입 카운터에 기초한 하이브리드 웨어 레벨링 동작 수행 | |
US9390003B2 (en) | Retirement of physical memory based on dwell time | |
US11775389B2 (en) | Deferred error-correction parity calculations | |
JP5820078B2 (ja) | 記憶装置システム | |
US11169920B2 (en) | Cache operations in a hybrid dual in-line memory module | |
Ware et al. | Architecting a hardware-managed hybrid DIMM optimized for cost/performance | |
US10817435B1 (en) | Queue-based wear leveling of memory components | |
US11966638B2 (en) | Dynamic rain for zoned storage systems | |
CN112346658B (zh) | 在具有高速缓存体系结构的存储设备中提高数据热量跟踪分辨率 | |
US20240231708A1 (en) | Dynamic rain for zoned storage systems | |
KR20230115196A (ko) | 메모리 블록을 할당 해제하는 스토리지 컨트롤러, 그것의 동작하는 방법, 및 그것을 포함하는 스토리지 장치의 동작하는 방법 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12884162 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2014534130 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 14423384 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 12884162 Country of ref document: EP Kind code of ref document: A1 |