WO2015109128A1 - Data placement based on data properties and data retention in a tiered storage device system - Google Patents

Data placement based on data properties and data retention in a tiered storage device system

Info

Publication number
WO2015109128A1
Authority
WO
WIPO (PCT)
Prior art keywords
data, flash memory, type, memory, read
Application number
PCT/US2015/011661
Other languages
English (en)
Inventor
John Davis
Ethan Miller
Brian Gold
John Colgrove
Peter Vajgel
John Hayes
Alex Ho
Original Assignee
Pure Storage, Inc.
Priority claimed from US14/157,489 (US9811457B2)
Priority claimed from US14/157,481 (US8874835B1)
Application filed by Pure Storage, Inc.
Publication of WO2015109128A1

Classifications

    • G06F12/0246 Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory, in block erasable memory, e.g. flash memory
    • G11C11/56 Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using storage elements with more than two stable states represented by steps, e.g. of voltage, current, phase, frequency
    • G11C11/5628 Programming or writing circuits; Data input circuits (multilevel storage using charge storage in a floating gate)
    • G11C16/10 Programming or data input circuits (electrically programmable read-only memories)
    • G11C16/349 Arrangements for evaluating degradation, retention or wearout, e.g. by counting erase cycles
    • G11C16/3495 Circuits or methods to detect or delay wearout of nonvolatile EPROM or EEPROM memory devices, e.g. by counting numbers of erase or reprogram cycles, by using multiple memory areas serially or cyclically
    • G06F2212/1032 Reliability improvement, data loss prevention, degraded operation etc.
    • G06F2212/1044 Space efficiency improvement
    • G06F2212/7208 Multiple device management, e.g. distributing data over multiple flash devices
    • G06F2212/7211 Wear leveling
    • G11C2211/5641 Multilevel memory having cells with different number of storage levels

Definitions

  • Flash memory is a nonvolatile memory that was developed from electrically erasable programmable read-only memory (EEPROM).
  • NAND flash memory and NOR flash memory are available. NAND flash memory can be written or read in blocks or pages. Originally, flash memory stored one bit per cell.
  • flash memory is available in single level cell (SLC) and multilevel cell (MLC) types.
  • the single level cell type of flash memory has one bit per cell (as on the original flash memory), and the multilevel cell flash memory types have more than one bit per cell.
  • multilevel cell flash memory operates with each cell capable of multiple ranges of electrons trapped on a floating gate, multiple programming voltages, and multiple threshold voltages applied to a controlling gate within the flash memory cell.
  • the number of bits per cell is a function of the number of ranges of programming voltages and threshold voltages, and corresponding numbers of electrons on the floating gate.
  • One type of multilevel cell flash memory that is available is referred to as triple level cell (TLC) flash memory, which has three bits per cell. Further types of multilevel cell flash memory may be developed.
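  • To make the bits-per-cell relationship above concrete: the number of bits per cell is the base-2 logarithm of the number of distinguishable threshold-voltage ranges (levels). A minimal Python sketch of this correspondence follows; the level counts are common figures and the labels are illustrative, not taken from this application:

```python
import math

# Threshold-voltage ranges (levels) per cell for each cell type; QLC is
# mentioned later in this description. These values are illustrative.
LEVELS_PER_CELL = {"SLC": 2, "MLC": 4, "TLC": 8, "QLC": 16}

for cell_type, levels in LEVELS_PER_CELL.items():
    bits = int(math.log2(levels))  # bits per cell = log2(number of levels)
    print(f"{cell_type}: {levels} levels -> {bits} bit(s) per cell")
```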
  • Flash memory is attractive as a storage medium due to its access speeds.
  • the familiar USB (universal serial bus) flash drive is an example of portable storage that makes use of flash memory.
  • Solid-state drives, such as USB flash drives, usually use the NAND type of flash memory. Trade-offs in cost per bit, bit density per unit of area in an integrated circuit, error rates, long-term storage reliability, read disturb rates, read wear rates and write wear rates, among other considerations, affect the choice of which type of flash memory to use in a given application.
  • a single type of flash memory is selected for use in a system, e.g., a thumb drive, a solid-state drive, or a combination hard drive and solid-state storage, and the system then has the advantages and disadvantages of that type of flash memory. It is within this context that the embodiments arise.
  • a method for managing flash memory is provided.
  • the method includes determining at least one property of a data and determining to which type of a plurality of types of flash memory to write the data, based on the at least one property of the data.
  • the plurality of types of flash memory includes at least two types of flash memory having differing numbers of bits per cell.
  • the method includes writing the data to a flash memory of the determined type, wherein at least one action of the method is performed by a processor.
  • a method for managing flash memory includes determining a first data residence time of a first data in a first type of nonvolatile memory.
  • the method includes moving the first data to a second type of nonvolatile memory, responsive to the first data residence time of the first data exceeding a data residence time threshold of the first type of nonvolatile memory, wherein at least one act of the method is performed by an electronic device.
  • a nonvolatile memory manager includes a mapping unit configured to place a first data into one of a plurality of types of nonvolatile memory.
  • the mapping unit is further configured to relocate the first data from the one of the plurality of types of nonvolatile memory to another of the plurality of types of nonvolatile memory, responsive to a determination that the first data has resided in the one of the plurality of types of nonvolatile memory longer than a first threshold of data retention time of the one of the plurality of types of nonvolatile memory.
  • a memory includes a nonvolatile memory having at least a nonvolatile memory of a first type and a nonvolatile memory of a second type, wherein the nonvolatile memory of the first type has a shorter data retention time than the nonvolatile memory of the second type.
  • the memory includes a memory manager, having at least one processor configured to perform actions including: writing a first data to the nonvolatile memory of the first type responsive to a property of the first data; determining how long the first data resides in the nonvolatile memory of the first type; and writing the first data to the nonvolatile memory of the second type responsive to the determining how long the first data resides in the nonvolatile memory of the first type exceeding a threshold of data retention time of the nonvolatile memory of the first type.
  • a flash manager includes a mapping unit, configured to place incoming data into one of a plurality of types of flash memory in accordance with one or more first properties of the incoming data.
  • the mapping unit is configured to relocate data into one of the plurality of types of flash memory in accordance with one or more second properties of the data.
  • the one or more first properties of the data are observable upon arrival of the incoming data, and the one or more second properties of the data are determined over time after arrival of the data.
  • the plurality of types of flash memory includes a first type of flash memory having at least one bit per cell and a second type of flash memory having a greater number of bits per cell than the first type.
  • a flash storage device includes a flash memory having at least a flash memory of a first type and a flash memory of a second type, wherein the first type of flash memory has a differing number of bits per cell from the second type of flash memory.
  • the device includes a flash manager, having at least one processor configured to perform actions that include writing a first data to the flash memory of the first type responsive to the first data having a first property, and writing a second data to the flash memory of the second type responsive to the second data having a second property.
  • FIG. 1 is a system diagram showing an enterprise computing system, with processing resources, networking resources and storage resources, including flash storage, in accordance with some embodiments.
  • FIG. 2 is a system diagram showing an embodiment of the flash storage of Fig. 1, including a flash manager and multiple types of flash memory, in accordance with some embodiments.
  • Fig. 3 is a system diagram showing a tracking unit and a mapping unit in an embodiment of the flash manager of Fig. 2, in accordance with some embodiments.
  • Fig. 4A is a flow diagram showing a method for managing flash memory, which can be practiced on or by embodiments of the enterprise computing system, the flash storage, and the flash manager of Figs. 1-3, in accordance with some embodiments.
  • Fig. 4B is a flow diagram of a method for managing flash memory, applying a read rate threshold and a read count threshold, in accordance with some embodiments.
  • Fig. 4C is a flow diagram of a method for managing flash memory, applying a write rate threshold and a write count threshold, in accordance with some embodiments.
  • Fig. 4D is a flow diagram of a method for managing flash memory, applying an error count threshold and an error rate threshold, in accordance with some embodiments.
  • Fig. 4E is a flow diagram of a method for managing flash memory, applying thresholds relating to properties of data and memory, in accordance with some embodiments.
  • Fig. 4F is a flow diagram of a method for managing flash memory, applying differing error correction thresholds for data verification and long-term reads, in accordance with some embodiments.
  • FIG. 5 is an illustration showing an exemplary computing device which may implement the embodiments described herein, in accordance with some embodiments.
  • Combining two or more types of flash memory into a device or system is one way to increase capacity and reduce costs. Controlling data placement into each of the two or more types of flash memory, based on data properties, supports selection of a type of flash memory for the data placement depending on how the data is used in the system.
  • the single level cell type of flash memory has the lowest density, the highest cost per bit, the longest data retention time, the fastest average write time, the lowest read error rate, the lowest susceptibility to read disturb and program disturb (write disturb) events and errors, the lowest read wear rate and the lowest write wear rate.
  • Quad level cell (QLC) flash memory having four bits per cell, or variants of flash with higher bit density per cell, i.e., greater than four bits per cell, may continue these trends.
  • Embodiments of a flash storage, a flash manager, and a related method for managing flash memory determine placement of data into types of flash memory, in accordance with properties of the data and various policies. By making placement of the data dependent on properties of the data, these embodiments support use of multiple types of flash memory in a system. It should be appreciated that in some embodiments the data properties may be characterized as nonperformance related properties of the different types of flash memory, i.e., properties not related to access times for the different flash memory types.
  • some embodiments place data in one type or another of flash memory according to data properties that are observable upon arrival of the data. Some embodiments relocate data to one type or another of flash memory according to data properties that are observable over time after arrival of the data. It should be appreciated that these aspects can be combined in some embodiments by placing data initially and then relocating data over time, as applicable. Properties of the data can be accounted for in metadata, which can be derived initially and/or derived from tracking the data. Policies affecting data placement in accordance with properties of the data can be embedded in memory or a storage unit associated with a device for the embodiments described herein. Various mechanisms, aspects and actions are further described below with reference to Figs. 1-5.
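  • A minimal sketch of this two-phase approach, in Python, with hypothetical property names, flash-type labels, and threshold values (none of these specifics appear in this application):

```python
from dataclasses import dataclass

@dataclass
class TrackedData:
    kind: str                    # static property, observable on arrival
    reads: int = 0               # dynamic properties, tracked after arrival
    read_errors: int = 0
    residence_time_s: float = 0.0

def place_on_arrival(data: TrackedData) -> str:
    """Incoming placement: pick a flash type from static properties."""
    return "SLC" if data.kind in ("text", "os_file") else "TLC"

def maybe_relocate(data: TrackedData, current_type: str) -> str:
    """Relocation: revisit the placement using tracked dynamic properties."""
    if current_type == "TLC" and data.read_errors > 10:
        return "SLC"             # excessive read errors: lower-error-rate type
    if current_type == "TLC" and data.residence_time_s > 30 * 86400:
        return "SLC"             # long residence: type with longer retention
    return current_type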
  • Fig. 1 is a system diagram showing an enterprise computing system 102, with processing resources 104, networking resources 106 and storage resources 108, including flash storage 128, in accordance with an embodiment of the present disclosure.
  • a flash manager 130 and flash memory 132 are included in the flash storage 128.
  • the enterprise computing system 102 illustrates one environment suitable for deployment of the flash storage 128, although the flash storage 128 could be used in other computing systems or devices, larger or smaller, or in variations of the enterprise computing system 102, with fewer or additional resources.
  • the enterprise computing system 102 is coupled to a network 140, such as the global communication network known as the Internet, in order to provide or make use of services.
  • the enterprise computing system 102 could provide cloud services, physical computing resources, or virtual computing services.
  • a processing manager 110 manages the processing resources 104, which include processors 116 and random-access memory (RAM) 118. Other types of processing resources could be integrated, as the embodiment of Fig. 1 is one example and not meant to be limiting.
  • a networking manager 112 manages the networking resources 106, which include routers 120, switches 122, and servers 124. Other types of networking resources could be included.
  • a storage manager 114 manages storage resources 108, which include hard drives 126 and flash storage 128, as well as other types of storage resources. In some embodiments, the flash storage 128 completely replaces the hard drives 126.
  • the enterprise computing system 102 can provide or allocate the various resources as physical computing resources, or in variations, as virtual computing resources supported by physical computing resources.
  • the various resources could be implemented using one or more servers executing software.
  • Files or data objects, or other forms of data, are stored in the storage resources 108.
  • the hard drives 126 serve as archival storage, and the flash storage 128 serves as active storage.
  • the flash storage 128 serves at least in part as a cache memory.
  • the flash manager 130 determines in which type of flash memory, in the flash memory 132, each portion of data should be stored. In some embodiments, the flash manager 130 determines whether, when, and into which type of flash memory, in the flash memory 132, some portions of data should be relocated.
  • Fig. 2 is a system diagram showing an embodiment of the flash storage 128 of Fig. 1, including a flash manager 130 and multiple types of flash memory 132.
  • the flash memory 132 includes a single level cell type of flash memory 210, a multilevel cell type of flash memory 212, and a triple level cell type of flash memory 214.
  • the flash memory 132 includes two of these types of flash memory, one of these types of flash memory and one other type of flash memory, two other types of flash memory, or three or more types of flash memory, and so on.
  • the flash manager 130 has a tracking unit 206, a mapping unit 208, a garbage collector 202 and a wear level agent 204.
  • the flash manager 130 has fewer or additional units therein.
  • the mapping unit 208 determines where in the flash memory 132, i.e., into which of the types of flash memory 210, 212, 214, to write the data that arrives from various locations within the enterprise computing system 102, e.g., from the processing resources 104 and/or the networking resources 106.
  • the tracking unit 206 tracks operations on the data in the flash memory 132, such as read operations, write operations and/or error corrections. In addition, tracking unit 206 provides information about these operations to the mapping unit, the garbage collector 202 and/or the wear level agent 204, for use in decisions thereof.
  • the garbage collector 202 of Fig. 2 reclaims blocks of flash memory, in coordination with the mapping unit 208.
  • the garbage collector 202 cooperates with the tracking unit 206.
  • a block in one of the types of flash memory 210, 212, 214 may have a low number of valid pages, e.g., below a specified threshold value.
  • the garbage collector 202 moves the valid pages from the block being reclaimed, into one of the types of flash memory 210, 212, 214 in cooperation with the mapping unit 208. The valid pages are thus relocated, as part of the garbage collection process in this embodiment.
  • the mapping unit 208 determines which of the types of flash memory 210, 212, 214 to move these valid pages to, based on one or more properties of the data.
  • the relevant properties of the data could include one or more parameters as tracked by the tracking unit 206 since the arrival of the data into the flash memory 132.
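  • A sketch of how garbage collection might cooperate with the mapping unit, assuming a simple dictionary representation of blocks and callbacks standing in for the mapping unit and the copy machinery (all names and the threshold value are invented for illustration):

```python
def garbage_collect(blocks, choose_type, relocate_page, valid_page_threshold=16):
    """Reclaim blocks with few valid pages, relocating their valid pages.

    choose_type(page) stands in for the mapping unit: it picks a target
    flash type from the page's tracked data properties. relocate_page
    performs the copy into that type.
    """
    for block in blocks:
        valid_pages = [page for page in block["pages"] if page["valid"]]
        if len(valid_pages) < valid_page_threshold:   # block is mostly stale
            for page in valid_pages:
                relocate_page(page, choose_type(page))
            block["pages"].clear()                    # block can now be erased
```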
  • the wear level agent 204 produces even wear in blocks of flash memory, in coordination with the mapping unit 208.
  • the wear level agent 204 cooperates with the tracking unit 206.
  • the wear level agent 204 could determine that some pages in a block, or some blocks, in one of the types of flash memory 210, 212, 214 have been read a total number of times that exceeds a read count threshold value, or are being read at a rate that exceeds a read rate threshold value.
  • An excessive number of reads may increase the likelihood of errors in the data itself, or may cause read disturb errors in data adjacent to the data.
  • an excessive read rate may physically heat up the data cells on the integrated circuit, which can disturb the data, i.e., cause errors.
  • the wear level agent 204 determines that some pages in a block, or some blocks, have produced a total number of errors during reads that exceeds a read error count threshold value, or are producing errors at a rate that exceeds a read error rate threshold value, e.g. are producing an error count over a specified period of time that implies a read error rate exceeding a read error rate threshold value. These pages, or these blocks, would be considered by the wear level agent 204 to have excessive wear.
  • the wear level agent 204 then moves these pages or blocks, with excessive wear, from current locations in the flash memory 132 into one of the types of flash memory 210, 212, 214 in cooperation with the mapping unit 208.
  • the pages or blocks are thus relocated, as part of the wear leveling process in this embodiment.
  • the mapping unit 208 determines which of the types of flash memory 210, 212, 214 to move these pages or blocks to, based on one or more properties of the data.
  • the relevant properties of the data could include one or more parameters as tracked by the tracking unit 206 since the arrival of the data into the flash memory 132.
  • the wear level agent 204 can monitor the data in the new location and monitor new data in the earlier location.
  • the wear level agent 204 of Fig. 2 could determine that some pages in a block, or some blocks, in one of the types of flash memory 210, 212, 214 have been written a total number of times that exceeds a write count threshold value, or are being written at a rate that exceeds a write rate threshold value.
  • An excessive number of writes may increase the likelihood of errors in the data itself, or may cause write disturb errors in data adjacent to the data, i.e., in cells adjacent to the cells being written.
  • An excessive write rate may physically heat up the data cells on the integrated circuit, which can disturb the data, i.e., cause errors.
  • the wear level agent 204 could determine that some pages in a block, or some blocks, have produced a total number of errors during writes that exceeds a write error count threshold value, or are producing errors at a rate that exceeds a write error rate threshold value, e.g. are producing an error count over a specified period of time that implies a write error rate exceeding a write error rate threshold value. These pages, or these blocks, would be considered by the wear level agent 204 to have excessive wear. The wear level agent 204 then moves these pages or blocks, with excessive wear, from current locations in the flash memory 132 into one of the types of flash memory 210, 212, 214 in cooperation with the mapping unit 208. The pages or blocks are thus relocated, as part of the wear leveling process in this embodiment.
  • the mapping unit 208 determines which of the types of flash memory 210, 212, 214 to move these pages or blocks to, based on one or more properties of the data.
  • the relevant properties of the data could include one or more parameters as tracked by the tracking unit 206 since the arrival of the data into the flash memory 132.
  • the wear level agent 204 of Fig. 2 can apply thresholds specific to various types of flash memory. For example, flash memory having a lower number of bits per cell could have a lower read error rate threshold value than flash memory having a greater number of bits per cell.
  • the wear level agent 204 can determine whether an observed read error rate indicates excessive wear in a given page or block of a specific type of flash memory, in which case the data could be moved to a differing block in the same type of flash memory, or whether the observed read error rate is consistent with that type of flash memory and indicates the data should be relocated to a type of flash memory having a generally lower read error rate.
  • Such determination could take into account one or more static properties of the data, as well as one or more dynamic properties of the data, and corresponding policies.
  • the wear level agent 204 could apply thresholds that differ from thresholds applied by the mapping unit 208, in the above and other scenarios. Thresholds could apply to time, or to numbers of operations in some embodiments. For example, a read error rate and corresponding threshold value could apply to a number of read errors per total number of reads, or a number of read errors over a length of time. Thresholds and rates can be implemented as parameters in some embodiments.
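  • The per-type threshold logic discussed above might be sketched as follows, with invented threshold values; an observed error rate above the norm for the type suggests wear in that page or block, while a rate consistent with the type can still prompt relocation of the data to a type with a generally lower read error rate:

```python
# Illustrative read error rate thresholds per flash type (invented values).
TYPE_ERROR_RATE_THRESHOLD = {"SLC": 1e-6, "MLC": 1e-5, "TLC": 1e-4}

def wear_decision(flash_type: str, observed_error_rate: float) -> str:
    if observed_error_rate > TYPE_ERROR_RATE_THRESHOLD[flash_type]:
        # Abnormal for this type: suspect wear, move to a different block
        # of the same type.
        return "relocate_within_same_type"
    # Consistent with this type, but a policy may still prefer a type
    # with a generally lower read error rate for this particular data.
    return "consider_lower_error_rate_type"
```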
  • Fig. 3 is a system diagram showing a tracking unit 206 and a mapping unit 208 in an embodiment of the flash manager 130 of Fig. 2.
  • the tracking unit 206 tracks various properties of the data. In some embodiments, some or all of the properties are indicated in metadata 316. For example, a first type of property is observable upon arrival of the data.
  • the tracking unit 206 could determine a file type or an object type of the data, and write this information into the metadata 316.
  • An operating system or other portion of the enterprise computing system 102 could send along metadata with the file or the data, and this metadata could be written into the metadata 316 in the tracking unit 206 in some embodiments.
  • Metadata 316 could indicate whether data is, includes at least a portion of, or is included in a text file, an operating system file, an image file, e.g. photographs, video or movies, a temporary file, an archive file, backup data, deduplicated backup data, and so on.
  • Metadata 316 could indicate a read count of the data, a write count of the data, an error count of reads of the data, a count of a number of discards of the data during deduplication, a time interval from when the data was written, and so on.
  • a second type of property is observable over time, after the arrival of the data.
  • the tracking unit 206 of Fig. 3 could keep track of the number of times data is read, written to, or produces errors, and/or time intervals over which these events occur.
  • Counters and timers could be implemented in hardware, firmware or software, or a single timer or a single counter could be used and count values or time values could be written into memory locations, for example in the metadata 316.
  • the tracking unit 206 has read counters 302, 304 (e.g., read counter 1 through read counter N), for counting the number of times a data page, a data block or other amount of data is read.
  • the tracking unit 206 has write counters 306, 308 (e.g., write counter 1 through write counter N), for counting the number of times a data page, a data block or other amount of data is written to.
  • the tracking unit 206 has error correction code (ECC) counters 310, 312 (e.g., ECC counter 1 through ECC counter N), for counting the number of times a data page, a data block or other amount of data produces an error during a read, where the error is corrected through application of an error correction code.
  • the tracking unit 206 has a timer 314, for timing intervals.
  • the timer 314 could be applied to establish the beginning and end of a time interval, during which time interval reads, writes or errors are counted to determine a read rate, a write rate or an error rate. The determined read rate, write rate, or error rate can then be recorded in the metadata 316, and used for comparison with a read rate threshold, a write rate threshold, or an error rate threshold. As a further use, the timer 314 could be applied to measure an interval between reads, an interval between writes, or a time interval establishing how long data has resided at a specified location in memory.
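  • A compact sketch of such a tracking unit, assuming one counter set per tracked page or block (the class layout is an assumption; the description above also allows a shared timer, with counts and rates recorded in the metadata 316):

```python
import time

class ExtentTracker:
    """Counters and a timer for one tracked page or block (illustrative)."""

    def __init__(self):
        self.reads = 0
        self.writes = 0
        self.ecc_corrections = 0
        self.window_start = time.monotonic()   # start of the current interval

    def record_read(self, corrected_errors: int = 0):
        self.reads += 1
        self.ecc_corrections += corrected_errors

    def read_rate(self) -> float:
        """Reads per second over the current window, for threshold checks."""
        elapsed = time.monotonic() - self.window_start
        return self.reads / elapsed if elapsed > 0 else 0.0
```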
  • In the mapping unit 208, a memory location planner 334, data placement policies 326, and a memory mapper 332 cooperate to place incoming data and/or relocate data, in accordance with properties of the data. Planning for where to locate data, i.e., in which of the types of flash memory 210, 212, 214 to place or relocate the data, is handled by the memory location planner 334, in accordance with the properties of the data, from the tracking unit 206, and the data placement policies 326.
  • Data placement policies 326 include incoming data placement policies 328, and relocation data placement policies 330.
  • the memory mapper 332 keeps track of where the data is placed, whether initially or through relocation, by maintaining a map between logical addresses and physical addresses of the data.
  • the memory location planner 334 includes an execution FIFO (first in first out) 318, a map unit 320, a copy unit 322, and a release unit 324.
  • the memory location planner 334 could be event driven, responding to the arrival of incoming data or notifications from the tracking unit 206, could be polling-based, conducting polls and responding to results of the polls, could be sequentially scanning through the flash memory or through the metadata 316, or could proceed in some other manner consistent with the teachings herein.
  • examples of incoming data and data to be relocated are discussed below to further illustrate embodiments. Following these example illustrations and discussions, the data placement policies 326 will be further discussed, with numerous examples thereof.
  • the memory location planner 334 looks at first properties or static properties of incoming data, i.e., those properties of the data that are observable upon arrival of the data. Static properties are unchanging over the lifespan of the data, e.g., an operating system file is always an operating system file, an image file always has at least one image, a text file is always text. It should be appreciated that any exceptions to this could be handled by exception policies.
  • the memory location planner 334 consults with the incoming data placement policies 328, which could include instructions or directions as to which type of flash memory 210, 212, 214 (see Fig. 2) to write data having a specified property or properties upon arrival.
  • the memory location planner 334 determines which physical address or addresses are available in the determined type of flash memory 210, 212, 214, by applying the map unit 320 in coordination with the memory mapper 332.
  • the memory location planner 334 then places the determined physical address or addresses in the execution FIFO 318.
  • the incoming data is placed into the execution FIFO 318 along with the address or addresses to which the data will be written.
  • the incoming data is placed into a separate data FIFO, as readily devised. Further mechanisms for temporarily holding the data are readily apparent.
  • the execution FIFO 318 writes the data to the address in the selected type of flash memory 210, 212, 214, in the order in which the direction to so write is received, i.e., first-in first-out.
  • the memory mapper 332 tracks these data writes, updating a map of logical addresses and physical addresses accordingly.
  • Memory mapping, via the memory mapper 332, is applied whenever data is written to or read from the flash storage 128, e.g., when an I/O operation is requested on the storage resources and the flash storage 128, by a component in the enterprise computing system of Fig. 1, or when the mapping unit 208 initially locates or later relocates data.
  • the I/O operation specifies a logical address for the data being requested, and the memory mapper would find the physical address, i.e., where the data is physically located in the flash memory 132.
  • Other sequences for writing data, tracking where the data is written, mapping the data, and mechanisms for doing so, are readily devised in accordance with the teachings disclosed herein.
  • the data is placed directly, and an execution FIFO 318 is optional.
  • the execution FIFO 318 is replaced by another type of scheduling device such as a stack or a queue.
  • the memory location planner 334 looks at second properties or dynamic properties of the data, i.e., those properties of the data that are tracked over time by the tracking unit 206. Dynamic properties may change over the lifespan of the data, e.g., a file may be read more times or fewer times over an interval or a lifespan, data may have fewer errors or more errors during reads over an interval or a lifespan, data may be written to a specified address more often or less often or more times or fewer times over an interval or a lifespan, and so on.
  • the memory location planner 334 consults with the relocation data placement policies 330, which could include instructions or directions as to which type of flash memory 210, 212, 214 to write data having a specified property or properties determined after arrival of the data, and could include parameter values such as thresholds to be applied to the properties as tests.
  • the memory location planner 334 determines which physical address or addresses are available in the determined type of flash memory 210, 212, 214, by applying the map unit 320 in coordination with the memory mapper 332.
  • the memory location planner 334 then places the determined physical address or addresses in the execution FIFO 318.
  • the execution FIFO 318 copies the data from the current address to the new address in the selected type of flash memory 210, 212, 214, in the order in which the direction to so write is received, i.e., first-in first-out. This action happens via the copy unit 322, which also verifies that the data was written correctly prior to releasing the earlier address via the release unit 324. Differences between placing arriving data and relocating data could be handled by setting a flag in the execution FIFO 318 or through other mechanisms as readily devised. In some embodiments, the data is relocated directly, and an execution FIFO 318 is not needed.
  • the memory mapper 332 tracks the data writes, updating a map of logical addresses and physical addresses accordingly. In some embodiments, additional layers of memory mapping and addresses are applied.
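  • The copy, verify, and release sequence might be sketched as below, where data_read, data_write, and the mapper dictionary stand in for the flash interface, the copy unit 322, the release unit 324, and the memory mapper 332 (all of these interfaces are assumptions):

```python
def relocate(data_read, data_write, mapper, logical_addr, new_phys):
    """Copy data to a new physical address, verify, then release the old."""
    old_phys = mapper[logical_addr]
    payload = data_read(old_phys)
    data_write(new_phys, payload)
    if data_read(new_phys) != payload:      # copy unit verifies before release
        raise IOError("verification failed; old location retained")
    mapper[logical_addr] = new_phys         # memory mapper: update the map
    return old_phys                         # release unit may now reclaim this
```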
  • the data placement policies 326 include incoming data placement policies 328 and relocation data placement policies 330, in some embodiments. In further embodiments, the data placement policies 326 are combined, not separate. In some embodiments, the incoming data placement policies 328 moderate or even override some or all of the relocation data placement policies 330 in some circumstances as discussed below. It should be appreciated that the data placement policies 326 are not limited to the examples described herein, and further data placement policies 326 can be developed and applied in further embodiments. Particularly, policies could apply rates in place of counts, or counts in place of rates, or could apply both. It should be further appreciated that data placement policies 326 applying to types of flash memory with differing numbers of levels per cell may be interpreted as or have corresponding data placement policies applying to types of flash memory with differing characteristics relative to data, and vice versa.
  • One policy is writing text data to a type of flash memory having a lower read error rate and writing image data to a type of flash memory having a higher read error rate.
  • This policy could be interpreted as, or replaced by a policy of writing text data to a type of flash memory having a lower number of bits per cell and writing image data to a type of flash memory having a higher number of bits per cell.
  • This policy could also be written as two policies, one for text data and one for image data.
  • Another policy is writing data having a number of discards during deduplication, which number of discards meets a deduplication threshold value, to the type of flash memory having the lower read error rate.
  • a backup run employing deduplication could encounter a large number of duplicate blocks of data.
  • One copy of such a block of data could be written to the type of flash or nonvolatile memory having the lower read error rate, and the large number of duplicate blocks could then be discarded, as an action of deduplication.
  • Yet another policy is placing data in the type of flash memory having the higher read error rate in response to the data having a single instance during deduplication.
  • Data that has only a single instance, i.e., data that does not have discards during a backup operation with deduplication, is likely to be read only once during a restore operation.
  • placing such data in the type of flash memory with a higher read error rate may still be safe, considering that the data will not be read frequently or a large number of times.
  • the data placement policies may include a policy to move data from the type of flash memory having the higher read error rate to the type of flash memory having the lower read error rate in response to the data having a cumulative number of read errors exceeding a read error threshold value.
  • This policy could be interpreted as, or replaced by a policy of moving data from the type of flash memory having the higher number of bits per cell to the type of flash memory having the lower number of bits per cell in response to the data having a cumulative number of read errors exceeding a read error threshold.
  • Another policy is moving data from the type of flash memory having the higher read error rate to the type of flash memory having the lower read error rate in response to the data having a rate of read errors exceeding a read error rate threshold value.
  • This policy could be interpreted as, or replaced by a policy of moving data from the type of flash memory having the higher number of bits per cell to the type of flash memory having the lower number of bits per cell in response to the data having a rate of read errors exceeding a read error rate threshold.
  • One policy is moving data from a type of flash memory having a lower read cycle endurance to a type of flash memory having a higher read cycle endurance in response to the data having a cumulative number of reads exceeding a read count threshold value.
  • This policy could be interpreted as, or replaced by a policy of moving data from the type of flash memory having the higher number of bits per cell to the type of flash memory having the lower number of bits per cell in response to the data having a cumulative number of reads exceeding a read count threshold.
  • Another data placement policy includes a policy to move data from the type of flash memory having the higher read error rate to the type of flash memory having the lower read error rate in response to the data having a read rate exceeding a read rate threshold.
  • This policy could be interpreted as, or replaced by, a policy of moving data from the type of flash memory having the higher number of bits per cell to the type of flash memory having the lower number of bits per cell in response to the data having a read rate exceeding a read rate threshold value.
  • One policy is moving data from a type of flash memory having a lower write cycle endurance to a type of flash memory having a higher write cycle endurance in response to the data having a cumulative number of writes exceeding a write count threshold value.
  • This policy could be interpreted as, or replaced by a policy of moving data from the type of flash memory having the higher number of bits per cell to the type of flash memory having the lower number of bits per cell in response to the data having a cumulative number of writes exceeding a write count threshold value.
  • This write count threshold value could match the write limit (i.e., the write cycle endurance) published by a manufacturer of a given type of flash memory, or could be set at some other value, e.g., conservatively lower.
  • One policy is moving data from the type of flash memory having the lower write cycle endurance to the type of flash memory having the higher write cycle endurance in response to the data having a write rate exceeding a write rate threshold value.
  • This policy could be interpreted as, or replaced by a policy of moving data from the type of flash memory having the higher number of bits per cell to the type of flash memory having the lower number of bits per cell in response to the data having a write rate exceeding a write rate threshold value.
  • One policy is moving data from a type of flash memory having a lower retention time to a type of flash memory having a higher retention time in response to a data residence time of the data exceeding a residence time threshold value.
  • This policy could be interpreted as, or replaced by a policy of moving data from the type of flash memory having the higher number of bits per cell to the type of flash memory having the lower number of bits per cell in response to a data residence time of the data exceeding a residence time threshold value.
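  • Several of the policies above can be expressed as ordered (condition, destination) rules evaluated against a dictionary of tracked data properties; the property names and threshold values below are invented for the sketch, and the first matching rule wins:

```python
POLICIES = [
    (lambda d: d["kind"] == "text",                "lower_error_rate_type"),
    (lambda d: d["kind"] == "image",               "higher_error_rate_type"),
    (lambda d: d["dedup_discards"] >= 8,           "lower_error_rate_type"),
    (lambda d: d["read_errors"] > 10,              "lower_error_rate_type"),
    (lambda d: d["residence_time_s"] > 30 * 86400, "higher_retention_type"),
]

def destination(properties: dict) -> str:
    """Return the flash type the first matching policy selects."""
    for condition, dest in POLICIES:
        if condition(properties):
            return dest
    return "default_type"
```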
  • Figs. 4A-4F are flow diagrams showing various methods for managing flash memory, which can be practiced on or by embodiments of the enterprise computing system 102, the flash storage 128, and the flash manager 130 of Figs. 1-3, or the computing device of Fig. 5. It should be appreciated that further embodiments combine portions from one or more of these flow diagrams, in accordance with the teachings disclosed herein. For example, one embodiment of the method includes portions of the flow diagrams shown in Figs. 4B-4D implementing portions of the flow diagram shown in Fig. 4A.
  • Fig. 4A is a flow diagram showing a method for managing flash memory.
  • the method applies properties of the data, and policies regarding properties of the data, to direct placement of data and relocation of data into various types of flash memory.
  • a property of incoming data is determined, in an action 402. For example, a determination could be made as to whether the data includes an image, a temporary file, an operating system file or a type of object, or whether the data is a single instance of data from deduplication or a multiple instance data from deduplication, etc.
  • Policies are consulted, in an action 404.
  • incoming data placement policies could be consulted.
  • These policies could include instructions or directions as to which type of flash memory to write data having particular properties.
  • These policies could relate to data properties that are observable upon arrival of the data in some embodiments.
  • Data is written to a type of flash memory in accordance with the policies, in an action 406.
  • data having a particular property could be written to a specified type of flash memory in accordance with a policy directing to do so for such data.
  • Data is tracked in various types of flash memory, in an action 408.
  • data could be tracked in two or more types of flash memory. Tracking could include counting the number of reads of the data, the number of writes of the data, the number of errors during reads of the data, establishing a time interval or window during which to count, and so on. Results from tracking could be written into the metadata, for use in comparison with the policies.
  • Policies are consulted, in an action 410. For example, relocation data placement policies as described above could be consulted. These policies could include instructions or directions as to which type of flash memory to write data having particular properties based on tracking the data in the action 408.
  • Data is relocated to a type of flash memory in accordance with the policies, in an action 412. For example, the properties determined from tracking the data, and the relocation data placement policies, could direct relocation of the data to a specified type of flash memory.
  • Fig. 4B is a flow diagram of a method for managing flash memory, applying a read rate threshold and a read count threshold.
  • the method shown in Fig. 4B applies a dynamic property of the data, namely a count of the number of times the data is read, and a policy relating to the dynamic property of the data and types of flash memory, to direct placement of the data into various types of flash memory.
  • Reads of data are counted in a first type of flash memory, during a time interval, in an action 420.
  • the tracking unit could dedicate a read counter and a timer to track the number of reads and the time interval, recording results in the metadata.
  • a question is asked: has the time interval expired? If the answer is no, the time interval has not expired, and flow branches back to the action 420 to continue counting the reads of the data. If the answer is yes, the time interval has expired, and the flow branches to the decision action 424.
  • In a decision action 424, a question is asked: is the read count over the entire time interval below a read rate threshold? For example, dividing the read count by the time interval provides a read rate, which could be compared to a predetermined read rate threshold associated with the type of flash memory in which the data resides.
  • If the answer is no, the read rate of the data is too high, and there is concern that a read disturb error could occur; the data can be relocated elsewhere in the same type of flash memory.
  • Relocating the data elsewhere in the same type of flash memory would stop or decrease the influence of the repeated reads on the neighboring cells of the original location of the data. This could be performed as a function of wear leveling in some embodiments.
  • If the answer is yes in decision action 424, the read count over the entire time interval is below the read rate threshold, and flow branches to the action 426.
  • a decision could be made as to which type of flash to relocate to, based on a total read count, once a read rate has been exceeded. For example, after action 424 where it has been determined that the read rate threshold has been exceeded, the method may determine if the read count threshold has been exceeded and then proceed to determine the type of flash or nonvolatile memory to relocate the data to. In some embodiments, after action 424 where it has been determined that the read rate threshold has been exceeded, the method may determine the type of flash or nonvolatile memory from a plurality of types of flash or nonvolatile memory and is not limited to a single second type of flash or nonvolatile memory as Fig. 4B is one example and not meant to be limiting.
  • the data is relocated to a second type of flash, in the action 426. This could occur because the data has a low read rate, and can be relocated to a type of flash memory that is safe, i.e., unlikely to generate read disturb errors, for a lower read rate in some embodiments.
  • the reads of data in the second type of flash are counted, in an action 428. For example, the tracking unit could dedicate a read counter to counting the reads of the data.
  • In a decision action 430, a question is asked: does the read count meet the read count threshold value for the second type of flash memory?
  • the second type of flash memory could have a lower read count threshold value than the first type of flash memory, and the read count from the tracking unit is compared to this read count threshold value. If the answer is no, the read count does not yet meet the read count threshold value, and the flow branches back to the action 428, to continue counting reads of the data in the second type of flash memory. If the answer is yes, the read count meets the read count threshold, and the flow branches to the action 432.
  • The data is relocated to the first type of flash memory, in the action 432. This could occur because the data has been read a total number of times greater than is safe for reading data in the second type of flash memory, and read errors (or excessive numbers of read errors) might start occurring if the data is not soon moved.
  • the write counts could be applied to independent pieces of data. Write rates could be applied in place of write counts, and vice versa.
  • Fig. 4C is a flow diagram of a method for managing flash memory, applying a write rate threshold and a write count threshold.
  • the method shown in Fig. 4C applies a dynamic property of the data, namely how many times the data is written, and a policy relating thereto, to direct placement of the data into various types of flash memory.
  • Writes of the data into a first type of flash memory are counted during a time interval, in an action 436.
  • the tracking unit could dedicate a write counter and a timer to track the number of writes and the time interval, recording results in the metadata.
  • Some of the data could be changing, and some of the data could be the same or unchanging, for example when a portion of a file is edited and the file is saved, and the counter could be tracking writes of data to a physical address or a range of physical addresses in a type of flash memory (or tracking the logical addresses with translation via the memory mapper).
  • In a decision action 440, a question is asked: is the write count over the entire time interval below the write rate threshold value? For example, dividing the write count by the time interval provides a write rate, which could be compared to a predetermined write rate threshold associated with the type of flash memory in which the data resides.
  • If the answer is no, the write rate of the data is too high, and there is concern that a write disturb error could occur; the data can be relocated elsewhere in the same type of flash memory.
  • Relocating the data elsewhere in the same type of flash memory would stop or decrease the influence of the repeated writes on the neighboring cells of the original location of the data. This could be performed as a function of wear leveling in some embodiments. If the answer is yes in decision action 440, the write count over the entire time interval is below the write rate threshold, and flow branches to the action 442.
  • the data is relocated to a second type of flash, in the action 442. This could occur because the data has a low write rate, and can be relocated to a type of flash memory that is safe, i.e., unlikely to generate write disturb errors, for a lower write rate.
  • the writes of the data in the second type of flash memory are counted, in an action 444.
  • the tracking unit could dedicate a write counter to counting the writes of the data. In a manner paralleling Fig. 4B, the write count in the second type of flash memory can then be compared to a write count threshold value for that type, with the data relocated back to the first type of flash memory when the threshold is met.
  • Fig. 4D is a flow diagram of a method for managing flash memory, applying an error count threshold and an error rate threshold.
  • the method shown in Fig. 4D applies a dynamic property of the data, namely a count of the read errors, and a policy relating thereto, to direct placement of the data into various types of flash memory.
  • just the error count threshold or just the error rate threshold could be applied.
  • An error count threshold could apply to a cumulative count of errors, or to an instantaneous error correction, e.g., of more than a specified number of bits in a data read in some embodiments.
  • Read errors of data in the second type of flash memory are counted during a time interval, in an action 450.
  • the tracking unit could dedicate an error correction code counter and a timer, to count the time interval and the number of errors that occurred during reads of data.
  • a question is asked: has the time interval expired? If the answer is no, the time interval has not expired, and the flow branches back to the action 450, to continue counting the read errors during the time interval. If the answer is yes, the time interval has expired, and the flow branches to the decision action 454.
  • In the decision action 454, a question is asked: does the read error count over the entire time interval meet the read error rate threshold value? If the answer is yes, the read error count over the entire time interval meets the read error rate threshold value, and the flow branches to the action 460, in order to relocate the data to the first type of flash. This could occur because the read error rate is excessive, and the data is then moved to a type of flash memory having a lower read error rate. If the answer is no, the read error count over the entire time interval does not yet meet the read error rate threshold value, and the flow branches to the action 456.
  • the count of the read errors of the data in the second type of flash memory continues, in an action 456. For example, even if a read error rate threshold has not been exceeded, the total number of read errors could still be a concern. Exceeding either a read error rate threshold value or a read error count threshold value could mean that read errors are becoming more likely.
  • In a decision action 458, a question is asked: does the read error count meet the read error count threshold for the second type of flash? If the answer is no, the read error count does not yet meet the read error count threshold value, and the flow branches back to the action 456, to continue counting the read errors. If the answer is yes, the read error count meets the read error count threshold value, and the flow branches to the action 460.
  • the data is relocated to the first type of flash, in the action 460. This could occur because read errors are likely to occur if further reads of the data in the second type of flash memory are performed, and it is safer to move the data to a type of flash memory that has a lower read error rate.
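  • Either test in Fig. 4D can trigger relocation from the second type of flash memory to the first; a sketch with invented threshold values:

```python
def should_move_to_first_type(errors_in_window: int, window_s: float,
                              cumulative_errors: int,
                              error_rate_threshold: float = 0.5,
                              error_count_threshold: int = 100) -> bool:
    """True if read errors in the second type warrant moving the data."""
    if window_s > 0 and errors_in_window / window_s >= error_rate_threshold:
        return True                      # windowed error rate too high
    return cumulative_errors >= error_count_threshold
```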
  • Fig. 4E is a flow diagram of a method for managing flash memory, applying thresholds relating to properties of data and memory.
  • the method shown in Fig. 4E applies a dynamic property of the data, namely a read error rate, and a policy relating thereto, to direct placement of the data into various types of flash memory.
  • a read error rate of data in a type of flash memory is determined, in an action 466.
  • This read error rate could be determined by counting the number of read errors of the data over a span of time or over a variable or predetermined number of reads, e.g., by dividing the number of read errors by the number of reads on an ongoing basis or after a fixed number of reads. This would produce a read error rate as a function of time or a read error rate as a function of the number of reads.
  • In a decision action, a question is asked: is the read error rate above the threshold value for this data? For example, the data could have the static property of being a portion of an operating system file, or a portion of an image file, and various types of data might have differing thresholds for a read error rate. If the answer is no, the read error rate is not above the threshold value, and the flow branches back to the action 466, so that the read error rate can be monitored further. If the answer is yes, the read error rate is above the threshold value, and the flow branches to the decision action 470.
  • In the decision action 470, a question is asked: is the read error rate above the threshold value for this type of flash memory?
  • Various types of flash memory may have differing read error rates, so that differing read error rate thresholds are established for the various types of flash memory.
  • A read error rate that is greater than the read error rate threshold value for a type of flash memory may indicate that the page or block in the flash memory is experiencing wear from a large number of reads or writes and is producing errors more frequently as a result.
  • If the answer is yes, the read error rate is above the threshold value for this type of flash memory and the flow branches to the action 472, to relocate the data elsewhere in the same type of flash memory. It should be appreciated that this relocation of the data could be part of a wear leveling process.
  • If the answer is no, the flow branches to the action 474, to relocate the data to a type of flash memory having a lower read error rate. This could occur because the type of flash memory in which the read error rate is observed is not wearing out, but is nonetheless producing errors at too great a rate for this particular data, i.e., the read error rate exceeds the threshold for data with this static property as discussed above. It should be appreciated that the flow diagram of Fig. 4E provides an example of an interaction between a dynamic property of data and a static property of data in decisions as to where in various types of flash memory to locate the data.
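  • A minimal sketch of the Fig. 4E decisions follows, assuming the per-data-type and per-flash-type thresholds are supplied by policy; the function and callback names are illustrative only.

```python
def place_by_read_error_rate(read_errors, reads,
                             data_type_threshold, flash_type_threshold,
                             relocate_within_type, relocate_to_lower_error_type):
    """Apply the read error rate (a dynamic property) against a threshold for
    the data (a static property) and a threshold for the flash type."""
    if reads == 0:
        return  # nothing to decide yet; keep monitoring (action 466)
    rate = read_errors / reads  # action 466: read error rate per read
    if rate <= data_type_threshold:
        return  # below the threshold for this data; monitor further
    if rate > flash_type_threshold:
        # Decision 470, yes: this page or block is likely wearing out.
        relocate_within_type()           # action 472, e.g. wear leveling
    else:
        # Decision 470, no: the flash type is too error-prone for this data.
        relocate_to_lower_error_type()   # action 474
```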
  • Fig. 4F is a flow diagram of a method for managing flash memory, applying differing error correction thresholds for data verification and long-term reads.
  • The method shown in Fig. 4F applies a dynamic property of the data, namely a rate of error correction, and a policy relating thereto, to direct placement of the data into various types of flash memory.
  • Data is written to a second type of flash memory, in an action 480.
  • The second type of flash memory could be a type that has an intermediate error rate as compared to a first type of flash memory having a lower error rate.
  • A rate of error correction is determined, in an action 482.
  • For example, the tracking unit could count the number of times an error is corrected during reads of the data, using error correction code, and track this number using one of the error correction code counters. This could be performed as a sequence of data bytes is read out from the second type of flash memory during verification of the data, with the error correction code counter incremented each time an error occurs. The total number of error corrections could then be divided by the total number of bytes (or words, or other lengths of data) read out, to determine the error correction rate.
  • In a decision action 484, a question is asked: is the rate of error correction below a first error correction threshold value?
  • The first error correction threshold value could be a data verification error correction threshold value. If the answer is yes, the flow branches to the action 488 for long-term monitoring of the data. If the answer is no, the flow branches to the action 486.
  • In the action 486, the data is relocated elsewhere in the second type of flash memory. This could occur because the verification showed a greater number of errors, i.e., a greater rate of error correction, than should be the case for the second type of flash memory if it were not experiencing wear. The conclusion would be that the particular page or block in the second type of flash memory may be experiencing wear from excessive reads or writes, and is therefore producing excess errors even upon a first write of new data and verification of this data. The data is then relocated as a consequence, and the rate of error correction is tested again to see whether the new location in the second type of flash memory is experiencing wear. After the action 486, the flow branches back to the action 482, in order to check the rate of error correction in the new location.
  • In the action 488, long-term monitoring of the data commences, and a rate of error correction during subsequent reads of the data is determined.
  • In a decision action 490, a question is asked: is the rate of error correction below a second error correction threshold value?
  • The second error correction threshold value could be a long-term error correction threshold value, which differs from the verification error correction threshold value.
  • The long-term error correction threshold value could be equal to, less than, or greater than the verification error correction threshold value. If the answer is yes, the rate of error correction is below the second error correction threshold value and the flow branches back to the action 488, for ongoing monitoring of the rate of error correction.
  • The method shown in Fig. 4F could be applied as part of data verification and monitoring, wear leveling, or both. It should be appreciated that each of Figs. 4A-F includes a loop back from the final method operation to the initial method operation, as the data may change or evolve over time and the methods described herein may monitor these changes by repeating as needed.
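  • The following sketch illustrates the two-threshold scheme of Fig. 4F, assuming a hypothetical read_with_ecc() helper that performs one full read pass and returns the number of bytes read and the number of errors corrected; it is a sketch under those assumptions, not the patented implementation.

```python
def verify_then_monitor(read_with_ecc, relocate_within_second_type,
                        verification_threshold, long_term_threshold):
    """Verify a fresh write against one error correction threshold, then
    monitor long-term reads against a second, possibly different, threshold."""
    # Actions 480-486: verify the written data, relocating on excess corrections.
    while True:
        bytes_read, corrected = read_with_ecc()                    # action 482
        if corrected / max(bytes_read, 1) < verification_threshold:  # decision 484
            break  # verification passed; begin long-term monitoring
        relocate_within_second_type()  # action 486, then re-verify the new location
    # Action 488 and decision 490: long-term monitoring.
    while True:
        bytes_read, corrected = read_with_ecc()
        rate = corrected / max(bytes_read, 1)
        if rate >= long_term_threshold:
            return rate  # threshold met; the caller chooses a remedy, e.g. relocation
```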
  • A policy can combine two or more measurements, or two or more policies can be enacted or combined. For example, a high read error rate combined with a high read rate could result in moving data to a lower error-rate flash or nonvolatile memory, or in applying a more advanced error correction mechanism on a less reliable flash or nonvolatile memory. Data could be placed in a type of nonvolatile memory according to an intended residence time of the data in that type of nonvolatile memory.
  • The intended residence time could be based on the file type, an object type, or an object identifier. Later, the data could be moved to another type of nonvolatile memory if the data has resided longer than the intended residence time, which could act as a threshold. As a further example, a combination of a high number of reads and a low number of writes to a particular location could result in moving data into a flash memory with a high read endurance but a low write cycle endurance.
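  • As a non-authoritative sketch of such combined policies, the following encodes three of the combinations described above; all thresholds, field names, and action strings are placeholders.

```python
from dataclasses import dataclass

@dataclass
class AccessStats:
    read_error_rate: float  # read errors per read
    reads_per_hour: float
    reads: int
    writes: int

@dataclass
class DataInfo:
    placed_at: float                # seconds since epoch, when data was placed
    intended_residence_time: float  # seconds, derived from file or object type

def combined_policy(stats: AccessStats, info: DataInfo, now: float) -> list[str]:
    actions = []
    # High read error rate combined with a high read rate.
    if stats.read_error_rate > 0.001 and stats.reads_per_hour > 10_000:
        actions.append("move_to_lower_error_rate_memory")
    # Intended residence time acting as a threshold.
    if now - info.placed_at > info.intended_residence_time:
        actions.append("move_to_another_nonvolatile_memory_type")
    # Many reads and few writes favor flash with high read endurance.
    if stats.reads > 100 * max(stats.writes, 1):
        actions.append("move_to_high_read_endurance_flash")
    return actions
```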
  • Various mechanisms for tracking or deriving the dynamic properties of the data are readily developed in accordance with the teachings herein.
  • At least one dynamic property of the data, and at least one policy relating the dynamic property to the various types of flash memory are applied to the decision(s) as to where to place or relocate the data, i.e., the decision or decisions as to which type of flash memory to write or place the data into.
  • At least one static property of the data, and at least one policy relating the static property of the data to the various types of flash memory are applied to the decision(s) as to where to write or place the data.
  • Embodiments of the method can be applied to storage systems having two or more types of flash memory, and parameters, thresholds, properties, and policies relating to two, three, four, five, or more types of flash memory are readily devised in accordance with the teachings herein.
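  • A toy policy table, assuming hypothetical flash type names and threshold values, suggests how a static property (file type) and a dynamic property (read error rate) could jointly select among several types of flash memory.

```python
# (file_type, activity) -> flash type; entries are illustrative placeholders.
PLACEMENT_POLICY = {
    ("operating_system", "error_prone"): "SLC",
    ("operating_system", "stable"):      "SLC",
    ("image", "error_prone"):            "MLC",
    ("image", "stable"):                 "TLC",
}

def choose_flash_type(file_type: str, read_error_rate: float,
                      error_rate_threshold: float = 0.001) -> str:
    activity = "error_prone" if read_error_rate > error_rate_threshold else "stable"
    # Fall back to a middle tier when no policy entry matches.
    return PLACEMENT_POLICY.get((file_type, activity), "MLC")
```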
  • Fig. 5 is an illustration showing an exemplary computing device which may implement the embodiments described herein.
  • The computing device of Fig. 5 may be used to perform embodiments of the functionality for managing various types of flash memory, or managing other types of memory, in accordance with various embodiments.
  • The computing device includes a central processing unit (CPU) 501, which is coupled through a bus 505 to a memory 503 and a mass storage device 507.
  • Mass storage device 507 represents a persistent data storage device such as a disc drive, which may be local or remote in some embodiments.
  • The mass storage device 507 could implement a backup storage in some embodiments.
  • Memory 503 may include read-only memory, random-access memory, etc. Applications resident on the computing device may be stored on or accessed via a computer readable medium such as memory 503 or mass storage device 507 in some embodiments.
  • Applications may also be in the form of modulated electronic signals accessed via a network modem or other network interface of the computing device.
  • CPU 501 may be embodied in a general-purpose processor, a special purpose processor, or a specially programmed logic device in some embodiments.
  • The computing device may include well-known components such as one or more counters 513, timers 515, or communication ports 517. One or more of the communication ports 517 could be connected to a network 519.
  • Display 511 is in communication with CPU 501, memory 503, and mass storage device 507, through bus 505. Display 511 is configured to display any visualization tools or reports associated with the embodiments described herein.
  • Input/output device 509 is coupled to bus 505 in order to communicate information in command selections to CPU 501. It should be appreciated that data to and from external devices may be communicated through the input/output device 509.
  • CPU 501 can be defined to execute the functionality described herein to enable the functionality described with reference to Figs. 1-4.
  • The code embodying this functionality may be stored within memory 503 or mass storage device 507 for execution by a processor such as CPU 501 in some embodiments.
  • The operating system on the computing device may be MS-WINDOWS™, UNIX™, LINUX™, iOS™, CentOS™, Android™, Redhat Linux™, z/OS™, EMC ISILON ONEFS™, DATA ONTAP™, or other known operating systems. It should be appreciated that the embodiments described herein may also be integrated with a virtualized computing system.
  • Embodiments of the flash storage, flash manager and flash memory can be scaled for large business, small business, consumer and other environments, and for products from the size of server farms (and larger) down to consumer products.
  • Such products could include solid-state drives, combination hard drive/solid-state drives, memory cards for business computing environments, personal computers or storage devices, USB devices, touch tablets, and integrated circuits or integrated circuit sets for personal computing devices from desktop to handheld to pocket-sized or smaller.
  • The flash memory could be placed on a single integrated circuit with multiple types of flash memory.
  • The flash memory of multiple types could be implemented in a single device, board, box, drive, or appliance, e.g., a network-attachable storage device, or scaled up to an enterprise memory storage system.
  • In some embodiments, the flash memory is on one or more integrated circuits and the flash manager is on one or more integrated circuits, and these could be in separate packages or in a multichip package.
  • In further embodiments, the flash memory and the flash manager are combined on a single integrated circuit.
  • This single integrated circuit, or multiple integrated circuits, could be implemented in a mobile form factor such as a thumb drive, or in a product with a smaller, comparable form factor.
  • Embodiments can be directed to a single property of data, which could be a static property or a dynamic property. Embodiments can be directed to multiple properties of data, which could be static properties, dynamic properties, or both.
  • Embodiments can be directed to a single policy, which relates to one or more static properties of data, one or more dynamic properties of data, or both, or to multiple policies. Decisions can be based on multiple factors, e.g., a small number of writes and a large number of reads.
  • Embodiments can be directed to multiple types of flash memory having differing numbers of levels/bits per cell, or differing qualities relating to data, or both. For example, different types of flash memory could be optimized for read endurance (number of read cycles) and/or read retention (length of time to retain data), at various trade-offs of lower write endurance and/or erase cycle endurance, and policies could be developed accordingly. Different types of flash memory could have the same number of bits per cell but differing properties.
  • For example, two types of flash with the same number of bits per cell could be differentiated by having differing retention times, differing numbers of program and erase cycles, or other characteristics.
  • The number of program and erase cycles may be further defined based on the type of environment, e.g., an enterprise environment vs. a non-enterprise environment.
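  • A small descriptor, with placeholder numbers rather than vendor specifications, illustrates how two flash types with the same number of bits per cell could still be differentiated.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlashTypeDescriptor:
    bits_per_cell: int
    retention_days: int        # how long data is retained without rewriting
    program_erase_cycles: int  # rated program/erase endurance
    enterprise_rated: bool     # endurance ratings can depend on the environment

# Same bits per cell, differing retention and endurance (hypothetical values).
TYPE_A = FlashTypeDescriptor(bits_per_cell=2, retention_days=365,
                             program_erase_cycles=10_000, enterprise_rated=True)
TYPE_B = FlashTypeDescriptor(bits_per_cell=2, retention_days=90,
                             program_erase_cycles=3_000, enterprise_rated=False)
```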
  • Embodiments could include multiple types of memory but exclude flash, or could include flash and one or more other types of memory, or two or more types of flash and one or more other types of memory, and so on. It should be appreciated that the embodiments may be extended to volatile memory types as well as nonvolatile memory types and, in some embodiments, to combinations of volatile and nonvolatile memory types.
  • Detailed illustrative embodiments are disclosed herein. However, the specific functional details disclosed herein are merely representative for purposes of describing embodiments; embodiments may be embodied in many alternate forms and should not be construed as limited to only the embodiments set forth herein.
  • The embodiments might employ various computer-implemented operations involving data stored in computer systems. These operations are those requiring physical manipulation of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. Further, the manipulations performed are often referred to in terms such as producing, identifying, determining, or comparing. Any of the operations described herein that form part of the embodiments are useful machine operations.
  • The embodiments also relate to a device or an apparatus for performing these operations.
  • The apparatus can be specially constructed for the required purpose, or the apparatus can be a general-purpose computer selectively activated or configured by a computer program stored in the computer.
  • Various general-purpose machines can be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.
  • A module, an application, a layer, an agent, or other method-operable entity could be implemented as hardware, firmware, or a processor executing software, or combinations thereof. It should be appreciated that, where a software-based embodiment is disclosed herein, the software can be embodied in a physical machine such as a controller. For example, a controller could include a first module and a second module, and the controller could be configured to perform various actions, e.g., of a method, an application, a layer, or an agent.
  • The embodiments can also be embodied as computer readable code on a computer readable medium.
  • The computer readable medium is any data storage device that can store data which can thereafter be read by a computer system. Examples of the computer readable medium include hard drives, network attached storage (NAS), read-only memory, random-access memory, CD-ROMs, CD-Rs, CD-RWs, magnetic tapes, and other optical and non-optical data storage devices.
  • The computer readable medium can also be distributed over a network-coupled computer system so that the computer readable code is stored and executed in a distributed fashion.
  • Embodiments described herein may be practiced with various computer system configurations including hand-held devices, tablets, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers and the like.
  • The embodiments can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a wire-based or wireless network.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)

Abstract

A method for managing flash and non-volatile memory is provided. The method includes determining at least one property of data, and determining which type of a plurality of types of flash memory to write the data to, based on the at least one property of the data. The plurality of types of flash memory includes at least two types having differing numbers of bits per cell. The method includes writing the data to flash memory of the determined type. A flash memory manager and a flash storage device are also provided.
PCT/US2015/011661 2014-01-16 2015-01-15 Remplacement de données sur la base de propriétés de données et rétention de données dans un système dispositif de mémorisation hiérarchisée WO2015109128A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US14/157,489 2014-01-16
US14/157,489 US9811457B2 (en) 2014-01-16 2014-01-16 Data placement based on data retention in a tiered storage device system
US14/157,481 US8874835B1 (en) 2014-01-16 2014-01-16 Data placement based on data properties in a tiered storage device system
US14/157,481 2014-01-16

Publications (1)

Publication Number Publication Date
WO2015109128A1 true WO2015109128A1 (fr) 2015-07-23

Family

ID=53543454

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/011661 WO2015109128A1 (fr) 2014-01-16 2015-01-15 Remplacement de données sur la base de propriétés de données et rétention de données dans un système dispositif de mémorisation hiérarchisée

Country Status (1)

Country Link
WO (1) WO2015109128A1 (fr)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120239853A1 (en) * 2008-06-25 2012-09-20 Stec, Inc. Solid state device with allocated flash cache
US20120166749A1 (en) * 2009-09-08 2012-06-28 International Business Machines Corporation Data management in solid-state storage devices and tiered storage systems
US20130132638A1 (en) * 2011-11-21 2013-05-23 Western Digital Technologies, Inc. Disk drive data caching using a multi-tiered memory
US20130198449A1 (en) * 2012-01-27 2013-08-01 International Business Machines Corporation Multi-tier storage system configuration adviser
US20130265825A1 (en) * 2012-04-10 2013-10-10 Paul A. Lassa System and method for micro-tiering in non-volatile memory

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11204705B2 (en) 2019-03-05 2021-12-21 Western Digital Technologies, Inc. Retention-aware data tiering algorithm for hybrid storage arrays
WO2022111041A1 (fr) * 2020-11-27 2022-06-02 苏州浪潮智能科技有限公司 Procédé et dispositif d'écriture de données dans un disque ssd
US12079485B2 (en) 2020-11-27 2024-09-03 Inspur Suzhou Intelligent Technology Co., Ltd. Method and apparatus for closing open block in SSD

Similar Documents

Publication Publication Date Title
US9612953B1 (en) Data placement based on data properties in a tiered storage device system
US9811457B2 (en) Data placement based on data retention in a tiered storage device system
US10915442B2 (en) Managing block arrangement of super blocks
US10963327B2 (en) Detecting error count deviations for non-volatile memory blocks for advanced non-volatile memory block management
US10387243B2 (en) Managing data arrangement in a super block
US10949108B2 (en) Enhanced application performance in multi-tier storage environments
US9619381B2 (en) Collaborative health management in a storage system
US9063874B2 (en) Apparatus, system, and method for wear management
US20190171381A1 (en) Reducing unnecessary calibration of a memory unit for which the error count margin has been exceeded
KR102275094B1 (ko) 저장된 데이터를 플래시 메모리에 기초한 저장 매체에 기입하기 위한 방법 및 디바이스
US10552063B2 (en) Background mitigation reads in a non-volatile memory system
CN111742291A (zh) 具有用户空间闪存转换层的用户空间存储i/o栈的方法和系统
TW201619971A (zh) 耦合至主機dram之綠能與非固態硬碟(nand ssd)驅動器、gnsd應用程式及其操作方法和電腦系統主機、增加非揮發快閃記憶儲存器耐久性之方法
TW201403318A (zh) 具耐用轉換層並能轉移暫存讓記憶體耐磨損的硬碟驅動器
CN114127677B (zh) 用于写高速缓存架构中的数据放置的方法和系统
US10324648B1 (en) Wear-based access optimization
US20220043713A1 (en) Meta Data Protection against Unexpected Power Loss in a Memory System
WO2015109128A1 (fr) Remplacement de données sur la base de propriétés de données et rétention de données dans un système dispositif de mémorisation hiérarchisée
US10108470B2 (en) Parity storage management
US11886741B2 (en) Method and storage device for improving NAND flash memory performance for intensive read workloads
Zuolo et al. Memory driven design methodologies for optimal SSD performance
Russ et al. Simulation-based optimization of wear leveling for solid-state disk digital video recording

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15737052

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 15737052

Country of ref document: EP

Kind code of ref document: A1