CN104303162A - Systems and methods for managing cache admission - Google Patents

Systems and methods for managing cache admission

Info

Publication number
CN104303162A
CN104303162A (application CN201280071235.9A)
Authority
CN
China
Prior art keywords
data, access, logical identifier, entry, logical
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201280071235.9A
Other languages
Chinese (zh)
Other versions
CN104303162B (en)
Inventor
N. Talagala
S. Sundararaman
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SanDisk Technologies LLC
Original Assignee
INTELLIGENT IP Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by INTELLIGENT IP Inc filed Critical INTELLIGENT IP Inc
Publication of CN104303162A publication Critical patent/CN104303162A/en
Application granted granted Critical
Publication of CN104303162B publication Critical patent/CN104303162B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
        • G06 - COMPUTING; CALCULATING OR COUNTING
            • G06F - ELECTRIC DIGITAL DATA PROCESSING
                • G06F 12/00 - Accessing, addressing or allocating within memory systems or architectures
                    • G06F 12/02 - Addressing or allocation; Relocation
                        • G06F 12/08 - Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
                            • G06F 12/0802 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
                                • G06F 12/0866 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
                                • G06F 12/0888 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using selective caching, e.g. bypass
                        • G06F 12/0223 - User address space allocation, e.g. contiguous or non contiguous base addressing
                            • G06F 12/023 - Free address space management
                                • G06F 12/0238 - Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory
                                    • G06F 12/0246 - Memory management in non-volatile memory, e.g. resistive RAM or ferroelectric memory in block erasable memory, e.g. flash memory
                • G06F 2212/00 - Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
                    • G06F 2212/10 - Providing a specific technical effect
                        • G06F 2212/1016 - Performance improvement
                            • G06F 2212/1021 - Hit rate improvement
                    • G06F 2212/21 - Employing a record carrier using a specific recording technology
                        • G06F 2212/214 - Solid state disk
                    • G06F 2212/46 - Caching storage objects of specific type in disk cache
                        • G06F 2212/466 - Metadata, control data
                    • G06F 2212/72 - Details relating to flash memory management
                        • G06F 2212/7201 - Logical to physical mapping or translation of blocks or pages
                        • G06F 2212/7207 - Details relating to flash memory management management of metadata or control data

Abstract

A cache layer leverages a logical address space and storage metadata of a storage layer (e.g., virtual storage layer) to cache data of a backing store. The cache layer maintains access metadata to track data characteristics of logical identifiers in the logical address space, including accesses pertaining to data that is not in the cache. The access metadata may be separate and distinct from the storage metadata maintained by the storage layer. The cache layer determines whether to admit data into the cache using the access metadata. Data may be admitted into the cache when the data satisfies cache admission criteria, which may include an access threshold and/or a sequentiality metric. Time-ordered history of the access metadata is used to identify important/useful blocks in the logical address space of the backing store that would be beneficial to cache.

Description

Systems and methods for managing cache admission
Technical Field
The present invention relates to data storage and, more specifically, to managing cache admission using access metadata.
Background
A cache can be used to improve the input/output (I/O) performance of a computing system. The cache may comprise high-performance storage, such as volatile memory, non-volatile memory (e.g., flash memory), or the like. The cache is used most effectively when it selectively admits data that is accessed frequently. Admitting infrequently accessed data can "poison" the cache: such data occupies the cache's limited capacity and excludes data that is accessed more often, which can negate the performance benefit of the cache. There is a need, therefore, for systems and methods for managing cache admission that prevent cache poisoning by selectively admitting data into the cache based on one or more cache admission criteria.
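The admission decision can be made concrete with a short sketch. The Python fragment below is illustrative only and is not part of the patent disclosure; the thresholds, the window size, and the names (should_admit, sequentiality, and so on) are assumptions chosen for the example, which simply admits a logical identifier once it has been accessed a minimum number of times and its recent accesses do not look like a sequential scan.

```python
# Hypothetical cache admission sketch: admit a logical identifier (LID) only when
# its tracked access count reaches a threshold and the access does not appear to
# be part of a long sequential scan. Thresholds and names are assumptions.

ACCESS_THRESHOLD = 2        # accesses observed before data is considered worth caching
SEQUENTIAL_WINDOW = 8       # number of recent LIDs examined for the sequentiality metric

access_counts = {}          # logical identifier -> observed access count
recent_lids = []            # time-ordered history of recently accessed LIDs

def sequentiality(lid):
    """Fraction of recently accessed logical identifiers adjacent to `lid`."""
    if not recent_lids:
        return 0.0
    window = recent_lids[-SEQUENTIAL_WINDOW:]
    adjacent = sum(1 for prev in window if abs(lid - prev) == 1)
    return adjacent / len(window)

def should_admit(lid):
    """Record an access to `lid` and decide whether to admit its data into the cache."""
    access_counts[lid] = access_counts.get(lid, 0) + 1
    admit = (access_counts[lid] >= ACCESS_THRESHOLD
             and sequentiality(lid) < 0.5)
    recent_lids.append(lid)
    return admit
```

Note that the access count is maintained whether or not the data is currently in the cache, mirroring the idea that access metadata covers the whole logical address space of the backing store.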
Brief Description of the Drawings
So that the advantages of the invention may be more readily understood, the invention briefly described above is described in more detail by reference to the specific embodiments illustrated in the accompanying drawings. Understanding that these drawings depict only typical embodiments of the invention and are therefore not to be considered limiting of its scope, the invention is described and explained with additional specificity and detail through the use of the accompanying drawings, in which:
Fig. 1 is a block diagram of one embodiment of a system comprising a non-volatile storage device;
Fig. 2 is a block diagram of one embodiment of a non-volatile storage device;
Fig. 3 is a block diagram of one embodiment of a storage controller comprising a write data pipeline and a read data pipeline;
Fig. 4 is a block diagram of one embodiment of a system comprising a virtual storage layer;
Fig. 5 depicts one embodiment of a forward index;
Fig. 6 depicts one embodiment of a reverse index;
Fig. 7 depicts one embodiment of an append point within the physical storage space of a non-volatile storage device;
Fig. 8 depicts one example of a sequential, log-based format;
Fig. 9A depicts one example of an access data structure for cache access metadata;
Fig. 9B depicts one example of an ordered set of access data structures for cache access metadata;
Fig. 10A depicts an exemplary hash-based mapping between logical identifiers and access metadata;
Fig. 10B depicts an exemplary range-based mapping between logical identifiers and access metadata;
Fig. 10C depicts an exemplary hybrid mapping between logical identifiers and access metadata;
Fig. 11 is a flow diagram of one embodiment of a method for managing cache admission;
Fig. 12 is a flow diagram of another embodiment of a method for managing cache admission;
Fig. 13 depicts one example of an ordered sequence of data accesses for determining a sequentiality metric;
Fig. 14 is a flow diagram of one embodiment of a method for managing cache admission using a sequentiality metric;
Fig. 15A is a plot depicting one example of dynamic admission criteria;
Fig. 15B is a plot depicting another example of dynamic admission criteria;
Fig. 15C is a plot depicting another example of dynamic admission criteria that include low-value admission criteria;
Fig. 16 is a flow diagram of one embodiment of a method for managing cache admission using an access metric and a sequentiality metric.
Detailed Description
Fig. 1 depicts one embodiment of a system 100 comprising a non-volatile storage device 102. In the depicted embodiment, the system 100 includes a host computing system 114, a throughput management apparatus 122, and a storage device 102. The host computing system 114 may be a computer such as a server, laptop, desktop, mobile device, or other computing device known in the art. The host computing system 114 typically includes components such as memory, processors, buses, and other components known in the art.
The host computing system 114 stores data in the storage device 102 and communicates data with the storage device 102 via a communication connection. The storage device 102 may be internal or external to the host computing system 114. The communication connection may be a bus, a network, or another connection capable of transferring data between the host computing system 114 and the storage device 102. In one embodiment, the storage device 102 is connected to the host computing system 114 by a PCI connection such as PCI express ("PCI-e"). The storage device 102 may be a card that plugs into a PCI-e interface on the host computing system 114.
In the depicted embodiment, the storage device 102 performs data storage operations such as reads, writes, and erases. In certain embodiments, the power connection and the communication connection of the storage device 102 are part of the same physical connection between the host computing system 114 and the storage device 102. For example, the storage device 102 may receive power over PCI, PCI-e, serial advanced technology attachment ("serial ATA" or "SATA"), parallel ATA ("PATA"), small computer system interface ("SCSI"), IEEE 1394 ("FireWire"), Fibre Channel, universal serial bus ("USB"), PCIe-AS, or another connection with the host computing system 114.
The storage device 102 provides non-volatile storage for the host computing system 114. The storage device 102 shown in Fig. 1 is a non-volatile storage device comprising a storage controller 104, a write data pipeline 106, a read data pipeline 108, and non-volatile storage media 110. The storage device 102 may contain additional components that are not shown in order to provide a simpler view of the storage device 102.
The non-volatile storage media 110 stores data such that the data is retained even when the storage device 102 is not powered. In some embodiments, the non-volatile storage media 110 comprises solid-state storage media, such as flash memory, nano random access memory ("NRAM"), magneto-resistive RAM ("MRAM"), dynamic RAM ("DRAM"), phase change RAM ("PRAM"), racetrack memory, memristor memory, nanowire-based memory, silicon-oxide based sub-10 nanometer process memory, graphene memory, silicon-oxide-nitride-oxide-silicon ("SONOS"), resistive random-access memory ("RRAM"), programmable metallization cell ("PMC"), conductive-bridging RAM ("CBRAM"), or the like. While, in the depicted embodiment, the storage device 102 comprises non-volatile storage media 110, in other embodiments the storage device 102 may comprise magnetic media such as hard disks or tape, optical media, or other non-volatile data storage media. The storage device 102 also includes a storage controller 104 that coordinates the storage and retrieval of data in the non-volatile storage media 110. The storage controller 104 may use one or more indexes to locate and retrieve data and may perform other operations on data stored in the storage device 102. For example, the storage controller 104 may include a groomer for performing data grooming operations such as garbage collection.
As shown, in certain embodiments the storage device 102 implements a write data pipeline 106 and a read data pipeline 108, examples of which are described in greater detail below. The write data pipeline 106 may perform certain operations on data as the data is transferred from the host computing system 114 into the non-volatile storage media 110. These operations may include, for example, error correction code ("ECC") generation, encryption, compression, and others. The read data pipeline 108 may perform similar and potentially inverse operations on data being read out of the non-volatile storage media 110 and sent to the host computing system 114.
In one embodiment, the host computing system 114 includes one or more other components in addition to the storage device 102, such as additional storage devices, graphics processors, network cards, and the like. Those of skill in the art, in view of this disclosure, will appreciate the different types of components that may be included in a host computing system 114. The components may be internal or external to the host computing system 114. In one embodiment, some of the components may be PCI or PCI-e cards that connect to the host computing system 114 and receive power through the host computing system 114.
In certain embodiments, the driver 118, or alternatively the storage interface 116, is an application program interface ("API") that translates commands and other data into a format suitable for transmission to the storage controller 104. In another embodiment, the driver 118 includes one or more functions of the storage controller 104. For example, the driver 118 may include all or a portion of the modules described below and may include one or more indexes or maps for the storage device 106. The driver 118, one or more storage controllers 104, and one or more storage devices 106 comprising the storage system 102 present a storage interface 116 to the file system/file server, advantageously pushing down (i.e., offloading) into the storage system 102 functions that are conventionally arranged in the file system/file server.
A logical identifier, as used in this application, is an identifier of a data unit that differs from the physical address where data of the data unit is stored. A data unit, as used in this application, is any set of data that is logically grouped together. A data unit may be a file, an object, a data segment of a redundant array of inexpensive/independent disks/drives ("RAID") data stripe, or another data set used in data storage. A data unit may be executable code, data, metadata, a directory, an index, any other type of data that may be stored in a storage device, or a combination thereof. A data unit may be identified by a name, a logical address, a physical address, an address range, or another convention for identifying data units. A logical identifier includes data unit identifiers such as a file name, an object identifier, an inode, a universally unique identifier ("UUID"), a globally unique identifier ("GUID"), or another data unit label, and may also include a logical block address ("LBA"), a cylinder/head/sector ("CHS") address, or another lower-level logical identifier. A logical identifier generally includes any logical label that can be mapped to a physical location.
In certain embodiments, the storage device 106 stores data on the non-volatile storage media 110 in a sequential, log-based format. For example, when a data unit is modified, the data of the data unit is read from one physical storage location, modified, and then written to a different physical storage location. The order and sequence in which data is written to the data storage device 106 form an event log of the sequence of storage operations performed on the non-volatile storage device 102. By traversing the event log (and/or replaying the sequence of storage operations), storage metadata, such as a forward index, can be constructed or reconstructed.
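A minimal sketch of how a forward index might be reconstructed by traversing such an event log is shown below; the record format (logical identifier, physical address, validity flag) and the function name are assumptions made for illustration only.

```python
# Sketch (assumed record format) of rebuilding a forward index by replaying the
# append-only event log in storage order: a later entry for the same logical
# identifier supersedes an earlier one, so the final mapping reflects the newest data.

def rebuild_forward_index(event_log):
    """event_log: iterable of (logical_id, physical_address, is_valid) in append order."""
    forward_index = {}
    for logical_id, physical_address, is_valid in event_log:
        if is_valid:
            forward_index[logical_id] = physical_address   # newer write wins
        else:
            forward_index.pop(logical_id, None)             # invalidation/TRIM record
    return forward_index

# Example: LID 7 was rewritten, so the rebuilt index points at its latest location.
log = [(7, 0x1000, True), (9, 0x1200, True), (7, 0x1A00, True)]
assert rebuild_forward_index(log) == {7: 0x1A00, 9: 0x1200}
```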
In a typical random access device, logical identifiers have an almost one-to-one correspondence with the physical addresses of the random access device. This one-to-one mapping in a typical random access device (excluding the small number of physical addresses reserved for bad block remapping) also correlates with a near one-to-one relationship between the storage capacity associated with the logical identifiers and the physical capacity associated with the physical addresses. For example, if the logical identifiers are logical block addresses ("LBAs"), each logical block associated with an LBA has a fixed capacity, and the corresponding physical blocks on the random access device are typically the same size as the logical blocks. This enables a typical file server 114/file system to manage the physical capacity of the random access device by managing the logical identifiers (e.g., the LBAs). File systems generally depend on this continuity of the LBA-to-PBA mapping and exploit it to defragment the data stored on the data storage device. Similarly, some systems may use this continuity to place data on particular physical tracks to improve performance, as in the technique referred to as disk "short stroking". A highly predictable LBA-to-PBA mapping is, in certain applications, important for applications that indirectly manage physical storage capacity by directly managing the logical address space.
However, the storage system 102 may be a log-structured file system, so that there is no "fixed" relationship or algorithm for determining the mapping of an LBA to a PBA; or, in another embodiment, the storage system 102 may be random access but accessible by more than one client 110 or file server 114/file system, such that the logical identifiers allocated to each client 110 or file server 114/file system represent a storage capacity much larger than the near one-to-one logical-to-physical relationship of a typical system. The storage system 102 may also be thinly provisioned, so that one or more clients 110 are each allocated a logical address range representing a storage capacity much larger than the storage capacity of the storage devices 106 in the storage system 102. In the system 100, the storage system 102 manages and allocates logical identifiers such that there is no predefined one-to-one or near one-to-one relationship between logical identifiers and physical identifiers.
The system 100 is advantageous because it allows storage capacity to be managed more efficiently than in typical storage systems. For example, for a typical random access device accessible by multiple clients 110, if each client is allocated a fixed amount of storage space, that storage space typically exists and cannot be reallocated even when the actual space occupied is very small. The system 100 is also advantageous because it reduces the complexity of a standard thin provisioning system connected to storage devices 106. A standard thin provisioning system includes a thin provisioning layer with a logical-to-logical mapping between logical identifiers in the sparse logical address space and physical storage locations. Because multiple mapping layers are eliminated and thin provisioning (the logical-to-physical mapping) is performed at the lowest level, the system 100 is more efficient.
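A simplified sketch of this thin-provisioning behaviour follows. The capacities and names are illustrative assumptions; the point is only that physical blocks are bound to logical blocks on first write rather than at allocation time.

```python
# Thin provisioning over a sparse logical address space (illustrative numbers):
# each client is granted a logical range far larger than the physical capacity,
# and a physical block is bound only when a logical block is first written.

LOGICAL_SPACE_PER_CLIENT = 2**48      # sparse logical range granted to each client
PHYSICAL_BLOCKS = 4096                # actual capacity of the device (illustrative)

free_physical = list(range(PHYSICAL_BLOCKS))
forward_index = {}                    # (client, logical_block) -> physical block

def allocate_on_write(client, logical_block):
    """Bind a physical block to (client, logical_block) on first write."""
    if logical_block >= LOGICAL_SPACE_PER_CLIENT:
        raise ValueError("outside the client's logical range")
    key = (client, logical_block)
    if key not in forward_index:
        if not free_physical:
            raise RuntimeError("physical capacity exhausted")
        forward_index[key] = free_physical.pop()   # capacity consumed only here
    return forward_index[key]
```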
Fig. 2 is a block diagram of one embodiment 200 of a non-volatile storage device controller 202 that includes a write data pipeline 106 and a read data pipeline 108 in a non-volatile storage device 102 in accordance with the present invention. The non-volatile storage device controller 202 may include a number of storage controllers 0-N 104a-n, each controlling non-volatile storage media 110. In the depicted embodiment, two controllers are shown, non-volatile controller 0 104a and storage controller N 104n, and each controls respective non-volatile storage media 110a-n. In the depicted embodiment, storage controller 0 104a controls a data channel so that the attached non-volatile storage media 110a stores data. Storage controller N 104n controls an index metadata channel associated with the stored data, and the associated non-volatile storage media 110n stores index metadata. In an alternative embodiment, the non-volatile storage device controller 202 includes a single non-volatile controller 104a with a single non-volatile storage medium 110a. In another embodiment, there are a plurality of storage controllers 104a-n and associated non-volatile storage media 110a-n. In one embodiment, one or more non-volatile controllers 104a-104n-1, coupled to their associated non-volatile storage media 110a-110n-1, control data, while at least one storage controller 104n, coupled to its associated non-volatile storage media 110n, controls index metadata.
In one embodiment, at least one non-volatile controller 104 is a field-programmable gate array ("FPGA") and controller functions are programmed into the FPGA. In certain embodiments, the FPGA is a Xilinx® FPGA. In another embodiment, the storage controller 104 comprises components specifically designed as a storage controller 104, such as an application-specific integrated circuit ("ASIC") or custom logic. Each storage controller 104 typically includes a write data pipeline 106 and a read data pipeline 108, which are described further in relation to Fig. 3. In another embodiment, at least one storage controller 104 is made up of a combination of FPGA, ASIC, and custom logic components.
The non-volatile storage media 110 is an array of non-volatile storage elements 216, 218, 220 arranged in banks 214 and accessed in parallel through a bi-directional storage input/output ("I/O") bus 210. In one embodiment, the storage I/O bus 210 is capable of unidirectional communication at any one time. For example, when data is being written to the non-volatile storage media 110, data cannot be read from the non-volatile storage media 110. In another embodiment, data can flow in both directions simultaneously. However, bi-directional, as used herein with respect to a data bus, refers to a data path that can have data flowing in only one direction at a time, but when data flowing in one direction on the bi-directional data bus is stopped, data can flow in the opposite direction on the bi-directional data bus.
A non-volatile storage element (e.g., SSS 0.0 216a) is typically configured as a chip (a package of one or more dies) or a die on a circuit board. As depicted, a non-volatile storage element (e.g., 216a) operates independently or semi-independently of the other non-volatile storage elements (e.g., 218a), even if these elements are packaged together in a chip package, a stack of chip packages, or some other packaging arrangement. As depicted, a row of non-volatile storage elements 216a, 216b, 216m is designated as a bank 214. As depicted, there may be "n" banks 214a-n and "m" non-volatile storage elements 216a-m, 218a-m, 220a-m per bank, in an n x m array of non-volatile storage elements 216, 218, 220 in the non-volatile storage media 110. Of course, different embodiments may include different values of n and m. In one embodiment, each bank 214 of the non-volatile storage media 110a includes twenty non-volatile storage elements 216a-216m and there are eight banks 214. In another embodiment, each bank 214 of the non-volatile storage media 110a includes twenty-four non-volatile storage elements 216a-216m and there are eight banks 214. In addition to the n x m array of non-volatile storage elements 216a-216m, 218a-218m, 220a-220m, one or more additional columns (P) may also be addressed and operated in parallel with the other non-volatile storage elements 216a, 216b, 216m of one or more rows. In one embodiment, the added P columns store parity data for the portions of an ECC block (i.e., an ECC codeword) that span the m storage elements of a particular bank. In one embodiment, each non-volatile storage element 216, 218, 220 is comprised of single-level cell ("SLC") devices. In another embodiment, each non-volatile storage element 216, 218, 220 is comprised of multi-level cell ("MLC") devices.
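As a rough illustration of the extra parity column P, the sketch below assumes simple XOR parity across the m elements of a bank; the patent text only states that parity data for the ECC block portions is stored, so the XOR scheme and the function names are assumptions for the example.

```python
# Sketch of a parity column for a bank (assumed XOR parity): for each byte
# position, the parity element stores the XOR of the corresponding bytes of the
# m data elements, so a single failed element can be reconstructed.

def parity_for_bank(element_pages):
    """element_pages: list of equal-length byte strings, one per storage element."""
    parity = bytearray(len(element_pages[0]))
    for page in element_pages:
        for i, b in enumerate(page):
            parity[i] ^= b
    return bytes(parity)

def rebuild_element(surviving_pages, parity):
    """Recover one missing element's page from the survivors plus the parity page."""
    return parity_for_bank(surviving_pages + [parity])
```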
In one embodiment, non-volatile storage elements that share a common line 211 on the storage I/O bus 210a (e.g., 216b, 218b, 220b) are packaged together. In one embodiment, a non-volatile storage element 216, 218, 220 may have one or more dies per package, with one or more packages stacked vertically, and each die may be accessed independently. In another embodiment, a non-volatile storage element (e.g., SSS 0.0 216a) may have one or more virtual dies per die and one or more dies per package, with one or more packages stacked vertically, and each virtual die may be accessed independently. In yet another embodiment, a non-volatile storage element SSS 0.0 216a may have one or more virtual dies per die and one or more dies per package, with some or all of the one or more dies stacked vertically, and each virtual die may be accessed independently.
In one embodiment, two dies are stacked vertically with four stacks per group to form eight storage elements (e.g., SSS 0.0-SSS 8.0) 216a, 218a ... 220a, each in a separate bank 214a, 214b ... 214n. In another embodiment, twenty-four storage elements (e.g., SSS 0.0-SSS 0.24) 216a, 216b ... 216m form a logical bank 214a, so that each of the eight logical banks has twenty-four storage elements (e.g., SSS 0.0-SSS 8.24) 216, 218 ... 220. Data is sent to the non-volatile storage media 110 over the storage I/O bus 210 to all storage elements of a particular group of storage elements (SSS 0.0-SSS 8.0) 216a, 218a ... 220a. The storage control bus 212a is used to select a particular bank (e.g., bank 0 214a) so that the data received over the storage I/O bus 210, which is connected to all banks 214, is written only to the selected bank 214a.
In one embodiment, the storage I/O bus 210 is comprised of one or more independent I/O buses ("IIOBa-m", comprising 210a.a-m ... 210n.a-m), in which the non-volatile storage elements within each column share one of the independent I/O buses that connects in parallel to each non-volatile storage element 216, 218, 220. For example, one independent I/O bus 210a.a of the storage I/O bus 210a may be physically connected to the first non-volatile storage element 216a, 218a, 220a of each bank 214a-n. A second independent I/O bus 210a.b of the storage I/O bus 210b may be physically connected to the second non-volatile storage element 216b, 218b, 220b of each bank 214a-n. Each non-volatile storage element 216a, 216b, 216m in a bank 214a (a row of non-volatile storage elements as illustrated in Fig. 2) may be accessed simultaneously and/or in parallel. In one embodiment, where the non-volatile storage elements 216, 218, 220 comprise stacked packages of dies, all packages in a given stack are physically connected to the same independent I/O bus. As used herein, "simultaneously" also includes near-simultaneous access, in which devices are accessed at slightly different intervals to avoid switching noise. "Simultaneously" in this context is distinguished from sequential or serial access, in which commands and/or data are sent individually one after the other.
Typically, banks 214a-n are independently selected using the storage control bus 212. In one embodiment, a bank 214 is selected using a chip enable or chip select. Where both chip select and chip enable are available, the storage control bus 212 may select one package within a stack of packages. In other embodiments, the storage control bus 212 uses other commands to select one package within a stack of packages. Non-volatile storage elements 216, 218, 220 may also be selected through a combination of control signals and address information transmitted on the storage I/O bus 210 and the storage control bus 212.
In one embodiment, each non-volatile storage element 216, 218, 220 is partitioned into erase blocks, and each erase block is partitioned into pages. An erase block on a non-volatile storage element 216, 218, 220 may be called a physical erase block or "PEB". A typical page is 2048 bytes ("2 kB"). In one example, a non-volatile storage element (e.g., SSS 0.0) includes two registers and can program two pages, so that a two-register non-volatile storage element 216, 218, 220 has a capacity of 4 kB. A bank 214 of twenty non-volatile storage elements 216a, 216b, 216m would then have an 80 kB page accessed with the same address sent out over the independent I/O buses of the storage I/O bus 210.
This group of 80 kB pages in a bank 214 of non-volatile storage elements 216a, 216b ... 216m may be called a logical page or virtual page. Similarly, the erase blocks of each storage element 216a, 216b ... 216m of a bank 214a may be grouped to form a logical erase block (which may also be called a virtual erase block). In one embodiment, an erase block of pages within a non-volatile storage element is erased when an erase command is received within the non-volatile storage element. Whereas the size and number of erase blocks, pages, planes, or other logical and physical divisions within a non-volatile storage element 216, 218, 220 are expected to change over time with advances in technology, many embodiments consistent with new configurations are expected to be possible and to remain consistent with the general description herein.
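To make the capacity arithmetic in the two preceding paragraphs concrete, the short sketch below uses the 2 kB page, two-register, twenty-element example; the constants mirror only that example and other configurations will differ.

```python
# Worked arithmetic for the example above: a 2 kB physical page, a two-register
# element programming two pages at once, and a bank of twenty elements yield an
# 80 kB logical page.

PAGE_SIZE = 2048            # bytes per physical page ("2 kB")
REGISTERS_PER_ELEMENT = 2   # a two-register element programs two pages at once
ELEMENTS_PER_BANK = 20

element_page = PAGE_SIZE * REGISTERS_PER_ELEMENT   # 4096 bytes (4 kB) per element
logical_page = element_page * ELEMENTS_PER_BANK    # 81920 bytes (80 kB) per bank

assert element_page == 4 * 1024 and logical_page == 80 * 1024
```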
Typically, when a packet is written to a particular location within a non-volatile storage element 216, where the packet is intended for a specific location within a specific page of a specific physical erase block of a specific storage element of a specific bank, the storage I/O bus 210 sends a physical address followed by the packet. The physical address contains enough information for the non-volatile storage element 216 to direct the packet to the designated location within the page. Since all storage elements in a column of storage elements (e.g., SSS 0.0-SSS N.0 216a, 218a ... 220a) are connected to the same independent I/O bus (e.g., 210a.a) of the storage I/O bus 210a, in order to reach the proper page and to avoid writing the packet to similarly addressed pages in the column of storage elements (SSS 0.0-SSS N.0 216a, 218a ... 220a), the bank 214a that includes the non-volatile storage element SSS 0.0 216a with the correct page where the packet is to be written is selected by the storage control bus 212a, and the other banks 214b ... 214n of the non-volatile storage media 110a are deselected.
Similarly, satisfying a read command on the storage I/O bus 210 requires a signal on the storage control bus 212 to select a single bank 214a and the appropriate page within that bank 214a. In one embodiment, a read command reads an entire page, and because there are multiple non-volatile storage elements 216a, 216b ... 216m in parallel in a bank 214a, an entire logical page is read with a read command. However, the read command may be broken into subcommands, as will be explained below with respect to bank interleave. An entire logical page may likewise be written to the non-volatile storage elements 216a, 216b ... 216m of a bank 214a in a write operation.
An erase block erase command may be sent over the storage I/O bus 210 to erase an erase block, with a particular erase block address to erase a particular erase block. Typically, the storage controller 104a may send an erase block erase command over the parallel paths (independent I/O buses 210a-n.a-m) of the storage I/O bus 210 to erase a logical erase block, each path carrying a particular erase block address to erase a particular erase block. Simultaneously, a particular bank (e.g., bank 0 214a) is selected over the storage control bus 212 to prevent erasure of similarly addressed erase blocks in the non-selected banks (e.g., banks 1-N 214b-n). Alternatively, no particular bank (e.g., bank 0 214a) is selected over the storage control bus 212 (or all banks are selected) to enable erasure of similarly addressed erase blocks in all of the banks (banks 1-N 214b-n) in parallel. Other commands may also be sent to a particular location using a combination of the storage I/O bus 210 and the storage control bus 212. One of skill in the art will recognize other ways to select a particular storage location using the bi-directional storage I/O bus 210 and the storage control bus 212.
In one embodiment, packets are written sequentially to the non-volatile storage media 110. For example, the storage controller 104a streams packets into storage write buffers of a bank 214a of storage elements 216 and, when the buffers are full, the packets are programmed to a designated logical page. The storage controller 104a then refills the storage write buffers with packets and, when full, the packets are written to the next logical page. The next logical page may be in the same bank 214a or in another bank (e.g., 214b). This process continues, logical page after logical page, typically until a logical erase block is filled. In another embodiment, the streaming may continue across logical erase block boundaries, with the process continuing from logical erase block to logical erase block.
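The fill-and-flush behaviour described above can be sketched as follows; the buffer handling is simplified and the class name and page size are assumptions for the example.

```python
# Sketch of sequential packet writes: packets accumulate in a write buffer and are
# flushed to the next logical page only when a full logical page is available.

LOGICAL_PAGE_SIZE = 80 * 1024   # illustrative, from the 80 kB logical page example

class SequentialWriter:
    def __init__(self):
        self.buffer = bytearray()
        self.flushed_pages = []          # stands in for programming the next logical page

    def write_packet(self, packet: bytes):
        self.buffer.extend(packet)
        while len(self.buffer) >= LOGICAL_PAGE_SIZE:
            self.flushed_pages.append(bytes(self.buffer[:LOGICAL_PAGE_SIZE]))
            del self.buffer[:LOGICAL_PAGE_SIZE]
```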
In a read-modify-write operation, the packets associated with the requested data are located and read in a read operation. Data segments of the requested data that have been modified are not written back to the location from which they were read. Instead, the modified data segments are again converted into packets and written sequentially to the next available location in the logical page currently being written. The index entries for the respective packets are modified to point to the packets that contain the modified data segments. The entry or entries in the index for packets associated with the same requested data that have not been modified will include pointers to the original locations of the unmodified packets. Thus, if the original requested data is maintained, for example to preserve a previous version of the requested data, the original requested data will have pointers in the index to all packets as originally written. The new requested data will have pointers in the index to some of the original packets and pointers to the modified packets in the logical page that is currently being written.
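The following sketch illustrates the read-modify-write behaviour with assumed, simplified structures: only the index entries of modified segments are redirected to newly appended packets, while unmodified segments keep pointers to their originally written packets.

```python
# Out-of-place update sketch: the packet log is append-only, and a modification
# redirects the index entry rather than rewriting data in place.

log = []        # append-only packet log; position in the list stands in for physical order
index = {}      # logical identifier -> position of the current packet in the log

def append(lid, packet):
    log.append(packet)
    index[lid] = len(log) - 1

def read_modify_write(lids, modify):
    for lid in lids:
        packet = log[index[lid]]          # read the current packet
        new_packet = modify(lid, packet)
        if new_packet != packet:          # modified segments are written out of place
            append(lid, new_packet)       # and the index is redirected to them
```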
In a copy operation, the index includes an entry for the original requested data mapped to a number of packets stored on the non-volatile storage media 110. When a copy is made, a new copy of the requested data is created and a new entry is created in the index mapping the new copy of the requested data to the original packets. The new copy of the requested data is also written to the non-volatile storage media 110, with its location mapped to the new entry in the index. The new copy of the requested data can also be used to identify the packets within the original requested data that are referenced, in case changes have been made to the original requested data that have not been propagated to the copy of the requested data and the index is lost or corrupted.
Beneficially, writing packets sequentially facilitates a more even use of the non-volatile storage media 110 and allows the solid-state storage device controller 202 to monitor storage hot spots and the level of use of the various logical pages in the non-volatile storage media 110. Writing packets sequentially also facilitates a powerful, efficient garbage collection system, described in detail below. One of skill in the art will recognize other benefits of sequential storage of packets.
In various embodiments, the non-volatile storage device controller 202 also includes a data bus 204, a local bus 206, a buffer controller 208, buffers 0-N 222a-n, a master controller 224, a direct memory access ("DMA") controller 226, a memory controller 228, a dynamic memory array 230, a static random memory array 232, a management controller 234, a management bus 236, a bridge 238 to a system bus 240, and miscellaneous logic 242, which are described below. In other embodiments, the system bus 240 is coupled to one or more network interface cards ("NICs") 244, some of which may include remote DMA ("RDMA") controllers 246, one or more central processing units ("CPUs") 248, one or more external memory controllers 250 and associated external memory arrays 252, one or more storage controllers 254, peer controllers 256, and application-specific processors 258, which are described below. The components 244-258 connected to the system bus 240 may be located in the host computing system 114 or may be other devices.
Typically, the storage controllers 104 communicate data to the non-volatile storage media 110 over the storage I/O bus 210. In a typical embodiment where the non-volatile storage is arranged in banks 214 and each bank 214 includes multiple storage elements 216a, 216b, 216m accessed in parallel, the storage I/O bus 210 is an array of buses, one for each column of storage elements 216, 218, 220 spanning the banks 214. As used herein, the term "storage I/O bus" may refer to one storage I/O bus 210 or to an array of independent data buses in which the individual data buses of the array independently communicate different data. In one embodiment, each storage I/O bus 210 accessing a column of storage elements (e.g., 216a, 218a, 220a) may include a logical-to-physical mapping for the storage divisions (e.g., erase blocks) accessed in that column of storage elements 216a, 218a, 220a. This mapping (or bad block remapping) allows a logical address mapped to the physical address of a storage division to be remapped to a different storage division if the first storage division fails, partially fails, is inaccessible, or has some other problem.
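The bad-block remapping mentioned above might look roughly like the sketch below; the table sizes, the spare pool, and the function names are assumptions made purely for illustration.

```python
# Sketch of per-column bad-block remapping: when a physical erase block fails, its
# logical erase-block address is pointed at a spare block, and subsequent accesses
# follow the updated mapping.

remap = {i: i for i in range(1024)}      # logical erase block -> physical erase block
spares = [1024, 1025, 1026, 1027]        # reserved physical erase blocks (illustrative)

def retire(logical_eb):
    """Remap a failing erase block onto a spare."""
    if not spares:
        raise RuntimeError("no spare erase blocks left")
    remap[logical_eb] = spares.pop()

def physical_erase_block(logical_eb):
    return remap[logical_eb]
```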
Data may also be communicated to the storage controllers 104 from a requesting device 155 through the system bus 240, the bridge 238, the local bus 206, the buffers 222, and finally over the data bus 204. The data bus 204 is typically connected to one or more buffers 222a-n controlled by a buffer controller 208. The buffer controller 208 typically controls the transfer of data from the local bus 206 to the buffers 222 and through the data bus 204 to the pipeline input buffer 306 and output buffer 330. The buffer controller 208 typically controls how data arriving from a requesting device can be temporarily stored in a buffer 222 and then transferred onto the data bus 204, or vice versa, in order to account for different clock domains, to prevent data collisions, and the like. The buffer controller 208 typically works in conjunction with the master controller 224 to coordinate data flow. As data arrives, it arrives on the system bus 240 and is transferred to the local bus 206 through the bridge 238.
Typically, the data is transferred from the local bus 206 to one or more data buffers 222 as directed by the master controller 224 and the buffer controller 208. The data then flows out of the buffer(s) 222 onto the data bus 204, through a non-volatile controller 104, and on to the non-volatile storage media 110, such as NAND flash or other storage media. In one embodiment, data and associated out-of-band metadata ("metadata") arriving with the data are communicated using one or more data channels comprising one or more storage controllers 104a-104n-1 and associated non-volatile storage media 110a-110n-1, while at least one channel (storage controller 104n, non-volatile storage media 110n) is dedicated to in-band metadata, such as index information and other metadata generated internally to the non-volatile storage device 102.
The local bus 206 is typically a bidirectional bus or set of buses that allows data and commands to be communicated between devices internal to the non-volatile storage device controller 202 and between devices internal to the non-volatile storage device 102 and the devices 244-258 connected to the system bus 240. The bridge 238 facilitates communication between the local bus 206 and the system bus 240. One of skill in the art will recognize other embodiments, such as ring structures or switched star configurations, and other functions of the buses 240, 206, 204, 210 and the bridge 238.
The system bus 240 is typically a bus of the host computing system 114 or other device in which the non-volatile storage device 102 is installed or to which it is connected. In one embodiment, the system bus 240 may be a PCI-e bus, a serial advanced technology attachment ("serial ATA") bus, parallel ATA, or the like. In another embodiment, the system bus 240 is an external bus such as small computer system interface ("SCSI"), FireWire, Fibre Channel, USB, PCIe-AS, or the like. The non-volatile storage device 102 may be packaged to fit internally to a device or as an externally connected device.
The non-volatile storage device controller 202 includes a master controller 224 that controls higher-level functions within the non-volatile storage device 102. In various embodiments, the master controller 224 controls data flow by interpreting object requests and other requests, directs the creation of indexes that map object identifiers associated with data to the physical locations of the associated data, coordinates DMA requests, and the like. Many of the functions described herein are controlled, wholly or in part, by the master controller 224.
In one embodiment, the master controller 224 uses one or more embedded controllers. In other embodiments, the master controller 224 uses local memory, such as a dynamic memory array 230 (dynamic random access memory, "DRAM"), a static memory array 232 (static random access memory, "SRAM"), or the like. In one embodiment, the local memory is controlled using the master controller 224. In another embodiment, the master controller 224 accesses the local memory via a memory controller 228. In another embodiment, the master controller 224 runs a Linux server and may support various common server interfaces, such as the World Wide Web, hyper-text markup language ("HTML"), and the like. In another embodiment, the master controller 224 uses a nano-processor. The master controller 224 may be constructed using programmable or standard logic, or any combination of the controller types listed above. One of skill in the art will recognize many embodiments for the master controller 224.
In one embodiment, where the storage device/non-volatile storage device controller 202 manages multiple data storage devices/non-volatile storage media 110a-n, the master controller 224 divides the workload among the internal controllers, such as the storage controllers 104a-n. For example, the master controller 224 may divide an object to be written to the data storage devices (e.g., non-volatile storage media 110a-n) so that a portion of the object is stored on each of the attached data storage devices. This feature is a performance enhancement that allows quicker storage and access to an object. In one embodiment, the master controller 224 is implemented using an FPGA. In another embodiment, the firmware within the master controller 224 may be updated through the management bus 236, through the system bus 240 over a network connected to a NIC 244, or through another device connected to the system bus 240.
In one embodiment, the master controller 224, which manages objects, emulates block storage such that the host computing system 114 or other device connected to the storage device/non-volatile storage device 102 views the storage device/non-volatile storage device 102 as a block storage device and sends data to specific physical addresses in the storage device/non-volatile storage device 102. The master controller 224 then divides up the blocks and stores the data blocks as it would objects. The master controller 224 then maps the blocks, and the physical addresses sent with the blocks, to the actual locations determined by the master controller 224. The mapping is stored in the object index. Typically, for block emulation, a block device application program interface ("API") is provided in a driver in a computer, such as the host computing system 114 or another device wishing to use the storage device/non-volatile storage device 102 as a block storage device.
In another embodiment, the master controller 224 coordinates with NIC controllers 244 and embedded RDMA controllers 246 to deliver just-in-time RDMA transfers of data and command sets. The NIC controller 244 may be hidden behind a non-transparent port to enable the use of custom drivers. Also, a driver on the host computing system 114 may have access to the computer network 116 through an I/O memory driver that uses a standard stack API and operates in conjunction with the NICs 244.
In one embodiment, the master controller 224 is also a redundant array of independent drives ("RAID") controller. Where the data storage device/non-volatile storage device 102 is networked with one or more other data storage devices/non-volatile storage devices 102, the master controller 224 may be a RAID controller for single-tier RAID, multi-tier RAID, progressive RAID, and the like. The master controller 224 also allows some objects to be stored in a RAID array and other objects to be stored without RAID. In another embodiment, the master controller 224 may be a distributed RAID controller element. In another embodiment, the master controller 224 may comprise many RAID, distributed RAID, and other functions as described elsewhere herein. In one embodiment, the master controller 224 controls the storage of data in a RAID-like structure in which parity information is stored in one or more storage elements 216, 218, 220 of a logical page, the parity information protecting data stored in the other storage elements 216, 218, 220 of the same logical page.
In one embodiment, the master controller 224 coordinates with single or redundant network managers (e.g., switches) to establish routing, to balance bandwidth utilization, for failover, and the like. In another embodiment, the master controller 224 coordinates with integrated application-specific logic (via the local bus 206) and associated driver software. In another embodiment, the master controller 224 coordinates with attached application-specific processors 258 or logic (via the external system bus 240) and associated driver software. In another embodiment, the master controller 224 coordinates with remote application-specific logic (via the computer network 116) and associated driver software. In another embodiment, the master controller 224 coordinates with the local bus 206 or an external bus attached to a hard disk drive ("HDD") storage controller.
In one embodiment, the master controller 224 communicates with one or more storage controllers 254, where the storage device/non-volatile storage device 102 may appear as a storage device connected through a SCSI bus, Internet SCSI ("iSCSI"), Fibre Channel, or the like. Meanwhile, the storage device/non-volatile storage device 102 may autonomously manage objects and may appear as an object-based file system or a distributed object file system. The master controller 224 may also be accessed by peer controllers 256 and/or application-specific processors 258.
In another embodiment, the master controller 224 coordinates with an autonomous integrated management controller to periodically validate FPGA code and/or controller software, to validate FPGA code while running (reset) and/or validate controller software during power on (reset), to support external reset requests, to support reset requests due to watchdog timeouts, and to support voltage, current, power, temperature, and other environmental measurements and the setting of threshold interrupts. In another embodiment, the master controller 224 manages garbage collection to free erase blocks for reuse. In another embodiment, the master controller 224 manages wear leveling. In another embodiment, the master controller 224 allows the data storage device/non-volatile storage device 102 to be partitioned into multiple logical devices and allows partition-based media encryption. In yet another embodiment, the master controller 224 supports storage controllers 104 with advanced, multi-bit ECC correction. One of skill in the art will recognize other features and functions of a master controller 224 in a storage controller 202, or more specifically in a non-volatile storage device 102.
In one embodiment, the non-volatile storage device controller 202 includes a memory controller 228 that controls the dynamic random memory array 230 and/or the static random memory array 232. As stated above, the memory controller 228 may be independent of or integrated with the master controller 224. The memory controller 228 typically controls volatile memory of some type, such as DRAM (dynamic random memory array 230) and SRAM (static random memory array 232). In other examples, the memory controller 228 also controls other memory types, such as electrically erasable programmable read only memory ("EEPROM"). In other embodiments, the memory controller 228 controls two or more memory types, and the memory controller 228 may include more than one controller. Typically, the memory controller 228 controls as much SRAM 232 as is feasible and supplements the SRAM 232 with DRAM 230.
In one embodiment, the object index is stored in memory 230, 232 and is then periodically off-loaded to a channel of the non-volatile storage media 110n or to other non-volatile memory. One of skill in the art will recognize other uses and configurations of the memory controller 228, the dynamic memory array 230, and the static memory array 232.
In one embodiment, the non-volatile storage device controller 202 includes a DMA controller 226 that controls DMA operations between the storage device/non-volatile storage device 102 and one or more external memory controllers 250, their associated external memory arrays 252, and CPUs 248. Note that the external memory controllers 250 and external memory arrays 252 are called external because they are located outside the storage device/non-volatile storage device 102. In addition, the DMA controller 226 may also control RDMA operations with requesting devices through a NIC 244 and an associated RDMA controller 246.
In one embodiment, the non-volatile storage device controller 202 includes a management controller 234 connected to a management bus 236. Typically, the management controller 234 manages environmental metrics and the status of the storage device/non-volatile storage device 102. The management controller 234 may monitor device temperature, fan speed, power supply settings, and the like over the management bus 236. The management controller 234 may support the reading and programming of erasable programmable read only memory ("EEPROM") for the storage of FPGA code and controller software. Typically, the management bus 236 is connected to the various components within the storage device/non-volatile storage device 102. The management controller 234 may communicate alerts, interrupts, and the like over the local bus 206, or may include a separate connection to the system bus 240 or another bus. In one embodiment, the management bus 236 is an Inter-Integrated Circuit ("I2C") bus. One of skill in the art will recognize other related functions and uses of a management controller 234 connected to the components of the storage device/non-volatile storage device 102 by a management bus 236.
In one embodiment, the non-volatile storage device controller 202 includes miscellaneous logic 242 that may be customized for a specific application. Typically, where the non-volatile device controller 202 or the master controller 224 is configured using an FPGA or another configurable controller, custom logic may be included based on a particular application, customer requirement, storage requirement, and the like.
Fig. 3 is a schematic block diagram illustrating one embodiment 300 of a storage controller 104 with a write data pipeline 106, a read data pipeline 108, and a throughput management apparatus 122 in a non-volatile storage device 102 in accordance with the present invention. The embodiment 300 includes a data bus 204, a local bus 206, and a buffer controller 208, which are substantially similar to those described in relation to the non-volatile storage device controller 202 of Fig. 2. The write data pipeline 106 includes a packetizer 302 and an error-correcting code ("ECC") generator 304. In other embodiments, the write data pipeline 106 includes an input buffer 306, a write synchronization buffer 308, a write program module 310, a compression module 312, an encryption module 314, a garbage collector bypass 316 (with a portion within the read data pipeline 108), a media encryption module 318, and a write buffer 320. The read data pipeline 108 includes a read synchronization buffer 328, an ECC correction module 322, a depacketizer 324, an alignment module 326, and an output buffer 330. In other embodiments, the read data pipeline 108 may include a media decryption module 332, a portion of the garbage collector bypass 316, a decryption module 334, a decompression module 336, and a read program module 338. The storage controller 104 may also include control and status registers 340 and control queues 342, a bank interleave controller 344, a synchronization buffer 346, a storage bus controller 348, and a multiplexer ("MUX") 350. The components of the non-volatile controller 104 and the associated write data pipeline 106 and read data pipeline 108 are described below. In other embodiments, synchronous non-volatile storage media 110 may be used and the synchronization buffers 308, 328 may be eliminated.
The write data pipeline 106 includes a packetizer 302 that receives a data or metadata segment to be written to the non-volatile storage, either directly or indirectly through another write data pipeline 106 stage, and creates one or more packets sized for the non-volatile storage media 110. The data or metadata segment is typically part of a data structure, such as an object, but may also include an entire data structure. In another embodiment, the data segment is part of a data block, but may also include an entire data block. Typically, a set of data, such as a data structure, is received from a computer, such as the host computing system 114 or another computer or device, and is transmitted to the non-volatile storage device 102 in data segments streamed to the non-volatile storage device 102. A data segment may also be known by other names, such as a packet, but as used herein a data segment includes all or a portion of a data structure or data block.
Each data structure is stored as one or more packets. Each data structure may have one or more container packets. Each packet contains a header. The header may include a header type field. Type fields may include data, attribute, metadata, data segment delimiters (for multi-packet data), data structures, data links, and the like. The header may also include information regarding the size of the packet, such as the number of bytes of data included in the packet. The length of the packet may be established by the packet type. The header may include information that establishes the relationship of the packet to a data structure. One example is the use of an offset in a data packet header to identify the location of the data segment within the data structure. One of skill in the art will recognize other information that may be included in a header added to data by a packetizer 302 and other information that may be added to a data packet.
Each packet includes a header and possibly data from the data or metadata segment. The header of each packet includes pertinent information that relates the packet to the data structure to which the packet belongs. For example, the header may include an object identifier or other data structure identifier and an offset that indicates the data segment, object, data structure, or data block from which the data packet was formed. The header may also include a logical address used by the storage bus controller 348 to store the packet. The header may also include information regarding the size of the packet, such as the number of bytes included in the packet. The header may also include a sequence number that identifies where the data segment belongs with respect to other packets within the data structure when reconstructing the data segment or data structure. The header may include a header type field. Type fields may include data, data structure attributes, metadata, data segment delimiters (when multiple packets are used), data structure types, data structure linkages, and the like. One of skill in the art will recognize other information that may be included in a header added to data or metadata by the packetizer 302, as well as other information that may be added to a packet.
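By way of illustration only, the following Python sketch models a packet of the kind described above; the field names, field widths, and struct layout are editorial assumptions and are not part of the disclosed packet format.

    # Hypothetical packet layout; fields and widths are assumptions, not the patented format.
    import struct
    from dataclasses import dataclass

    HEADER_FMT = ">QIIHQ"   # logical id, offset, payload length, type, sequence number
    HEADER_SIZE = struct.calcsize(HEADER_FMT)

    @dataclass
    class Packet:
        logical_id: int     # data structure / block the payload belongs to
        offset: int         # position of this segment within the data structure
        pkt_type: int       # data, metadata, delimiter, linkage, etc.
        sequence: int       # order of this packet relative to its siblings
        payload: bytes

        def pack(self) -> bytes:
            header = struct.pack(HEADER_FMT, self.logical_id, self.offset,
                                 len(self.payload), self.pkt_type, self.sequence)
            return header + self.payload

        @classmethod
        def unpack(cls, raw: bytes) -> "Packet":
            lid, off, length, ptype, seq = struct.unpack(HEADER_FMT, raw[:HEADER_SIZE])
            return cls(lid, off, ptype, seq, raw[HEADER_SIZE:HEADER_SIZE + length])

In this sketch the header travels with its payload, so a depacketizer can recover the logical identifier, offset, and sequence number needed to relate the packet back to its data structure.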
The write data pipeline 106 includes an ECC generator 304 that generates one or more error-correcting codes ("ECC") for the one or more packets received from the packetizer 302. The ECC generator 304 typically uses an error-correcting algorithm to generate ECC check bits that are stored together with the one or more data packets. The ECC codes generated by the ECC generator 304, together with the one or more data packets associated with those codes, comprise an ECC block. The ECC data stored with the one or more packets is used to detect and correct errors introduced into the data through transmission and storage. In one embodiment, packets are streamed into the ECC generator 304 as un-encoded blocks of length N. A syndrome of length S is calculated and appended, and the result is output as an encoded block of length N+S. The values of N and S depend on the characteristics of the ECC algorithm, which is selected to achieve specific performance, efficiency, and robustness metrics. In one embodiment, there is no fixed relationship between the ECC blocks and the packets: a packet may comprise more than one ECC block, an ECC block may comprise more than one packet, a first packet may end anywhere within an ECC block, and a second packet may begin after the end of the first packet within the same ECC block. In one embodiment, the ECC algorithm is not dynamically modified. In one embodiment, the ECC data stored with the data packets is robust enough to correct errors in more than two bits.
Beneficially, using a robust ECC algorithm that permits correction of more than a single bit, or even more than two bits, allows the useful life of the non-volatile storage media 110 to be extended. For example, if flash memory is used as the storage medium in the non-volatile storage media 110, the flash memory may be written to approximately 100,000 times without error per erase cycle. This usage limit may be extended by using a robust ECC algorithm. With the ECC generator 304 and corresponding ECC correction module 322 onboard, the non-volatile storage device 102 can internally correct errors and has a longer useful life than if a less robust ECC algorithm, such as single-bit correction, were used. However, in other embodiments the ECC generator 304 may use a less robust algorithm and may correct single-bit or double-bit errors. In another embodiment, the non-volatile storage device 110 may comprise less reliable storage, such as multi-level cell ("MLC") flash, in order to increase capacity, which storage may not be sufficiently reliable without the more robust ECC algorithms.
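As a purely illustrative aid, the sketch below frames a stream of packet bytes into encoded blocks of N data bytes plus S check bytes, mirroring the N to N+S relationship described above; the XOR-based "check bytes" are a stand-in for a real multi-bit-correcting code (e.g., BCH or Reed-Solomon), and the values of N and S are assumptions rather than parameters of the ECC generator 304.

    # Illustrative framing only; the XOR parity is NOT a real error-correcting code.
    N, S = 240, 16

    def ecc_check_bytes(block: bytes) -> bytes:
        parity = 0
        for b in block:
            parity ^= b
        return bytes([parity]) * S              # placeholder for real ECC syndromes

    def encode_stream(packet_stream: bytes):
        blocks = []
        for i in range(0, len(packet_stream), N):
            chunk = packet_stream[i:i + N].ljust(N, b"\0")
            blocks.append(chunk + ecc_check_bytes(chunk))   # N + S bytes out
        return blocks

Because the framing ignores packet boundaries, a packet may span several ECC blocks and one ECC block may contain parts of several packets, consistent with the description above.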
In one embodiment, the write data pipeline 106 includes an input buffer 306 that receives data segments to be written to the non-volatile storage media 110 and stores the incoming data segments until the next stage of the write data pipeline 106, such as the packetizer 302 (or another stage of a more complex write data pipeline 106), is ready to process the next data segment. The input buffer 306 typically accommodates differences between the rate at which data segments are received and the rate at which they are processed by the write data pipeline 106 by using an appropriately sized data buffer. The input buffer 306 also allows the data bus 204 to transfer data to the write data pipeline 106 at rates greater than the write data pipeline 106 can sustain, in order to improve the efficiency of operation of the data bus 204. Typically, when the write data pipeline 106 does not include an input buffer 306, a buffering function is performed elsewhere, such as in the non-volatile storage device 102 but outside the write data pipeline 106, in the host computing system 114 (for example, within a network interface card ("NIC")), or in another device, for example when using remote direct memory access ("RDMA").
In another embodiment, the write data pipeline 106 also includes a write synchronization buffer 308 that buffers packets received from the ECC generator 304 prior to writing the packets to the non-volatile storage media 110. The write synchronization buffer 308 is located at the boundary between the local clock domain and the non-volatile storage clock domain and provides buffering to account for the clock domain differences. In other embodiments, synchronous non-volatile storage media 110 may be used and the synchronization buffers 308, 328 may be eliminated.
In one embodiment, the write data pipeline 106 also includes a media encryption module 318 that receives the one or more packets from the packetizer 302, directly or indirectly, and encrypts the one or more packets using an encryption key unique to the non-volatile storage device 102 prior to sending the packets to the ECC generator 304. Typically, the entire packet, including the header, is encrypted. In another embodiment, the header is not encrypted. In this document, an encryption key is understood to mean a secret encryption key that is managed externally from the storage controller 104.
The media encryption module 318 and the corresponding media decryption module 332 provide a level of security for data stored in the non-volatile storage media 110. For example, where data is encrypted with the media encryption module 318, if the non-volatile storage media 110 is connected to a different storage controller 104, non-volatile storage device 102, or server, the contents of the non-volatile storage media 110 typically cannot be read without significant effort unless the same encryption key that was used when the data was written to the non-volatile storage media 110 is used.
In a typical embodiment, the non-volatile storage device 102 does not store the encryption key in non-volatile storage and allows no external access to the encryption key. The encryption key is provided to the storage controller 104 during initialization. The non-volatile storage device 102 may use and store a non-secret cryptographic nonce that is used in conjunction with the encryption key. A different nonce may be stored with every packet. Data segments may be split between multiple packets with unique nonces for the purpose of improving the protection afforded by the encryption algorithm.
The encryption key may be received from a host computing system 114, a server, a key manager, or another device that manages the encryption key to be used by the storage controller 104. In another embodiment, the non-volatile storage media 110 may have two or more partitions and the storage controller 104 behaves as though it were two or more storage controllers 104, each operating on a single partition within the non-volatile storage media 110. In this embodiment, a unique media encryption key may be used with each partition.
In another embodiment, the write data pipeline 106 also includes an encryption module 314 that encrypts a data or metadata segment received from the input buffer 306, directly or indirectly, prior to sending the data segment to the packetizer 302, the data segment being encrypted using an encryption key received in conjunction with the data segment. The encryption keys used by the encryption module 314 to encrypt data may not be common to all data stored within the non-volatile storage device 102, but may vary on a per-data-structure basis and may be received in conjunction with receiving data segments, as described below. For example, an encryption key for a data segment to be encrypted by the encryption module 314 may be received with the data segment, or may be received as part of a command to write the data structure to which the data segment belongs. The solid-state storage device 102 may use and store a non-secret cryptographic nonce in each data structure packet that is used in conjunction with the encryption key. A different nonce may be stored with every packet. Data segments may be split between multiple packets with unique nonces for the purpose of improving the protection afforded by the encryption algorithm.
The encryption key may be received from a host computing system 114, another computer, a key manager, or another device that holds the encryption key to be used to encrypt the data segment. In one embodiment, encryption keys are transferred to the storage controller 104 from one of a non-volatile storage device 102, a host computing system 114, a computer, or another external agent that has the ability to execute industry-standard methods to securely transfer and protect private and public keys.
In one embodiment, the encryption module 314 encrypts a first packet with a first encryption key received in conjunction with that packet and encrypts a second packet with a second encryption key received in conjunction with the second packet. In another embodiment, the encryption module 314 encrypts a first packet with a first encryption key received in conjunction with that packet and passes a second data packet on to the next stage without encryption. Beneficially, the encryption module 314 included in the write data pipeline 106 of the non-volatile storage device 102 allows data-structure-by-data-structure or segment-by-segment data encryption without requiring a single file system or other external system to keep track of the different encryption keys used to store the corresponding data structures or data segments. Each requesting device 155 or related key manager independently manages the encryption keys used to encrypt only the data structures or data segments sent by that requesting device 155.
In one embodiment, the encryption module 314 may encrypt the one or more packets using an encryption key unique to the non-volatile storage device 102. The encryption module 314 may perform this media encryption independently, or in addition to the encryption described above. Typically, the entire packet, including the header, is encrypted. In another embodiment, the header is not encrypted. The media encryption performed by the encryption module 314 provides a level of security for data stored in the non-volatile storage media 110. For example, where data is encrypted with media encryption unique to the specific non-volatile storage device 102, if the non-volatile storage media 110 is connected to a different storage controller 104, non-volatile storage device 102, or host computing system 114, the contents of the non-volatile storage media 110 typically cannot be read without significant effort unless the same encryption key that was used during the writing of the data to the non-volatile storage media 110 is used.
In another embodiment, the write data pipeline 106 includes a compression module 312 that compresses the data or metadata segment prior to sending the data segment to the packetizer 302. The compression module 312 typically compresses a data or metadata segment using a compression routine known to those of skill in the art to reduce the storage capacity occupied by the data or metadata segment. For example, if a data segment includes a string of 512 zeros, the compression module 312 may replace the 512 zeros with a code or token representing the 512 zeros, where the code is much more compact than the space taken by the 512 zeros.
In one embodiment, the compression module 312 compresses a first segment with a first compression routine and passes a second segment along without compression. In another embodiment, the compression module 312 compresses a first segment with a first compression routine and compresses a second segment with a second compression routine. Having this flexibility within the non-volatile storage device 102 is beneficial so that the host computing system 114 or other devices writing data to the non-volatile storage device 102 may each specify a compression routine, or so that one may specify a compression routine while another specifies no compression. Selection of compression routines may also be made according to default settings on a per-data-structure-type or per-data-structure-class basis. For example, a first data structure of a specific data structure type may be able to override default compression routine settings, a second data structure of the same data structure class and data structure type may use the default compression routine, and a third data structure of the same data structure class and data structure type may use no compression.
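A minimal sketch of this per-segment choice of compression routine follows; the routine registry and the use of zlib are assumptions made only for illustration.

    # Illustrative per-segment compression selection; routine names are assumptions.
    import zlib

    ROUTINES = {
        "none": lambda seg: seg,
        "zlib": lambda seg: zlib.compress(seg),
    }

    def compress_segment(segment: bytes, routine: str = "zlib") -> bytes:
        # e.g. a 512-byte run of zeros collapses to a handful of bytes instead of 512
        return ROUTINES[routine](segment)

    assert len(compress_segment(b"\0" * 512)) < 32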
In one embodiment, the write data pipeline 106 includes a garbage collector bypass 316 that receives data segments from the read data pipeline 108 as part of a data bypass in a garbage collection system. A garbage collection system (also referred to as a "groomer" or grooming operation) typically marks packets that are no longer valid, usually because the packet was marked for deletion or has been modified and the modified data is stored in a different location. At some point, the garbage collection system determines that a particular section of storage (e.g., an erase block) may be recovered. This determination may be due to a lack of available storage capacity, the percentage of data marked as invalid reaching a threshold, a consolidation of valid data, an error detection rate for that section of storage reaching a threshold, improving performance based on data distribution, or the like. Numerous factors may be considered by a garbage collection algorithm to determine when a section of storage is to be recovered.
Once a section of storage has been marked for recovery, valid packets in the section typically must be relocated. The garbage collector bypass 316 allows packets to be read into the read data pipeline 108 and then transferred directly to the write data pipeline 106 without being routed out of the storage controller 104. In one embodiment, the garbage collector bypass 316 is part of an autonomous garbage collector system that operates within the non-volatile storage device 102. This allows the non-volatile storage device 102 to manage data so that data is systematically spread throughout the non-volatile storage media 110 to improve performance and data reliability, to avoid overuse and underuse of any one location or area of the non-volatile storage media 110, and to lengthen the useful life of the non-volatile storage media 110.
The garbage collector bypass 316 coordinates the insertion of data segments into the write data pipeline 106 with other data segments being written by the host computing system 114 or other devices. In the depicted embodiment, the garbage collector bypass 316 is located before the packetizer 302 in the write data pipeline 106 and after the depacketizer 324 in the read data pipeline 108, but may also be located elsewhere in the read and write data pipelines 106, 108. The garbage collector bypass 316 may be used during a flush of the write data pipeline 106 to fill the remainder of a logical page in order to improve the efficiency of storage within the non-volatile storage media 110 and thereby reduce the frequency of garbage collection.
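The reclamation decision described above may be sketched as follows; the statistics tracked per erase block and the threshold values are illustrative assumptions and are not the claimed policy.

    # Hedged sketch of a reclamation decision; fields and thresholds are assumptions.
    from dataclasses import dataclass

    @dataclass
    class EraseBlockStats:
        invalid_ratio: float    # fraction of packets marked invalid
        error_rate: float       # observed bit-error rate on reads
        free_capacity: float    # device-wide free space fraction

    def should_reclaim(s: EraseBlockStats,
                       invalid_thresh=0.75, error_thresh=1e-4, low_space=0.10):
        return (s.invalid_ratio >= invalid_thresh
                or s.error_rate >= error_thresh
                or s.free_capacity <= low_space)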
Grooming may comprise refreshing data stored on the non-volatile storage media 110. Data stored on the non-volatile storage media 110 may degrade over time. The storage controller 104 may comprise a groomer that identifies "stale" data on the non-volatile storage device 102 (data that has not been modified and/or moved within a predetermined time) and refreshes the stale data by rewriting the data to a different storage location.
In some embodiments, the garbage collection system, groomer, and/or garbage collector bypass 316 may be temporarily disabled to allow data to be stored contiguously on physical storage locations of the non-volatile storage device 102. Disabling the garbage collection system and/or bypass 316 may ensure that data in the write data pipeline 106 is not interleaved with other data.
In some embodiments, garbage collection and/or grooming may be restricted to a certain portion of the physical storage space of the non-volatile storage device. For example, storage metadata, such as the reverse index described below, may be periodically persisted to a non-volatile storage location. Garbage collection and/or grooming may be restricted to operate within the portion of the non-volatile storage media corresponding to the persisted storage metadata.
In one embodiment, the write data pipeline 106 includes a write buffer 320 that buffers data for efficient write operations. Typically, the write buffer 320 includes enough capacity for packets to fill at least one logical page in the non-volatile storage media 110. This allows a write operation to send an entire logical page of data to the non-volatile storage media 110 without interruption. By sizing the write buffer 320 of the write data pipeline 106 and the buffers within the read data pipeline 108 to be the same capacity as, or larger than, the storage write buffer within the non-volatile storage media 110, writing and reading data is more efficient, since a single write command may be crafted to send a full logical page of data to the non-volatile storage media 110 instead of multiple commands.
While the write buffer 320 is being filled, the non-volatile storage media 110 may be used for other read operations. This is advantageous because other non-volatile devices with a smaller write buffer, or no write buffer, may tie up the non-volatile storage when data is written to a storage write buffer and data flowing into that storage write buffer stalls. Read operations may be blocked until the entire storage write buffer is filled and programmed. Another approach for systems without a write buffer, or with a small write buffer, is to flush the storage write buffer that is not full in order to enable reads. Again, this is inefficient because multiple write/program cycles are required to fill a page.
For the depicted embodiment with a write buffer 320 sized larger than a logical page, a single write command comprising numerous subcommands can be followed by a single program command to transfer the page of data from the storage write buffer in each non-volatile storage element 216, 218, 220 to the designated page within each non-volatile storage element 216, 218, 220. This technique has the benefit of eliminating partial page programming, which is known to reduce data reliability and durability, and of freeing up the destination storage for reads and other commands while the buffer fills.
In one embodiment, the write buffer 320 is a ping-pong buffer where one side of the buffer is filled and then designated for transfer at an appropriate time while the other side of the ping-pong buffer is being filled. In another embodiment, the write buffer 320 includes a first-in first-out ("FIFO") register with a capacity of more than a logical page of data segments. One of skill in the art will recognize other write buffer 320 configurations that allow a logical page of data to be stored prior to writing the data to the non-volatile storage media 110.
In another embodiment, the write buffer 320 is sized smaller than a logical page, so that less than a page of information could be written to a storage write buffer in the non-volatile storage media 110. In this embodiment, to prevent a stall in the write data pipeline 106 from holding up read operations, data is queued using the garbage collection system, which needs to move data from one location to another as part of the garbage collection process. In case of a data stall in the write data pipeline 106, the data can be fed through the garbage collector bypass 316 to the write buffer 320 and then on to the storage write buffer in the non-volatile storage media 110 to fill the pages of a logical page prior to programming the data. In this way, a data stall in the write data pipeline 106 does not stall reading from the non-volatile storage device 102.
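The ping-pong behaviour of the write buffer 320 described above may be illustrated by the following sketch, in which one side fills while a full logical page is programmed from the other side; the logical page size, the program callback, and the assumption that the idle side is empty at the moment of switching are all simplifications for illustration.

    # Illustrative ping-pong write buffer; sizes and callback are assumptions.
    LOGICAL_PAGE = 16 * 1024

    class PingPongWriteBuffer:
        def __init__(self, program_page):
            self.sides = [bytearray(), bytearray()]
            self.active = 0
            self.program_page = program_page     # callback issuing the program command

        def write(self, packet: bytes):
            side = self.sides[self.active]
            side.extend(packet)
            if len(side) >= LOGICAL_PAGE:
                self.program_page(bytes(side[:LOGICAL_PAGE]))   # one whole logical page
                remainder = side[LOGICAL_PAGE:]
                self.sides[self.active] = bytearray()
                self.active ^= 1                                # switch sides
                self.sides[self.active].extend(remainder)       # carry over overflow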
In another embodiment, the write data pipeline 106 includes a write program module 310 with one or more user-definable functions within the write data pipeline 106. The write program module 310 allows a user to customize the write data pipeline 106. A user may customize the write data pipeline 106 based on a particular data requirement or application. Where the storage controller 104 is an FPGA, the user may program the write data pipeline 106 with custom commands and functions relatively easily. A user may also use the write program module 310 to include custom functions with an ASIC; however, customizing an ASIC may be more difficult than customizing an FPGA. The write program module 310 may include buffers and bypass mechanisms to allow a first data segment to execute in the write program module 310 while a second data segment continues through the write data pipeline 106. In another embodiment, the write program module 310 may include a processor core that can be programmed through software.
Note that the write program module 310 is shown between the input buffer 306 and the compression module 312; however, the write program module 310 could be located anywhere in the write data pipeline 106 and may be distributed among the various stages 302-320. In addition, there may be multiple write program modules 310 distributed among the various stages 302-320 that are programmed and operate independently. Further, the order of the stages 302-320 may be altered. One of skill in the art will recognize workable alterations to the order of the stages 302-320 based on particular user requirements.
The read data pipeline 108 includes an ECC correction module 322 that determines whether a data error exists in the ECC blocks of a requested packet received from the non-volatile storage media 110 by using the ECC stored with each ECC block of the requested packet. The ECC correction module 322 then corrects any errors in the requested packet if any errors exist and the errors are correctable using the ECC. For example, if the ECC can detect an error in six bits but can only correct three bit errors, the ECC correction module 322 corrects ECC blocks of the requested packet with up to three bits in error. The ECC correction module 322 corrects the bits in error by changing the bits in error to the correct one or zero state, so that the requested data packet is identical to when it was written to the non-volatile storage media 110 and the ECC was generated for the packet.
If the ECC correction module 322 determines that the requested packet contains more bits in error than the ECC can correct, the ECC correction module 322 cannot correct the errors in the corrupted ECC blocks of the requested packet and sends an interrupt. In one embodiment, the ECC correction module 322 sends an interrupt with a message indicating that the requested packet is in error. The message may include information that the ECC correction module 322 cannot correct the errors, or the inability of the ECC correction module 322 to correct the errors may be implied. In another embodiment, the ECC correction module 322 sends the corrupted ECC blocks of the requested packet along with the interrupt and/or the message.
In one embodiment, a corrupted ECC block, or portion of a corrupted ECC block, of the requested packet that cannot be corrected by the ECC correction module 322 is read by the master controller 224, corrected, and returned to the ECC correction module 322 for further processing by the read data pipeline 108. In one embodiment, a corrupted ECC block, or portion of a corrupted ECC block, of the requested packet is sent to the device requesting the data. The requesting device 155 may correct the ECC block or replace the data using another copy, such as a backup or mirror copy, and then may use the replacement data of the requested data packet or return it to the read data pipeline 108. The requesting device 155 may use header information in the requested packet in error to identify the data required to replace the corrupted requested packet or to replace the data structure to which the packet belongs. In another embodiment, the storage controller 104 stores data using some type of RAID and is able to recover the corrupted data. In another embodiment, the ECC correction module 322 sends an interrupt and/or message and the receiving device fails the read operation associated with the requested data packet. One of skill in the art will recognize other options and actions to be taken as a result of the ECC correction module 322 determining that one or more ECC blocks of the requested packet are corrupted and that the ECC correction module 322 cannot correct the errors.
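The handling of correctable versus uncorrectable ECC blocks described above can be summarized by the following sketch; the decode helper and the exception-based signalling are assumptions that stand in for the hardware decoder, interrupt, and message.

    # Illustrative handling of an ECC block read; decode() is an assumed helper
    # returning (data, n_bit_errors_detected, correctable).
    class UncorrectableError(Exception):
        pass

    def read_ecc_block(raw_block, decode):
        data, n_errors, correctable = decode(raw_block)
        if n_errors == 0:
            return data
        if correctable:                      # e.g. up to three erroneous bits
            return data                      # decoder has already flipped the bad bits
        # more errors than the code can correct: signal the requester, which may
        # retry from a RAID/mirror copy or fail the read
        raise UncorrectableError("ECC block of the requested packet is corrupted")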
The read data pipeline 108 includes a depacketizer 324 that receives ECC blocks of the requested packet from the ECC correction module 322, directly or indirectly, and checks and removes one or more packet headers. The depacketizer 324 may validate the packet headers by checking packet identifiers, data length, data location, and the like within the headers. In one embodiment, the header includes a hash code that can be used to validate that the packet delivered to the read data pipeline 108 is the requested packet. The depacketizer 324 also removes the headers that were added to the requested packet by the packetizer 302. The depacketizer 324 may be directed not to operate on certain packets but to pass them forward without modification. One example is a container label that is requested during the course of a rebuild process, where the header information is required for index reconstruction. Further examples include the transfer of packets of various types destined for use within the non-volatile storage device 102. In another embodiment, the operation of the depacketizer 324 may depend on the packet type.
The read data pipeline 108 includes an alignment module 326 that receives data from the depacketizer 324 and removes unwanted data. In one embodiment, a read command sent to the non-volatile storage media 110 retrieves a packet of data. A device requesting the data may not require all data within the retrieved packet, and the alignment module 326 removes the unwanted data. If all data within a retrieved page is requested data, the alignment module 326 does not remove any data.
The alignment module 326 reformats the data into data segments of a data structure in a form compatible with the device requesting the data segment prior to forwarding the data segment to the next stage. Typically, as data is processed by the read data pipeline 108, the size of data segments or packets changes at various stages. The alignment module 326 uses the received data to format the data into data segments suitable to be sent to the requesting device 155 and joined to form a response. For example, data from a portion of a first data packet may be combined with data from a portion of a second data packet. If a data segment is larger than the data requested by the requesting device 155, the alignment module 326 may discard the unwanted data.
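For illustration, the trimming and recombination performed by the alignment module 326 may be sketched as follows, under the assumption that each retrieved packet is tagged with its byte offset within the data structure.

    # Illustrative trimming of retrieved packets to the requested byte range.
    def align(packets, req_offset, req_length):
        """packets: iterable of (packet_offset, payload) covering the request."""
        out = bytearray()
        req_end = req_offset + req_length
        for pkt_offset, payload in sorted(packets):
            start = max(req_offset - pkt_offset, 0)
            end = min(req_end - pkt_offset, len(payload))
            if start < end:
                out += payload[start:end]     # drop bytes the requester did not ask for
        return bytes(out)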
In one embodiment, the read data pipeline 108 includes a read synchronization buffer 328 that buffers one or more requested packets read from the non-volatile storage media 110 prior to processing by the read data pipeline 108. The read synchronization buffer 328 is located at the boundary between the non-volatile storage clock domain and the local bus clock domain and provides buffering to account for the clock domain differences.
In another embodiment, the read data pipeline 108 includes an output buffer 330 that receives requested packets from the alignment module 326 and stores the packets prior to transmission to the requesting device 155. The output buffer 330 accounts for differences between when data segments are received from the stages of the read data pipeline 108 and when the data segments are transmitted to other parts of the storage controller 104 or to the requesting device 155. The output buffer 330 also allows the data bus 204 to receive data from the read data pipeline 108 at rates greater than the read data pipeline 108 can sustain, in order to improve the efficiency of operation of the data bus 204.
In one embodiment, the read data pipeline 108 includes a media decryption module 332 that receives one or more encrypted requested packets from the ECC correction module 322 and decrypts the one or more requested packets using the encryption key unique to the non-volatile storage device 102 prior to sending the one or more requested packets to the depacketizer 324. Typically, the encryption key used by the media decryption module 332 to decrypt the data is identical to the encryption key used by the media encryption module 318. In another embodiment, the non-volatile storage media 110 may have two or more partitions and the storage controller 104 behaves as though it were two or more storage controllers 104, each operating on a single partition within the non-volatile storage media 110. In this embodiment, a unique media encryption key may be used with each partition.
In another embodiment, the read data pipeline 108 includes a decryption module 334 that decrypts a data segment formatted by the depacketizer 324 prior to sending the data segment to the output buffer 330. The data segment may be decrypted using an encryption key received in conjunction with the read request that initiated retrieval of the requested packet received by the read synchronization buffer 328. The decryption module 334 may decrypt a first packet with an encryption key received in conjunction with the read request for the first packet and then may decrypt a second packet with a different encryption key, or may pass the second packet on to the next stage of the read data pipeline 108 without decryption. When the packet was stored with a non-secret cryptographic nonce, the nonce is used in conjunction with the encryption key to decrypt the data packet. The encryption key may be received from a host computing system 114, a client, a key manager, or another device that manages the encryption key to be used by the storage controller 104.
In another embodiment, the read data pipeline 108 includes a decompression module 336 that decompresses a data segment formatted by the depacketizer 324. In one embodiment, the decompression module 336 uses compression information stored in one or both of the packet header and the container label to select a decompression routine complementary to the compression routine used by the compression module 312 to compress the data. In another embodiment, the decompression routine used by the decompression module 336 is dictated by the device requesting the data segment being decompressed. In another embodiment, the decompression module 336 selects a decompression routine according to default settings on a per-data-structure-type or per-data-structure-class basis. A first packet of a first object may be able to override a default decompression routine, a second packet of a second data structure of the same data structure class and data structure type may use the default decompression routine, and a third data structure of the same data structure class and data structure type may use no decompression.
In another embodiment, the read data pipeline 108 includes a read program module 338 that includes one or more user-definable functions within the read data pipeline 108. The read program module 338 has characteristics similar to those of the write program module 310 and allows a user to provide custom functions to the read data pipeline 108. The read program module 338 may be located as shown in Fig. 3, may be located in another position within the read data pipeline 108, or may include multiple parts in multiple locations within the read data pipeline 108. Additionally, there may be multiple read program modules 338 in multiple locations within the read data pipeline 108 that operate independently. One of skill in the art will recognize other forms of a read program module 338 within a read data pipeline 108. As with the write data pipeline 106, the stages of the read data pipeline 108 may be rearranged, and one of skill in the art will recognize other orders of stages within the read data pipeline 108.
The storage controller 104 includes control and status registers 340 and corresponding control queues 342. The control and status registers 340 and control queues 342 facilitate control and sequencing of commands and subcommands associated with data processed in the write data pipeline 106 and read data pipeline 108. For example, a data segment in the packetizer 302 may have one or more corresponding control commands or instructions in a control queue 342 associated with the ECC generator 304. As the data segment is packetized, some of the instructions or commands may be executed within the packetizer 302. Other commands or instructions may be passed to the next control queue 342 through the control and status registers 340 as the newly formed data packet created from the data segment is passed to the next stage.
Commands or instructions may be simultaneously loaded into the control queues 342 for a packet being sent to the write data pipeline 106, with each pipeline stage pulling the appropriate command or instruction as the respective packet is executed by that stage. Similarly, commands or instructions may be simultaneously loaded into the control queues 342 for a packet being requested from the read data pipeline 108, with each pipeline stage pulling the appropriate command or instruction as the respective packet is executed by that stage. One of skill in the art will recognize other features and functions of the control and status registers 340 and the control queues 342.
The storage controller 104 and/or the non-volatile storage device 102 may also include a bank interleave controller 344, a synchronization buffer 346, a storage bus controller 348, and a multiplexer ("MUX") 350.
In some embodiments, a virtual storage layer provides an interface through which storage clients perform persistent storage operations. The virtual storage layer may simplify data storage operations for storage clients and expose enhanced storage features, such as atomicity, snapshot support, recovery, and the like. Fig. 4 depicts one embodiment of a system 400 comprising a virtual storage layer (VSL) 430 that presents a logical address space of a non-volatile storage device 402 to storage client applications 412 operating on a computing device 401. The computing device 401 may comprise a processor, non-volatile storage, memory, human-machine interface (HMI) components, communication interfaces (for communication via a network 420), and the like.
The non-volatile storage device 402 may comprise a single non-volatile storage device, a plurality of non-volatile storage devices, a cluster of storage devices, or other suitable configurations. The virtual storage layer 430 may comprise a driver, a user-space application, or the like. In some embodiments, the virtual storage layer 430 is implemented in conjunction with the driver 118 described above. The virtual storage layer 430 and/or the storage clients 412 may be embodied as instructions stored on a non-volatile storage device.
The VSL 430 may maintain a logical address space, which is presented to the storage clients 412 via one or more interfaces and/or APIs provided by the VSL 430 (VSL interface 436). The storage clients 412 may include, but are not limited to: operating systems, virtual operating systems (e.g., guest operating systems, hypervisors, etc.), file systems, database applications, server applications, general-purpose applications, and the like. In some embodiments, one or more storage clients 452 operating on a remote computing device 450 access the VSL 430 via a network 420.
The VSL 430 is configured to perform persistent storage operations on the non-volatile storage device 402, which may comprise a non-volatile storage device as described above. The VSL 430 communicates with the non-volatile storage device 402 via a communication bus 421, which may include, but is not limited to: a PCI-e bus, a network connection (e.g., Infiniband), a storage network, a Fibre Channel Protocol (FCP) network, HyperSCSI, or the like. The storage operations may be configured according to the capabilities and/or configuration of the non-volatile storage device 402. For example, if the non-volatile storage device 402 comprises a write-once, block-erasable device, the VSL 430 may be configured to perform the storage operations accordingly (e.g., storing data on initialized or erased storage locations, etc.).
In some embodiments, the VSL 430 accesses storage metadata 434 to maintain associations between logical identifiers (e.g., blocks) in the logical address space 432 and physical storage locations on the non-volatile storage device 402. As used herein, a physical storage location may refer to any storage location of the non-volatile storage device 402, which may include, but is not limited to: storage divisions, erase blocks, storage units, pages, logical pages, logical erase blocks, and so on.
The VSL 430 maintains "any-to-any" assignments between logical identifiers in the logical address space 432 and physical storage locations on the non-volatile storage device 402. The VSL 430 may cause data to be written or updated "out-of-place" on the non-volatile storage device 402. In some embodiments, data is stored sequentially and in a log-based format. Storing data "out-of-place" provides wear-leveling benefits and addresses the "erase-and-program-once" properties of many non-volatile storage devices. In addition, out-of-place writing (and writing data in logical storage locations as opposed to individual pages) addresses the asymmetric properties of the non-volatile storage device 402. Asymmetric properties refer to the idea that different storage operations (read, write, erase) take very different amounts of time. For example, it may take ten times as long to program data on the non-volatile storage media 410 as it takes to read data from the non-volatile storage media 410. Moreover, in some cases, data may only be programmed to physical storage locations that have first been initialized (e.g., erased). An erase operation may take ten times as long as a program operation (and, by extension, one hundred times as long as a read operation). Associations between logical identifiers in the logical address space 432 and physical storage locations on the non-volatile storage device 402 are maintained in the storage metadata 434.
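A minimal sketch of out-of-place, log-ordered writes with an any-to-any logical-to-physical mapping follows; the in-memory dictionary stands in for the forward index maintained in the storage metadata 434 and is an illustrative assumption only.

    # Illustrative log-structured store with an any-to-any logical-to-physical map.
    class LogStructuredStore:
        def __init__(self):
            self.log = []            # append point is always len(self.log)
            self.l2p = {}            # logical identifier -> physical log offset

        def write(self, lid, data):
            phys = len(self.log)     # never overwrite in place
            self.log.append((lid, data))
            self.l2p[lid] = phys     # any older location becomes invalid

        def read(self, lid):
            return self.log[self.l2p[lid]][1]

Because every write appends at the current end of the log, an overwritten logical identifier simply points to a newer physical location, while the older location becomes invalid and eligible for grooming.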
In some embodiments, the VSL 430 causes data to be persisted on the non-volatile storage 402 in a sequential, log-based format. Sequential, log-based storage may comprise persisting the order of the storage operations performed on the non-volatile storage device 402. In some embodiments, data is stored together with persistent metadata that is persisted on the non-volatile storage device 402 along with the data itself. For example, the sequence of storage operations performed may be maintained using sequence identifiers stored at the current storage location of the non-volatile storage device 402 (e.g., the append point discussed below) and/or on the non-volatile storage device 402.
Persisting data in a sequential, log-based format may comprise persisting metadata on the non-volatile storage device 402 that describes the data. The persistent metadata may be stored with the data itself (e.g., in the same program and/or storage operation and/or in the smallest write unit supported by the non-volatile storage device 402); it can thereby be guaranteed that the persistent metadata is stored with the data it describes. In some embodiments, data is stored in a container format (e.g., a packet, an ECC codeword, etc.). The persistent metadata may be included as part of the packet format of the data (e.g., as a header, footer, or other field within the packet). Alternatively, or in addition, portions of the persistent metadata may be stored separately from the data they describe. In this case, the persistent metadata may be linked to (or otherwise reference) the data it describes, or vice versa.
The persistent metadata describes the data and may include, but is not limited to: a logical identifier (or other identifier) of the data, security or access control parameters, sequence information (e.g., a sequence identifier), a persistent metadata flag (e.g., indicating inclusion in an atomic storage operation), a transaction identifier, or the like. The persistent metadata may comprise sufficient information to reconstruct portions of the storage metadata 434 and/or to replay the sequence of storage operations performed on the non-volatile storage device 402.
The sequential, log-based data described herein may comprise an "event log" of the storage operations performed on the non-volatile storage device 402. Accordingly, the VSL 430 is capable of replaying the sequence of storage operations performed on the non-volatile storage device 402 by accessing the data stored on the non-volatile storage media 410 in a particular order that matches the order of the event log. In the event of an invalid shutdown (or other failure condition), the sequential, log-based data format enables the VSL 430 to reconstruct the storage metadata 434, as well as other data. Examples of apparatuses, systems, and methods for crash recovery and/or ensuring data integrity despite invalid shutdown conditions are described in: United States Provisional Patent Application No. 61/424,585, entitled "APPARATUS, SYSTEM, AND METHOD FOR PERSISTENT MANAGEMENT OF DATA IN A CACHE DEVICE," filed December 17, 2010, and United States Provisional Patent Application No. 61/425,167, entitled "APPARATUS, SYSTEM, AND METHOD FOR PERSISTENT MANAGEMENT OF DATA IN A CACHE DEVICE," filed December 20, 2010, each of which is hereby incorporated by reference in its entirety. In some embodiments, the non-volatile storage device 402 comprises a secondary power source 407 (e.g., a battery, capacitor, etc.) to power the storage controller 404 and/or the non-volatile storage media 410 in the event of an invalid shutdown. The non-volatile storage device 402 (or controller 404) may, therefore, comprise a "protection domain" or "powercut-safe domain" (defined by the secondary power source 407). Once data is transferred to within the protection domain of the non-volatile storage device, it may be guaranteed to be persisted on the non-volatile storage media 410. Alternatively, or in addition, the storage controller 404 may be capable of performing storage operations independent of the host computing device 401.
The sequential, log-based storage format implemented by the VSL 430 provides crash recovery and/or data integrity for the data stored on the non-volatile storage 402 as well as for the storage metadata 434. After an invalid shutdown and reconstruction operation, the VSL 430 may expose the reconstructed storage metadata 434 to the storage clients 412. The storage clients 412 may, therefore, delegate crash recovery and/or data integrity to the VSL 430, which may significantly simplify the storage clients 412 and/or allow the storage clients 412 to operate more efficiently. For example, a file system storage client 412 may require crash recovery and/or data integrity services for some of its metadata, such as an i-node table, a file allocation table, and so on. The storage client 412 may have to implement these services itself, which may impose significant overhead and/or complexity on the storage client 412. The storage client 412 may be relieved from this burden by delegating crash recovery and/or data integrity to the VSL 430. As described above, the VSL 430 stores data in a sequential, log-based format. As such, in the event of an invalid shutdown, the VSL 430 is capable of reconstructing the storage metadata 434 and/or identifying the "current" version of data using the sequential, log-based formatted data on the non-volatile storage device 402. The VSL 430 provides access to the reconstructed storage metadata 434 and/or data via the VSL interface 436. Accordingly, after an invalid shutdown, a file system storage client 412 may access crash-recovered file system metadata and/or may ensure the integrity of file data accessed through the VSL 430.
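The reconstruction enabled by the event log may be sketched as follows; the record layout (sequence identifier, logical identifier, physical address, validity flag) is an assumption chosen to mirror the persistent metadata discussed above.

    # Hedged sketch of rebuilding the forward map by replaying the log in sequence order.
    def rebuild_forward_map(log_entries):
        """log_entries: iterable of (sequence_id, lid, physical_address, is_valid)."""
        forward = {}
        for seq, lid, phys, valid in sorted(log_entries):   # event-log order
            if valid:
                forward[lid] = phys        # later entries supersede earlier ones
            else:
                forward.pop(lid, None)     # e.g. a persisted invalidation note
        return forward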
The logical address space 432 may be "sparse," meaning that the logical address space 432 is large enough that allocated/assigned logical identifiers are non-contiguous and separated by sections of one or more unallocated/unassigned addresses, and, as such, the logical capacity may exceed the physical storage capacity of the non-volatile storage device 402. Accordingly, the logical address space 432 may be defined independent of the non-volatile storage device 402; the logical address space 432 may present a larger address space than the physical storage capacity of the non-volatile storage device 402, may present different storage location partitions and/or block sizes than those provided by the non-volatile storage device 402, and so on. Associations between the logical address space 432 and the non-volatile storage 402 are managed by the VSL 430 (using the storage metadata 434). The storage clients 412 may leverage the VSL interface 436, as opposed to a more limited block-storage layer and/or other storage interface provided by a particular non-volatile storage device 402.
In some embodiments, the logical address space 432 may be very large, comprising a 64-bit address space referenced by 64-bit logical identifiers (LIDs). Each 64-bit logical identifier in the logical address space 432 (e.g., a 64-bit address) references a respective virtual storage location. As used herein, a virtual storage location refers to a block of logical storage capacity (e.g., an allocation block). The VSL 430 may be configured to implement virtual storage locations of arbitrary size; typical sizes range from 512 to 4086 bytes (or even 8 kb to 16 kb, depending on the needs of the storage clients 412); the disclosure, however, is not limited in this regard. Since the logical address space 432 (and the virtual storage locations therein) is independent of the physical storage capacity and/or storage partitioning of the non-volatile storage device 402, the logical address space 432 may be tailored to the requirements of the storage clients 412.
The VSL 430 may manage allocations within the logical address space using the storage metadata 434. In some embodiments, the VSL 430 maintains storage metadata 434 that tracks allocations of the logical address space 432 using a forward index. The VSL 430 may allocate ranges within the logical address space 432 for use by particular storage clients 412. Logical identifiers may be allocated for particular storage clients 412 to persist a storage entity. As used herein, a storage entity refers to any data or data structure in the logical address space 432 that is capable of being persisted to the non-volatile storage device 402; accordingly, a storage entity may include, but is not limited to: file system objects (e.g., files, streams, i-nodes, etc.), database primitives (e.g., database tables, extents, or the like), streams, persistent memory space, memory-mapped files, or the like. A storage entity may also be referred to as a virtual storage unit (VSU). A file system object refers to any data structure used by a file system, including, but not limited to: a file, a stream, file attributes, a file index, a volume index, a node table, or the like.
As described above, allocating a logical identifier refers to reserving a logical identifier for a particular use or storage client. A logical identifier may refer to a set or range of the logical address space 432 (e.g., a set or range of virtual storage locations). The logical capacity of an allocated logical identifier may be determined by the size of the virtual storage locations of the logical address space 432. As described above, the logical address space 432 may be configured to present virtual storage locations of any predetermined size. The size of the virtual storage locations may be configured by one or more storage clients 412, the VSL 430, or the like.
An allocated logical identifier, however, may not be associated with and/or assigned to physical storage locations on the non-volatile storage device 402 until required. In some embodiments, the VSL 430 allocates logical identifiers comprising large, contiguous ranges in the logical address space 432. The availability of large, contiguous ranges in the logical address space is enabled by the large address space (e.g., 64-bit address space) presented by the VSL 430. For example, a logical identifier allocated for a file may be associated by the VSL 430 with an address range of 2^32 contiguous virtual storage locations in the logical address space 432 for data of the file. If each virtual storage location (e.g., allocation block) represents 512 bytes of storage capacity, the allocated logical identifier represents a logical capacity of two terabytes. The physical storage capacity of the non-volatile storage device 402 may be smaller than two terabytes and/or may be sufficient to store only a small number of such files, such that if logical identifier allocations were to cause equivalent allocations of physical storage capacity, the VSL 430 would quickly exhaust the capacity of the non-volatile storage device 402. Advantageously, however, the VSL 430 is configured to allocate large, contiguous ranges within the logical address space 432 while deferring the assignment of physical storage locations on the non-volatile storage device 402 to the logical identifiers until needed. Similarly, the VSL 430 may support "sparsely" allocated logical ranges. For example, a storage client 412 may request that a first data segment be persisted at the "head" of an allocated logical identifier and a second data segment be persisted at the "tail" of an allocated logical identifier. The VSL 430 may assign only those physical storage locations on the non-volatile storage device 402 that are needed to persist the first and second data segments. The VSL 430 may not assign or reserve physical storage locations on the non-volatile storage device 402 for allocated logical identifiers that are not being used to persist data.
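The sparse allocation behaviour described above may be illustrated as follows; the block size, the interval bookkeeping, and the callback that consumes physical capacity are assumptions made for illustration only.

    # Illustrative sparse allocation: logical reservation up front, physical backing on write.
    BLOCK = 512

    class SparseVolume:
        def __init__(self):
            self.allocated = []          # list of (first_lid, last_lid) reservations
            self.backed = {}             # lid -> physical location, only when written

        def allocate(self, first_lid, count):
            self.allocated.append((first_lid, first_lid + count - 1))

        def write(self, lid, data, allocate_physical):
            self.backed[lid] = allocate_physical(data)   # physical capacity used only here

    vol = SparseVolume()
    vol.allocate(0, 2**32)
    logical_bytes = 2**32 * BLOCK              # two terabytes of logical capacity
    vol.write(0, b"header", lambda d: 0)       # only two blocks of physical
    vol.write(2**32 - 1, b"tail", lambda d: 1) # storage are actually consumed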
The VSL 430 maintains storage metadata to track allocations in the logical address space and to track assignments between logical identifiers in the logical address space 432 and physical storage locations on the non-volatile storage media 410. In some embodiments, the VSL 430 tracks both logical allocations and physical storage location assignments using a single metadata structure. Alternatively, or in addition, the VSL 430 may be configured to track logical allocations in logical allocation metadata and to track assigned physical storage locations on the non-volatile storage media 410 using separate physical reservation metadata.
Storage clients 412 may access the VSL 430 via the VSL interface 436. In some embodiments, storage clients 412 may delegate certain functions to the VSL. For example, and as described above, storage clients 412 may leverage the sequential, log-based data format of the VSL 430 to delegate crash recovery and/or data integrity functions to the VSL 430. In some embodiments, storage clients may also delegate allocations in the logical address space 432 and/or physical storage reservations to the VSL 430.
Typically, a storage client 412, such as a file system, tracks the logical addresses and/or physical storage locations that are available for use. The logical storage locations available to the storage client 412 may be limited to the physical storage capacity of the underlying non-volatile storage device (or a partition thereof). Accordingly, the storage client 412 may maintain a set of logical addresses that "mirrors" the physical storage locations of the non-volatile storage device. For example, and as shown in Fig. 4, a storage client 412 may identify one or more available logical block addresses (LBAs) for a new file. Since, in a conventional implementation, the LBAs map directly to physical storage locations, the LBAs are unlikely to be contiguous; the availability of contiguous LBAs may depend upon the capacity of the underlying block storage device and/or whether the file is "fragmented." The storage client 412 then performs block-level operations to store the file through a block storage layer (e.g., a block device interface). If the underlying storage device provides a one-to-one mapping between logical block addresses and physical storage locations, as with conventional storage devices, the block storage layer performs the appropriate LBA-to-physical-address translations and implements the requested storage operations. If, however, the underlying non-volatile storage device does not support one-to-one mappings (e.g., the underlying storage device is a sequential, or write-out-of-place, device, such as a non-volatile storage device in accordance with embodiments of this disclosure), another redundant set of translations is needed (e.g., a flash translation layer or other mapping). The redundant set of translations and the requirement that the storage client 412 maintain logical address allocations may represent a significant overhead for the storage operations performed by the storage client 412, and may make it difficult or impossible to allocate contiguous LBA ranges without time-consuming "defragmentation" operations.
In some embodiments, storage clients 412 delegate allocation functionality to the VSL 430. Storage clients 412 may access the VSL interface 436 to request logical ranges in the logical address space 432. The VSL 430 tracks the allocation status of the logical address space 432 using the storage metadata 434. If the VSL 430 determines that the requested logical address range is unallocated, the VSL 430 allocates the requested logical address range for the storage client 412. If the requested range is allocated (or if only a portion of the range is unallocated), the VSL 430 may return an alternative range in the logical address space 432 and/or may return a failure. In some embodiments, the VSL 430 may return an alternative range in the logical address space 432 that includes a contiguous range of logical addresses. Having a contiguous range of logical addresses often simplifies the management of the storage entity associated with that range of logical addresses. Since the VSL 430 uses the storage metadata 434 to maintain associations between the logical address space 432 and physical storage locations on the non-volatile storage device 402, no redundant set of address translations is needed. Moreover, the VSL 430 uses the storage metadata 434 to identify unallocated logical identifiers, which frees the storage client 412 from this burden.
In some embodiments, the VSL 430 makes allocations within the logical address space 432 as described above. The VSL 430 may access an index comprising allocated logical address ranges (e.g., the forward index of Fig. 5) to identify unallocated logical identifiers, which are allocated to storage clients 412 upon request. For example, the VSL 430 may maintain storage metadata 434 comprising a range-encoded tree data structure, as described above; entries in the tree may represent allocated logical identifiers in the logical address space 432, and "holes" in the tree represent unallocated logical identifiers. Alternatively, or in addition, the VSL 430 maintains an index of unallocated logical identifiers that can be allocated to storage clients (e.g., without searching the forward index).
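For illustration, the delegated allocation path may be sketched as follows; the sorted interval list stands in for the range-encoded tree, and the 64-bit upper bound is taken from the address-space example above.

    # Illustrative allocation check against a list of allocated LID ranges.
    def request_range(allocated, start, length, space_end=2**64):
        """allocated: sorted, non-overlapping list of (first, last) LID ranges."""
        end = start + length - 1
        if all(last < start or first > end for first, last in allocated):
            return ("granted", (start, end))
        # otherwise scan the "holes" between reservations for a contiguous alternative
        cursor = 0
        for first, last in allocated:
            if first - cursor >= length:
                return ("alternative", (cursor, cursor + length - 1))
            cursor = last + 1
        if space_end - cursor >= length:
            return ("alternative", (cursor, cursor + length - 1))
        return ("failure", None)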
Fig. 5 depicts one example of storage metadata and, in particular, a forward index 504 that maintains allocations of the logical address space of one or more non-volatile storage devices (e.g., the storage device 106 described above). The forward index 504 may be further configured to maintain assignments between allocated logical identifiers and physical storage locations on a non-volatile storage device. The forward index 504 may be maintained by the VSL 430, a storage controller (e.g., the storage controller 404 described above), a driver (e.g., the driver 118 described above), or the like.
In the Fig. 5 example, the data structure 504 is implemented as a range-encoded B-tree. The disclosure is not limited in this regard, however; the forward index 504 may be implemented using any suitable data structure, including, but not limited to: a tree, a B-tree, a range-encoded B-tree, a radix tree, a map, a content addressable map (CAM), a table, a hash table, or another suitable data structure (or combination of data structures).
The forward index 504 comprises a plurality of entries 505 (entries 505A-G), each representing one or more logical identifiers in the logical address space. For example, the entry 505B references logical identifiers 515 (LIDs 072-083). Data may be stored sequentially or "out-of-place" on the non-volatile storage device, and as such, there may be no correspondence between logical identifiers and physical storage locations. The forward index 504 maintains assignments between allocated logical identifiers and physical storage locations (e.g., using physical storage location references 517). For example, the reference 517B assigns the logical identifiers 515 (LIDs 072-083) to one or more physical storage locations of the non-volatile storage device. In some embodiments, the references 517 comprise physical addresses on the non-volatile storage device. Alternatively or in addition, the references 517 may correspond to a secondary data structure (e.g., a reverse index), or the like. The references 517 may be updated in response to changes in the physical storage location of data (e.g., due to grooming operations, data refresh, modification, overwrite, etc.).
In some embodiments, one or more of the entries 505 may represent logical identifiers that have been allocated to a storage client but have not yet been assigned to any particular physical storage location (e.g., the storage client has not yet caused data to be written to the logical identifiers). The physical storage location reference 517 of an unassigned entry 505 may be marked as "null" or unassigned.
The entries 505 are arranged into a tree data structure by edges 507. In some embodiments, the entries 505 are indexed by logical identifier, which provides fast and efficient entry 505 lookups. In the Fig. 5 example, the entries 505 are arranged in logical identifier order, such that the entry 505C references the "lowest" logical identifiers and the entry 505G references the "highest" logical identifiers. Particular entries 505 are accessed by traversing the edges 507 of the forward index 504. In some embodiments, the forward index 504 is balanced, such that all leaf entries 505 are at a similar depth within the tree.
For clarity, Fig. 5 depicts entries 505 comprising numeric logical identifiers; however, the disclosure is not limited in this regard, and one of ordinary skill in the art will recognize that the entries 505 could comprise any suitable logical identifier representation, including, but not limited to: alphanumeric characters, hexadecimal characters, binary values, text identifiers, hash codes, or the like.
The entries 505 of the index 504 may reference logical identifiers of variable size and/or length; a single entry 505 may reference a plurality of logical identifiers (e.g., a set of logical identifiers, a logical identifier range, a non-contiguous set of logical identifiers, etc.). For example, the entry 505B represents the contiguous range of logical identifiers 072-083. Other entries of the index 504 may represent a non-contiguous set of logical identifiers; the entry 505G represents logical identifiers 454-477 and 535-598, each assigned to respective physical storage locations by the references 517G and 527G. The forward index 504 may represent logical identifiers using any suitable technique; for example, the entry 505D references logical identifier 178 and a length of 15, which corresponds to the logical identifier range 178-192.
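For illustration only, the following Python sketch (not part of the original disclosure; the names ForwardEntry, ForwardIndex, and lookup are assumptions) shows one way a range-encoded forward index of this kind could be modeled, with each entry covering a run of logical identifiers and referencing a physical storage location:

```python
# Minimal sketch: a range-encoded forward index mapping runs of logical
# identifiers (LIDs) to physical storage location references.
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class ForwardEntry:
    lid_start: int                 # first LID in the range (e.g., 178)
    length: int                    # number of LIDs (e.g., 15 -> LIDs 178-192)
    phys_ref: Optional[int] = None # physical address or reverse-index reference;
                                   # None models an allocated-but-unassigned entry
    metadata: dict = field(default_factory=dict)  # e.g., age, size, client id

class ForwardIndex:
    """Kept sorted by LID; a real implementation would use a B-tree/radix tree."""
    def __init__(self):
        self.entries: List[ForwardEntry] = []

    def lookup(self, lid: int) -> Optional[ForwardEntry]:
        # Binary search over sorted, non-overlapping ranges.
        lo, hi = 0, len(self.entries) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            e = self.entries[mid]
            if lid < e.lid_start:
                hi = mid - 1
            elif lid >= e.lid_start + e.length:
                lo = mid + 1
            else:
                return e
        return None  # a "hole": the LID is unallocated
```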
In some embodiments, the entries 505 comprise and/or reference metadata 519, which may include metadata pertaining to the logical identifiers, such as: age, size, logical identifier attributes (e.g., client identifier, data identifier, file name, group identifier), the underlying physical storage location(s), and so on. The metadata 519 may be indexed by logical identifier (through its association with the respective entry 505), such that the metadata 519 remains associated with the entry 505 regardless of changes to the location of the underlying physical storage locations of the data.
The index 504 may be used to efficiently determine whether the non-volatile storage device comprises data corresponding to a particular logical identifier. In one example, a storage client requests allocation of a particular logical identifier. If the index 504 comprises an entry 505 that includes the requested logical identifier, the logical identifier is identified as already allocated. If the logical identifier is not in the index, it may be allocated to the requester by creating a new entry 505 in the index 504. In another example, a storage client requests the data of a particular logical identifier. The physical storage location of the data is determined by accessing the reference 517 to the physical storage location of the entry 505 comprising the logical identifier. In another example, a storage client modifies existing data of a particular logical identifier. The modified data is written sequentially to a new physical storage location on the non-volatile storage device, and the physical storage location reference 517 of the entry 505 in the index 504 is updated to reference the physical storage location of the new data. The obsolete data may be marked as invalid for reclamation in a grooming operation.
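The following sketch (illustrative only, continuing the hypothetical ForwardIndex above; the allocate and write helpers are assumed names) outlines the allocation check and the out-of-place update described here:

```python
# Illustrative continuation of the ForwardIndex sketch above.
import bisect

def allocate(index: ForwardIndex, lid: int, length: int = 1) -> bool:
    """Allocate [lid, lid+length) only if every LID in the range is a 'hole'."""
    if any(index.lookup(l) is not None for l in range(lid, lid + length)):
        return False  # already allocated
    entry = ForwardEntry(lid_start=lid, length=length, phys_ref=None)
    keys = [e.lid_start for e in index.entries]
    index.entries.insert(bisect.bisect_left(keys, lid), entry)
    return True

def write(index: ForwardIndex, lid: int, new_phys_ref: int) -> None:
    """Out-of-place update: data lands at a new physical location, the entry's
    reference is updated, and the old location becomes invalid (to be groomed)."""
    entry = index.lookup(lid)
    if entry is None:
        raise KeyError("LID %d is unallocated" % lid)
    old_ref = entry.phys_ref
    entry.phys_ref = new_phys_ref
    # A fuller system would mark old_ref invalid in a reverse index here.
```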
The forward index of Fig. 5 maintains the logical address space and, as such, is indexed by logical identifier. As discussed above, entries 505 in the index 504 may comprise references 517 to physical storage locations on the non-volatile storage device. In some embodiments, the references 517 comprise physical addresses (or address ranges) of the physical storage locations. Alternatively or in addition, the references 517 may be indirect (e.g., reference a secondary data structure, such as a reverse index).
Fig. 6 depicts one example of a reverse index 622 for maintaining metadata pertaining to the physical storage locations of a non-volatile storage device. In the Fig. 6 example, the reverse index 622 is implemented as a table data structure. The disclosure is not limited in this regard, however, and the reverse index 622 could be implemented using any suitable data structure. For example, in some embodiments, the reverse index 622 is implemented in the same data structure as the forward index 504 described above (e.g., portions and/or entries of the reverse index 622 may be included as leaf entries of the forward index 504). The index 622 comprises a plurality of entries 620 (depicted as rows in the table data structure 622), each of which may comprise an entry ID 624, a physical address 626, a data length 628 associated with the data stored at the physical address 626 on the physical non-volatile storage media 410 (in this case, the data is compressed), a valid tag 630, a logical address 632 associated with the data, a data length 634 associated with the logical address 632, and other miscellaneous data 636. In further embodiments, the reverse index 622 may include an indicator of whether the physical address 626 stores dirty or clean data, and so on.
The reverse index 622 may be organized according to the configuration and/or layout of a particular non-volatile storage device. Accordingly, the reverse index 622 may be arranged by storage divisions (e.g., erase blocks), physical storage locations (e.g., pages), logical storage locations, or the like. In the Fig. 6 example, the reverse index 622 is arranged into a plurality of erase blocks (640, 638, and 642), each comprising a plurality of physical storage locations (e.g., pages, logical pages, or the like).
The entry 620 comprises metadata pertaining to the physical storage location comprising the data of the entry 505F of Fig. 5. The entry 620 indicates that the physical storage location is within erase block n 638. Erase block n 638 is preceded by erase block n-1 640 and followed by erase block n+1 642 (the contents of erase blocks n-1 and n+1 are not shown).
The entry ID 624 may be an address, a virtual link, or other data (or other storage metadata) for associating an entry in the reverse index 622 with an entry in the forward index 504. The physical address 626 indicates a physical address on the non-volatile storage device (e.g., the non-volatile storage media 410). The data length 628 associated with the physical address 626 identifies the length of the data stored at the physical address 626. Together, the physical address 626 and the data length 628 may be referred to as destination parameters 644.
The logical identifier 632 and the data length 634 may be referred to as source parameters 646. The logical identifier 632 associates the entry with a logical identifier of the logical address space and may be used to associate an entry in the reverse index 622 with an entry 505 of the forward index 504. The data length 634 refers to the length of the data in the logical address space (e.g., from the perspective of the storage client). The data length 634 of the source parameters 646 may differ from the data length 628 of the destination parameters 644 due to, inter alia, data compression, header overhead, encryption overhead, and so on. In the Fig. 6 example, the data associated with the entry 620 is highly compressible and has been compressed from 64 blocks in the logical address space to a single block on the non-volatile storage device.
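A minimal sketch of one such reverse-index row follows (illustrative only; the field names loosely mirror the reference numerals of Fig. 6 but are otherwise assumptions):

```python
# Minimal sketch: one reverse-index row as described for Fig. 6.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReverseEntry:
    entry_id: int            # 624: link back to a forward-index entry
    phys_addr: int           # 626: physical address on the storage media
    phys_len: int            # 628: stored (possibly compressed) length
    valid: bool              # 630: False once the data is obsolete
    lid: Optional[int]       # 632: logical identifier (None if recoverable
                             #      from a packet header stored with the data)
    logical_len: int         # 634: length from the storage client's view
    misc: dict               # 636: file name, security flags, etc.

# Destination parameters 644 are (phys_addr, phys_len); source parameters 646
# are (lid, logical_len). Grouping rows by erase block lets a groomer total
# the valid/invalid data per storage division when choosing what to recover.
```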
The valid tag 630 indicates whether the data mapped to the entry 620 is valid. In this case, the data associated with the entry 620 is valid, which is depicted in Fig. 6 by a "Y" in the row of the entry 620. As used herein, valid data refers to data that is up-to-date and has not been deleted and/or made obsolete (overwritten or modified). The reverse index 622 may track the validity status of each physical storage location of the non-volatile storage device. The forward index 504 may comprise entries corresponding to valid data only. In the Fig. 6 example, entry "Q" 648 indicates that the data associated with the entry 648 is invalid. Note that the forward index 504 does not include a logical address associated with entry Q 648. The entry Q 648 may correspond to an obsolete version of the data of entry 505C (which has since been overwritten by the data stored at entry "C").
The reverse index 622 may maintain entries for invalid data so that valid and invalid data can be distinguished quickly during storage recovery (e.g., grooming). In some embodiments, dirty and clean data may be tracked in a similar manner in the forward index 504 and/or the reverse index 622, so that dirty data can be distinguished from clean data when performing operations such as caching.
In some embodiments, the reverse index 622 omits the source parameters 646. For example, if the source parameters 646 are stored with the data (e.g., in a header of the stored data), the reverse index 622 may identify a logical address indirectly by including the physical address 626 associated with the data, and the source parameters 646 may be identified from the stored data.
The reverse index 622 may also include other miscellaneous data 636, such as a file name, object name, source data, storage client, security flags, atomicity flag, transaction identifier, or the like. One of ordinary skill in the art will recognize other information that may be useful in the reverse index 622. Although physical addresses 626 are depicted in the reverse index 622, in other embodiments, the physical addresses 626, or other destination parameters 644, may be included in other locations, such as in the forward index 504, an intermediate table or data structure, or the like.
The reverse index 622 may be arranged by erase block or erase region (or other storage division) so that traversing a section of the index allows a cleaner (groomer) to identify valid data in a storage division (e.g., erase block 638) and quantify the amount of valid data or, conversely, to identify invalid data and quantify the amount of invalid data. The cleaner may select storage divisions for recovery based, in part, on the amount of valid and/or invalid data in each division.
In some embodiments, the cleaner and/or garbage collection processes are restricted to operating within certain portions of the physical storage space. For example, portions of the storage metadata 434 may be periodically persisted to the non-volatile storage device 402, and the garbage collector and/or cleaner may be limited to operating on the physical storage locations corresponding to the persisted storage metadata 434. In some embodiments, the storage metadata 434 is persisted by relative age (e.g., sequentially), with older portions being persisted to non-volatile memory while newer portions are retained in volatile memory. Accordingly, the cleaner and/or garbage collection systems may be restricted to operating within the older portions of the physical address space and, as such, are less likely to affect data of in-process storage requests.
Referring back to Fig. 4, the non-volatile storage device 402 may be configured to store data on the non-volatile storage media 410 in a sequential, log-based format. The contents of the non-volatile storage device may, therefore, comprise an ordered "event log" of the storage operations performed on the non-volatile storage media 410. The sequential ordering of the storage operations may be maintained by appending data at an append point within the physical storage space of the non-volatile storage device 402. Alternatively or in addition, sequence information may be maintained through persistent data stored on the non-volatile storage media 410. For example, each storage division (e.g., erase block) on the non-volatile storage media 410 may comprise a respective indicator (e.g., timestamp, sequence number, or other indicator) to indicate its order or sequence within the event log.
Fig. 7 depicts a physical storage space 700 of a non-volatile storage media (e.g., the non-volatile storage media 410). The physical storage space 700 is arranged into storage divisions (e.g., erase blocks), each of which comprises a plurality of physical storage locations (e.g., pages or logical pages) capable of storing data. The pages of a storage division may be initialized (e.g., erased) as a group.
Each physical storage location may be assigned a respective physical address ranging from 0 to N. Data is stored sequentially at an append point 720, which moves sequentially through the physical storage space 700. After data is stored at the append point 720 (storage location 716), the append point advances sequentially to the next available physical storage location. As used herein, an available physical storage location refers to a physical storage location that has been initialized and is ready to store data (e.g., has been erased). Some non-volatile storage media (e.g., the non-volatile storage media 410) can only be programmed once after erasure. Accordingly, as used herein, an available physical storage location may refer to a storage location that is in an initialized (e.g., erased) state. If the next storage location in the sequence is unavailable (e.g., comprises valid data, has not been erased or initialized, is out of service, etc.), the append point 720 skips ahead to the next available physical storage location. In the Fig. 7 example, after storing data at the physical storage location 716, the append point 720 may skip the unavailable storage division 713 and advance to the next available location (e.g., the physical storage location 717 of storage division 714).
After data is stored at the "last" storage location (e.g., storage location N 718 of storage division 815), the append point 720 wraps back to the first division 712 (or, if 712 is unavailable, to the next available storage division). Accordingly, the append point 720 may treat the physical address space as a loop or cycle.
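For illustration only, the following Python sketch (not part of the disclosure; the Division class and advance function are assumed names) shows a cyclic append point that skips unavailable locations and wraps around the physical address space:

```python
# Minimal sketch of an append point advancing cyclically through erased
# ("available") storage locations, skipping unavailable divisions/locations.
from typing import List

class Division:
    def __init__(self, n_locations: int):
        self.available = [True] * n_locations  # True = erased and programmable
        self.in_service = True

def advance(divisions: List[Division], div_i: int, loc_i: int):
    """Return the (division, location) of the next available storage location,
    treating the physical address space as a ring."""
    total = sum(len(d.available) for d in divisions)
    for _ in range(total):
        loc_i += 1
        if loc_i >= len(divisions[div_i].available):   # end of this division
            div_i = (div_i + 1) % len(divisions)        # wrap back to division 0
            loc_i = 0
        d = divisions[div_i]
        if d.in_service and d.available[loc_i]:
            return div_i, loc_i
    raise RuntimeError("no available storage locations (device full)")
```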
Referring back to Fig. 4, storing data in a sequential, log-based format may comprise persisting metadata on the non-volatile storage media 410 that describes the data stored therewith. The persistent metadata may comprise the logical identifier(s) associated with the data and/or sequence information pertaining to the order of the storage operations performed on the non-volatile storage media 410. Accordingly, the sequential, log-based data may represent an "event log" that tracks the sequence of storage operations performed on the non-volatile storage device 402.
Fig. 8 depicts one example of a sequential, log-based data format (packet format 810). The data packet 810 includes a data segment 812 comprising data of one or more logical identifiers. In some embodiments, the data segment 812 comprises compressed, encrypted, and/or whitened data. As used herein, "whitened" data refers to data that has been biased, encoded, and/or otherwise configured to have a particular pattern and/or statistical properties. Furthermore, the data segment 812 may be encoded in one or more error-correcting code data structures (e.g., ECC codewords) and/or symbols. The data segment 812 may be of a predetermined size (e.g., a fixed "block" or "segment" size). Alternatively, the size of the data segment 812 may be variable.
The packet 810 includes persistent metadata 814 that is stored on the non-volatile storage media. In some embodiments, the persistent metadata 814 is stored with the data segment 812 (e.g., as a packet header, packet footer, or the like). The persistent metadata 814 may include a logical identifier indicator 815 that identifies the logical identifier(s) to which the data segment 812 pertains. The logical identifier indicator 815 may be used to reconstruct storage metadata, such as the forward index (e.g., forward index 504) and/or the reverse index (e.g., reverse index 622). The persistent metadata 814 may further comprise one or more metadata flags 817. As described below, the flags 817 may be used to support atomic storage operations, transactions, caching, and so on.
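A rough sketch of such a packed format follows (illustrative only; the header layout, field widths, and flag values are assumptions, not the packet format of the disclosure):

```python
# Minimal sketch: a data segment packed with a persistent metadata header
# carrying a logical identifier indicator and metadata flags.
import struct

FLAG_ATOMIC = 0x1       # hypothetical metadata flags 817
FLAG_LOW_VALUE = 0x2

HEADER = struct.Struct("<QIH")  # lid (u64), segment length (u32), flags (u16)

def make_packet(lid: int, segment: bytes, flags: int = 0) -> bytes:
    """Persistent metadata stored as a header ahead of the data segment."""
    return HEADER.pack(lid, len(segment), flags) + segment

def parse_packet(packet: bytes):
    """Usable when reconstructing forward/reverse indexes from the log."""
    lid, length, flags = HEADER.unpack_from(packet)
    return lid, flags, packet[HEADER.size:HEADER.size + length]
```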
In some embodiments, the packet 810 is associated with a sequence indicator 818. The sequence indicator 818 may be persisted on the storage location (e.g., page) with the packet 810 and/or on the storage division (e.g., erase block) of the packet 810. Alternatively, the sequence indicator 818 may be persisted in a separate storage location. In some embodiments, the sequence indicator is applied when a storage division is made available for use (e.g., when it is initialized or erased, when the first or last storage location is programmed, etc.). The sequence indicator 818 may be used to determine the ordered sequence of storage operations on the non-volatile storage device. Accordingly, the sequential, log-based data format provides an "event log" of the storage operations on the non-volatile storage device (e.g., the non-volatile storage device 402).
Referring back to Fig. 4, the sequential, log-based format disclosed herein enables the VSL 430 to reconstruct the storage metadata 434, as well as other data, in the event of an invalid shutdown (or another event that causes portions of the storage metadata 434 to be lost).
As described above, the storage metadata 434 (e.g., the forward index 504 of Fig. 5) maintains "any-to-any" assignments between logical identifiers and physical storage locations on the non-volatile storage device. Accordingly, there may be no predetermined mapping between logical identifiers and physical storage locations on the non-volatile storage media 410; data of a logical identifier may be stored at any physical storage location of the non-volatile storage media 410.
As described above, the storage metadata 434 may be reconstructed from the sequential, log-based data stored on the non-volatile storage device 402. The most recent version of data is identified based upon the location of the append point and/or the sequence indicators associated with the data. During reconstruction, the persistent metadata associated with the data, as depicted in Fig. 8, may be used to identify (and discard) data pertaining to incomplete atomic storage requests.
In some embodiments, the system 400 comprises a cache layer 440 configured to cache data of a backing store 460 using the non-volatile storage device 402. The backing store 460 may comprise one or more hard disks, network attached storage (NAS), a storage area network (SAN), or other persistent storage. The backing store 460 may comprise a plurality of physical storage locations 461 capable of storing data of the storage clients 412. The backing store 460 may be communicatively coupled to the bus 421 of the computing device 401. Alternatively or in addition, the backing store 460 may be communicatively coupled to the computing device 401 (and the VSL 430) via the network 420.
The cache layer 440 may be configured to cache data of the backing store 460 on the non-volatile storage media 410 using the VSL 430. In some embodiments, the VSL 430 is configured to provide a logical address space 432 corresponding to the address space of the backing store 460; the logical address space 432 may thus correspond to the physical storage locations 461 of the backing store 460. Accordingly, the VSL 430 may maintain storage metadata 434 to associate logical identifiers of the backing store 460 with the storage locations of the cache data on the non-volatile storage media 410 (e.g., physical storage locations on the non-volatile storage device 402). The logical address space 432 may have the same logical capacity as the physical storage capacity of the backing store 460. Alternatively, the logical address space 432 may be "sparse," such that it exceeds the physical storage capacity of the backing store 460. The logical capacity of the logical address space 432 (and the physical capacity of the backing store 460) may exceed the physical capacity of the non-volatile storage device 402. As described above, the VSL 430 may manage allocations of the logical address space 432 and of the physical storage capacity of the non-volatile storage device 402. In some embodiments, the VSL 430 may provide a plurality of logical address spaces 432, each corresponding to a different backing store 460 and/or different storage clients 412, and may maintain separate storage metadata 434 for each logical address space 432.
The cache layer 440 may leverage the logical address space 432 and the storage metadata 434 maintained by the VSL 430 to cache data of the backing store 460. The cache layer 440 may reference cache data on the non-volatile storage media 410 using the logical identifiers of the backing store 460 (through the logical address space 432 of the VSL 430). Accordingly, the cache layer 440 need not maintain its own storage metadata; the cache layer does not need a separate index to associate logical identifiers of the backing store 460 with cache storage locations on the non-volatile storage media 410. By leveraging the logical address space 432 and the storage metadata 434 of the VSL 430, the overhead of the cache layer 440 may be significantly reduced.
The cache layer 440 may selectively admit data of the backing store 460 into the cache. As used herein, "admitting" data into the cache refers to caching the data on the non-volatile storage media 410. Data may be admitted into the cache in response to a data access that results in a cache miss (e.g., data pertaining to the request is not valid on the non-volatile storage device 402 — a read miss or write miss). Data may be admitted in response to a determination that the data is suitable for caching (e.g., will not poison the cache). As used herein, data suitable for caching refers to data that is likely to be accessed again by one or more storage clients 412. By contrast, "poisoning" the cache refers to admitting data that the storage clients 412 are unlikely to request again (e.g., "single-use" data). As used herein, a data access refers to any operation pertaining to data, including, but not limited to: read, write, modify, truncate, and so on.
The cache layer 440 may make cache admission decisions based on access metadata 442. The access metadata 442 may comprise information pertaining to the data access characteristics of logical identifiers within the logical address space 432 provided by the VSL 430. The access metadata 442 may be independent of the storage metadata 434 of the VSL 430; accordingly, the access metadata 442 may be maintained separately from, and/or in different data structures than, the storage metadata 434 (e.g., separate and/or distinct from the forward index, reverse index, and so on).
The access metadata 442 may comprise information pertaining to access characteristics across the entire logical address space 432 provided by the VSL 430. Accordingly, the access metadata 442 may comprise access metadata pertaining to "cached" logical identifiers as well as access metadata pertaining to "non-cached" logical identifiers. As used herein, a "cached" logical identifier refers to a logical identifier whose data is cached on the non-volatile storage media 410, and a "non-cached" logical identifier refers to a logical identifier whose data is not currently cached on the non-volatile storage media 410. Unlike conventional "least recently used" cache metrics, the access metadata 442 can identify data suitable for caching regardless of whether the data is currently in the cache.
The cache layer 440 may be configured to update the access metadata 442 in response to data accesses within the logical address space 432. Updating the access metadata 442 may comprise including an indication of the access request in the access metadata 442. In some embodiments, the cache layer 440 comprises a cache admission module 444 configured to use the access metadata 442 to make cache admission decisions (e.g., identify data suitable for caching). In some embodiments, the cache admission module 444 determines an access metric of a logical identifier in response to a cache miss (e.g., a data access request pertaining to the logical identifier), and the data may be admitted into the cache when the access metric satisfies an "access threshold" or other admission criteria. As used herein, the "access metric" of a logical identifier refers to any value that quantifies the access characteristics of the logical identifier (e.g., access frequency, etc.). An access metric may include, but is not limited to: a binary value indicating that the logical identifier was accessed within a predetermined time interval, an ordered set of such binary values, one or more counter values, and so on. As used herein, an "access threshold" refers to one or more predetermined or dynamic threshold values, and "admission criteria" refers to predetermined or dynamic criteria (e.g., thresholds) for selectively admitting data into the cache.
In some embodiments, data that does not satisfy the admission criteria (e.g., the access threshold) may nevertheless be admitted into the cache as "low-value" data. As used herein, "low-value" data refers to data that is admitted into the cache despite failing to satisfy the cache admission criteria. Low-value data may be admitted in response to its access metric satisfying less stringent cache admission criteria (e.g., a lower access threshold). Admission of low-value data may be predicated on the availability of cache capacity or other performance factors. Low-value data may be groomed out of the cache before other, higher-value data (e.g., data that satisfied the admission criteria). Accordingly, low-value data may be marked within the cache; admitting low-value data may comprise identifying the data as "low-value" on the non-volatile storage media 410 and/or in other metadata. The indication may comprise persistent metadata, as described above in conjunction with Fig. 8. Alternatively or in addition, the indication may be included in a volatile cache and/or in storage metadata maintained by the cache layer 440 and/or the VSL 430.
Fig. 9A depicts one example of access metadata. In the Fig. 9A example, the access metadata 442 comprises an access data structure 946 that includes a plurality of entries 948, each comprising access characteristics of a respective logical identifier in the logical address space (e.g., the logical address space 432 described above). Accordingly, in some embodiments, the data structure 946 may represent the entire address space of the backing store 460; the data structure 946 may comprise an entry 948 for each physical storage location 461 of the backing store 460. Taken together, the entries 948 of the access data structure 946 may correspond to all logical identifiers in the address space (and the physical storage locations 461 of the backing store 460), including both "cached" and "non-cached" logical identifiers. The access data structure 946 may be sparse, with entries 948 (or ranges of entries 948) created as needed. Accordingly, entries representing particular ranges of the logical address space (e.g., the tail of the logical address space) may never be created and/or allocated.
In some embodiments, the access data structure 946 comprises a bitmap (or bit array) in which each entry 948 comprises a single bit. The bit value may indicate whether one or more data accesses pertaining to the logical identifier of the entry 948 occurred during a particular time interval. The data structure 946 may be "reset" after the time interval expires. As used herein, "resetting" the access data structure 946 refers to clearing the access indications from the access data structure 946 (e.g., resetting the entries 948 to a "0" value). Accordingly, a "1" value may indicate that one (or more) data accesses occurred during the time interval, and a "0" value may indicate that no data access occurred during the time interval.
In another example, the entries 948 comprise multi-bit counters that record the number of access requests during the time interval. The counters may be reset (or decremented) after the time interval (e.g., a clock sweep interval) expires. Accordingly, the value of a counter may represent the number (or frequency) of accesses pertaining to the logical identifier of the entry 948 during the time interval.
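The following minimal sketch (illustrative only; the class and method names are assumptions) shows a per-entry bit or small counter that is touched on access and cleared when the interval expires:

```python
# Minimal sketch: a clock-sweep style access data structure with either one
# bit or a small saturating counter per entry.
class AccessDataStructure:
    def __init__(self, num_entries: int, counter_bits: int = 1):
        self.entries = [0] * num_entries          # sparse storage also possible
        self.max_value = (1 << counter_bits) - 1  # 1 for a bitmap, >1 for counters

    def touch(self, entry_index: int) -> None:
        """Record a data access during the current time interval."""
        v = self.entries[entry_index]
        if v < self.max_value:
            self.entries[entry_index] = v + 1     # saturating increment

    def reset(self) -> None:
        """Called when the time interval (e.g., clock sweep) expires."""
        self.entries = [0] * len(self.entries)
```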
Referring back to Fig. 4, the cache layer 440 may be configured to admit data into the cache based on an access metric of the logical identifier(s) corresponding to the data. As described above, the access metric may be derived from the access metadata 442. In some embodiments, the access metric comprises a bitwise indication of whether the data of the logical identifier was accessed during the time interval. Alternatively, the access metric may comprise a number of accesses pertaining to the logical identifier. The cache layer 440 may compare the access metric to an access threshold and admit the data into the cache when the access metric satisfies the access threshold. The comparison may comprise determining whether the bit value corresponding to the logical identifier indicates a data access and/or comparing a multi-bit counter value to a multi-bit access threshold.
In some embodiments, the access metadata 442 comprises an ordered set of access data structures 946. Fig. 9B depicts an ordered set of access data structures 946A-N comprising a "current" access data structure 946A and one or more "previous" access data structures 946B-N. Each access data structure 946A-N may comprise respective entries 948 that, as described above, comprise access characteristics of one or more logical identifiers.
The current access data structure 946A may be updated dynamically in response to data accesses during the current time interval. The one or more previous access data structures 946B-N comprise access characteristics of previous time intervals and may not be dynamically updated during the current time interval. When the current time interval expires, the access data structures 946A-N "roll over": a "reset" data structure replaces the current data structure 946A, the current access data structure 946A is designated as the previous data structure 946B (e.g., replacing the original 946B), the data structure 946B replaces 946C, and so on. Finally, the data structure 946N may be removed (or reset and designated as the current data structure 946A).
The access metric of a logical identifier may be determined by combining the corresponding entries of the access data structures 946A-946N. In some embodiments, the combination comprises an additive operation, such as a logical OR, so that the access metric reflects any access recorded in any of the data structures 946A-N. If the access data structures are updated on a time interval "T," a logical OR combination indicates any access that occurred during an N*T time interval. An additive (summing) combination may indicate the access frequency over the N*T time interval.
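A rough sketch of the roll-over and OR-style combination follows (illustrative only; it builds on the hypothetical AccessDataStructure sketch above, and the class name is an assumption):

```python
# Minimal sketch: an ordered set of access data structures with interval
# roll-over and a logical-OR combination over the N*T window.
from collections import deque

class OrderedAccessMetadata:
    def __init__(self, num_entries: int, history: int):
        # index 0 plays the role of the "current" structure 946A.
        self.structs = deque(AccessDataStructure(num_entries)
                             for _ in range(history))

    def touch(self, entry_index: int) -> None:
        self.structs[0].touch(entry_index)       # only the current one is updated

    def roll_over(self) -> None:
        """Interval expired: drop the oldest structure, reuse it as the new
        (reset) current structure."""
        oldest = self.structs.pop()
        oldest.reset()
        self.structs.appendleft(oldest)

    def accessed_recently(self, entry_index: int) -> bool:
        """Logical-OR combination: any access within the N*T window."""
        return any(s.entries[entry_index] for s in self.structs)
```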
In some embodiments, the combination comprises a bitwise combination of the entries 948 of two or more of the data structures 946A-N. The combination may comprise any suitable operation, including, but not limited to: logical AND, OR, XOR, and so on. Similarly, the combination may comprise a sum or product of the entries 948 of two or more of the data structures 946A-N. In some embodiments, the combination weights the access characteristics according to their recency; recent access characteristics may be given greater weight than older access characteristics. Accordingly, when determining the access metric of a logical identifier, the access characteristics of more recent entries (e.g., the entry of the current access data structure 946A) may be weighted more heavily than the access characteristics of the older data structures 946B-N. As shown in Equation 1 below, determining the access metric may comprise multiplying the access characteristics by a recency parameter (e.g., left-shifting the access characteristic bit or counter value):
$AM = \sum_{i=0}^{N-1} R_i \cdot AC_i$    (Equation 1)
In Equation 1, the access metric (AM) is a weighted combination of the access characteristics (AC_i) of the respective entries 948 in the access data structures 946A-N. The current access characteristic (AC_0) may correspond to the entry 948 in the current access data structure 946A, the access characteristic AC_1 may correspond to the entry 948 of the "next most recent" access data structure 946B, and so on, with the access characteristic AC_{N-1} corresponding to the entry 948 of the "oldest" access data structure 946N. The recency weight parameters (R_i) may differ according to the relative recency of the access characteristics AC_i; the recency parameter (R_0) applied to the access characteristic of the current access data structure 946A may be greater than the recency parameter (R_{N-1}) applied to the access characteristics of "older" access data structures 946B-N.
In another example, the access metric (AM) of a logical identifier may be determined by "left-shifting" the access characteristics (AC_i) of the respective entries 948 in the access data structures 946A-N, as follows:
$AM = \sum_{i=0}^{N-1} AC_i \ll (N - i)$    (Equation 2)
In Equation 2, the access metric (AM) is a weighted combination of the access characteristics (AC_i) of the respective entries 948 in the access data structures 946A-N; as above, the access characteristic AC_0 corresponds to the entry 948 in the current access data structure 946A, and the access characteristic AC_{N-1} corresponds to the entry 948 of the "oldest" access data structure 946N. The access characteristic (AC_0) of the current access data structure 946A may be given the greatest weight (e.g., the largest left shift), and the entries 948 of older access data structures 946B-N are given progressively smaller weights; in Equation 2, the oldest access data structure 946N receives the smallest shift (e.g., contributes the least to the combined access metric). In embodiments in which the data structures 946A-N comprise bitmaps (e.g., each entry 948 comprises a single bit), the summation of Equation 2 may comprise a logical OR operation. Although particular techniques for determining the access metric are described herein, the disclosure is not limited in this regard and could be adapted to combine and/or weight the access characteristics in any suitable manner.
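For illustration only, the sketch below (function names assumed) computes the two combinations of Equations 1 and 2, with the history ordered from the current structure (index 0) to the oldest (index N-1):

```python
# Minimal sketch of the access-metric combinations of Equations 1 and 2.
from typing import Sequence

def access_metric_weighted(access_chars: Sequence[int],
                           recency_weights: Sequence[int]) -> int:
    """Equation 1: AM = sum_i R_i * AC_i, with R_0 >= ... >= R_{N-1}."""
    return sum(r * ac for r, ac in zip(recency_weights, access_chars))

def access_metric_shifted(access_chars: Sequence[int]) -> int:
    """Equation 2: AM = sum_i AC_i << (N - i); newer entries shift further."""
    n = len(access_chars)
    return sum(ac << (n - i) for i, ac in enumerate(access_chars))

# Example: the LID was touched in the current and the oldest interval only.
# access_metric_shifted([1, 0, 0, 1]) == (1 << 4) + (1 << 1) == 18
```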
Referring to Fig. 4, as described above, the cache layer 440 maintains access metadata 442 to track the access characteristics of logical identifiers within the logical address space 432. The access metadata 442 may comprise one or more access data structures comprising entries corresponding to the access characteristics of one or more logical identifiers. In some embodiments, the cache layer 440 tracks the access characteristics of each logical identifier individually, forming a one-to-one correspondence between logical identifiers and entries in the access metadata 442. In other embodiments, the access metadata 442 tracks the access characteristics of groups of logical identifiers, such that each entry corresponds to the access characteristics of a plurality of logical identifiers. The cache layer may map logical identifiers to entries in the access metadata 442 using any suitable mechanism, including, but not limited to: a hash mapping, a range mapping, a hybrid mapping, or the like. Accordingly, in some embodiments, the cache layer 440 comprises a mapping module 445 configured to map logical identifiers to entries in the access metadata 442.
Fig. 10A depicts an example of a hash-based mapping between logical identifiers of a logical address space 1032 and entries 1048 of an access data structure 1046. In the Fig. 10A example, the logical address space 1032 comprises M logical identifiers (0 to M-1), and the access data structure 1046 comprises E entries (0 to E-1). The hash mapping maps every ((k*E)+i)th logical identifier to entry index i, where k is the ratio of the size M of the logical address space to the number E of entries in the access data structure 1046. Accordingly, the entry of a logical identifier may be determined by taking the index of the logical identifier modulo the number of entries (E) in the access data structure 1046. As shown in Fig. 10A, logical identifiers 0, E, 2E, and 3E map to the same entry 0 in the access data structure 1046. Similarly, logical identifiers 1, E+1, 2E+1, and 3E+1 all map to the same entry, and so on. In the Fig. 10A example, the ratio of logical identifiers to entries 1048 is four to one, such that four logical identifiers map to each entry 1048.
Fig. 10B depicts an example of a range-based mapping between logical identifiers of the logical address space 1032 and entries 1048 of the access data structure 1046. The range-based mapping of Fig. 10B maps contiguous ranges of logical identifiers to respective entries 1048 according to the ratio of the logical address space size M to the number of entries E. In the Fig. 10B example, the ratio of M to E is four to one. Accordingly, logical identifiers 0 through 3 map to entry 0, logical identifiers 4 through 7 map to entry 1, and so on, with logical identifiers M-4 through M-1 mapping to entry E-1.
Fig. 10C depicts an example of a hybrid mapping between logical identifiers of the logical address space 1032 and entries 1048 of the access data structure 1046. A hybrid mapping maps a plurality of logical identifier ranges onto the same entry 1048 (e.g., logical identifiers from (i*E) to ((i+1)*E-1) may contribute to entry 1048i). In the Fig. 10C example, the ratio between logical identifiers and entries 1048 in the access data structure 1046 is sixteen to one; the hybrid mapping maps four ranges of four logical identifiers each onto each entry 1048. The first range, starting at logical identifier 0, maps to entry 0 together with the ranges starting at logical identifiers R*E, 2*R*E, and 3*R*E, where R is the range size (four) and E is the number of entries 1048. The range size and/or hash overlap ratio may be set through testing and experience. In another example, the sixteen-to-one ratio of Fig. 10C could be implemented using a different range size, resulting in a different hash mapping (e.g., two overlapping ranges of eight logical identifiers each).
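For illustration only, the following sketch (function names assumed) expresses the three mappings of Figs. 10A-10C, using M, E, and R as in the text:

```python
# Minimal sketch of the logical-identifier-to-entry mappings of Figs. 10A-10C.
def hash_mapping(lid: int, num_entries: int) -> int:
    """Fig. 10A: LIDs 0, E, 2E, ... share entry 0; LIDs 1, E+1, ... share entry 1."""
    return lid % num_entries

def range_mapping(lid: int, address_space: int, num_entries: int) -> int:
    """Fig. 10B: contiguous runs of M/E LIDs share one entry."""
    return lid // (address_space // num_entries)

def hybrid_mapping(lid: int, num_entries: int, range_size: int) -> int:
    """Fig. 10C: ranges of `range_size` LIDs are hashed onto the entries, so
    several non-contiguous ranges fold onto the same entry."""
    return (lid // range_size) % num_entries
```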
Although particular mappings are described herein, the disclosure is not limited in this regard and could be adapted to incorporate any suitable mapping between the logical address space 1032 and the access data structure 1046. Furthermore, the disclosure could be adapted to use an access data structure 1046 having any suitable ratio between the logical address space 1032 and the entries 1048.
Fig. 11 is a flow diagram of one embodiment of a method 1100 for managing cache admission. The method 1100, and the other methods and/or processes disclosed herein, may be implemented on and/or in conjunction with a computing device (e.g., the computing device 401 described above). In some embodiments, steps of the method 1100 may be implemented in a driver, a storage layer, and/or a cache layer of the computing device. Accordingly, portions of the method 1100 may be implemented as computer-readable instructions or modules operating on a processor of a computing device (e.g., the VSL 430 of Fig. 4 and/or the computing device 401). Instructions and/or modules of the method 1100 may be stored on a computer-readable storage medium.
At step 1110, the method 1100 starts and is initialized. Step 1110 may comprise initializing and/or allocating resources for managing a cache on a non-volatile storage device (e.g., the non-volatile storage device 402), which may include, but is not limited to: a storage layer (e.g., the VSL 430), communication interfaces (e.g., bus 421, network 420, etc.), allocation of volatile memory, and so on. The initialization may further comprise configuring the storage layer to provide a logical address space corresponding to a backing store, as described above.
Step 1120 comprises caching data corresponding to the backing store 460 on the non-volatile storage media 410. Step 1120 may comprise caching the data using a storage layer (e.g., the VSL 430). Accordingly, step 1120 may comprise caching the data using storage metadata (e.g., an index comprising assignments between logical identifiers and physical storage locations on the non-volatile storage media 410).
Step 1130 comprises maintaining access metadata pertaining to data accesses within the logical address space. The access metadata may be separate from and/or distinct from the storage metadata of the storage layer (e.g., the VSL 430). The access metadata may comprise one or more bitmaps comprising entries (e.g., bits) corresponding to one or more logical identifiers. Step 1130 may comprise updating the access metadata in response to data accesses within the logical address space. Updating the access metadata may comprise identifying the entry corresponding to the data access (e.g., using a mapping, as described above) and updating the access characteristics of the entry (e.g., flipping a bit, incrementing a counter, etc.). In some embodiments, the access metadata comprises an ordered set of access data structures (e.g., the data structures 946A-N). Step 1130 may comprise designating a current data structure and/or "rolling over" the data structures at a predetermined time interval.
Step 1140 comprises determining whether to admit data of a logical identifier into the cache. The determination of step 1140 may be made in response to an access request pertaining to data that is not in the cache (e.g., a cache miss). Step 1140 may comprise determining an access metric of the logical identifier, as described above. Step 1140 may comprise identifying one or more entries corresponding to the logical identifier (using a one-to-one or other mapping, as described above), determining the access metric of the logical identifier using the access characteristics of the one or more entries, and comparing the access metric to an access threshold. In response to the access metric satisfying the access threshold, the flow continues to step 1150; otherwise, the flow ends at step 1160.
In some embodiments, step 1140 comprises determining whether to admit the data as "low-value" data. As described above, data may be admitted as "low-value" data when the access metric of the logical identifier does not satisfy the access threshold (or other admission criteria). Data may be admitted as low-value data in response to the access metric satisfying a lower access threshold, and/or regardless of the access metric. Low-value data may be marked on the non-volatile storage media 410 and/or in the cache metadata 442.
Step 1150 may comprise admitting the data into the cache. Admitting the data may comprise storing the data on the non-volatile storage device (e.g., the non-volatile storage device 402). Step 1150 may further comprise associating the logical identifier with the physical storage location using the storage metadata of the storage layer (e.g., the VSL 430), as described above.
Referring back to Fig. 4, the cache layer 440 may be configured to pre-admit data into the cache. As discussed above, the cache admission module 444 may consider data for admission into the cache in response to a cache miss (e.g., a data access pertaining to data that is not stored on the non-volatile storage media 410). The cache admission module 444 may further be configured to consider other, "proximate" data for admission into the cache. As used herein, "proximate" data refers to data of logical identifiers that are within a proximity window of another logical identifier in the logical address space 432 (e.g., the "distance" between the logical identifiers is less than (or equal to) a proximity threshold).
Pre-admission may comprise the cache admission module 444 determining access metrics of one or more proximate logical identifiers and admitting the data of those logical identifiers into the cache in response to their access metrics satisfying a pre-admission access threshold. In some embodiments, the pre-admission access threshold differs from the access threshold (e.g., is lower or higher than the access threshold). The pre-admission access threshold (and the access threshold described above) may be adapted according to the data access characteristics of the computing device 401 and/or the storage clients 412. For example, a storage client 412 may operate on relatively large, contiguous ranges of data. In response, the pre-admission access threshold may be set lower than the access threshold, biasing the cache admission module 444 toward pre-admitting contiguous data segments. Conversely, for cache workloads of storage clients 412 that access relatively small, discontiguous data segments, the pre-admission access threshold may be set equal to (or higher than) the access threshold.
In addition, the proximity window of the cache admission module 444 may be adjusted according to the access characteristics of the computing device 401 and/or the storage clients 412. A large proximity window may increase the number of pre-admission candidates, whereas a smaller proximity window may restrict the scope of pre-admission candidates. In some embodiments, the cache admission module 444 may apply a pre-admission access threshold that varies with the proximity of the pre-admission candidate; the cache admission module 444 may apply a lower pre-admission access threshold to highly proximate logical identifiers and a higher pre-admission access threshold to more distant logical identifiers.
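A rough sketch of such pre-admission follows (illustrative only; the function name, the access_metric callable, and the distance-dependent threshold scaling are assumptions, not the disclosed algorithm):

```python
# Minimal sketch: after a miss on `lid`, neighbors inside a proximity window
# are pre-admitted when their access metric meets a distance-dependent
# pre-admission threshold.
def preadmit_candidates(lid: int, window: int, access_metric, base_threshold: float):
    """`access_metric(lid)` is assumed to return the combined metric for a LID;
    the threshold grows with distance, favoring very close neighbors."""
    admitted = []
    for neighbor in range(lid - window, lid + window + 1):
        if neighbor == lid or neighbor < 0:
            continue
        distance = abs(neighbor - lid)
        threshold = base_threshold * (1 + distance / window)  # assumed scaling
        if access_metric(neighbor) >= threshold:
            admitted.append(neighbor)
    return admitted
```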
Fig. 12 is a flow diagram of one embodiment of a method 1200 for managing admission into a cache. At step 1210, the method 1200 starts and is initialized, as described above. Step 1210 may further comprise caching data on a non-volatile storage device using a storage layer and maintaining access metadata pertaining to data accesses within a logical address space, as described above.
Step 1220 may comprise receiving, at a cache layer (e.g., the cache layer 440), a request to admit data into the cache. The request of step 1220 may be received in response to a cache miss (e.g., a data access pertaining to data that is not stored (and/or not up-to-date) on the non-volatile storage media 410).
Step 1230 may comprise determining access metrics of one or more logical identifiers within a proximity window of the data. Step 1230 may comprise identifying logical identifiers within the proximity window of the data in the logical address space (e.g., the logical address space 432) and determining the access metric of each identified logical identifier, as described above. The proximity window may be adjusted according to the data access characteristics of the computing device and/or one or more storage clients, as described above.
Step 1240 may comprise determining whether the access metrics of the proximate logical identifiers satisfy a pre-admission access threshold. The pre-admission access threshold may be lower than, higher than, or equal to the cache admission access threshold described above. The pre-admission access threshold may be adjusted according to the data access characteristics of the computing device and/or one or more storage clients. In some embodiments, the pre-admission access threshold is dynamic (e.g., scales with the proximity of the logical identifier).
Logical identifiers whose access metrics satisfy the pre-admission threshold at step 1240 may be admitted into the cache at step 1250, as described above (e.g., the data of the logical identifiers may be stored on the non-volatile storage media 410). Logical identifiers that do not satisfy the pre-admission access threshold may not be pre-admitted into the cache. The flow ends at step 1260 until a next request to admit data into the cache is received.
Referring back to Fig. 4, in some embodiments, the cache layer 440 comprises a module 446 configured to identify data that is part of a sequential access. As used herein, a "sequential access" refers to data accesses that are sequential (or proximate) within the logical address space 432. As discussed above, sequential data accesses are often "single-use" accesses and, as such, may be unsuitable for caching (e.g., may poison the cache). Examples of sequential data accesses include, but are not limited to: data streaming, backup applications, virus scan applications, and so on.
The module 446 of the cache layer 440 may be configured to generate a sequentiality metric in response to a request to admit data into the cache (e.g., in response to a cache miss). The sequentiality metric may quantify the likelihood that the data is part of a sequential data access. The cache admission module 444 may use the sequentiality metric (along with the access metric described above) to determine whether to admit the data into the cache.
In some embodiments, the module 446 maintains access metadata comprising an ordered sequence of data accesses. Fig. 13 depicts one example of an ordered sequence of data accesses 1360 comprising a current data access 1362 and a window 1364 comprising a plurality of previous data accesses 1365A-N. The sequentiality metric of the current data access 1362 may be determined by comparing the logical identifier of the current data access 1362 with the logical identifiers of the data accesses 1365A-N. In some embodiments, the sequentiality metric comprises a binary sequentiality indicator that is set if the logical identifier of any data access within the window 1364 is within a predetermined proximity threshold of the logical identifier of the current data access 1362.
In some embodiments, the sequentiality metric comprises a multi-bit value that quantifies the likelihood that the current data access 1362 is part of a sequential data access. The sequentiality metric may be incremented in response to identifying a logical identifier within the window 1364 that is within the proximity threshold of the current data access 1362. The sequentiality metric may be incremented in proportion to the proximity between the logical identifiers (e.g., the closer the logical identifiers, the larger the increment). The sequentiality metric may remain unchanged (or be decremented) in response to logical identifiers in the window 1364 that fall outside the proximity threshold.
In some embodiments, the contribution of a data access 1365A-N to the sequentiality metric is weighted by its relative order within the window 1364 (e.g., the temporal proximity of the data access 1365A-N to the current data access 1362). For example, the contribution of the data access 1365A may be weighted more heavily than the contributions of earlier data accesses 1365B-N, and so on.
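The following sketch (illustrative only; the window size, proximity threshold, and recency weighting are assumed parameters, and the class name is an assumption) computes a sequentiality metric of this kind from a window of recent accesses:

```python
# Minimal sketch: a sequentiality metric computed from a window of recent
# logical-identifier accesses, weighted by proximity and recency.
from collections import deque

class SequentialityTracker:
    def __init__(self, window_size: int = 8, proximity: int = 4):
        self.window = deque(maxlen=window_size)  # most recent LIDs, newest first
        self.proximity = proximity

    def metric(self, lid: int) -> float:
        """Higher values suggest the access is part of a sequential stream."""
        score = 0.0
        for age, past_lid in enumerate(self.window):
            distance = abs(lid - past_lid)
            if distance <= self.proximity:
                closeness = (self.proximity - distance + 1) / self.proximity
                recency = 1.0 / (age + 1)        # newer accesses weigh more
                score += closeness * recency
        self.window.appendleft(lid)
        return score
```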
In some embodiments, the size of the window 1364 (and/or the sequentiality threshold) may be adjusted in response to user preferences, performance monitoring, and so on. The window may be adjusted according to one or more storage clients (e.g., databases, file systems, etc.), processor configuration (e.g., number of processor cores, number of concurrent threads, etc.), and the like.
Fig. 14 is a flow diagram of one embodiment of a method for managing cache admission using access metadata. At step 1410, the method 1400 may start and be initialized, as described above.
Step 1420 comprises receiving a request to admit data into the cache. The request of step 1420 may be received at the cache layer 440 in response to a data access that results in a cache miss.
Step 1430 may comprise determining a sequentiality metric of the access request. Step 1430 may comprise maintaining a window comprising an ordered sequence of data accesses (e.g., the ordered sequence 1364 maintained by the cache layer 440). The sequentiality metric may be determined by comparing the logical identifier of the data access with the logical identifiers of the data accesses in the window. As described above, a sequential data access may be identified, and/or the sequentiality metric incremented, in response to identifying a logical identifier within the window that is within the proximity threshold of the data access.
Step 1440 comprises determining whether the data access is part of a sequential data access. Step 1440 may thus comprise comparing the sequentiality metric of step 1430 to a sequentiality threshold (e.g., evaluating the sequentiality metric to determine the likelihood that the data access is a sequential data access). If step 1440 indicates that the data access is not part of a sequential data access, the flow continues to step 1450; otherwise, the flow may end at step 1460.
Step 1450 comprises admitting the data into the cache, which, as described above, may comprise storing the data on the non-volatile storage media 410 using the VSL 430.
Referring back to Fig. 4, in some embodiments, the cache admission module 444 uses both the access metric and the sequentiality metric to determine whether to admit data into the cache. For example, even if data is part of a sequential data access, the data may still be suitable for admission into the cache if storage clients access the data repeatedly (as indicated by the access metric of the data). Similarly, data that does not satisfy the access threshold may be admitted if its sequentiality metric indicates that the data is not part of a sequential data access.
In some embodiments, the cache admission module 444 uses one or more dynamic cache admission thresholds to make cache admission decisions. For example, data whose sequentiality metric indicates that it is part of a sequential data access may be required to satisfy a more stringent access threshold. Similarly, data having a low access metric (e.g., an access metric that does not satisfy the access threshold) may be required to satisfy a more stringent sequentiality threshold. In another example, data whose sequentiality metric indicates that it is not part of a sequential data access may be subject to a less stringent access threshold, and data having a high access metric that satisfies the access threshold may be subject to a less stringent sequentiality threshold.
Figure 15 depicts a plot 1500 of one example of dynamic cache admission criteria 1571 based on a sequentiality metric and an access metric. The plot 1500 comprises an access metric axis 1572 ranging from a low access metric to a high access metric, and a sequentiality metric axis 1574 ranging from a sequentiality metric indicating sequential access to a metric indicating non-sequential access. The sequentiality metric is considered because, as discussed above, data that is part of a sequential data access may pollute the cache, whereas data that is not part of a sequential access may be more suitable for cache admission. The dynamic admission criteria 1571 separate data that is suitable for admission into the cache (region 1575) from data that is not suitable for admission into the cache (region 1577). As shown in plot 1500, data with a high access metric may be admitted into the cache even though its sequentiality metric indicates a sequential data access (point 1581). Because the sequentiality metric of point 1581 indicates that the data is part of a sequential access, the data may be required to satisfy a higher access threshold for admission into the cache. For example, although the data of point 1582 has a relatively high access metric, the data of point 1582 may not be admitted into the cache because its access metric does not satisfy the stricter access threshold imposed due to its sequentiality metric. In another example, although the data of point 1583 has a relatively low access metric, the data of point 1583 may be admitted into the cache because its sequentiality metric indicates that the data is not part of a sequential access. Although the data of point 1584 has a favorable sequentiality metric, the data of point 1584 may not be admitted into the cache because its access metric does not satisfy the corresponding access threshold.
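The linear criteria 1571 can be read as a single weighted comparison of the two metrics; the following sketch, with assumed normalization to [0, 1] and assumed weights, is illustrative only:

```python
def admit(access_metric, sequentiality_metric, w_access=1.0, w_seq=1.0, boundary=1.0):
    """Linear dynamic admission criteria.  Both metrics are assumed normalized to
    [0, 1], with sequentiality_metric == 1.0 meaning strongly sequential.  With
    equal weights this reduces to access_metric >= sequentiality_metric, so data
    that looks more sequential must show a correspondingly higher access metric."""
    score = w_access * access_metric + w_seq * (1.0 - sequentiality_metric)
    return score >= boundary
```

Giving w_access a larger value than w_seq would tilt the boundary toward the access metric, roughly in the manner of the criteria 1573 described below.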
Although the dynamic admission criteria 1571 are depicted as linear, the disclosure is not limited in this regard and may be adapted to use other types of dynamic admission criteria, including parabolic, curved, exponential, and the like. Moreover, the disclosure is not limited to dynamic admission criteria that weight the sequentiality metric and the access metric equally.
Figure 15B is a plot 1501 depicting another example of dynamic admission criteria 1573. The dynamic admission criteria 1573 weight the access metric 1572 more heavily than the sequentiality metric 1574. As illustrated by point 1585, data with a relatively high access metric may be admitted into the cache with little regard to the sequentiality metric. Conversely, as illustrated by point 1586, data with a relatively low access metric may not be admitted even though its sequentiality metric indicates non-sequential access.
Figure 15C is a plot 1502 depicting another example of dynamic admission criteria comprising admission criteria 1591 and low-value admission criteria 1592. The admission criteria 1591 and 1592 may define an admission region 1575, a non-admission region 1577, and a low-value admission region 1578. Data whose access metric and/or sequentiality metric falls within the admission region 1575 may be admitted into the cache (e.g., the data of point 1587). Data that does not satisfy the admission criteria 1591 but satisfies the low-value admission criteria 1592 may be admitted as low-value data, as described above (e.g., the data of point 1588). Data that satisfies neither the criteria 1591 nor the criteria 1592 may not be admitted into the cache (e.g., the data of point 1589).
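Adding a second, more lenient boundary to the previous sketch yields the three regions of Figure 15C; the class names and threshold values below are assumptions for illustration:

```python
from enum import Enum

class Admission(Enum):
    ADMIT = "admit"           # region 1575
    ADMIT_LOW_VALUE = "low"   # region 1578: admitted, but evicted ahead of other data
    REJECT = "reject"         # region 1577

def classify(access_metric, sequentiality_metric,
             admit_boundary=1.0, low_value_boundary=0.7):
    """Three-way admission decision using two linear boundaries (criteria 1591/1592)."""
    score = access_metric + (1.0 - sequentiality_metric)
    if score >= admit_boundary:
        return Admission.ADMIT
    if score >= low_value_boundary:
        return Admission.ADMIT_LOW_VALUE
    return Admission.REJECT
```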
Figure 16 is a flow diagram of one embodiment of a method 1600 for managing admission into the cache. At step 1610, the method 1600 starts and is initialized. Step 1610 may further comprise caching data on a non-volatile storage media using a storage layer, and maintaining access metadata pertaining to data accesses within a logical address space, as described above.
Step 1620 may comprise receiving a request to admit data into the cache, as described above. Step 1630 may comprise using the access metadata to determine an access metric of the data and a sequentiality metric of the data, as described above.
Step 1640 may comprise determining whether the data is suitable for admission into the cache. Step 1640 may be implemented by the cache admission module 444 of the cache layer 440. Step 1640 may comprise comparing the access metric to an access threshold and/or comparing the sequentiality metric to a sequentiality threshold. The comparisons of step 1640 may be dynamic, depending on the values of the access metric and/or sequentiality metric determined at step 1630. As discussed above, data with a sufficiently high access metric may be admitted into the cache regardless of the sequentiality metric (and/or may be subject to a more lenient sequentiality threshold). Similarly, data whose sequentiality metric indicates that the data is not part of a sequential access may be admitted into the cache regardless of the access metric (and/or may be subject to a more lenient access threshold). The admission criteria of step 1640 may be adjusted according to the computing device and/or the access characteristics of one or more storage clients.
If the data satisfies the admission criteria of step 1640, the flow continues to step 1650, where the data is admitted into the cache as described above; otherwise, the flow ends at step 1660 until a next request to admit data into the cache is received.
The above description provides numerous specific details for a thorough understanding of the embodiments described herein. However, those skilled in the art will recognize that one or more of the specific details may be omitted, or that other methods, components, or materials may be used. In some cases, operations are not shown or described in detail.
Furthermore, the described features, operations, or characteristics may be combined in any suitable manner in one or more embodiments. It will also be readily understood that the order of the steps or actions of the methods described in connection with the disclosed embodiments may be changed, as will be apparent to those skilled in the art. Thus, any order in the drawings or the detailed description is for illustrative purposes only and is not meant to imply a required order, unless an order is explicitly required.
Embodiments may include various steps, which may be embodied in machine-executable instructions to be executed by a general-purpose or special-purpose computer (or other electronic device). Alternatively, the steps may be performed by hardware components that include specific logic for performing the steps, or by a combination of hardware, software, and/or firmware.
Embodiments may also be provided as a computer program product including a computer-readable storage medium having stored thereon instructions that may be used to program a computer (or other electronic device) to perform the processes described herein. The computer-readable storage medium may include, but is not limited to: hard drives, floppy diskettes, optical disks, CD-ROMs, DVD-ROMs, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, solid-state memory devices, or other types of media/machine-readable media suitable for storing electronic instructions.
As used herein, a software module or component may include any type of computer instruction or computer-executable code located within a memory device and/or computer-readable storage medium. A software module may, for instance, comprise one or more physical or logical blocks of computer instructions, which may be organized as a routine, program, object, component, data structure, etc., that performs one or more tasks or implements particular abstract data types.
In certain embodiments, a particular software module may comprise disparate instructions stored in different locations of a memory device, which together implement the described functionality of the module. Indeed, a module may comprise a single instruction or many instructions, and may be distributed over several different code segments, among different programs, and across several memory devices. Some embodiments may be practiced in a distributed computing environment, where tasks are performed by a remote processing device linked through a communications network. In a distributed computing environment, software modules may be located in local and/or remote memory storage devices. In addition, data being tied or rendered together in a database record may be resident in the same memory device, or across several memory devices, and may be linked together in fields of a record in a database across a network.
It will be understood by those skilled in the art that many changes may be made to the details of the above-described embodiments without departing from the underlying principles of the disclosure.

Claims (21)

1. A method for managing a cache of a non-volatile storage device, the method comprising:
caching data on the non-volatile storage device using an index maintained by a storage layer of the non-volatile storage device, the index associating logical identifiers of a logical address space corresponding to a backing store with physical storage locations of the non-volatile storage device;
maintaining access metadata, separate from the index, to indicate access characteristics of the logical identifiers in the logical address space;
updating an access metric of a logical identifier in the access metadata in response to a request to access data of the logical identifier; and
admitting data pertaining to the logical identifier into the cache in response to the access metric satisfying an access threshold.
2. The method of claim 1, wherein the access metadata indicates access characteristics of logical addresses of the entire logical address space, the logical addresses comprising logical addresses that do not correspond to data admitted into the cache.
3. The method of claim 2, wherein an access data structure comprises a plurality of entries, each entry indicating access characteristics of one or more logical identifiers during a respective time interval.
4. The method of claim 3, wherein each entry comprises one of:
a single bit of a bitmap, the single bit indicating an access to the one or more logical identifiers corresponding to the entry; and
a counter, the counter indicating accesses to the one or more logical identifiers corresponding to the entry.
5. The method of claim 3, wherein each entry indicates access characteristics of a corresponding plurality of logical identifiers, and wherein the number of logical identifiers corresponding to each entry is determined according to a user preference.
6. The method of claim 3, wherein each access data structure entry indicates accesses to a plurality of logical identifiers, the method further comprising mapping logical identifiers to access data structure entries using one of:
a hash mapping, wherein a hash function maps each of a plurality of logical identifiers to a corresponding access data structure entry;
a contiguous range mapping, wherein contiguous address ranges of logical identifiers are mapped to corresponding access data structure entries; and
a hybrid mapping, wherein a hash function maps each of a plurality of contiguous ranges of logical identifiers to a corresponding access data structure entry.
7. The method of claim 1, wherein the access metadata comprises an ordered set of access data structures, each access data structure tracking accesses to logical identifiers during a respective time interval, the ordered set of access data structures comprising a current access data structure and one or more previous access data structures, the method further comprising:
determining the access metric of the logical identifier by combining access data of the ordered set of access data structures.
8. The method of claim 1, wherein the access metadata comprises an ordered set of access data structures, each access data structure tracking accesses to logical identifiers during a respective time interval, the ordered set of access data structures comprising a current access data structure and one or more previous access data structures, the method further comprising:
updating the current access data structure to indicate a logical identifier access request;
generating a new access data structure in response to a time interval trigger, irrespective of indications of logical identifier access requests; and
determining the access metric of the logical identifier using the ordered set of access data structures.
9. The method of claim 1, further comprising:
determining a sequentiality metric of the request to access the logical identifier, wherein the sequentiality metric indicates a likelihood that the request is part of a sequential data access; and
admitting the data into the cache in response to the sequentiality metric indicating a non-sequential data access.
10. The method of claim 1, wherein maintaining the access metadata comprises maintaining a time-ordered record of access requests within the logical address space, the method further comprising:
determining a sequentiality metric of the logical identifier based on the time-ordered record of access requests, wherein the sequentiality metric indicates a likelihood that the access request is part of a sequential data access; and
admitting the data into the cache in response to one of:
a sequentiality metric indicating a non-sequential data access and an access metric satisfying the access threshold;
a sequentiality metric indicating a non-sequential data access and an access metric that does not satisfy the access threshold; and
a sequentiality metric indicating a sequential data access and an access metric satisfying the access threshold.
11. The method of claim 1, further comprising, in response to admitting the data into the cache:
accessing access metrics of proximate logical identifiers within a predetermined address range of the logical identifier in the logical address space; and
admitting data of the proximate logical identifiers in response to the access metrics of the proximate logical identifiers satisfying a proximate logical identifier access threshold of cache admission criteria.
12. The method of claim 1, further comprising:
admitting the data into the cache and, in response to an access metric that does not satisfy an access threshold of cache admission criteria but satisfies a second, lower access threshold of the cache admission criteria, associating the data with a low-value indicator; and
evicting the data from the cache ahead of other data in the cache based on the low-value indicator.
13. The method of claim 1, wherein a logical capacity of the logical address space exceeds a physical storage capacity of the non-volatile storage device.
14. An apparatus for managing admission of data into a cache, comprising:
a storage layer of a non-volatile storage device, the storage layer maintaining storage metadata pertaining to a backing store, the storage metadata comprising an index that associates logical identifiers of a logical address space corresponding to the backing store with physical storage locations of the non-volatile storage device;
a cache layer configured to update access metadata to indicate access requests to the logical identifiers in the logical address space, wherein the access metadata is separate from the storage metadata of the storage layer, and wherein the cache layer updates the access metadata in response to requests pertaining to data that is not cached on the non-volatile storage device; and
a cache admission module configured to determine an access metric of a logical identifier using the access metadata, and to admit data of the logical identifier into the cache in response to the access metric satisfying an access threshold.
15. The apparatus of claim 14, wherein the access metadata comprises an ordered set of access data structures, each access data structure tracking accesses to logical identifiers during a respective time interval, the ordered set of access data structures comprising a current access data structure and one or more previous access data structures, and wherein the cache layer is configured to:
update the current access data structure to indicate a logical identifier access request;
generate a new access data structure in response to a time interval trigger, irrespective of indications of logical identifier access requests; and
determine the access metric of the logical identifier using the ordered set of access data structures.
16. The apparatus of claim 15, wherein an access data structure comprises a plurality of entries, each entry indicating access characteristics of one or more logical identifiers during a respective time interval, and wherein each entry comprises one of:
a single bit of a bitmap, the single bit indicating an access to the one or more logical identifiers corresponding to the entry; and
a counter, the counter indicating accesses to the one or more logical identifiers corresponding to the entry.
17. The apparatus of claim 15, wherein an access data structure comprises a plurality of entries, each entry indicating access characteristics of one or more logical identifiers, wherein each entry indicates access characteristics of a corresponding plurality of logical identifiers, and wherein the cache layer is configured to map logical identifiers to entries using one of:
a hash mapping, wherein a hash function maps each of a plurality of logical identifiers to a corresponding access data structure entry;
a contiguous range mapping, wherein contiguous address ranges of logical identifiers are mapped to corresponding access data structure entries; and
a hybrid mapping, wherein a hash function maps each of a plurality of contiguous ranges of logical identifiers to a corresponding access data structure entry.
18. The apparatus of claim 14, wherein the cache admission module is configured to determine a sequentiality metric of a request to access the logical identifier, wherein the sequentiality metric indicates a likelihood that the request is part of a sequential data access, and wherein the cache admission module admits the data into the cache in response to the sequentiality metric indicating a non-sequential data access.
19. A computer-readable storage medium comprising instructions configured to cause a computing device to perform a method for managing a cache of a non-volatile storage device, the method comprising:
caching data on the non-volatile storage device using an index maintained by a storage layer of the non-volatile storage device, the index associating logical identifiers of a logical address space corresponding to a backing store with physical storage locations of the non-volatile storage device;
maintaining access metadata that is separate from the index and that comprises entries indicating access characteristics of the logical identifiers in the logical address space, the logical identifiers comprising logical identifiers corresponding to data that is not cached on the non-volatile storage device;
updating an entry of the access metadata to indicate a data access to a logical identifier in response to a request to access data of the logical identifier;
determining an access metric of the logical identifier using the entry of the logical identifier in the access metadata; and
admitting data pertaining to the logical identifier into the cache in response to the access metric satisfying an access threshold.
20. The computer-readable storage medium of claim 19, wherein the access metadata comprises an ordered set of access data structures, each access data structure comprising a plurality of entries, each entry comprising access characteristics of one or more logical identifiers during a respective time interval, the ordered set of access data structures comprising a current access data structure and one or more previous access data structures, the method further comprising:
updating the current access data structure to indicate a logical identifier access request;
generating a new access data structure in response to a time interval trigger, irrespective of indications of logical identifier access requests; and
determining the access metric of the logical identifier using the ordered set of access data structures, wherein determining the access metric comprises weighting each of a plurality of entries according to a recency of the entry.
21. The computer-readable storage medium of claim 20, the method further comprising:
determining a sequentiality metric of the logical identifier based on a time-ordered record of access requests, wherein the sequentiality metric indicates a likelihood that the access request is part of a sequential data access; and
admitting the data into the cache in response to one of:
a sequentiality metric indicating a non-sequential data access and an access metric satisfying the access threshold;
a sequentiality metric indicating a non-sequential data access and an access metric that does not satisfy the access threshold; and
a sequentiality metric indicating a sequential data access and an access metric satisfying the access threshold.
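For readers who want a concrete picture of the access metadata arrangement recited in claims 3-8 and 20 (per-interval entries, a hash mapping of logical identifiers to entries, and a recency-weighted combination across an ordered set of structures), the sketch below may help; the bitmap width, history depth, and decay factor are illustrative assumptions, not the claimed implementation:

```python
class AccessMetadata:
    """Ordered set of per-interval access bitmaps, newest first."""

    def __init__(self, num_entries=1 << 16, history=4):
        self.num_entries = num_entries
        self.history = history
        self.structures = [0]        # each structure is an int used as a bitmap

    def _entry(self, lid):
        # Hash mapping: many logical identifiers may share one entry (cf. claim 6).
        return hash(lid) % self.num_entries

    def record_access(self, lid):
        # Only the current access data structure is updated (cf. claims 7-8).
        self.structures[0] |= 1 << self._entry(lid)

    def roll_interval(self):
        # Time-interval trigger: start a new current structure, retire the oldest.
        self.structures.insert(0, 0)
        del self.structures[self.history:]

    def access_metric(self, lid, decay=0.5):
        # Combine the ordered structures, weighting entries by recency (cf. claim 20).
        bit = 1 << self._entry(lid)
        weight, metric = 1.0, 0.0
        for bitmap in self.structures:
            if bitmap & bit:
                metric += weight
            weight *= decay
        return metric
```

Replacing each bitmap bit with a small counter would correspond to the counter alternative of claim 4.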
CN201280071235.9A 2012-01-12 2012-01-12 Systems and methods for managing cache admission Active CN104303162B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2012/021094 WO2013105960A1 (en) 2012-01-12 2012-01-12 Systems and methods for managing cache admission

Publications (2)

Publication Number Publication Date
CN104303162A true CN104303162A (en) 2015-01-21
CN104303162B CN104303162B (en) 2018-03-27

Family

ID=48781765

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201280071235.9A Active CN104303162B (en) Systems and methods for managing cache admission

Country Status (3)

Country Link
EP (1) EP2802991B1 (en)
CN (1) CN104303162B (en)
WO (1) WO2013105960A1 (en)

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107111461A (en) * 2015-02-03 2017-08-29 高通股份有限公司 Bandwidth of memory is provided using back-to-back read operation in the system based on CPU (CPU) by compressed Memory Controller (CMC) to compress
CN107729142A (en) * 2017-09-29 2018-02-23 郑州云海信息技术有限公司 A kind of thread dispatching method for simplifying metadata certainly
CN107943718A (en) * 2017-12-07 2018-04-20 网宿科技股份有限公司 A kind of method and apparatus for clearing up cache file
CN108183835A (en) * 2017-12-08 2018-06-19 中国航空工业集团公司成都飞机设计研究所 A kind of military 1394 bus data integrality monitoring method of distributed system
CN108228088A (en) * 2016-12-21 2018-06-29 伊姆西Ip控股有限责任公司 For managing the method and apparatus of storage system
CN108733585A (en) * 2017-04-17 2018-11-02 伊姆西Ip控股有限责任公司 Caching system and correlation technique
CN109144894A (en) * 2018-08-01 2019-01-04 浙江大学 Memory access patterns guard method based on data redundancy
CN109213450A (en) * 2018-09-10 2019-01-15 郑州云海信息技术有限公司 A kind of associated metadata delet method, device and equipment based on flash array
CN109213699A (en) * 2018-09-21 2019-01-15 郑州云海信息技术有限公司 A kind of metadata management method, system, equipment and computer readable storage medium
CN109885550A (en) * 2018-12-28 2019-06-14 安徽维德工业自动化有限公司 A kind of document storage system based on full connection routing layer
CN110389904A (en) * 2018-04-20 2019-10-29 北京忆恒创源科技有限公司 The storage equipment of FTL table with compression
CN110428359A (en) * 2019-08-09 2019-11-08 南京地平线机器人技术有限公司 Device and method for handling regions of interest data
CN111260025A (en) * 2016-12-30 2020-06-09 上海寒武纪信息科技有限公司 Apparatus and method for performing LSTM neural network operations
CN111324557A (en) * 2018-12-17 2020-06-23 爱思开海力士有限公司 Data storage device, method of operating the same, and storage system including the same
CN111367827A (en) * 2018-12-26 2020-07-03 爱思开海力士有限公司 Memory system and operating method thereof
CN111552434A (en) * 2019-02-10 2020-08-18 慧与发展有限责任合伙企业 Securing a memory device
CN112148217A (en) * 2020-09-11 2020-12-29 北京浪潮数据技术有限公司 Caching method, device and medium for deduplication metadata of full flash storage system
TWI716918B (en) * 2019-06-27 2021-01-21 旺宏電子股份有限公司 Electronic device, memory device and method of reading memory data thereof
CN112639746A (en) * 2018-09-07 2021-04-09 株式会社东芝 Database device, program, and data processing method
CN112860594A (en) * 2021-01-21 2021-05-28 华中科技大学 Solid-state disk address remapping method and device and solid-state disk
CN113360858A (en) * 2020-03-04 2021-09-07 武汉斗鱼网络科技有限公司 Method and system for processing function switch data
CN113569508A (en) * 2021-09-18 2021-10-29 芯行纪科技有限公司 Database model construction method and device for data indexing and access based on ID
CN114201109A (en) * 2020-09-18 2022-03-18 慧与发展有限责任合伙企业 Tracking changes to a storage volume during data transfers

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150046772A1 (en) * 2013-08-06 2015-02-12 Sandisk Technologies Inc. Method and device for error correcting code (ecc) error handling
US10223208B2 (en) 2013-08-13 2019-03-05 Sandisk Technologies Llc Annotated atomic write
US10037149B2 (en) * 2016-06-17 2018-07-31 Seagate Technology Llc Read cache management
CN110554833B (en) * 2018-05-31 2023-09-19 北京忆芯科技有限公司 Parallel processing IO commands in a memory device
DE102020103229B4 (en) 2019-02-10 2023-10-05 Hewlett Packard Enterprise Development Lp BACKING UP A STORAGE DEVICE
US11442654B2 (en) * 2020-10-15 2022-09-13 Microsoft Technology Licensing, Llc Managing and ranking memory resources
CN115037610B (en) * 2022-04-24 2023-09-22 浙江清捷智能科技有限公司 Automatic configuration system and automatic configuration method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030046493A1 (en) * 2001-08-31 2003-03-06 Coulson Richard L. Hardware updated metadata for non-volatile mass storage cache
US20110258391A1 (en) * 2007-12-06 2011-10-20 Fusion-Io, Inc. Apparatus, system, and method for destaging cached data

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7934074B2 (en) * 1999-08-04 2011-04-26 Super Talent Electronics Flash module with plane-interleaved sequential writes to restricted-write flash chips
US7010645B2 (en) * 2002-12-27 2006-03-07 International Business Machines Corporation System and method for sequentially staging received data to a write cache in advance of storing the received data
JP2005115857A (en) * 2003-10-10 2005-04-28 Sony Corp File storage device
US7885921B2 (en) * 2004-11-18 2011-02-08 International Business Machines Corporation Managing atomic updates on metadata tracks in a storage system
US7533215B2 (en) * 2005-09-15 2009-05-12 Intel Corporation Distributed and packed metadata structure for disk cache
KR20090087119A (en) * 2006-12-06 2009-08-14 퓨전 멀티시스템즈, 인크.(디비에이 퓨전-아이오) Apparatus, system, and method for managing data in a storage device with an empty data token directive

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030046493A1 (en) * 2001-08-31 2003-03-06 Coulson Richard L. Hardware updated metadata for non-volatile mass storage cache
US20110258391A1 (en) * 2007-12-06 2011-10-20 Fusion-Io, Inc. Apparatus, system, and method for destaging cached data

Cited By (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107111461A (en) * 2015-02-03 2017-08-29 高通股份有限公司 Bandwidth of memory is provided using back-to-back read operation in the system based on CPU (CPU) by compressed Memory Controller (CMC) to compress
CN108228088A (en) * 2016-12-21 2018-06-29 伊姆西Ip控股有限责任公司 For managing the method and apparatus of storage system
CN108228088B (en) * 2016-12-21 2020-10-23 伊姆西Ip控股有限责任公司 Method and apparatus for managing storage system
CN111260025A (en) * 2016-12-30 2020-06-09 上海寒武纪信息科技有限公司 Apparatus and method for performing LSTM neural network operations
CN111260025B (en) * 2016-12-30 2023-11-14 上海寒武纪信息科技有限公司 Apparatus and method for performing LSTM neural network operation
CN108733585A (en) * 2017-04-17 2018-11-02 伊姆西Ip控股有限责任公司 Caching system and correlation technique
CN107729142A (en) * 2017-09-29 2018-02-23 郑州云海信息技术有限公司 A kind of thread dispatching method for simplifying metadata certainly
CN107943718A (en) * 2017-12-07 2018-04-20 网宿科技股份有限公司 A kind of method and apparatus for clearing up cache file
CN108183835A (en) * 2017-12-08 2018-06-19 中国航空工业集团公司成都飞机设计研究所 A kind of military 1394 bus data integrality monitoring method of distributed system
CN108183835B (en) * 2017-12-08 2021-05-07 中国航空工业集团公司成都飞机设计研究所 Military 1394 bus data integrity monitoring method for distributed system
CN110389904A (en) * 2018-04-20 2019-10-29 北京忆恒创源科技有限公司 The storage equipment of FTL table with compression
CN109144894A (en) * 2018-08-01 2019-01-04 浙江大学 Memory access patterns guard method based on data redundancy
CN109144894B (en) * 2018-08-01 2023-04-07 浙江大学 Memory access mode protection method based on data redundancy
CN112639746A (en) * 2018-09-07 2021-04-09 株式会社东芝 Database device, program, and data processing method
CN109213450B (en) * 2018-09-10 2021-08-31 郑州云海信息技术有限公司 Associated metadata deleting method, device and equipment based on flash memory array
CN109213450A (en) * 2018-09-10 2019-01-15 郑州云海信息技术有限公司 A kind of associated metadata delet method, device and equipment based on flash array
CN109213699B (en) * 2018-09-21 2021-10-29 郑州云海信息技术有限公司 Metadata management method, system, equipment and computer readable storage medium
CN109213699A (en) * 2018-09-21 2019-01-15 郑州云海信息技术有限公司 A kind of metadata management method, system, equipment and computer readable storage medium
CN111324557B (en) * 2018-12-17 2023-03-21 爱思开海力士有限公司 Data storage device, method of operating the same, and storage system including the same
CN111324557A (en) * 2018-12-17 2020-06-23 爱思开海力士有限公司 Data storage device, method of operating the same, and storage system including the same
CN111367827A (en) * 2018-12-26 2020-07-03 爱思开海力士有限公司 Memory system and operating method thereof
CN111367827B (en) * 2018-12-26 2023-03-03 爱思开海力士有限公司 Memory system and operating method thereof
CN109885550B (en) * 2018-12-28 2022-09-13 安徽维德工业自动化有限公司 File storage system based on all-connected routing layer
CN109885550A (en) * 2018-12-28 2019-06-14 安徽维德工业自动化有限公司 A kind of document storage system based on full connection routing layer
CN111552434A (en) * 2019-02-10 2020-08-18 慧与发展有限责任合伙企业 Securing a memory device
CN111552434B (en) * 2019-02-10 2023-01-06 慧与发展有限责任合伙企业 Method for protecting memory device of computing system, computing system and storage medium
TWI716918B (en) * 2019-06-27 2021-01-21 旺宏電子股份有限公司 Electronic device, memory device and method of reading memory data thereof
CN110428359B (en) * 2019-08-09 2022-12-06 南京地平线机器人技术有限公司 Apparatus and method for processing region of interest data
CN110428359A (en) * 2019-08-09 2019-11-08 南京地平线机器人技术有限公司 Device and method for handling regions of interest data
CN113360858A (en) * 2020-03-04 2021-09-07 武汉斗鱼网络科技有限公司 Method and system for processing function switch data
CN112148217A (en) * 2020-09-11 2020-12-29 北京浪潮数据技术有限公司 Caching method, device and medium for deduplication metadata of full flash storage system
CN112148217B (en) * 2020-09-11 2023-12-22 北京浪潮数据技术有限公司 Method, device and medium for caching deduplication metadata of full flash memory system
CN114201109A (en) * 2020-09-18 2022-03-18 慧与发展有限责任合伙企业 Tracking changes to a storage volume during data transfers
CN112860594A (en) * 2021-01-21 2021-05-28 华中科技大学 Solid-state disk address remapping method and device and solid-state disk
CN113569508B (en) * 2021-09-18 2021-12-10 芯行纪科技有限公司 Database model construction method and device for data indexing and access based on ID
US11500828B1 (en) 2021-09-18 2022-11-15 X-Times Design Automation Co., LTD Method and device for constructing database model with ID-based data indexing-enabled data accessing
CN113569508A (en) * 2021-09-18 2021-10-29 芯行纪科技有限公司 Database model construction method and device for data indexing and access based on ID

Also Published As

Publication number Publication date
WO2013105960A1 (en) 2013-07-18
EP2802991B1 (en) 2020-05-06
EP2802991A4 (en) 2015-09-23
EP2802991A1 (en) 2014-11-19
CN104303162B (en) 2018-03-27

Similar Documents

Publication Publication Date Title
CN104303162B (en) Systems and methods for managing cache admission
CN101636712B (en) The device of object requests, system and method is served in memory controller
CN102598019B (en) For equipment, the system and method for memory allocated
US9075710B2 (en) Non-volatile key-value store
US20190073296A1 (en) Systems and Methods for Persistent Address Space Management
EP2598996B1 (en) Apparatus, system, and method for conditional and atomic storage operations
CN101622594B (en) Apparatus, system, and method for managing data in a request device with an empty data token directive
US8898376B2 (en) Apparatus, system, and method for grouping data stored on an array of solid-state storage elements
US8725934B2 (en) Methods and appratuses for atomic storage operations
US8782344B2 (en) Systems and methods for managing cache admission
US20130205114A1 (en) Object-based memory storage
US20140297929A1 (en) Non-volatile memory interface
CN109992530A (en) A kind of solid state drive equipment and the data read-write method based on the solid state drive
CN102460371A (en) Flash-based data archive storage system
CN102124527A (en) Apparatus, system, and method for detecting and replacing failed data storage
CN102890621A (en) Apparatus, system and method for determining a configuration parameter for solid-state storage media
JP2010512584A5 (en)
CN103098034B (en) The apparatus and method of operation are stored for condition and atom

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C41 Transfer of patent application or patent right or utility model
TA01 Transfer of patent application right

Effective date of registration: 20160701

Address after: Luxemburg, Grand Duchy of Luxemburg

Applicant after: Longitude Business Flash Memory Co.

Address before: Luxemburg, Grand Duchy of Luxemburg

Applicant before: PS12 Lukesike Co.

Effective date of registration: 20160701

Address after: Luxemburg, Grand Duchy of Luxemburg

Applicant after: PS12 Lukesike Co.

Address before: Delaware

Applicant before: Intellectual property holding company (2)

C41 Transfer of patent application or patent right or utility model
CB02 Change of applicant information

Address after: Texas, USA

Applicant after: SANDISK TECHNOLOGIES LLC

Address before: Texas, USA

Applicant before: SANDISK TECHNOLOGIES Inc.

COR Change of bibliographic data
TA01 Transfer of patent application right

Effective date of registration: 20160718

Address after: Texas, USA

Applicant after: SANDISK TECHNOLOGIES Inc.

Address before: Luxemburg, Grand Duchy of Luxemburg

Applicant before: Longitude Business Flash Memory Co.

GR01 Patent grant
GR01 Patent grant