CN111752866A - Virtual parity data caching for storage devices

Info

Publication number
CN111752866A
Authority
CN
China
Prior art keywords
data cache, unit, virtual, check data, stream
Legal status
Pending
Application number
CN201910246121.7A
Other languages
Chinese (zh)
Inventor
邵蔚然
刘玉进
Current Assignee
Beijing Memblaze Technology Co Ltd
Original Assignee
Beijing Memblaze Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Memblaze Technology Co Ltd
Priority to CN201910246121.7A
Publication of CN111752866A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10Address translation
    • G06F12/1027Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
    • G06F12/1045Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] associated with a data cache
    • G06F12/1063Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] associated with a data cache the data cache being concurrently virtually addressed

Abstract

The application discloses a method for providing a virtual check data cache. The method comprises: obtaining a request to allocate a unit of the virtual check data cache; obtaining an available first unit of the virtual check data cache; obtaining an available first unit of the check data cache, and recording the association between the obtained first unit of the virtual check data cache and the first unit of the check data cache; and responding to the request to allocate a unit of the virtual check data cache with the first unit of the virtual check data cache and/or the first unit of the check data cache.

Description

Virtual parity data caching for storage devices
Technical Field
The present application relates to storage technology, and in particular, to virtual parity data caching for storage devices.
Background
FIG. 1 illustrates a block diagram of a solid-state storage device. The solid-state storage device 102 is coupled to a host to provide storage capabilities to the host. The host and the solid-state storage device 102 may be coupled in various ways, including but not limited to SATA (Serial Advanced Technology Attachment), SCSI (Small Computer System Interface), SAS (Serial Attached SCSI), IDE (Integrated Drive Electronics), USB (Universal Serial Bus), PCIe (Peripheral Component Interconnect Express), NVMe (NVM Express), Ethernet, Fibre Channel, or a wireless communication network. The host may be an information processing device capable of communicating with the storage device in the manner described above, such as a personal computer, tablet, server, portable computer, network switch, router, cellular telephone, or personal digital assistant. The storage device 102 includes an interface 103, a control unit 104, one or more NVM chips 105, and a DRAM (Dynamic Random Access Memory) 110.
Common NVMs include NAND flash memory, phase change memory, FeRAM (Ferroelectric RAM), MRAM (Magnetoresistive RAM), RRAM (Resistive Random Access Memory), and XPoint memory.
The interface 103 may be adapted to exchange data with a host by means such as SATA, IDE, USB, PCIE, NVMe, SAS, ethernet, fibre channel, etc.
The control unit 104 is used to control data transfer between the interface 103, the NVM chips 105, and the DRAM 110, and is also used for memory management, mapping of host logical addresses to flash physical addresses, wear leveling, bad block management, and the like. The control unit 104 can be implemented in software, hardware, firmware, or a combination thereof; for example, the control unit 104 can be in the form of an FPGA (Field-Programmable Gate Array), an ASIC (Application-Specific Integrated Circuit), or a combination thereof. The control unit 104 may also include a processor or controller in which software is executed to manipulate the hardware of the control unit 104 to process IO (Input/Output) commands. The control unit 104 may also be coupled to the DRAM 110 and may access data in the DRAM 110. FTL tables and/or cached data of IO commands may be stored in the DRAM.
Control section 104 includes a flash interface controller (otherwise known as a media interface, a media interface controller, a flash channel controller) that is coupled to NVM chip 105 and issues commands to NVM chip 105 in a manner that conforms to the interface protocol of NVM chip 105 to operate NVM chip 105 and receive command execution results output from NVM chip 105. Known NVM chip interface protocols include "Toggle", "ONFI", etc.
In the storage device, mapping information from logical addresses to physical addresses is maintained by a Flash Translation Layer (FTL). The logical addresses constitute the storage space of the solid-state storage device as perceived by upper-level software such as an operating system. The physical addresses are addresses used to access physical storage locations of the solid-state storage device. In the related art, address mapping may also be implemented using an intermediate address form: a logical address is mapped to an intermediate address, which in turn is further mapped to a physical address. In these cases, the read/write commands received by the storage device indicate logical addresses.
A table structure storing mapping information from logical addresses to physical addresses is called an FTL table. FTL tables are important metadata in solid state storage devices. Typically, entries of the FTL table record address mapping relationships in units of data pages in the storage device.
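As an illustrative sketch only (not part of the original application), such a page-granularity FTL table can be pictured as a flat array indexed by logical page number; the 4 KB page granularity, the entry width, and the helper names below are assumptions.

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical FTL sketch: one entry per logical data page, each entry
 * holding the physical address of that page. UNMAPPED marks logical
 * pages that have never been written. */
#define UNMAPPED UINT32_MAX

typedef struct {
    uint32_t *entries;      /* entries[lpn] = physical page address */
    uint32_t  page_count;   /* number of logical pages */
} ftl_table;

static ftl_table *ftl_create(uint32_t page_count)
{
    ftl_table *t = malloc(sizeof(*t));
    t->entries = malloc(sizeof(uint32_t) * page_count);
    for (uint32_t i = 0; i < page_count; i++)
        t->entries[i] = UNMAPPED;
    t->page_count = page_count;
    return t;
}

/* Record a new mapping when a logical page is (re)written. */
static void ftl_update(ftl_table *t, uint32_t lpn, uint32_t ppa)
{
    t->entries[lpn] = ppa;
}

/* Translate a logical page number, e.g. for a read command. */
static uint32_t ftl_lookup(const ftl_table *t, uint32_t lpn)
{
    return t->entries[lpn];
}

int main(void)
{
    ftl_table *t = ftl_create(1024);
    ftl_update(t, 7, 0x00ABCD);   /* logical page 7 -> assumed physical address */
    return ftl_lookup(t, 7) == 0x00ABCD ? 0 : 1;
}
```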
In some storage devices, the FTL is provided by the host to which the storage device is coupled: the FTL table is stored in the host's memory, and the FTL is provided by the host's CPU executing software. In still other cases, a storage management device disposed between the host and the storage device provides the FTL. In these cases, the read/write commands received by the storage device indicate physical addresses.
A command provided by the host to the storage device may access a logical address space corresponding to one or more entries of the FTL table. The control unit may transform commands received from the interface 103 (e.g., split a command according to the logical address space size corresponding to an FTL entry) and process the transformed commands.
The storage device includes a plurality of NVM chips. Each NVM chip includes one or more dies (DIE) or logical units (LUN). Dies or logical units can respond to read and write operations in parallel, whereas multiple read, write, or erase operations addressed to the same die or logical unit are performed sequentially.
Fig. 2 shows a schematic diagram of a large block. A large block includes physical blocks from each of a plurality of logical units; preferably, each logical unit provides one physical block for the large block. As an example, a large block is constructed over every 16 logical units (LUNs), so that each large block includes 16 physical blocks, one from each of the 16 logical units. In the example of FIG. 2, large block 0 includes physical block 0 from each of the 16 logical units, and large block 1 includes physical block 1 from each logical unit. Large blocks may also be constructed in many other ways.
As an alternative, page stripes are constructed within a large block: the physical pages having the same physical address within each logical unit (LUN) constitute a "page stripe". In FIG. 2, physical pages P0-0, P0-1, ..., and P0-x form page stripe 0, in which physical pages P0-0 through P0-14 are used to store user data and physical page P0-x is used to store parity data computed from all user data within the stripe. Similarly, physical pages P2-0, P2-1, ..., and P2-x constitute page stripe 2. Alternatively, the physical page used to store parity data may be located anywhere in the page stripe.
To write data to a page stripe, the control unit 104 (see FIG. 1) of the solid-state storage device provides a check data generation unit. Taking the calculation of parity data with an exclusive-OR operation as an example, for a page stripe including N+1 (N = 15) physical pages, the exclusive-OR of the user data of the N physical pages is calculated (e.g., (P0-0) XOR (P0-1) XOR (P0-2) XOR ... XOR (P0-(N-1))), and the result is written to the physical page of the page stripe that stores parity data (e.g., P0-x). Optionally, a plurality (e.g., M) of check data generation units are provided in the control unit 104 so that data can be written to M page stripes simultaneously. The check data generation unit includes a check data cache used to store intermediate or final results of the parity calculation. A check data calculator (i.e., a check data generation unit) is provided in Chinese patent application No. 201710326110.0, entitled "Data organization of page stripes and method and apparatus for writing data to page stripes".
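The exclusive-OR calculation described above can be sketched as follows; the 15+1 stripe geometry follows FIG. 2, while the page size and the in-memory buffer handling are simplifying assumptions (an actual check data generation unit accumulates into its check data cache rather than into a plain array).

```c
#include <stdint.h>
#include <string.h>

#define STRIPE_USER_PAGES 15      /* N user pages per page stripe (from FIG. 2) */
#define PAGE_SIZE         4096    /* assumed physical page size */

/* Accumulate one more user page into the running XOR parity. Calling this
 * once per user page P0-0 .. P0-14 leaves the parity page P0-x in 'parity'. */
static void xor_accumulate(uint8_t parity[PAGE_SIZE], const uint8_t page[PAGE_SIZE])
{
    for (size_t i = 0; i < PAGE_SIZE; i++)
        parity[i] ^= page[i];
}

int main(void)
{
    static uint8_t user_pages[STRIPE_USER_PAGES][PAGE_SIZE];
    static uint8_t parity[PAGE_SIZE];

    memset(parity, 0, sizeof(parity));          /* start from all-zero parity */
    for (int p = 0; p < STRIPE_USER_PAGES; p++)
        xor_accumulate(parity, user_pages[p]);  /* (P0-0) XOR (P0-1) XOR ... */

    /* 'parity' would now be written to the parity page of the stripe (e.g. P0-x). */
    return 0;
}
```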
Disclosure of Invention
According to a first aspect of the present application, there is provided a method for providing a virtual parity data cache according to the first aspect of the present application, including: acquiring a request for allocating a unit of virtual check data cache; acquiring a first unit of an available virtual check data cache; acquiring a first unit of an available check data cache, and recording the association relationship between the acquired first unit of the virtual check data cache and the first unit of the check data cache; the request to allocate the unit of the virtual check data cache is responded with the first unit of the virtual check data cache and/or the first unit of the check data cache.
According to the first method for providing a virtual check data cache of the first aspect of the present application, there is provided the second method for providing a virtual check data cache of the first aspect of the present application, wherein a virtual check data cache table is used to record the association between units of the virtual check data cache and units of the check data cache; the virtual check data cache table comprises a plurality of entries, and each entry records, in association, the state of the unit of the virtual check data cache identified by its index and the index of the unit of the check data cache or the external data cache unit associated with it.
According to the first or second method of providing a virtual parity data cache of the first aspect of the present application, there is provided the third method of providing a virtual parity data cache of the first aspect of the present application, wherein the states of a unit of the virtual parity data cache include unallocated, allocated and in use, and/or allocated and suspended; a virtual parity cache unit in the unallocated state may be allocated to process write commands; a virtual parity cache unit in the allocated-and-in-use or allocated-and-suspended state may no longer be allocated to process write commands.
According to one of the first to third methods for providing a virtual parity data cache in the first aspect of the present application, a fourth method for providing a virtual parity data cache in the first aspect of the present application is provided, where an available unit of the parity data cache is obtained by checking a data cache table, where the parity data cache table records whether each unit of the parity data cache has an associated unit of the virtual parity data cache.
According to one of the first to fourth methods of providing a virtual parity data cache of the first aspect of the present application, there is provided the fifth method of providing a virtual parity data cache of the first aspect of the present application, further comprising, in response to there being no available unit of the parity data cache, moving a second unit of data of the parity data cache that has been used to an external data cache so that the second unit of the parity data cache becomes available.
According to a third method for providing a virtual parity data cache of the first aspect of the present application, a sixth method for providing a virtual parity data cache of the first aspect of the present application is provided, further comprising, in response to the virtual parity data cache table indicating that the first unit of the virtual parity data cache is in an allocated and suspended state, further acquiring a third unit of an available parity data cache, moving the data of its corresponding external data cache unit to the third unit of the available parity data cache, and providing the index of the first unit of the virtual parity data cache and/or the index of the third unit of the parity data cache to the parity data calculation request unit.
According to a sixth method for providing a virtual parity data cache according to the first aspect of the present application, there is provided the seventh method for providing a virtual parity data cache according to the first aspect of the present application, further updating the virtual parity data cache table to record an index of a third unit of the virtual parity data cache that is allocated and in use and is associated with the available parity data cache.
According to one of the first to seventh methods of providing a virtual parity data cache of the first aspect of the present application, there is provided the eighth method of providing a virtual parity data cache of the first aspect of the present application, further comprising requesting calculation of parity data using an index of the first unit of the virtual parity data cache or an index of the first unit of the parity data cache.
According to an eighth method for providing a virtual check data cache of the first aspect of the present application, there is provided the ninth method for providing a virtual check data cache of the first aspect of the present application, further comprising obtaining data to be written, calculating check data for the data to be written using the first unit of the check data cache, and writing the data to be written to a storage medium.
According to an eighth method for providing a virtual parity data cache according to the first aspect of the present application, there is provided a tenth method for providing a virtual parity data cache according to the first aspect of the present application, wherein an index of a first unit of the virtual parity data cache is used to obtain, from the virtual parity data cache table, an index of the first unit of the parity data cache associated with the index of the first unit of the virtual parity data cache.
According to the tenth method for providing a virtual parity data cache of the first aspect of the present application, there is provided the eleventh method for providing a virtual parity data cache of the first aspect of the present application, further comprising, if the virtual parity data cache table indicates that the first unit of the virtual parity data cache is in the allocated and suspended state, further acquiring a fourth unit of the available parity data cache, moving data of the external data cache unit corresponding to the first unit of the virtual parity data cache to the fourth unit of the parity data cache, and requesting calculation of parity data using the index of the fourth unit of the parity data cache.
According to one of the eighth to eleventh methods of providing a virtual parity data cache of the first aspect of the present application, there is provided the twelfth method of providing a virtual parity data cache of the first aspect of the present application, wherein data is moved between a unit of the parity data cache and an external data cache in response to obtaining the data move request message.
According to one of the eighth to eleventh methods of providing a virtual parity data cache of the first aspect of the present application, there is provided the thirteenth method of providing a virtual parity data cache of the first aspect of the present application, in response to the request to obtain calculation parity data, calculating parity data for data to be written using a unit of the parity data cache.
According to one of the first to thirteenth methods for providing a virtual parity data cache in the first aspect of the present application, there is provided a method for providing a virtual parity data cache in the fourteenth aspect of the present application, where a stream ID is further indicated in a request for allocating a unit of the virtual parity data cache, and the stream ID identifies a stream of the multi-stream storage device.
According to a fourteenth method of providing a virtual parity data cache in accordance with the first aspect of the present application, there is provided a method of providing a virtual parity data cache in accordance with the fifteenth aspect of the present application, wherein an entry of the virtual parity data cache table records a stream ID in association with an index of a unit of the virtual parity data cache to indicate an index of one or more units of the virtual parity cache for each stream.
According to the fourteenth or fifteenth method for providing a virtual parity data cache of the first aspect of the present application, there is provided the sixteenth method for providing a virtual parity data cache of the first aspect of the present application, wherein when an index of a unit of an available virtual parity data cache is obtained, a same unit index of the virtual parity data cache is allocated to a request for allocating a unit of the virtual parity data cache having a same stream ID.
According to one of the fourteenth to sixteenth methods of providing a virtual parity data cache according to the first aspect of the present application, there is provided the method of providing a virtual parity data cache according to the seventeenth aspect of the present application, wherein when allocating an index of a unit of the virtual parity data cache to a request for allocating the unit of the virtual parity data cache having a stream ID, the index of the allocated unit of the virtual parity data cache is further recorded in the virtual parity data cache table in association with the stream ID.
According to one of the fourteenth to seventeenth methods of providing a virtual parity data cache of the first aspect of the present application, there is provided the eighteenth method of providing a virtual parity data cache of the first aspect of the present application, wherein the stream ID and the index of the unit of the virtual parity data cache are in a relationship of 1 to N, and N is an integer greater than 1.
According to one of the fourteenth to eighteenth methods of providing a virtual parity data cache of the first aspect of the present application, there is provided the nineteenth method of providing a virtual parity data cache of the first aspect of the present application, wherein an index of an associated unit of the parity data cache and a stream ID are acquired from a received message, data to be written having the same stream ID is acquired, and parity data is calculated for the data to be written having that stream ID using the unit of the parity data cache.
According to one of the fourteenth to nineteenth methods of providing a virtual parity data cache in the first aspect of the present application, there is provided the method of providing a virtual parity data cache in the twentieth aspect of the present application, wherein the request for allocating a unit of the virtual parity data cache generated for each write command includes a stream ID associated with the write command, and an index of a unit of the virtual parity data cache corresponding to and available from the stream ID and/or an index of a unit of the available parity data cache are/is obtained in response to the request for allocating the unit of the virtual parity data cache.
According to one of the fourteenth to twentieth methods of providing a virtual parity data cache according to the first aspect of the present application, there is provided the twenty-first method of providing a virtual parity data cache according to the first aspect of the present application, wherein the request for allocating a unit of the virtual parity data cache generated for each write command further includes a context of the write command.
According to a twenty-first method of providing a virtual parity data cache according to the first aspect of the present application, there is provided a twenty-second method of providing a virtual parity data cache according to the first aspect of the present application, calculating parity data for data corresponding to the write command using a unit of the parity data cache corresponding to an index of the unit of the virtual parity data cache, wherein the unit of the virtual parity data cache and the write command are associated with the same stream ID.
According to one of the first to twenty second methods of providing a virtual check data cache of the first aspect of the present application, there is provided the twenty third method of providing a virtual check data cache of the first aspect of the present application, further maintaining a stream ID mapping table in which a correspondence relationship between a stream ID and an index of a unit of the virtual check data cache is associatively recorded.
A twenty-third method of providing a virtual parity data cache according to the first aspect of the present application provides a twenty-fourth method of providing a virtual parity data cache according to the first aspect of the present application, wherein in the stream ID mapping table, a stream ID is associated to an index of a unit of the one or more virtual parity data caches.
According to a twenty-third or twenty-fourth aspect of the present application, there is provided a method of providing a virtual parity data cache according to the twenty-fifth aspect of the present application, accessing a stream ID mapping table, identifying an index of a unit of the virtual parity data cache associated with a stream ID of a write command, and calculating parity data for data corresponding to the write command using the unit of the parity data cache corresponding to the index of the unit of the virtual parity data cache.
According to a twenty-fifth aspect of the present application, there is provided a method for providing a virtual parity data cache according to the twenty-sixth aspect of the present application, where if a stream ID of a write command is not recorded in a stream ID mapping table, an index of a unit of an available virtual parity data cache is further specified for the stream ID, and the index is recorded in the stream ID mapping table.
According to one of the twenty-third to twenty-sixth methods of providing a virtual parity data cache of the first aspect of the present application, there is provided a method of providing a virtual parity data cache of the twenty-seventh aspect of the present application, calculating parity data using one or more parity data cache units associated with the same stream ID for data to be written having the same stream ID.
According to one of the first to twenty-seventh methods of providing a virtual parity data cache of the first aspect of the present application, there is provided the twenty-eighth method of providing a virtual parity data cache of the first aspect of the present application, wherein, if a stream ID is acquired from a request for allocating a unit of the virtual parity data cache and there is an allocated-and-in-use unit of the virtual parity data cache whose index is already associated with that stream ID, the index of that unit of the virtual parity data cache is preferentially selected to respond to the request message; or an index of a unit of the virtual check data cache that is not associated with a stream ID is selected.
According to one of the first to twenty-seventh methods of providing a virtual parity data cache of the first aspect of the present application, there is provided the twenty-ninth method of providing a virtual parity data cache of the first aspect of the present application, wherein, if for a request for allocating a unit of the virtual parity data cache there is no index of a unit of the virtual parity data cache already associated with its stream ID, an index of a unit of the virtual parity data cache in the unallocated state is preferentially allocated to it, or an index of a unit of the virtual parity data cache not associated with any stream ID is allocated to it.
According to the twenty-eighth or twenty-ninth method of providing a virtual parity data cache of the first aspect of the present application, there is provided the thirtieth method of providing a virtual parity data cache of the first aspect of the present application, wherein an index of a unit of the parity data cache and a write command context are obtained from a received message, and parity data is calculated, using that unit of the parity data cache, for the data to be written by the write command.
According to one of the twenty-eighth to thirtieth methods of providing a virtual parity data cache of the first aspect of the present application, there is provided the thirty-first method of providing a virtual parity data cache of the first aspect of the present application, wherein, in response to completion of the calculation of the parity data of the corresponding page stripe using an index of a unit of the virtual parity data cache, the index of that unit of the virtual parity data cache is further released.
According to one of the twenty-eighth to thirty-first methods of providing a virtual parity data cache of the first aspect of the present application, there is provided a thirty-second method of providing a virtual parity data cache of the first aspect of the present application, wherein the virtual parity data cache table is updated to record that a unit of the released virtual parity data cache is in an unused state.
According to one of the methods of providing a virtual parity data cache according to the first to thirty-second aspects of the present application, there is provided a method of providing a virtual parity data cache according to the thirty-third aspect of the present application, providing a virtual parity data cache table for each stream ID.
According to one of the methods of providing a virtual parity data cache according to the first to thirty-third aspects of the present application, there is provided a method of providing a virtual parity data cache according to the thirty-fourth aspect of the present application, wherein an index of a unit of the virtual parity data cache is allocated for a stream ID for standby even if a pending write command has not occurred.
According to one of the thirty-third or thirty-fourth methods of providing a virtual parity data cache of the first aspect of the present application, there is provided a method of providing a virtual parity data cache of the first aspect of the present application, wherein the parity data cache table includes a plurality of entries, and each entry records, in association, an index of a unit of the parity data cache, a status thereof, and an index of a unit of the virtual parity data cache.
According to a thirty-fifth aspect of the present application, there is provided a method for providing a virtual parity data cache according to the thirty-sixth aspect of the present application, wherein the state of the index of the unit of the parity data cache includes occupied and unoccupied; for the index of the unit of the check data cache in the occupied state, the unit of the check data cache is changed into an unoccupied state by moving the data of the check data cache unit corresponding to the index of the unit of the check data cache to an external data cache; an index of a location of the check data cache in an unoccupied state may be assigned to respond to a write command.
According to a thirty-fifth or thirty-sixth aspect of the present application, there is provided a method for providing a virtual parity data cache according to a thirty-seventh aspect of the present application, wherein a second virtual parity data cache table is further maintained to record an index of a unit of the parity data cache or an index of an external data unit corresponding to an index of a unit of each virtual parity data cache.
According to one of the thirty-third to thirty-seventh methods of providing a virtual parity data cache of the first aspect of the present application, there is provided a method of providing a virtual parity data cache according to the thirty-eighth aspect of the present application, acquiring an index of a unit of an available parity data cache and a write command context, acquiring an index of a unit of the virtual parity data cache corresponding to a stream ID according to the stream ID of the received write command context, and calculating parity data for the write command context using the unit of the parity data cache.
According to one of the thirty-third to thirty-eighth methods of providing a virtual parity data cache of the first aspect of the present application, there is provided a thirty-ninth method of providing a virtual parity data cache of the first aspect of the present application, wherein the amount of data written using the index of the unit of each virtual parity data cache is counted to identify when parity data corresponding to the index of the unit of the virtual parity data cache is calculated.
According to one of the thirty-third to thirty-ninth methods of providing a virtual parity data cache of the first aspect of the present application, there is provided the method of providing a virtual parity data cache of the first aspect of the present application, obtaining a stream ID from an acquired message, and assigning an index of a unit of the parity data cache associated with the stream ID to the message.
According to one of the thirty-third to forty-fourth methods of providing a virtual parity data cache of the first aspect of the present application, there is provided the forty-first method of providing a virtual parity data cache of the first aspect of the present application, by finding an index of a unit of the virtual parity data cache associated with a stream ID in a parity data cache table, and providing the index of the unit of the parity data cache associated with the index of the unit of the virtual parity data cache to a parity data calculation requesting unit together with a write command context.
According to one of the thirty-third to forty-fourth methods of providing a virtual parity data cache of the first aspect of the present application, there is provided a forty-second method of providing a virtual parity data cache of the first aspect of the present application, in response to a failure to find an index of a unit of the virtual parity data cache associated with a stream ID, obtaining an index of a unit of the parity data cache in an unoccupied state and providing the index to a parity data calculation request unit.
According to a forty-second method for providing a virtual parity data cache of the first aspect of the present application, there is provided a forty-third method for providing a virtual parity data cache of the first aspect of the present application, in response to an index of a unit of the parity data cache that is not found in an unoccupied state, the unit of the parity data cache is made to become an unoccupied state by moving data of a parity data cache unit corresponding to the index of the unit of the parity data cache to an external data cache.
According to one of the forty-first to forty-third methods of providing a virtual check data cache in the first aspect of the present application, a method of providing a virtual check data cache in the forty-fourth aspect of the present application is provided, where a check data calculation request unit obtains an index of a unit of the virtual check data cache corresponding to a stream ID according to the stream ID of a received write command context, and calculates check data for the write command context using the unit of the check data cache.
According to a second aspect of the present application, there is provided a first method of processing a write command according to the second aspect of the present application, comprising: acquiring a write command; according to the stream ID of the write command, acquiring an index of a unit of the check data cache related to the stream ID; calculating check data for the data to be written in the write command by using a check data cache unit corresponding to the index of the check data cache unit; and writing the data to be written and/or the check data in the check cache into the storage medium.
According to the first method of processing a write command of the second aspect of the present application, there is provided the second method of processing a write command of the second aspect of the present application, wherein an index of a unit of the virtual check data cache is bound to the stream ID.
According to the first or second method of processing a write command of the second aspect of the present application, there is provided the third method of processing a write command of the second aspect of the present application by acquiring an index of a unit of the virtual check data cache associated with the stream ID and acquiring an index of a unit of the check data cache associated with the index of the unit of the virtual check data cache as an index of a unit of the check data cache associated with the stream ID.
According to a third method of processing a write command of the second aspect of the present application, there is provided the fourth method of processing a write command of the second aspect of the present application, wherein if the index of the unit of the virtual check data cache associated with the stream ID does not have the associated unit of the check data cache, the unit of the check data cache in the unoccupied state is obtained, and the index of the unit of the check data cache in the unoccupied state is used as the index of the unit of the check data cache associated with the stream ID.
According to a fourth method of processing a write command of the second aspect of the present application, there is provided the fifth method of processing a write command of the second aspect of the present application, wherein if there is no element of the check data cache in an unoccupied state, the element of the check data cache is changed to the unoccupied state by moving the data of the check data cache element to the external data cache.
According to a fourth method of processing a write command of the second aspect of the present application, there is provided the sixth method of processing a write command of the second aspect of the present application, in response to assigning an index of a unit of the parity data buffer to the write command, the parity data buffer assigning unit further updates the parity data buffer table to record therein that the index of the unit of the assigned parity data buffer is associated with an index of a unit of the virtual parity data buffer associated with the stream ID of the write command.
According to a third aspect of the present application, there is provided a first method of processing a write command according to the third aspect of the present application, comprising: acquiring a write command; according to the stream ID of the write command, acquiring an index of a unit of the virtual check data cache associated with the stream ID; acquiring an available unit of the check data cache; calculating check data for the data to be written by the write command using the acquired unit of the check data cache; and writing the data to be written and/or the check data in the check data cache to the storage medium.
According to the first method for processing a write command of the third aspect of the present application, there is provided the second method for processing a write command of the third aspect of the present application, wherein the obtained available unit of the check data cache is a unit of the check data cache associated with an index of the unit of the virtual check data cache.
According to the second method of processing a write command of the third aspect of the present application, there is provided the third method of processing a write command of the third aspect of the present application, wherein, if there is no unit of the check data cache associated with the index of the unit of the virtual check data cache, a unit of the check data cache is further selected, and the check data associated with the index of the unit of the virtual check data cache is moved to the selected unit of the check data cache, so that the selected unit of the check data cache becomes the available unit of the check data cache.
According to one of the first to third methods of processing a write command of the third aspect of the present application, there is provided the fourth method of processing a write command of the third aspect of the present application, further comprising: the index of the unit of the virtual check data cache is bound to the stream ID.
According to one of the first to fourth methods of processing a write command of the third aspect of the present application, there is provided the fifth method of processing a write command of the third aspect of the present application, further comprising: counting the amount of data of write commands corresponding to the index of the unit of the virtual check data cache associated with the stream ID, and releasing the index of the unit of the virtual check data cache associated with the stream ID if the amount of data reaches a threshold.
According to a fourth aspect of the present application, there is provided a first information processing apparatus according to the fourth aspect of the present application, comprising a memory, a processor, and a program stored on the memory and executable on the processor, the processor implementing one of the methods for an information processing apparatus according to the first, second, or third aspect of the present application when executing the program.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description illustrate only some embodiments described in the present application, and other drawings can be obtained by those skilled in the art from these drawings.
FIG. 1 illustrates a block diagram of a solid-state storage device;
FIG. 2 shows a schematic diagram of a large block;
FIG. 3 illustrates a block diagram of a portion of a storage device associated with the check data cache according to an embodiment of the present application;
FIG. 4 illustrates a block diagram of a portion of a storage device and associated check data cache, according to yet another embodiment of the present application;
FIG. 5 illustrates a block diagram of a portion of a storage device associated with a check data cache, according to another embodiment of the present application;
FIG. 6 illustrates a flow diagram for using check data caching according to an embodiment of the present application;
FIG. 7 illustrates a block diagram of a portion of a storage device and associated check data cache, according to yet another embodiment of the present application;
FIG. 8 illustrates a block diagram of a storage device and associated portion of a check data cache, according to yet another embodiment of the present application;
FIG. 9 illustrates a flow diagram for using check data caching according to yet another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
FIG. 3 illustrates a block diagram of a portion of a storage device associated with a check data cache, according to an embodiment of the present application.
The control unit of the storage device receives IO commands sent by the host to the storage device. For a write command, the data corresponding to the write command is written to the NVM chip via the media interface 380. After the control unit performs a series of processes on the write command, the data to be written to the NVM chip and an address (physical address) indicating the storage location of that data in the NVM chip are obtained. It will be appreciated that the control unit may split a single write command into multiple sub-write commands. For simplicity, write commands and sub-write commands are collectively referred to as write commands.
To calculate the check data for the data corresponding to the write command, an available check data cache needs to be obtained. The virtual parity data request unit 310 requests allocation of a virtual parity data buffer. The virtual parity data request unit 310 generates a virtual parity data cache request message and adds it to the message queue 312. Optionally, the generated virtual parity data cache request message includes a context of the write command.
The virtual check data buffer allocation unit 320 obtains the message from the message queue 312 and accesses the virtual check data buffer table 350 to obtain the available virtual check data buffer. The virtual check data cache is identified by a virtual check data cache ID.
Virtual parity data cache table 350 includes a plurality of entries, each entry associatively recording the state of one unit of the virtual parity data cache, identified by its virtual parity data cache ID, and its associated parity data cache ID or external data cache unit. The state of the virtual parity data cache indicates whether the corresponding virtual parity cache (or virtual parity cache ID) is available. The states of the virtual parity cache include unallocated, allocated and in use, and/or allocated and suspended. A virtual parity cache in the unallocated state may be allocated to process write commands. A virtual parity cache in the allocated-and-in-use or allocated-and-suspended state may no longer be allocated to process write commands. A virtual parity cache that is allocated and in use is associated with a corresponding parity data cache unit (identified by a parity data cache ID), and a virtual parity cache that is allocated and suspended is associated with an external data cache unit. The external data cache is a memory outside the control unit, used, for example, for temporarily storing data of the parity data cache.
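One possible in-memory layout for the virtual parity data cache table 350 described above is sketched below; the field names, widths, and table size are illustrative assumptions rather than the application's concrete format.

```c
#include <stdint.h>

/* States of a virtual check data cache unit, per the description above. */
typedef enum {
    VPC_UNALLOCATED = 0,        /* may be allocated to process write commands      */
    VPC_ALLOCATED_IN_USE,       /* bound to a unit of the (real) check data cache  */
    VPC_ALLOCATED_SUSPENDED     /* its data was moved out to the external data cache */
} vpc_state;

/* One entry of the virtual check data cache table (table 350). */
typedef struct {
    vpc_state state;
    union {
        uint16_t parity_cache_id;   /* valid when ALLOCATED_IN_USE   */
        uint32_t external_unit;     /* valid when ALLOCATED_SUSPENDED */
    } loc;
} vpc_table_entry;

/* The table is indexed by virtual check data cache ID. */
#define VPC_TABLE_SIZE 64           /* assumed number of virtual units */
static vpc_table_entry vpc_table[VPC_TABLE_SIZE];
```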
The check data generation unit 370 includes a check data cache 376. Check data cache 376 includes a plurality of units (378, 379), each identified by a check data cache ID. The check data generation unit 370 can move the data of a unit of the check data cache 376 to the external data cache 340, and can also move data from a unit of the external data cache 340 into the check data cache 376.
It will be appreciated that data structures other than the virtual parity data cache table 350 may be used to record the virtual parity data cache ID, its state, and its associated parity data cache ID or external data cache unit. A pool of available virtual check data cache IDs may also be used to record which virtual check data cache IDs are available.
The virtual check data cache allocation unit 320, in response to obtaining a virtual check data cache request message from the message queue 312, accesses the virtual check data cache table 350 to find an available virtual check data cache ID. The virtual check data cache allocation unit 320 also acquires an available check data cache ID (representing an available unit of the check data cache), records the found available virtual check data cache ID and the acquired available check data cache ID in the virtual check data cache table 350, and updates the state of the virtual check data cache ID to allocated and in use. The available virtual check cache ID is provided to the check data calculation requesting unit 360 as the processing result of the virtual check data cache request message. Optionally, the virtual check data cache allocation unit 320 further provides the acquired available check data cache ID together with the available virtual check cache ID to the check data calculation request unit 360. In this way, the parity data generation process for a page stripe is maintained using the virtual parity cache ID, while the parity data is calculated by operating the parity data generation unit 370 using the parity data cache ID.
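A simplified sketch of this allocation path (find an unallocated virtual ID, find a free check data cache unit, record the association, and answer the requester) follows; the table sizes, structure names, and simple linear scans are assumptions, and the eviction case is omitted here.

```c
#include <stdbool.h>

#define NUM_VIRTUAL_UNITS 64      /* assumed table sizes */
#define NUM_PARITY_UNITS  16

typedef enum { VPC_UNALLOCATED, VPC_IN_USE, VPC_SUSPENDED } vpc_state;

typedef struct {
    vpc_state state;
    int parity_cache_id;          /* -1 when not bound to a real cache unit */
} vpc_entry;

static vpc_entry vpc_table[NUM_VIRTUAL_UNITS];
static bool parity_unit_busy[NUM_PARITY_UNITS];   /* stands in for the check data cache table */

/* Handle one "allocate a unit of the virtual check data cache" request.
 * On success, *vid and *pid are returned to the check data calculation
 * request unit; both IDs identify the same logical parity buffer. */
static bool allocate_virtual_parity_cache(int *vid, int *pid)
{
    int v = -1, p = -1;

    for (int i = 0; i < NUM_VIRTUAL_UNITS; i++)      /* find an unallocated virtual ID */
        if (vpc_table[i].state == VPC_UNALLOCATED) { v = i; break; }
    for (int i = 0; i < NUM_PARITY_UNITS; i++)       /* find a free check data cache unit */
        if (!parity_unit_busy[i]) { p = i; break; }
    if (v < 0 || p < 0)
        return false;   /* a real design would evict to the external data cache here */

    vpc_table[v].state = VPC_IN_USE;                  /* record the association */
    vpc_table[v].parity_cache_id = p;
    parity_unit_busy[p] = true;
    *vid = v;
    *pid = p;
    return true;
}

int main(void)
{
    int vid, pid;
    return allocate_virtual_parity_cache(&vid, &pid) ? 0 : 1;
}
```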
Optionally, a check data cache table is further maintained, which records whether each unit of the check data cache has an associated virtual check data cache ID. A unit of the check data cache that has no associated virtual check data cache ID is available. It will be appreciated that other data structures may be used to record the usage status of each unit of the check data cache, or that the available check data cache ID may be obtained by excluding the units recorded in the virtual check data cache table 350 from all units of the check data cache.
In some cases, there is no available unit of the check data cache. The virtual parity data cache allocation unit 320 then moves the data of a unit of the parity data cache 376 that is in use (e.g., unit 378) to the external data cache 340 so that parity data cache unit 378 becomes available. Optionally, the virtual parity data cache allocation unit 320 sends to the parity data generation unit 370 a data move request indicating the parity data cache unit 378 to be moved and the unit of the external data cache 340 that is the destination of the move. The virtual parity data cache table 350 is also updated to replace the index of parity data cache unit 378 with the unit of the external data cache 340, and the state of the virtual parity data cache ID associated with parity data cache unit 378 is updated to allocated and suspended.
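The eviction case just described can be sketched as follows; move_to_external() stands in for the data move request sent to the check data generation unit, and the data structures repeat the assumptions of the previous sketches.

```c
#include <stdbool.h>

#define NUM_VIRTUAL_UNITS 64
#define NUM_PARITY_UNITS  16

typedef enum { VPC_UNALLOCATED, VPC_IN_USE, VPC_SUSPENDED } vpc_state;

typedef struct {
    vpc_state state;
    int parity_cache_id;   /* valid in VPC_IN_USE    */
    int external_unit;     /* valid in VPC_SUSPENDED */
} vpc_entry;

static vpc_entry vpc_table[NUM_VIRTUAL_UNITS];
static bool parity_unit_busy[NUM_PARITY_UNITS];

/* Stand-in for sending a data move request to the check data generation unit;
 * returns the external data cache unit that now holds the data. */
static int move_to_external(int parity_cache_id)
{
    (void)parity_cache_id;
    static int next_external_unit = 0;
    return next_external_unit++;
}

/* Free up parity cache unit 'pid', which is currently used by virtual ID 'vid'. */
static void suspend_virtual_unit(int vid, int pid)
{
    vpc_table[vid].external_unit = move_to_external(pid);  /* data leaves the on-chip cache */
    vpc_table[vid].state = VPC_SUSPENDED;                  /* allocated and suspended */
    vpc_table[vid].parity_cache_id = -1;
    parity_unit_busy[pid] = false;                         /* pid can now serve another request */
}

int main(void)
{
    vpc_table[3].state = VPC_IN_USE;
    vpc_table[3].parity_cache_id = 0;
    parity_unit_busy[0] = true;
    suspend_virtual_unit(3, 0);
    return parity_unit_busy[0] ? 1 : 0;
}
```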
In an alternative embodiment, the virtual check data cache ID indicated in a virtual check data cache request message is obtained from the message queue 312; the virtual check data cache allocation unit 320 obtains, through the virtual check data cache table 350, the check data cache ID corresponding to that virtual check data cache ID, and provides the virtual check data cache ID and/or the check data cache ID to the check data calculation request unit 360. If the virtual check data cache table 350 indicates that the virtual check data cache ID is in the allocated and suspended state, an available check data cache unit is also obtained, the data of the external data cache unit corresponding to the virtual check data cache ID is moved to that available check data cache unit, and the virtual check data cache ID and/or the check data cache ID are provided to the check data calculation request unit 360. The virtual check data cache table 350 is also updated to record that the virtual check data cache ID is in the allocated and in-use state and associated with the available check data cache unit.
The check data calculation requesting unit 360 instructs the check data generating unit 370 to calculate the check data using the virtual check data buffer ID or the check data buffer ID.
In one embodiment, the check data calculation requesting unit 360 uses the check data cache ID. The check data cache ID is supplied to the check data generation unit 370 in a calculate-check-data request message, which also indicates the address of the data to be written. The check data generation unit 370 obtains the data to be written, calculates check data for it using the check data cache unit indicated by the check data cache ID, and writes the data to be written to the NVM storage medium through the media interface 380.
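The calculate-check-data request of this paragraph can be pictured as a small message carrying the check data cache ID and the location of the data to be written; the message layout and handler below are illustrative assumptions only (writing the data to the NVM through the media interface is omitted).

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_SIZE        4096      /* assumed data page size                */
#define NUM_PARITY_UNITS 16        /* assumed size of the check data cache  */

/* Units of the check data cache inside the check data generation unit. */
static uint8_t parity_cache[NUM_PARITY_UNITS][PAGE_SIZE];

/* A calculate-check-data request message (layout is an assumption). */
typedef struct {
    int            parity_cache_id;  /* which cache unit accumulates the parity */
    const uint8_t *data;             /* data to be written to the NVM           */
    size_t         len;
} calc_parity_request;

/* Handler inside the check data generation unit: XOR the write data into
 * the selected cache unit; forwarding the data to the media interface is
 * omitted from this sketch. */
static void handle_calc_parity(const calc_parity_request *req)
{
    uint8_t *unit = parity_cache[req->parity_cache_id];
    for (size_t i = 0; i < req->len && i < PAGE_SIZE; i++)
        unit[i] ^= req->data[i];
}

int main(void)
{
    static const uint8_t page[PAGE_SIZE] = { 0xA5 };
    calc_parity_request req = { .parity_cache_id = 2, .data = page, .len = PAGE_SIZE };
    handle_calc_parity(&req);
    return 0;
}
```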
In yet another embodiment, the parity data calculation requesting unit 360 uses a virtual parity data cache ID. The check data cache ID corresponding to the virtual check data cache ID is acquired, for example, from the virtual check data cache table 350, and that check data cache ID is added to the calculate-check-data request message provided to the check data generation unit 370. If the virtual check data cache table 350 indicates that the virtual check data cache ID is in the allocated and suspended state, an available check data cache unit is further obtained, the data of the external data cache unit corresponding to the virtual check data cache ID is moved to that available check data cache unit, and the check data cache ID indicating the available check data cache unit is added to the calculate-check-data request message provided to the check data generation unit 370. The virtual check data cache table 350 is also updated to record that the virtual check data cache ID is in the allocated and in-use state and associated with that check data cache ID.
Optionally, the check data generation unit 370 includes message queues (372, 374). Message queue 372 is used to receive data move request messages and message queue 374 is used to receive calculate-check-data requests. In response to obtaining a data move request message from message queue 372, data is moved between the units (378, 379) of the check data cache and the external data cache 340. In response to obtaining a calculate-check-data request from message queue 374, check data is calculated for the data to be written using a unit of the check data cache.
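The two message queues suggest a simple service loop inside the check data generation unit, sketched below with single-slot queues and stubbed-out work; a real design would use hardware FIFOs and the actual move/XOR logic.

```c
#include <stdbool.h>

/* Minimal single-slot "queues"; stand-ins for message queues 372 and 374. */
typedef struct { bool pending; int parity_cache_id; int external_unit; } move_msg;
typedef struct { bool pending; int parity_cache_id; /* + data address */ } calc_msg;

static move_msg move_queue;   /* queue 372: data move requests            */
static calc_msg calc_queue;   /* queue 374: calculate-check-data requests */

static void do_move(int parity_cache_id, int external_unit) { (void)parity_cache_id; (void)external_unit; }
static void do_calc(int parity_cache_id)                    { (void)parity_cache_id; }

/* One service pass of the check data generation unit. */
static void service_queues(void)
{
    if (move_queue.pending) {      /* move data between cache unit and external cache */
        do_move(move_queue.parity_cache_id, move_queue.external_unit);
        move_queue.pending = false;
    }
    if (calc_queue.pending) {      /* accumulate parity for data to be written */
        do_calc(calc_queue.parity_cache_id);
        calc_queue.pending = false;
    }
}

int main(void)
{
    calc_queue = (calc_msg){ .pending = true, .parity_cache_id = 1 };
    service_queues();
    return calc_queue.pending ? 1 : 0;
}
```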
FIG. 4 illustrates a block diagram of a portion of a storage device associated with a check data cache, according to yet another embodiment of the present application.
In contrast to the embodiment of fig. 3, according to the embodiment of fig. 4, the stream ID is also indicated in the virtual check data cache request message generated by the virtual check data request unit 410. The stream ID identifies a stream of the multi-stream storage device. Each entry of virtual parity data cache table 450 also records a stream ID in association with the virtual parity data cache ID to indicate one or more virtual parity cache IDs for the respective stream.
According to the embodiment of fig. 4, the virtual parity data buffer allocation unit 420 attempts to allocate the same virtual parity data buffer ID for the same stream ID when allocating the virtual parity data buffer ID in response to the virtual parity data buffer request message. When the virtual check data buffer ID in the unallocated state is allocated to the stream ID, the allocated virtual check data buffer ID is also recorded in the virtual check data buffer table 450 in association with the stream ID.
Optionally, the stream ID and the virtual check data cache ID are in a 1 to N relationship, where N is an integer greater than 1. Thereby assigning multiple virtual check data cache IDs to the same stream.
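Under this scheme, the stream-to-virtual-cache relationship can be kept as a short per-stream list, as in the sketch below; the number of streams, the bound N, and the reuse policy (always reuse the first bound ID) are assumptions for illustration.

```c
#include <stdbool.h>

#define NUM_STREAMS        8    /* assumed number of streams                 */
#define MAX_VPC_PER_STREAM 4    /* N: virtual cache IDs allowed per stream   */
#define NUM_VIRTUAL_UNITS  64

/* Per-stream list of virtual check data cache IDs (the 1-to-N relationship). */
static int  stream_vpc[NUM_STREAMS][MAX_VPC_PER_STREAM];
static int  stream_vpc_count[NUM_STREAMS];
static bool vpc_allocated[NUM_VIRTUAL_UNITS];

/* Return a virtual cache ID for 'stream_id': reuse one already bound to the
 * stream if possible, otherwise bind a fresh unallocated ID to the stream. */
static int vpc_for_stream(int stream_id)
{
    if (stream_vpc_count[stream_id] > 0)
        return stream_vpc[stream_id][0];           /* same stream -> same virtual ID */

    for (int v = 0; v < NUM_VIRTUAL_UNITS; v++) {
        if (!vpc_allocated[v] && stream_vpc_count[stream_id] < MAX_VPC_PER_STREAM) {
            vpc_allocated[v] = true;
            stream_vpc[stream_id][stream_vpc_count[stream_id]++] = v;
            return v;
        }
    }
    return -1;   /* no virtual cache ID available */
}

int main(void)
{
    int a = vpc_for_stream(3);
    int b = vpc_for_stream(3);   /* requests with the same stream ID share an ID */
    return (a == b && a >= 0) ? 0 : 1;
}
```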
Further, the message provided by the virtual check data cache allocation unit 420 to the check data calculation request unit 460 also indicates the stream ID. The check data calculation requesting unit 460 thus obtains the check data buffer ID and the stream ID from the message indicated by the virtual check data buffer allocation unit 420, and obtains the data to be written having the same stream ID, and instructs the check data generating unit to calculate the check data for the data to be written having the stream ID using the check data buffer ID.
Data from the same stream is thereby also placed into the same large block.
In one embodiment, the virtual parity data request unit 410 allocates a virtual parity data buffer for each write command request. The virtual parity data request unit 410 generates a virtual parity data cache request message for each write command including the stream ID associated with the write command.
The virtual check data buffer allocation unit 420 obtains the message from the message queue 412, accesses the virtual check data buffer table 450, and obtains the virtual check data buffer ID corresponding to the stream ID and available, and the ID of the available check data buffer.
Optionally, the virtual check data buffer allocation unit 420 further includes a context of a write command in the message acquired from the message queue 412, and provides the context of the write command, the stream ID, the available virtual check data buffer ID, and the available check data buffer ID to the check data calculation request unit 460, so that the check data calculation request unit 460 acquires data to be written of the write command according to the received write command context, and instructs the check data generation unit to calculate check data for the data to be written using the check data buffer ID specified in the message.
In yet another embodiment, the virtual parity data request unit 410 requests allocation of a virtual parity data cache to be used for a set of write commands. The virtual parity data cache request message includes the stream ID associated with the set of write commands, and each write command is provided to the check data calculation request unit 460. The virtual check data cache allocation unit 420 obtains an available virtual check data cache ID and an available check data cache ID according to the stream ID, and provides the stream ID, the available virtual check data cache ID, and the available check data cache ID to the check data calculation request unit 460. In response to a received write command and the received available virtual check data cache ID (and available check data cache ID), the check data calculation request unit 460 instructs the check data generation unit 370 to calculate check data for the data corresponding to the write command using the check data cache unit corresponding to the virtual check data cache ID, where the virtual check data cache ID and the write command are associated with the same stream ID.
FIG. 5 illustrates a block diagram of a portion of a storage device associated with a check data cache, according to another embodiment of the present application.
The embodiment of FIG. 5 is the same as, or substantially the same as, the embodiment of FIG. 3. In contrast to the embodiment of FIG. 3, according to the embodiment of FIG. 5, a stream ID mapping table 464 is also maintained, in which stream IDs are recorded in association with virtual check data cache IDs. Optionally, in the stream ID mapping table 464, a stream ID is associated with one or more virtual check data cache IDs.
The check data calculation request unit 465 acquires the virtual check data cache ID, and optionally the check data cache ID, provided to it by the virtual check data cache allocation unit 320. The check data calculation request unit 465 also acquires, from the message queue 462, the write command carrying the stream ID. The check data calculation request unit 465 accesses the stream ID mapping table 464, identifies the virtual check data cache ID associated with the stream ID of the write command, and instructs the check data generation unit 370 to calculate check data for the data corresponding to the write command using the check data cache unit corresponding to that virtual check data cache ID. If the stream ID of a write command retrieved from the message queue 462 is not recorded in the stream ID mapping table 464, an available virtual check data cache ID is assigned to that stream ID and recorded in the stream ID mapping table 464.
In this way, the check data calculation request unit 465 calculates check data for data to be written having the same stream ID using the one or more check data cache units associated with that stream ID.
Optionally, the stream ID mapping table 464 records available virtual check data cache IDs (and/or available check data cache IDs). According to the stream ID indicated by a write command, the check data calculation request unit 465 allocates an available virtual check data cache to that stream ID and records the allocation in the stream ID mapping table 464. When a subsequent write command with the same stream ID is received, the virtual check data cache ID (and/or the available check data cache ID) associated with that stream ID is obtained for the write command from the stream ID mapping table 464.
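A minimal C sketch of a stream ID mapping table of this kind is given below; the names stream_map_entry and stream_map_get_or_assign, and the fixed table size, are illustrative assumptions rather than part of this application:

    #include <stdint.h>

    #define MAX_STREAM_MAPPINGS 16
    #define INVALID_ID (-1)

    /* One row of the (hypothetical) stream ID mapping table: stream ID -> virtual ID. */
    struct stream_map_entry {
        int32_t stream_id;
        int32_t virtual_id;
    };

    static struct stream_map_entry stream_map[MAX_STREAM_MAPPINGS];
    static int stream_map_count;

    /* Return the virtual check data cache ID recorded for stream_id; if the stream ID
     * is not yet recorded, obtain an available virtual ID via the supplied callback
     * and record the new association, as described for table 464 above. */
    int32_t stream_map_get_or_assign(int32_t stream_id, int32_t (*alloc_virtual_id)(void))
    {
        for (int i = 0; i < stream_map_count; i++) {
            if (stream_map[i].stream_id == stream_id)
                return stream_map[i].virtual_id;
        }
        if (stream_map_count >= MAX_STREAM_MAPPINGS)
            return INVALID_ID;
        int32_t vid = alloc_virtual_id();
        if (vid == INVALID_ID)
            return INVALID_ID;
        stream_map[stream_map_count].stream_id = stream_id;
        stream_map[stream_map_count].virtual_id = vid;
        stream_map_count++;
        return vid;
    }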
FIG. 6 shows a flow chart of using check data caching according to an embodiment of the application.
The virtual check data cache request unit requests a virtual check data cache from the virtual check data cache allocation unit (610).
In one embodiment, the virtual check data cache request unit requests a virtual check data cache for each write command. In yet another example, the virtual check data cache request unit requests a virtual check data cache for a group of write commands. In still another example, the virtual check data cache request unit actively requests a virtual check data cache regardless of whether a pending write command exists.
In response to the request, the virtual check data cache allocation unit allocates an available virtual check data cache ID (620).
In one embodiment, the virtual check data cache allocation unit allocates a virtual check data cache ID in an unused state. In yet another embodiment, the virtual check data cache allocation unit allocates a virtual check data cache ID in an allocated-and-in-use state or an allocated-and-suspended state. In still another embodiment, the virtual check data cache allocation unit selects, according to the received request, the state of the virtual check data cache ID to be allocated, for example allocating an unused or an already-used virtual check data cache ID as indicated by the request. In still another embodiment, the virtual check data cache allocation unit extracts a stream ID from the received request and allocates a virtual check data cache ID according to the stream ID.
The virtual check data cache allocation unit also obtains an available check data cache unit (indicated by the check data cache ID) (630).
If the virtual check data cache table records a check data cache ID corresponding to the virtual check data cache ID obtained in step 620, that check data cache ID identifies the available check data cache unit. If the virtual check data cache table records that the virtual check data cache ID obtained in step 620 corresponds to an external data cache unit, or that the virtual check data cache ID is in an unused state, an available check data cache unit is allocated. For example, the virtual check data cache allocation unit maintains the status of each check data cache unit to identify available check data cache units. If no available check data cache unit exists, one of the check data cache units is selected and its data is moved to an external data cache, so that the selected check data cache unit becomes available.
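For example, step 630 might be sketched in C as follows; the state array, the victim-selection policy, and the helper move_to_external_cache() are assumptions made only for illustration:

    #define NUM_PARITY_UNITS 8

    enum parity_unit_state { PARITY_UNIT_FREE, PARITY_UNIT_BUSY };

    static enum parity_unit_state parity_state[NUM_PARITY_UNITS];

    /* Hypothetical helper: move the contents of a busy check data cache unit to the
     * external data cache (e.g. DRAM) so that the unit can be reused. */
    static void move_to_external_cache(int unit)
    {
        (void)unit; /* data movement omitted in this sketch */
    }

    /* Obtain an available check data cache unit; if none is free, evict one (630). */
    int acquire_parity_unit(void)
    {
        for (int i = 0; i < NUM_PARITY_UNITS; i++) {
            if (parity_state[i] == PARITY_UNIT_FREE) {
                parity_state[i] = PARITY_UNIT_BUSY;
                return i;
            }
        }
        int victim = 0; /* a real device would choose a victim, e.g. the least recently written unit */
        move_to_external_cache(victim);
        parity_state[victim] = PARITY_UNIT_BUSY; /* immediately reused for the new allocation */
        return victim;
    }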
The virtual check data cache allocation unit provides the available virtual check data cache ID, the available check data cache ID, the stream ID, and/or the write command context to the check data calculation request unit.
The check data calculation request unit instructs the check data generation unit to calculate check data for the data to be written in the write command using the check data cache corresponding to the check data cache ID, and initiates a programming operation (640).
In an embodiment, the check data calculation requesting unit obtains the context of the write command and the check data cache ID from the message provided by the virtual check data cache allocating unit, and instructs the check data generating unit to calculate the check data for the data to be written in by the write command by using the check data cache corresponding to the check data cache ID. In another embodiment, the check data calculation requesting unit obtains the check data cache ID from the message provided by the virtual check data cache allocating unit, and additionally receives the write command context, and instructs the check data generating unit to calculate the check data for the data to be written by the write command using the check data cache corresponding to the check data cache ID. In another embodiment, the check data calculation requesting unit further obtains a stream ID associated with the virtual check data cache ID from the message provided by the virtual check data cache allocating unit, and instructs, according to the stream ID indicated by the write command context, the check data generating unit to calculate the check data for the data to be written of the write command using the check data cache corresponding to the check data cache ID having the same stream ID as the stream ID indicated by the write command.
According to the message provided by the check data calculation request unit, the check data generation unit calculates check data for the data to be written indicated in the message, using the check data cache specified in the message (650). The data to be written, or the check data in the check data cache, is then written to the NVM storage medium using the media interface of the control unit (660).
FIG. 7 illustrates a block diagram of a portion of a storage device and associated check data cache, according to yet another embodiment of the present application.
In contrast to the embodiment of FIG. 3, according to the embodiment of FIG. 7 the virtual check data cache request unit 710 generates a virtual check data cache request message for requesting a virtual check data cache, and provides the generated request message to the virtual check data cache allocation unit 720.
The virtual check data cache allocation unit 720 accesses the virtual check data cache table 750 to obtain an available virtual check data cache ID.
The virtual check data cache table 750 includes a plurality of entries, each entry further records a check data cache ID or an index of an external data cache unit in association with the virtual check data cache ID. If the entry of the virtual parity data cache table 750 records the parity data cache ID, it indicates that the virtual parity data cache ID of the entry is in an allocated and in-use state. If the entry of the virtual parity data cache table 750 records the index of the external data cache unit, it indicates that the virtual parity data cache ID of the entry is in the allocated and suspended state. If the entry of the virtual parity data cache table 750 records neither the parity data cache ID nor the external data cache unit index, it indicates that the virtual parity data cache ID of the entry is in an unallocated state.
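The three states described above can be derived directly from what an entry records. A minimal C sketch (the field names and the value -1 for "not recorded" are illustrative assumptions) might read:

    enum vcache_state {
        VCACHE_UNALLOCATED,   /* neither a check data cache ID nor an external index recorded */
        VCACHE_IN_USE,        /* a check data cache ID is recorded                            */
        VCACHE_SUSPENDED      /* an index of an external data cache unit is recorded          */
    };

    struct vcache_table_entry {
        int parity_unit_id;   /* check data cache ID, or -1 if not recorded            */
        int external_index;   /* index of an external data cache unit, or -1 if absent */
    };

    enum vcache_state vcache_entry_state(const struct vcache_table_entry *e)
    {
        if (e->parity_unit_id >= 0)
            return VCACHE_IN_USE;
        if (e->external_index >= 0)
            return VCACHE_SUSPENDED;
        return VCACHE_UNALLOCATED;
    }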
If the found available virtual check data cache ID is associated with the check data cache ID, the virtual check data cache allocating unit 720 provides the virtual check data cache ID and the check data cache ID together to the check data calculation requesting unit 760. If the found available virtual check data cache ID is not associated with the check data cache ID, the virtual check data cache allocating unit 720 provides the available virtual check data cache ID to the check data cache allocating unit 725 to obtain the available check data cache ID.
In response to a request for an available check data cache ID, the check data cache allocation unit 725 selects a unit of the check data cache 776, for example unit 778, and moves the data of unit 778 of the check data cache 776 to the external data cache 740 so that unit 778 becomes available. The check data cache allocation unit 725 may then send the received available virtual check data cache ID, together with the check data cache ID associated with unit 778 of the check data cache 776, to the check data calculation request unit 760. The virtual check data cache table 750 is also updated to record that the virtual check data cache ID originally associated with unit 778 is now associated with a unit of the external data cache 740.
The check data calculation request unit 760 instructs the check data generation unit 770 to calculate check data for data to be written to the NVM storage medium using the check data cache ID received from the virtual check data cache allocation unit 720 or the check data cache allocation unit 725.
Alternatively, when the virtual check data cache allocation unit 720 allocates a virtual check data cache ID in response to a virtual check data cache request message, it tries to allocate the same virtual check data cache ID for virtual check data cache request messages having the same stream ID. When a virtual check data cache ID in an unallocated state is allocated to a stream ID, the allocated virtual check data cache ID is also recorded in the virtual check data cache table 750 in association with the stream ID.
In one embodiment, the virtual check data cache request unit 710 requests a virtual check data cache for each write command, generates a virtual check data cache request message, and indicates in the request message the stream ID corresponding to the write command. The write command context is also indicated in the request message. The virtual check data cache allocation unit 720 obtains the stream ID from the received request message and selects a virtual check data cache ID (from the virtual check data cache table 750) according to the stream ID. For example, if stream ID S1 is obtained from the request message and a virtual check data cache ID already associated with stream ID S1 is in an allocated-and-in-use state, that virtual check data cache ID is preferentially selected to respond to the request message; otherwise, a virtual check data cache ID already associated with stream ID S1 in another state is selected. If there is no virtual check data cache ID already associated with stream ID S1, a virtual check data cache ID in an unallocated state is preferentially allocated, or a virtual check data cache ID not associated with stream ID S1 is allocated.
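The preference order just described (an ID already serving stream ID S1, otherwise an unallocated ID, otherwise an ID not bound to S1) might be sketched in C as follows; the entry layout and the function name are hypothetical assumptions, not part of this application:

    #define NUM_VIRTUAL_IDS 64
    #define NO_STREAM (-1)

    struct vid_entry {
        int stream_id;   /* stream the virtual ID is associated with, or NO_STREAM */
        int allocated;   /* nonzero once the virtual ID has been allocated         */
    };

    static struct vid_entry vid_table[NUM_VIRTUAL_IDS];

    int select_virtual_id_for_stream(int stream_id)
    {
        /* 1st choice: a virtual ID already associated with this stream. */
        for (int i = 0; i < NUM_VIRTUAL_IDS; i++)
            if (vid_table[i].allocated && vid_table[i].stream_id == stream_id)
                return i;
        /* 2nd choice: a virtual ID in the unallocated state. */
        for (int i = 0; i < NUM_VIRTUAL_IDS; i++)
            if (!vid_table[i].allocated)
                return i;
        /* 3rd choice: a virtual ID not associated with this stream. */
        for (int i = 0; i < NUM_VIRTUAL_IDS; i++)
            if (vid_table[i].stream_id != stream_id)
                return i;
        return -1; /* nothing suitable found */
    }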
The virtual check data cache allocation unit 720 provides the acquired available virtual check data cache ID and check data cache ID to the check data calculation request unit 760; if no check data cache ID has been acquired, it requests the check data cache allocation unit 725 to obtain an available check data cache ID.
The check data calculation request unit 760 obtains the check data cache ID and the write command context from the received message, and instructs the check data generation unit 770 to calculate check data for the data to be written by the write command using the check data cache ID.
Further, in response to completion of the calculation of the check data of the corresponding page stripe using the virtual check data cache ID, the virtual check data cache request unit 710 also generates a message indicating release of the virtual check data cache ID and provides it to the virtual check data cache allocation unit 720. Accordingly, the virtual check data cache allocation unit 720 updates the virtual check data cache table 750 to record that the released virtual check data cache ID is in an unused state.
FIG. 8 is a block diagram illustrating a portion of a storage device and associated check data cache according to yet another embodiment of the present application.
According to the embodiment of FIG. 8, the virtual check data cache allocation unit 820 allocates virtual check data cache IDs to the respective streams and supplies them to the check data calculation request unit 860.
In one example, after the storage device is initialized, the virtual check data cache allocation unit 820 establishes a binding relationship between the virtual check data cache ID and the stream ID, and records the binding relationship in the virtual check data cache table 850. Virtual check data cache table 850 includes a plurality of entries, each entry recording a virtual check data cache ID and its associated stream ID.
In yet another example, the virtual check data cache allocation unit 820 maintains a dynamic association between virtual check data cache IDs and stream IDs. In response to a message, acquired from the message queue 812, for assigning a virtual check data cache ID to a specified stream ID, an available virtual check data cache ID is acquired and recorded in the virtual check data cache table 850 in association with the stream ID. In response to a message, acquired from the message queue 812, for releasing a virtual check data cache ID, the association between that virtual check data cache ID and the stream ID recorded in the virtual check data cache table 850 is cleared.
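As an illustration of this dynamic association (all names here are hypothetical and not taken from this application), binding and releasing could look like the following C sketch; init_vid_table() must be called once before use:

    #define NUM_VIDS 64
    #define NO_STREAM (-1)

    /* vid_stream[v] records the stream ID bound to virtual ID v, or NO_STREAM. */
    static int vid_stream[NUM_VIDS];

    void init_vid_table(void)
    {
        for (int v = 0; v < NUM_VIDS; v++)
            vid_stream[v] = NO_STREAM;
    }

    /* Handle a message asking that a virtual check data cache ID be assigned to stream_id. */
    int bind_virtual_id_to_stream(int stream_id)
    {
        for (int v = 0; v < NUM_VIDS; v++) {
            if (vid_stream[v] == NO_STREAM) {
                vid_stream[v] = stream_id;   /* record the association in the table */
                return v;
            }
        }
        return -1; /* no available virtual check data cache ID */
    }

    /* Handle a message asking that virtual ID v be released: clear the association. */
    void release_virtual_id(int v)
    {
        vid_stream[v] = NO_STREAM;
    }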
Optionally, in the embodiment of fig. 8, a virtual check data cache table is provided for each stream ID. For example, virtual check data cache table 850 is for stream ID1, virtual check data cache table 852 is for stream ID2, and virtual check data cache table 854 is for stream ID 3. If the message retrieved from message queue 812 specifies that stream ID1 is assigned a virtual check data cache ID, then virtual check data cache table 850 is accessed to obtain the available virtual check data cache ID.
Assigning a virtual check data cache ID to a stream ID may be independent of receiving a write command. Even before a pending write command arrives, a virtual check data cache ID may be assigned to the stream ID in advance.
Write commands are supplied to the check data cache allocation unit 825 through the message queues (826, 827, 828). The messages in the message queues (826, 827, 828) indicate the context of the write command, and the messages also indicate the stream ID associated with the write command. Optionally, write commands with different stream IDs are provided to different message queues. For example, a write command with stream ID1 is provided to message queue 826, a write command with stream ID2 is provided to message queue 827, and a write command with stream ID3 is provided to message queue 828.
The check data cache allocation unit 825 acquires a message from the message queues (826, 827, 828) and assigns an available check data cache ID to it.
In one embodiment, the check data cache allocation unit 825 maintains a check data cache table 862. The check data cache table 862 includes a plurality of entries, each entry recording, in association, a check data cache ID, its status, and a virtual check data cache ID. The status of a check data cache ID includes, for example, occupied and unoccupied. A check data cache ID in the unoccupied state may be assigned in response to a write command. For a check data cache ID in the occupied state, optionally, the check data cache ID is changed to the unoccupied state by moving the data of the check data cache unit corresponding to that check data cache ID to the external data cache 840. A second virtual check data cache table 864 is also maintained to record the check data cache ID or the index of the external data cache unit corresponding to each virtual check data cache ID.
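A minimal C sketch of such a check data cache table is shown below; the structure layout, the helper flush_to_external_cache(), and the function names are assumptions for illustration only. The array index serves as the check data cache ID in this sketch:

    #include <stdbool.h>

    #define NUM_PARITY_UNITS 8
    #define INVALID_ID (-1)

    /* One entry of a (hypothetical) check data cache table: status plus associated virtual ID. */
    struct parity_table_entry {
        bool occupied;        /* occupied / unoccupied                                  */
        int  virtual_id;      /* associated virtual check data cache ID, or INVALID_ID  */
    };

    static struct parity_table_entry parity_table[NUM_PARITY_UNITS];

    /* Hypothetical helper: move the unit's data to the external data cache. */
    static void flush_to_external_cache(int unit) { (void)unit; }

    /* Allocate a check data cache unit for virtual_id: prefer an unoccupied unit,
     * otherwise free one by moving its data to the external data cache. */
    int allocate_parity_unit(int virtual_id)
    {
        for (int i = 0; i < NUM_PARITY_UNITS; i++) {
            if (!parity_table[i].occupied) {
                parity_table[i].occupied = true;
                parity_table[i].virtual_id = virtual_id;
                return i;
            }
        }
        int victim = 0; /* victim choice is policy-dependent in a real device */
        flush_to_external_cache(victim);
        parity_table[victim].occupied = true;
        parity_table[victim].virtual_id = virtual_id;
        return victim;
    }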
The check data cache allocation unit 825 supplies the available check data cache ID and the write command context to the check data calculation request unit 860. The check data calculation request unit 860 obtains the virtual check data cache ID corresponding to the stream ID of the received write command context, and instructs the check data generation unit 870 to calculate check data for the write command context using the check data cache ID. The amount of data written for each virtual check data cache ID is also counted, to identify when the check data corresponding to that virtual check data cache ID has been completely calculated.
Alternatively or additionally, the message retrieved from the message queue (826, 827, 828) also indicates a stream ID, for example stream ID1. The check data cache allocation unit 825 allocates a check data cache ID associated with stream ID1 to the message: it looks up the virtual check data cache ID associated with stream ID1 in the check data cache table 862 and provides the check data cache ID associated with that virtual check data cache ID, together with the write command context, to the check data calculation request unit 860. If no virtual check data cache ID associated with stream ID1 can be found, a check data cache ID in the unoccupied state is acquired and supplied to the check data calculation request unit 860. If no check data cache ID in the unoccupied state is found, a check data cache ID is changed to the unoccupied state by moving the data of its corresponding check data cache unit to the external data cache 840. The check data calculation request unit 860 obtains the virtual check data cache ID corresponding to the stream ID of the received write command context, and instructs the check data generation unit 870 to calculate check data for the write command context using the check data cache ID.
FIG. 9 illustrates a flow diagram for using check data caching according to yet another embodiment of the present application.
The virtual check data cache allocation unit binds the virtual check data cache ID to the stream ID (910), so that write commands with the same stream ID are processed using the same virtual check data cache ID.
In one example, after the storage device is initialized, the virtual check data cache allocation unit establishes a binding relationship between the virtual check data cache ID and the stream ID, and records the binding relationship in the virtual check data cache table. In yet another example, the virtual parity data cache allocation unit maintains a dynamic association relationship for the virtual parity data cache ID and the stream ID.
The check data cache allocation unit receives a write command indicating a stream ID (920). The check data cache allocation unit obtains, for the write command, a check data cache ID associated with the stream ID indicated by the write command (924). For example, the virtual check data cache ID associated with the stream ID indicated by the write command is found, and a check data cache ID associated with that virtual check data cache ID in the check data cache table (862) is assigned to the write command. If the virtual check data cache ID associated with the stream ID indicated by the write command is not recorded in the check data cache table (862), a check data cache ID in the unoccupied state is acquired and allocated to the write command. If no check data cache ID in the unoccupied state can be found, a check data cache ID is changed to the unoccupied state by moving the data of its corresponding check data cache unit to the external data cache (922). In response to assigning the check data cache ID to the write command, the check data cache table (862) is also updated to record that the assigned check data cache ID is associated with the virtual check data cache ID associated with the stream ID of the write command.
The virtual check data cache allocation unit provides the binding relationship between the virtual check data cache ID and the stream ID to the check data calculation request unit. The check data cache allocation unit supplies the check data cache ID indicating the allocated check data cache unit to the check data calculation request unit.
The check data calculation request unit instructs the check data generation unit to calculate check data for the data to be written by the write command using the check data cache corresponding to the check data cache ID, and initiates a programming operation (930). The amount of data for which check data has been calculated using each virtual check data cache ID is also counted, to identify when the check data corresponding to that virtual check data cache ID has been completely calculated.
According to the message provided by the check data calculation request unit, the check data generation unit calculates check data for the data to be written indicated in the message, using the check data cache unit specified in the message (940). The data to be written, or the check data in the check data cache, is then written to the NVM storage medium using the media interface of the control unit (950).
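The bookkeeping mentioned in step 930, counting how much data each virtual check data cache ID has covered so that the end of a page stripe can be recognized, might be sketched as follows; the structure name, function name, and the notion of counting in pages are illustrative assumptions:

    #include <stdbool.h>
    #include <stdint.h>

    struct stripe_progress {
        uint32_t pages_written;     /* pages already folded into the check data */
        uint32_t pages_per_stripe;  /* data pages in one page stripe            */
    };

    /* Account for 'pages' newly written pages under this virtual check data cache ID.
     * Returns true when the check data of the page stripe is completely calculated,
     * at which point the virtual check data cache ID can be released. */
    bool account_written_pages(struct stripe_progress *p, uint32_t pages)
    {
        p->pages_written += pages;
        return p->pages_written >= p->pages_per_stripe;
    }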
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method for providing a virtual parity data cache, comprising:
acquiring a request for allocating a unit of virtual check data cache;
acquiring a first unit of an available virtual check data cache;
acquiring a first unit of an available check data cache, and recording the association relationship between the acquired first unit of the virtual check data cache and the first unit of the check data cache;
the request to allocate the unit of the virtual check data cache is responded with the first unit of the virtual check data cache and/or the first unit of the check data cache.
2. The method of claim 1, wherein the association relationship between the unit of the virtual check data cache and the unit of the check data cache is recorded by using a virtual check data cache table; the virtual check data cache table comprises a plurality of entries, each entry recording, in association, the state of the unit of the virtual check data cache identified by the corresponding index of the virtual check data cache, and the index of the unit of the check data cache or of the external data cache unit associated with that entry.
3. The method of claim 1 or 2, further comprising requesting calculation of the check data using an index of the first unit of the virtual check data cache or an index of the first unit of the check data cache.
4. The method of claim 3, further comprising obtaining data to be written, computing parity data for the data to be written using the first unit of the parity data cache, and writing the data to be written to the storage medium.
5. The method according to any of claims 1 to 4, wherein a stream ID is further indicated in the request for allocating a unit of the virtual parity data cache, the stream ID identifying a stream of the multi-stream storage device.
6. The method of claim 5, wherein an entry of the virtual check data cache table records a stream ID in association with an index of a unit of the virtual check data cache, to indicate the indexes of one or more units of the virtual check data cache for each stream.
7. The method of claim 5 or 6, wherein, when obtaining the index of an available unit of the virtual check data cache, the same unit index of the virtual check data cache is allocated to requests for allocating a unit of the virtual check data cache that have the same stream ID.
8. The method of any one of claims 5 to 7, further comprising: acquiring, from a received message, the stream ID and the index of the associated unit of the check data cache; acquiring the data to be written having the same stream ID; and calculating check data for the data to be written having the stream ID using the unit of the check data cache.
9. The method of any one of claims 5 to 8, wherein the request for allocating a unit of the virtual check data cache generated for each write command comprises a stream ID associated with the write command, and wherein an index of an available unit of the virtual check data cache corresponding to the stream ID and/or an index of an available unit of the check data cache is obtained in response to the request for allocating a unit of the virtual check data cache.
10. An information processing apparatus comprising a memory, a processor and a program stored on the memory and executable on the processor, wherein the processor implements the method of any one of claims 1 to 9 when executing the program.
CN201910246121.7A 2019-03-28 2019-03-28 Virtual parity data caching for storage devices Pending CN111752866A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910246121.7A CN111752866A (en) 2019-03-28 2019-03-28 Virtual parity data caching for storage devices

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910246121.7A CN111752866A (en) 2019-03-28 2019-03-28 Virtual parity data caching for storage devices

Publications (1)

Publication Number Publication Date
CN111752866A true CN111752866A (en) 2020-10-09

Family

ID=72671643

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910246121.7A Pending CN111752866A (en) 2019-03-28 2019-03-28 Virtual parity data caching for storage devices

Country Status (1)

Country Link
CN (1) CN111752866A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114721844A (en) * 2022-03-10 2022-07-08 云和恩墨(北京)信息技术有限公司 Data caching method and device, computer equipment and storage medium
CN114721844B (en) * 2022-03-10 2022-11-25 云和恩墨(北京)信息技术有限公司 Data caching method and device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
CB02 Change of applicant information

Address after: 100192 room A302, building B-2, Dongsheng Science Park, Zhongguancun, 66 xixiaokou Road, Haidian District, Beijing

Applicant after: Beijing yihengchuangyuan Technology Co.,Ltd.

Address before: 100192 room A302, building B-2, Dongsheng Science Park, Zhongguancun, 66 xixiaokou Road, Haidian District, Beijing

Applicant before: BEIJING MEMBLAZE TECHNOLOGY Co.,Ltd.

SE01 Entry into force of request for substantive examination