CN110968527A - FTL provided caching - Google Patents

FTL provided caching Download PDF

Info

Publication number
CN110968527A
CN110968527A CN201811154190.7A CN201811154190A CN110968527A CN 110968527 A CN110968527 A CN 110968527A CN 201811154190 A CN201811154190 A CN 201811154190A CN 110968527 A CN110968527 A CN 110968527A
Authority
CN
China
Prior art keywords
cache
ftl
data
logical address
address
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811154190.7A
Other languages
Chinese (zh)
Other versions
CN110968527B (en
Inventor
孙清涛
路向峰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Memblaze Technology Co Ltd
Original Assignee
Beijing Memblaze Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Memblaze Technology Co Ltd filed Critical Beijing Memblaze Technology Co Ltd
Priority to CN201811154190.7A priority Critical patent/CN110968527B/en
Publication of CN110968527A publication Critical patent/CN110968527A/en
Application granted granted Critical
Publication of CN110968527B publication Critical patent/CN110968527B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10Address translation
    • G06F12/1009Address translation using page tables, e.g. page table structures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/0802Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F12/0877Cache access modes

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The application relates to a storage technology, and relates to a method for providing cache by an FTL (Flash Translation Layer), wherein the method for the storage device comprises the following steps: acquiring a logical address for accessing the storage device; the logical addresses are mapped to physical addresses by the FTL table. The address mapping and caching method and device aim at improving the FTL, and address mapping and caching of the storage device are managed by the FTL in a unified mode, so that the performance of the storage device is improved.

Description

FTL provided caching
Technical Field
The present application relates to a storage technology, and more particularly, to providing a cache by a Flash Translation Layer (FTL).
Background
FIG. 1 illustrates a block diagram of a solid-state storage device. The solid-state storage device 102 is coupled to a host for providing storage capabilities to the host. The host and the solid-state storage device 102 may be coupled by various methods, including but not limited to, connecting the host and the solid-state storage device 102 by, for example, SATA (Serial Advanced Technology Attachment), SCSI (Small Computer System Interface), SAS (Serial attached SCSI), IDE (Integrated Drive Electronics), USB (Universal Serial Bus), PCIE (Peripheral Component interconnect Express), NVMe (NVM Express, high-speed nonvolatile storage), ethernet, fibre channel, wireless communication network, etc. The host may be an information processing device, such as a personal computer, tablet, server, portable computer, network switch, router, cellular telephone, personal digital assistant, etc., capable of communicating with the storage device in the manner described above. The Memory device 102 includes an interface 103, a control section 104, one or more NVM chips 105, and a DRAM (Dynamic Random Access Memory) 110.
NAND flash Memory, phase change Memory, FeRAM (Ferroelectric RAM), MRAM (magnetoresistive Memory), RRAM (Resistive Random Access Memory), XPoint Memory, and the like are common NVM.
The interface 103 may be adapted to exchange data with a host by means such as SATA, IDE, USB, PCIE, NVMe, SAS, ethernet, fibre channel, etc.
The control unit 104 is used to control data transfer between the interface 103, the NVM chip 105, and the DRAM110, and also used for memory management, host logical address to flash physical address mapping, erase leveling, bad block management, and the like. The control component 104 can be implemented in various manners of software, hardware, firmware, or a combination thereof, for example, the control component 104 can be in the form of an FPGA (Field-programmable gate array), an ASIC (Application-specific integrated Circuit), or a combination thereof. The control component 104 may also include a processor or controller in which software is executed to manipulate the hardware of the control component 104 to process IO (Input/Output) commands. The control component 104 may also be coupled to the DRAM110 and may access data of the DRAM 110. FTL tables and/or cached IO command data may be stored in the DRAM.
Control section 104 includes a flash interface controller (or referred to as a media interface controller, a flash channel controller) that is coupled to NVM chip 105 and issues commands to NVM chip 105 in a manner that conforms to an interface protocol of NVM chip 105 to operate NVM chip 105 and receive command execution results output from NVM chip 105. Known NVM chip interface protocols include "Toggle", "ONFI", etc.
The memory Target (Target) is one or more Logic Units (LUNs) that share CE (Chip Enable) signals within the NAND flash package. One or more dies (Die) may be included within the NAND flash memory package. Typically, a logic cell corresponds to a single die. The logical unit may include a plurality of planes (planes). Multiple planes within a logical unit may be accessed in parallel, while multiple logical units within a NAND flash memory chip may execute commands and report status independently of each other.
Data is typically stored and read on a storage medium on a page-by-page basis. And data is erased in blocks. A block (also referred to as a physical block) contains a plurality of pages. Pages on the storage medium (referred to as physical pages) have a fixed size, e.g., 17664 bytes. Physical pages may also have other sizes.
In the storage device, mapping information from logical addresses to physical addresses is maintained by using a Flash Translation Layer (FTL). The logical addresses constitute the storage space of the solid-state storage device as perceived by upper-level software, such as an operating system. The physical address is an address for accessing a physical memory location of the solid-state memory device. Address mapping may also be implemented using an intermediate address modality in the related art. E.g. mapping the logical address to an intermediate address, which in turn is further mapped to a physical address.
A table structure storing mapping information from logical addresses to physical addresses is called an FTL table. FTL tables are important metadata in solid state storage devices. Usually, the data entry of the FTL table records the address mapping relationship in the unit of data page in the solid-state storage device.
The FTL of some memory devices is provided by a host to which the memory device is coupled, the FTL table is stored by a memory of the host, and the FTL is provided by a CPU of the host executing software. Still other storage management devices disposed between hosts and storage devices provide FTLs.
And a cache is provided for the storage device to improve the performance of the storage device. Distributed caching for solid state storage is provided, for example, in chinese patent applications 201710219077.1, 201710219096.4 and 201710219112. X. The cache may also be provided by the host or by the storage management device.
Disclosure of Invention
The address mapping and caching method and device aim at improving the FTL, and address mapping and caching of the storage device are managed by the FTL in a unified mode, so that the performance of the storage device is improved.
According to a first aspect of the present application, there is provided a method for a storage device according to the first aspect of the present application, comprising: acquiring a logical address for accessing the storage device; the logical addresses are mapped to physical addresses by the FTL table.
The first method for a memory device according to the first aspect of the present application, wherein the physical address provided by the one or more first FTL entries of the FTL table is a physical address to access the NVM chip, and the physical address provided by the one or more second FTL entries of the FTL table is a physical address to access the cache.
The first or second method for a storage device according to the first aspect of the present application, wherein the FTL table is provided by a control part of the storage device or by a host accessing the storage device.
One of the first to third methods for a memory device according to the first aspect of the present application, wherein the physical address provided by the one or more first FTL entries of the FTL table is the NVM data frame address of the NVM chip; the physical address provided by one or more second FTL entries of the FTL table is a cache container index of DRAM or SRAM.
The fourth method for a storage device according to the first aspect of the present application, wherein identifying the value indicates the NVM data frame address or the cache container index according to the value of the FTL entry.
A fifth method for a memory device according to the first aspect of the present application, wherein FTL entry values greater than a threshold are mapped as NVM data frame addresses, and FTL entry values less than or equal to the threshold are mapped as cache container addresses.
One of the first to fourth methods for a memory device according to the first aspect of the present application, wherein a flag bit indicating that a value of the FTL entry indicates the NVM data frame address or the cache container index is recorded in the FTL entry.
One of the first to seventh methods for a storage device according to the first aspect of the present application, wherein the FTL table is stored in a DRAM, an SRAM, or a memory of a host accessing the storage device of the storage device.
One of the fourth to eighth methods for a storage device according to the first aspect of the present application, wherein the NVM data frame address is a physical address of accessing a physical page of the NVM chip, a physical address of accessing a plurality of combined physical pages of the NVM chip, a physical address of accessing a portion of data cells within a physical page of the NVM chip.
One of the fourth to eighth methods for a storage device according to the first aspect of the present application, wherein the cache container index indicates an address of a cache location or an address of a descriptor of the cache location in a DRAM, an SRAM, or a memory of a host accessing the storage device of the storage device.
A tenth method for a storage device according to the first aspect of the present application, wherein the cache unit is a DRAM, an SRAM, or a segment of storage space in a host memory.
One of the fourth to eighth methods for a storage device according to the first aspect of the present application, wherein the size of the NVM data frame is the same as the small block size of the logical address space, and the size of the buffer unit is the same as the small block size of the logical address space.
One of the first to twelfth methods for a storage device according to the first aspect of the present application, wherein in response to identifying that the physical address corresponding to the logical address indicates a cache container index, the cache unit of the corresponding cache container is accessed according to the cache container index.
A thirteenth method for a memory device according to the first aspect of the present application, wherein if a read command is accessed to the logical address, data is read from the corresponding cache unit in response to the read command.
According to a thirteenth or fourteenth method for a storage device of the first aspect of the present application, if the access logical address is a write command, the data indicated by the write command is written into the corresponding cache unit.
A fifteenth method for a storage device according to the first aspect of the present application, wherein after writing data to the cache unit, completion of write command processing is indicated to a host that issued the write command.
The fifteenth or sixteenth method for a storage device according to the first aspect of the present application, wherein in response to identifying that the physical address corresponding to the logical address indicates a cache container index, further identifies whether a cache unit of a cache container corresponding to the cache container index is in use, and if the cache unit is in use, allocates a new cache unit to the cache container, and writes data indicated by the write command to the new cache unit.
The seventeenth method for a storage device according to the first aspect of the present application, wherein if the cache unit is not currently used, the data indicated by the write command is written.
The seventeenth or eighteenth method for a memory device according to the first aspect of the present application, wherein the other write commands are writing data to the buffer unit, reading data from the buffer unit, or the control section of the memory device is writing data of the buffer unit to the NVM chip, the buffer unit is being used.
One of the fifteenth to nineteenth methods for a storage device according to the first aspect of the present application, wherein, in response to the data indicated by the write command, the validity bitmap identifying that the data in the cache unit is inconsistent with the data of the NVM data frame in the FTL entry is updated.
One of the first to twentieth methods for a memory device according to the first aspect of the present application, wherein in response to identifying that the physical address corresponding to the logical address indicates the NVM data frame address, if the access logical address is a read command, accessing the NVM data frame of the NVM chip using the NVM data frame address, and reading data indicated by the read command.
One of the first to twentieth methods for a storage device according to the first aspect of the present application, wherein in response to identifying that the physical address corresponding to the logical address indicates the NVM data frame address, if a write command is accessing the logical address, a new buffer container is allocated for the logical address, and the data indicated by the write command is written into the buffer unit of the newly allocated buffer container.
A twenty-first or twenty-second method for a storage device according to the first aspect of the present application, wherein after data is written into a cache unit of a newly allocated cache container, an index of the newly allocated cache container is recorded in an FTL entry corresponding to a logical address of a write command.
The twenty-first to twenty-third aspects of the present application provide one of the methods for a storage device, wherein the method further comprises: and writing the data stored in the cache unit into the NVM chip, and replacing the cache container index recorded by the FTL entry with the NVM data frame address of the NVM chip.
A twenty-third or twenty-fourth method for a storage device according to the first aspect of the present application, wherein in response to identifying that the physical address corresponding to the logical address indicates that the logical address read has not yet been written with data, indicating to the host that a read command processing error has occurred, or responding to the read command with a specified value.
According to one of the twenty-first to twenty-fifth methods for a storage device of the first aspect of the present application, wherein the validity bitmap in the FTL entry is updated in response to writing the data indicated by the write command to the cache unit of the newly allocated cache container.
According to one of the twenty-first to twenty-sixth methods for a storage device of the first aspect of the present application, an NVM data frame address corresponding to a logical address obtained from an FTL table is recorded in a newly allocated cache container.
According to one of the twenty-first to twenty-seventh methods for a storage device in the first aspect of the present application, after the physical address corresponding to the identified logical address indicates the NVM data frame address, or after the data indicated by the write command is written into the cache unit of the newly allocated cache container, it is identified whether the data to be written by the write command occupies a complete cache unit.
The twenty-eighth method for a memory device according to the first aspect of the present application, wherein if it is identified that data to be written by a write command does not occupy a complete first cache unit, reading data of a partial logical address space of a small block that is not occupied by data to be written by the write command from the NVM chip from a physical address of a logical address accessed according to the write command provided by the FTL, and filling the data into the first cache unit.
The fourteenth method for a storage device according to the first aspect of the present application, wherein the identified physical address corresponding to the logical address indicates a cache container index, identifies whether the cache container can provide complete data to be read by the read command, and reads data from the cache container if the cache container can provide the complete data to be read by the read command.
A thirty-first aspect of the present application is a method for a memory device, wherein if a buffer container cannot provide complete data to be read by a read command, reading out partial data which cannot be provided by the buffer container from an NVM chip from an NVM data frame address corresponding to a logical address accessed by the read command and obtained from the buffer container.
A thirty-third or thirty-first method for a memory device according to the first aspect of the present application, wherein the identified physical address corresponding to the logical address indicates an NVM data frame address, and then data is read from the NVM chip indicated by the NVM data frame address.
According to a second aspect of the present application, there is provided a first storage device according to the second aspect of the present application, comprising: a control component that performs any of the methods described above to accomplish FTL management. The first memory device according to the second aspect of the present application, wherein the control section FTL manages portions of the NVM chip and the DRAM or SRAM.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments described in the present invention, and other drawings can be obtained by those skilled in the art according to the drawings.
FIG. 1 illustrates a block diagram of a solid-state storage device;
FIG. 2 is a schematic diagram of FTL managed memory according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a logical address (LBA) space of an embodiment of the present application;
FIG. 4 illustrates an FTL table in accordance with an embodiment of the present application;
FIG. 5 is a diagram illustrating an embodiment of an FTL providing cache;
FIG. 6A illustrates a flow chart for writing data to a storage device according to an embodiment of the present application;
FIG. 6B illustrates a flow diagram for writing data to a storage device according to yet another embodiment of the present application;
FIG. 6C illustrates a flow chart for reading data from a memory device according to another embodiment of the present application;
FIG. 7 illustrates a schematic diagram of an FTL entry cache according to an embodiment of the present application;
FIG. 8A illustrates a flow diagram for reading data from a memory device according to the embodiment of FIG. 7 of the present application;
FIG. 8B illustrates a flow diagram for writing data to a storage device according to the embodiment of FIG. 7 of the present application;
FIG. 9 illustrates an FTL table according to yet another embodiment of the present application;
FIG. 10A illustrates a flow diagram for writing data to a storage device according to the embodiment of FIG. 9 of the present application;
FIG. 10B illustrates yet another flow chart for writing data to a storage device according to the embodiment of FIG. 9 of the present application;
FIG. 10C illustrates yet another flow chart for writing data to a memory device according to the embodiment of FIG. 9 of the present application;
FIG. 11 illustrates a flow chart for writing data to a storage device according to the still another embodiment of the present application;
FIG. 12A illustrates a flow chart for reading data from a memory device according to the still another embodiment of the present application; and
FIG. 12B illustrates yet another flow chart for reading data from a memory device according to the still another embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Fig. 2 is a schematic diagram of FTL managed memory space according to an embodiment of the present application.
According to an embodiment of the present application, the control component 204 of the storage device 102 provides an FTL. The storage device provides the host with logical address (referred to as LBA) space. The host accesses the storage device using the logical address. The FTL maps logical addresses provided by the host to physical addresses.
According to an embodiment of the present application, the FTL maps the physical address, partly the physical address accessing the NVM chip 105 and partly the physical address accessing the DRAM 110. Thus, referring to FIG. 2, portions of the NVM chips 105 and the DRAM110 provide memory space 200 managed by the FTL.
Alternatively, other types of memory, such as SRAM, are used in addition to DRAM110 to provide storage space managed by the FTL.
It is to be appreciated that the FTL according to yet another embodiment of the present application is provided by a host coupled to a storage device or a storage management apparatus disposed between the host and the storage device.
The FTL is implemented by software, firmware, for example, running in the CPU of the control component 204 and/or hardware as part of an ASIC.
Fig. 3 is a schematic diagram of a logical address (LBA) space according to an embodiment of the present application. By way of example, the logical address space is a continuous address space. The FTL maintains a logical address space.
The direction from top to bottom in fig. 3 is the direction in which the logical address space is incremented. The logical address space includes a plurality of areas having the same size, each area being called a tile. Each entry of the FTL table, called FTL entry, records mapping of small blocks of logical address space to physical addresses. A plurality of entries of the FTL table are shown in fig. 3, including FTL entry 0, FTL entry 1 … … FTL entry 5. For example, the size of the logical address space corresponding to each FTL entry may be, for example, 512 bytes, 1KB, or 4 KB. FTL entries are indexed according to the address of the logical address space. For example, the quotient obtained by dividing the address of the logical address space by the size of the logical address space corresponding to the FTL entry is the index of the FTL entry.
Fig. 4 shows FTL tables of embodiments of the present application.
The FTL table includes a plurality of FTL entries, each FTL entry corresponding to one of the small blocks, and the value of FTL entry records NVM data frame address or cache container index providing storage space for the small block. Optionally, from the value of the FTL entry, it is identified whether the value indicates the NVM data frame address or the cache container index. For example, FTL entry values greater than a threshold are mapped to NVM data frame addresses, while FTL entry values not greater than a threshold are mapped to cache container indices. As yet another example, a flag bit is recorded in the FTL entry to indicate whether the value of the FTL entry indicates the NVM data frame address or the cache container index.
The FTL table is stored in, for example, DRAM110 (see also fig. 2) or SRAM. And the FTL calculates the index of the corresponding FTL entry according to the accessed logical address, and obtains the NVM data frame or the cache container which provides the storage space for the small block from the FTL entry.
The NVM data frame address is, for example, a physical address for accessing a physical page of the NVM chip, a physical address for accessing multiple combined physical pages of the NVM chip (a combined physical page is, for example, a physical page with the same physical page number on multiple planes (planes) of the same Logical Unit (LUN)), a physical address for accessing a portion of data units within a physical page of the NVM chip. The cache container index is, for example, an address of a cache location in the DRAM or an address of a descriptor of the cache location. A cache unit is a segment of storage space in, for example, DRAM or SRAM. The buffer unit descriptor is used for describing the buffer unit. The buffer container is used for recording buffer unit descriptors.
Fig. 5 is a schematic diagram illustrating an FTL providing cache according to an embodiment of the present application. By way of example, the FTL maps the logical address space to a portion of one or more NVM chips 105 (denoted as NVM chip 510) and a portion of DRAM110 (denoted as DRAM 520).
The memory space of NVM chip 510 includes multiple physical blocks. The memory space of NVM chip 510 is organized into NVM data frames (see block 512). The large block 512 includes a plurality of NVM data frames. The size of the NVM data frame is the same as the size of the small block, so that data stored in the logical address space corresponding to one small block can be recorded in one NVM data frame.
With continued reference to FIG. 5, the memory space of DRAM520 is organized into cache units. A cache unit is a segment of memory space, such as DRAM 520. The size of the cache unit is the same as that of the small blocks, so that data stored in a logical address space corresponding to one small block can be recorded in one cache unit.
Referring back to fig. 4, the index of the cache container of the value record of the FTL entry indicates the cache container. The cache unit associated with the cache container may be accessed according to the cache container.
The cache container describes one or more cache elements associated therewith. The cache molecule may be assigned to the cache container or have a specified association with the cache container. For example, the cache container records cache unit descriptors associated with one or more cache units of the cache container, and the cache unit descriptors record addresses of the cache units and working states of the cache units.
With continued reference to fig. 5, the values of FTL entry 0, FTL entry 2, FTL entry 3 and FTL entry 4 indicate NVM data frame addresses located in NVM chip 510, and the values of FTL entry 1 and FTL entry 5 indicate indexes of cache containers located in DRAM 520. The FTL thus obtains FTL entries from the logical addresses and provides physical addresses providing storage space for small blocks of logical address space according to the values of the FTL entries.
FIG. 6A shows a flow chart of writing data to a storage device according to an embodiment of the application.
The storage device obtains a write command provided by the host, the write command indicating a logical address (610). And the control part of the storage device inquires the FTL table (612) according to the logical address and acquires the physical address corresponding to the logical address.
Optionally, the write command indicates a plurality of small blocks of the logical address space, and accordingly, the FTL table is queried according to the logical address of each small block to obtain the corresponding physical address. For clarity, embodiments according to the present application are described in one or more of the following examples, taking a write command to access a single tile as an example.
It is identified whether the resulting physical address corresponding to the logical address indicates a cache container index (614). If the physical address of the logical address accessed by the write command provided by the FTL is the cache container index, the cache unit of the corresponding cache container is accessed by using the cache container index to carry the data to be written by the write command (616). Optionally, after the data to be written by the write command is written into the cache unit, the host that issued the write command indicates that the write command processing is completed.
If the physical address provided by the FTL corresponding to the logical address accessed by the write command is not a buffer container index (614) (e.g., is an NVM data frame address, or other content), a new buffer container is allocated for the logical address (618), and the data to be written by the write command is written into the buffer unit of the newly allocated buffer container (620). The index of the newly allocated cache container is also recorded in the FTL entry corresponding to the logical address (622).
According to the embodiment of the present application, optionally, data stored in the cache unit of the cache container indicated by one or more FTL entries is written to the NVM chip when needed or periodically. And in response to the data stored in the cache unit being written into the NVM chip, replacing the cache container index in the FTL entry with the NVM data frame address of the NVM chip. Thus, the FTL table entries are always recorded with the cache container index until the cache container index recorded in the FTL table entries is modified to indicate the physical address of the data unit of the NVM chip, which means that during this time, if a read command or a write command is received to access these FTL table entries, a hit will be made to the unified cache and the cache unit recorded with the cache container index will respond to the read command or the write command.
FIG. 6B illustrates a flow diagram for writing data to a storage device according to yet another embodiment of the present application.
The storage device retrieves a write command provided by the host, the write command indicating a logical address (630). And the control component of the storage device inquires an FTL (flash translation layer) table (632) according to the logical address and acquires the physical address corresponding to the logical address.
It is identified whether the resulting physical address corresponding to the logical address indicates a cache container index (634). If the physical address of the logical address accessed by the write command provided by the FTL is the cache container index, it is further identified whether the cache unit of the cache container corresponding to the cache container index is being used (636). The buffer unit is being used, for example, data is being written to the buffer unit, data is being read from the buffer unit, or the control section is writing data of the buffer unit to the NVM chip according to other write commands. If the cache unit is not currently in use, the cache unit is used to carry data to be written by the write command (638). If the cache unit is currently being used, a new cache unit is allocated for the cache container (640), and the new cache unit is used to carry data to be written by the write command (642).
If the physical address of the logical address accessed by the write command provided by the FTL is not the buffer container index (634) (e.g., is an NVM data frame address, or other content), a new buffer container is allocated for the logical address (644), and the data to be written by the write command is written into the buffer location of the newly allocated buffer container (646). An index to the newly allocated cache container is also recorded in the FTL entry corresponding to the logical address (648).
FIG. 6C illustrates a flow diagram for reading data from a memory device according to another embodiment of the present application.
The storage device retrieves a read command provided by the host, the read command indicating a logical address (650). And the control component of the storage device inquires the FTL table (652) according to the logical address and acquires the physical address corresponding to the logical address.
It is identified whether the resulting physical address corresponding to the logical address indicates a cache container index, an NVM data frame address, or another type of physical address (654). If the physical address of the logical address accessed by the read command provided by the FTL is the cache container index, the cache unit of the cache container is accessed, and data is obtained from the cache unit as a response to the read command (656). If the physical address provided by the FTL is an NVM data frame address, then a read command is sent to the NVM chip to read out the data (658). Alternatively, other types of addresses are recorded in the FTL entry, for example, indicating that the read logical address has not been written with data, indicating to the host that the read command is in error, or responding to the read command with a specified value (e.g., all 0 s).
Fig. 7 illustrates a schematic diagram of FTL entry caching according to an embodiment of the present application.
According to the embodiment of fig. 7, the FTL table includes a plurality of FTL entries. The number of FTL entries is large, for example, 1 hundred million, so that it takes much time to access the FTL entries. FTL entry caching is provided to speed up FTL table lookup.
The FTL entry cache includes a plurality of entries (referred to as "cache entries") each recording a logical address (LBA) and a cache container index in association. The cache entry corresponds to one of the FTL entries. And the value of the FTL entry corresponding to the cache entry is recorded as the index of the cache container. Each value of the FTL table is an FTL entry indexed by the cache container, and there is a corresponding cache entry. For example, referring to fig. 7, cache entry 710 records the logical address (LBA) and cache container index of FTL entry 1, cache entry 712 records the logical address (LBA) and cache container index of FTL entry 2, and cache entry 714 records the logical address (LBA) and cache container index of FTL entry 4. And the value of the FTL table is the FTL entry of the non-cache container index, and has no corresponding cache entry.
And in response to allocating a cache container to the small block corresponding to the FTL entry, creating a cache entry, and recording an allocated cache container index in the created cache entry, and also recording an allocated cache container index in the FTL entry.
Optionally, since the cache entry records the cache container index, the NVM data frame address or other type value is recorded in the FTL entry corresponding to the cache entry without recording the cache container index. And creating a cache entry in response to allocating the cache container to the small block corresponding to the FTL entry, and recording the allocated cache container index in the created cache entry without recording the allocated cache container index in the FTL entry.
FIG. 8A illustrates a flow diagram for reading data from a memory device according to the embodiment of FIG. 7 of the present application.
The storage device obtains a read command provided by the host, the read command indicating a logical address (810). A control component of the storage device queries (815) the FTL entry cache according to the logical address in an attempt to acquire a physical address corresponding to the logical address as soon as possible.
If the logical address (or portion thereof) indicated by the read command is recorded in an entry of the FTL entry cache (815), the logical address of the read command hits in the FTL entry cache. The cache container index is obtained from the cache entry where the FTL entry is hit. And accesses the cache location corresponding to the cache container and reads data from the cache location in response to the read command (820).
If the logical address indicated by the read command is not recorded in the entry of the FTL entry cache, it means that the logical address of the read command misses in the FTL entry cache (815). In this case, FTL table is further queried (825) to obtain the physical address indicating the NVM data frame address recorded in the FTL entry corresponding to the logical address of the read command, and data is read from the NVM data frame according to the physical address in response to the read command (830).
FIG. 8B illustrates a flow diagram for writing data to a storage device according to the embodiment of FIG. 7 of the present application.
The storage device obtains a write command provided by the host, the write command indicating a logical address (840). The control component of the storage device queries (845) the FTL entry cache according to the logical address to identify whether the logical address indicated by the write command hits in the FTL entry cache.
If the logical address indicated by the write hit hits in the FTL entry cache (845), the cache container index is obtained from the cache entry where the FTL entry is hit. And accesses the cache location corresponding to the cache container and writes the data corresponding to the write command to the cache location (850). Optionally, in response to data being written to the cache unit, completion of write command processing is indicated to the host.
If the logical address of the write command misses in the FTL entry cache (845). In this case, a new buffer container is allocated for the logical address (855), and data to be written by the write command is written to the buffer unit of the newly allocated buffer container (860). And updating the FTL entry cache (865), adding an entry in the FTL entry cache, and recording an association relationship between the logical address indicated by the write command and the index of the newly allocated cache container in the added entry. Optionally, an index of the newly allocated cache container is also recorded in the FTL entry corresponding to the logical address (870).
According to the embodiment of the present application, optionally, data stored in the cache unit of the cache container indicated by one or more FTL entries is written to the NVM chip when needed or periodically. And in response to the data of the cache unit being written into the NVM chip, replacing the cache container index in the FTL entry with the NVM data frame address of the NVM chip and deleting the corresponding entry in the FTL entry cache.
Fig. 9 illustrates an FTL table according to still another embodiment of the present application.
The FTL table includes a plurality of FTL entries, each FTL entry corresponding to one of the small blocks, and the FTL entries record NVM data frame addresses or cache container indexes providing storage space for the small blocks.
Some FTL entries also record a validity bitmap, among others.
The logical address space corresponding to the tile is further divided into a plurality of regions. Each bit of the validity bitmap of the FTL entry indicates whether data stored in the cache unit in one of the regions of the small block is consistent with data recorded in the NVM chip. For example, the logical address space size of a small block is 4KB, the logical address space is divided into 8 regions, each having a size of 512 bytes. By way of example, in response to a first write of data to a small block, a cache unit of the cache container holds the written data, which has not been written to the NVM chip, and thus data of one or more regions of the logical address space corresponding to the cache unit is inconsistent with data stored by the NVM chip, and in the validity bitmap, the inconsistency of the one or more regions of the cache unit is marked. As another example, data corresponding to the small block is read from the NVM chip and stored in the cache unit, where data of the one or more regions of the logical address space corresponding to the cache unit is consistent with data stored by the NVM chip, and the one or more regions of the cache unit are marked by one or more bits of the validity bitmap to be consistent with data stored by the NVM chip.
Optionally, some FTL entry values record NVM data frame addresses, instead of cache container index, and these FTL entries do not include validity bitmap.
According to the embodiment of fig. 9, data is read from the memory device using the same or similar flow according to the embodiment illustrated in fig. 6C or fig. 8A.
FIG. 10A illustrates a flow diagram for writing data to a memory device according to the embodiment of FIG. 9 of the present application.
The storage device retrieves a write command provided by the host, the write command indicating a logical address (1010). And the control component of the storage device queries (1012) the FTL table according to the logical address and acquires the physical address corresponding to the logical address.
It is identified whether the resulting physical address corresponding to the logical address indicates a cache container index (1014). If the physical address of the logical address accessed by the write command provided by the FTL is the cache container index (1014), accessing the cache unit corresponding to the cache container index, and using the cache unit to carry the data to be written by the write command (1016).
And, since the data to be written by the write command is written only in the buffer unit and not in the NVM data frame, the data in the buffer unit is inconsistent with the data of the NVM data frame. The validity bitmap in the FTL entry is updated by accessing one or more areas of the small block according to the write command (1018).
If the physical address provided by the FTL corresponding to the logical address accessed by the write command is not a buffer container index (e.g., is an NVM data frame address, or other content) (1014), a new buffer container is allocated for the logical address (1020), and the data to be written by the write command is written into the buffer unit of the newly allocated buffer container (1022). The validity bitmap in the FTL entry is updated by accessing one or more areas of the small block according to the write command (1024).
Further, in response to the physical address corresponding to the logical address accessed by the write command not being the cache container index, it is also identified whether the data to be written by the write command occupies a complete cache location (or small block) (1026). For example, the size of the logical address space corresponding to the small block is 4KB, and if a write command writes 4KB of data into the logical address space, the data to be written by the write command occupies a complete cache unit; if a write command writes, for example, 2KB of data to the logical address space, the data to be written by the write command does not occupy the entire cache unit.
If the data to be written by the write command does not occupy the complete buffer memory unit (1026), the data in the partial logical address space of the small block which is not occupied by the data to be written by the write command is read from the NVM chip according to the physical address (for example, NVM data frame address) of the logical address accessed by the write command provided by the FTL, and is filled into the buffer memory unit (1028) to which the write command writes the data, so that the data corresponding to the complete logical address space of the small block (partially from the write command and partially from the NVM data frame) is filled into the buffer memory unit.
The buffer location of the buffer container, which now already holds the data corresponding to the complete logical address space of the small block, can respond to the access to the small block without having to retain the NVM data frame address of the data just read in the FTL entry. The index of the newly allocated cache container is also recorded in the FTL entry corresponding to the logical address of the write command (1030).
If the data to be written by the write command occupies the entire cache location (1026), go to step 1024 to update the validity bitmap in the FTL entry.
FIG. 10B illustrates yet another flow chart for writing data to a memory device according to the embodiment of FIG. 9 of the present application.
The storage device obtains a write command provided by the host, the write command indicating a logical address (1040). And the control part of the storage device inquires the FTL table (1042) according to the logical address and acquires the physical address corresponding to the logical address.
Identifying whether the obtained physical address corresponding to the logical address indicates a cache container index (1044). If the physical address of the logical address accessed by the write command provided by the FTL is the cache container index (1044), it is further identified whether the cache unit of the cache container corresponding to the cache container index is being used (1046). If the cache unit is not currently used, the cache unit is used to carry data to be written by the write command (1048). If the cache unit is currently being used, a new cache unit is allocated for the cache container (1050) and the new cache unit is used to carry the data to be written by the write command (1052).
And accessing one or more regions of the tile in accordance with the write command, updating a validity bitmap in the FTL entry (1054).
If the physical address of the logical address accessed by the write command provided by the FTL is not a buffer container index (e.g., is an NVM data frame address, or other content) (1044), a new buffer container is allocated for the logical address (1056), and the data to be written by the write command is written to the buffer unit of the newly allocated buffer container (1058).
Further, it is also identified whether the data to be written by the write command occupies a complete unit (or small block) of the buffer (1060). If the data to be written by the write command does not occupy the complete buffer location (1060), the data in the partial logical address space of the small block that is not occupied by the data to be written by the write command is also read from the NVM chip from the physical address (e.g., NVM data frame address) of the logical address accessed by the write command provided by the FTL and filled into the buffer location to which the write command writes the data (1062), so that the data corresponding to the complete logical address space of the small block (partially from the write command and partially from the NVM data frame) is filled into the buffer location. If the write command occupies the entire cache location to be written (1060), the process goes directly to step 1064.
The validity bitmap in the FTL entry is updated according to one or more areas of the small block accessed by the write command (1064). The index of the newly allocated cache container is also recorded in the FTL entry corresponding to the logical address of the write command (1066).
Alternatively, the step of identifying whether the data to be written by the write command occupies a complete unit (or small block) of the buffer is performed earlier. For example, after recognizing that the physical address of the logical address accessed by the write command provided by the FTL is the NVM data frame address, a step of recognizing whether the data to be written by the write command occupies a complete buffer unit (or a small block) is performed, and a step of reading out the data of a part of the logical address space of the small block not occupied by the data to be written by the write command from the NVM chip and filling the data into the buffer unit to which the write command writes the data is performed.
According to the FTL table illustrated in fig. 9, optionally, an FTL entry cache is also provided.
The FTL entry cache includes a plurality of entries (referred to as "cache entries") each of which records a logical address (LBA), a cache container index, and a validity bitmap in association. The cache entry corresponds to one of the FTL entries. Each value of the FTL table is an FTL entry indexed by the cache container, and has a corresponding cache entry. And the value of the FTL table is the FTL entry of the non-cache container index, and has no corresponding cache entry.
FIG. 10C illustrates yet another flow chart for writing data to a memory device according to the embodiment of FIG. 9 of the present application.
The storage device retrieves a write command provided by the host, the write command indicating a logical address (1070). A control component of the storage device queries (1072) the FTL entry cache according to the logical address to identify whether the logical address indicated by the write command hits in the FTL entry cache.
If the logical address indicated by the write hit hits in the FTL entry cache (1072), the cache container index is obtained from the cache entry where the FTL entry is hit. And accesses the cache location corresponding to the cache container and writes the data corresponding to the write command to the cache location (1074). And accessing one or more areas of the tile in accordance with the write command, updating a validity bitmap in the FTL entry (1076).
If the logical address of the write command misses in the FTL entry cache (1072). In this case, a new buffer container is allocated for the logical address (1078), and the data to be written by the write command is written in the buffer unit of the newly allocated buffer container (1080).
After recognizing that the physical address provided by the FTL for the logical address accessed by the write command is the NVM data frame address, in addition to allocating a new buffer container, it is also recognized whether the data to be written by the write command occupies a complete buffer unit (or small block) (1082). If the data to be written by the write command does not occupy the complete cache unit (1082), the FTL table (1084) is also queried according to the logical address of the write command, and the data in the partial logical address space of the small block that is not occupied by the data to be written by the write command is read from the NVM chip according to the physical address (e.g., NVM data frame address) of the logical address accessed by the write command provided by the FTL, and is filled into the cache unit (1086) to which the write command writes the data, so that the cache unit is filled with the data corresponding to the complete logical address space of the small block (partially from the write command and partially from the NVM data frame). If the data to be written by the write command occupies the entire buffer location (1082), go to step 1088.
And accessing one or more areas of the tile in accordance with the write command, updating a validity bitmap in the FTL entry (1088). And updating an FTL (flash translation) entry cache (1090), and adding a cache entry in the FTL entry cache to record the association relation between the logical address indicated by the write command and the index of the newly allocated cache container. Optionally, an index of the newly allocated cache container is also recorded in the FTL entry corresponding to the logical address of the write command (1092).
According to still another embodiment of the present application, the buffer container records the validity bitmap and the NVM data frame address in addition to its own buffer unit. The NVM data frame address of the buffer container is the address of the NVM data frame from which data is read to fill the buffer cell. Because the NVM data frame address is recorded in the buffer container, when the data from the write command is written into the buffer container, even if the data to be written by the write command does not occupy the whole buffer unit, the data does not need to be read out from the NVM data frame immediately to fill the buffer unit.
FIG. 11 illustrates a flow chart for writing data to a storage device according to the still another embodiment of the present application.
The storage device retrieves a write command provided by the host, the write command indicating a logical address (1110). And the control component of the storage device queries the FTL table (1120) according to the logical address and acquires the physical address corresponding to the logical address.
It is identified if the resulting physical address corresponding to the logical address indicates a cache container index (1130). If the physical address provided by the FTL and corresponding to the logical address accessed by the write command is the cache container index (1130), accessing the cache unit corresponding to the cache container index, and using the cache unit to carry the data to be written by the write command (1140). And, accessing one or more areas of the tile in accordance with the write command, updating a validity bitmap in the FTL entry (1150).
If the physical address provided by the FTL corresponding to the logical address accessed by the write command is not a buffer container index (e.g., is an NVM data frame address, or other content) (1130), then a new buffer container is allocated for the logical address, and the data to be written by the write command is written into the buffer unit of the newly allocated buffer container (1160). And recording the NVM data frame address corresponding to the logical address obtained from the FTL table in the newly allocated cache container (1170). The validity bitmap (1180) in the FTL entry is updated according to one or more areas of the tile accessed by the write command. The index of the newly allocated cache container is also recorded in the FTL entry corresponding to the logical address of the write command (1190).
Optionally, the step of updating the validity bitmap in the FTL entry according to the one or more regions of the small block accessed by the write command may be performed once the logical address indicated by the write command is obtained, and is independent of whether the physical address corresponding to that logical address indicates a cache container index.
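The following condensed C sketch summarizes the FIG. 11 write path (step numbers appear in the comments). The helper names are assumptions introduced for illustration; the sketch reuses the cache container structure sketched earlier and is not the patented firmware.

#include <stdint.h>
#include <stdbool.h>

struct cache_container;                      /* as sketched above */

/* Returns true if the FTL entry holds a cache container index (written to *idx),
 * false if it holds an NVM data frame address (written to *frame). Assumed helper. */
extern bool     ftl_lookup_is_container(uint64_t lba, uint32_t *idx, uint64_t *frame);
extern struct cache_container *container_by_index(uint32_t idx);
extern struct cache_container *alloc_cache_container(void);
extern uint32_t container_index(const struct cache_container *c);
extern void     write_unit(struct cache_container *c, const uint8_t *d, uint32_t off, uint32_t len);
extern void     set_validity_bits(struct cache_container *c, uint32_t off, uint32_t len);
extern void     record_frame_addr(struct cache_container *c, uint64_t frame);
extern void     ftl_update(uint64_t lba, uint32_t container_idx);

void handle_write(uint64_t lba, const uint8_t *data, uint32_t off, uint32_t len)
{
    uint32_t idx; uint64_t frame;
    /* 1120: query the FTL table for the physical address of the logical address */
    if (ftl_lookup_is_container(lba, &idx, &frame)) {          /* 1130: container index */
        struct cache_container *c = container_by_index(idx);
        write_unit(c, data, off, len);                         /* 1140: write into unit  */
        set_validity_bits(c, off, len);                        /* 1150: update bitmap    */
    } else {                                                   /* 1130: NVM frame address */
        struct cache_container *c = alloc_cache_container();   /* 1160: new container    */
        write_unit(c, data, off, len);
        record_frame_addr(c, frame);                           /* 1170: keep frame addr  */
        set_validity_bits(c, off, len);                        /* 1180: update bitmap    */
        ftl_update(lba, container_index(c));                   /* 1190: point FTL entry  */
    }
}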
FIG. 12A illustrates a flow chart of reading data from a storage device according to yet another embodiment of the present application.
The storage device obtains a read command provided by the host, the read command indicating a logical address (1210). The control component of the storage device queries the FTL table (1215) according to the logical address, and acquires a physical address corresponding to the logical address.
It is identified whether the obtained physical address corresponding to the logical address indicates a cache container index, an NVM data frame address, or another type of physical address (1220). If the physical address provided by the FTL for the logical address accessed by the read command is a cache container index (1220), it is further identified whether the cache container can provide the complete data to be read by the read command (1225). For example, if the logical address space corresponding to the data written in the cache container is 4KB in size and the read command reads 4KB of data from that logical address space, the cache container can provide the complete data to be read by the read command; if the read command reads, for example, 2KB of data from the logical address space but the logical address space corresponding to the data written in the cache container is only 1KB in size, the cache container cannot provide the complete data to be read by the read command.
If the cache container cannot provide the complete data to be read by the read command (1225), the NVM data frame address corresponding to the logical address accessed by the read command is obtained from the cache container (1230), and the part of the data that the cache container cannot provide is read from the NVM chip according to that address in response to the read command (1235). Optionally, for the part of the data that the cache container can provide, the data is obtained from the cache container in response to the read command.
If the cache container is able to provide the complete data to be read by the read command (1225), the data is retrieved from the cache container in response to the read command (1240).
If the physical address provided by the FTL for the logical address accessed by the read command is an NVM data frame address (1220), a read command is issued to the NVM chip to read out the data (1245).
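A condensed C sketch of the FIG. 12A read path is given below. The helper names (ftl_lookup_is_container, container_covers, copy_valid_regions, and so on) are assumptions for illustration only and do not come from the patent.

#include <stdint.h>
#include <stdbool.h>

struct cache_container;                      /* as sketched above */

extern bool     ftl_lookup_is_container(uint64_t lba, uint32_t *idx, uint64_t *frame);
extern struct cache_container *container_by_index(uint32_t idx);
extern bool     container_covers(const struct cache_container *c, uint32_t off, uint32_t len);
extern uint64_t container_frame_addr(const struct cache_container *c);
extern void     copy_valid_regions(const struct cache_container *c, uint8_t *d, uint32_t off, uint32_t len);
extern void     nvm_read_range(uint64_t frame, uint8_t *d, uint32_t off, uint32_t len);

void handle_read(uint64_t lba, uint8_t *out, uint32_t off, uint32_t len)
{
    uint32_t idx; uint64_t frame;
    /* 1215: query the FTL table */
    if (ftl_lookup_is_container(lba, &idx, &frame)) {          /* 1220: container index   */
        struct cache_container *c = container_by_index(idx);
        if (container_covers(c, off, len)) {                   /* 1225: full data cached  */
            copy_valid_regions(c, out, off, len);              /* 1240: serve from cache  */
        } else {                                               /* 1225: partially cached  */
            uint64_t f = container_frame_addr(c);              /* 1230: frame from container */
            nvm_read_range(f, out, off, len);                  /* 1235: missing part from NVM */
            copy_valid_regions(c, out, off, len);              /* cached regions take priority */
        }
    } else {                                                   /* 1220: NVM frame address */
        nvm_read_range(frame, out, off, len);                  /* 1245: read from NVM chip */
    }
}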
FIG. 12B illustrates another flow chart of reading data from a storage device according to yet another embodiment of the present application.
The storage device retrieves a read command provided by the host, the read command indicating a logical address (1240). A control component of the storage device queries (1245) the FTL entry cache according to the logical address in an attempt to acquire a physical address corresponding to the logical address as soon as possible.
If the logical address indicated by the read command hits in the FTL entry cache (1245), the cache container index is obtained from the hit cache entry of the FTL entry cache. It is also identified whether the cache container can provide the complete data to be read by the read command. If the cache container can provide the complete data to be read by the read command (e.g., as determined from the validity bitmap recorded in the cache entry) (a complete hit), the cache unit corresponding to the cache container is accessed and the data is read from the cache unit in response to the read command (1250).
If the cache container cannot provide the complete data to be read by the read command (an incomplete hit), the part of the logical address space (small block) that the cache container can provide for the read command and the part that it cannot provide are identified according to the validity bitmap recorded in the cache entry (1255). The data of the part of the logical address space that the cache container can provide is obtained from the cache unit of the cache container (1250). The FTL table is queried (1265) to obtain the physical address corresponding to the logical address of the read command and indicating the NVM data frame, and the data of the part of the logical address space that the cache container cannot provide for the read command is read out from the NVM data frame according to that physical address (1270).
Optionally, for the part of the logical address space that the cache container cannot provide for the read command, the FTL table is not queried; instead, the physical address indicating the NVM data frame corresponding to the logical address of the read command is obtained from the cache container (1260), and the data of that part of the logical address space is read from the NVM data frame according to the physical address (1270).
If the FTL entry cache has no cache entry recording the logical address indicated by the read command, the logical address of the read command misses in the FTL entry cache (1245). In this case, the FTL table is further queried to obtain the physical address, indicating an NVM data frame address, recorded in the FTL entry corresponding to the logical address of the read command, and data is read from the NVM data frame according to that physical address in response to the read command.
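The FIG. 12B path differs from FIG. 12A in that the FTL entry cache is consulted before the FTL table; a condensed sketch follows, again with assumed helper names. The optional step 1260 (taking the NVM data frame address from the cache container instead of the FTL table) is noted in a comment.

#include <stdint.h>
#include <stdbool.h>

struct cache_container;                      /* as sketched above */

extern bool     ftl_entry_cache_lookup(uint64_t lba, uint32_t *container_idx);   /* true on hit */
extern struct cache_container *container_by_index(uint32_t idx);
extern bool     container_covers(const struct cache_container *c, uint32_t off, uint32_t len);
extern void     copy_valid_regions(const struct cache_container *c, uint8_t *d, uint32_t off, uint32_t len);
extern uint64_t container_frame_addr(const struct cache_container *c);            /* optional path 1260 */
extern uint64_t ftl_lookup_frame(uint64_t lba);                                   /* path 1265 */
extern void     nvm_read_range(uint64_t frame, uint8_t *d, uint32_t off, uint32_t len);

void handle_read_with_entry_cache(uint64_t lba, uint8_t *out, uint32_t off, uint32_t len)
{
    uint32_t idx;
    if (ftl_entry_cache_lookup(lba, &idx)) {                    /* 1245: entry cache hit  */
        struct cache_container *c = container_by_index(idx);
        if (container_covers(c, off, len)) {                    /* complete hit           */
            copy_valid_regions(c, out, off, len);               /* 1250: serve from cache */
            return;
        }
        /* incomplete hit (1255): combine the cached and the NVM-resident regions */
        uint64_t frame = ftl_lookup_frame(lba);                 /* 1265 (or, per 1260, use
                                                                   container_frame_addr(c)
                                                                   instead of the FTL table) */
        nvm_read_range(frame, out, off, len);                   /* 1270: missing regions  */
        copy_valid_regions(c, out, off, len);                   /* 1250: cached regions   */
    } else {                                                    /* 1245: entry cache miss */
        uint64_t frame = ftl_lookup_frame(lba);                 /* query the FTL table    */
        nvm_read_range(frame, out, off, len);                   /* read from the NVM frame */
    }
}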
Embodiments according to the present application also provide a storage device including a control component and a nonvolatile memory chip, wherein the control component performs any one of the processing methods provided by the embodiments of the present application.
Embodiments according to the present application also provide a program stored on a readable medium, which when executed by a controller of a storage device, causes the storage device to perform any one of the processing methods provided according to the embodiments of the present application.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application. It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A method for a storage device, comprising:
acquiring a logical address for accessing the storage device;
the logical addresses are mapped to physical addresses by the FTL table.
2. The method for a memory device of claim 1, wherein the physical address provided by the one or more first FTL entries of the FTL table is a physical address to access the NVM chip and the physical address provided by the one or more second FTL entries of the FTL table is a physical address to access the cache.
3. The method for a memory device of claim 2, wherein the physical address provided by the one or more first FTL entries of the FTL table is an NVM data frame address of the NVM chip; the physical address provided by one or more second FTL entries of the FTL table is a cache container index of DRAM or SRAM.
4. The method for a storage device of any of claims 1-3, wherein in response to identifying that the physical address corresponding to the logical address indicates a cache container index, accessing the cache unit of the corresponding cache container according to the cache container index.
5. The method for a memory device of claim 4, wherein if the access logical address is a read command, reading data from the corresponding cache unit in response to the read command.
6. The method for the storage device according to any one of claims 1 to 3, wherein in response to identifying that the physical address corresponding to the logical address indicates an NVM data frame address, if the access to the logical address is a write command, allocating a new cache container for the logical address, and writing the data indicated by the write command into the cache unit of the newly allocated cache container.
7. The method for a storage device of claim 6, wherein after writing data to the cache unit of the newly allocated cache container, recording an index of the newly allocated cache container in the FTL entry corresponding to the logical address of the write command.
8. The method for the storage device according to claim 6, wherein after identifying that the physical address corresponding to the logical address indicates the NVM data frame address, or after writing the data indicated by the write command into the cache unit of the newly allocated cache container, identifying whether the data to be written by the write command occupies a complete cache unit.
9. The method for a storage device according to claim 8, wherein if it is identified that the data to be written by the write command does not occupy a complete first cache unit, reading, from the NVM chip, according to the physical address provided by the FTL for the logical address accessed by the write command, the data of the portion of the small block's logical address space that is not occupied by the data to be written by the write command, and filling the data into the first cache unit.
10. A storage device, comprising: a control component that performs the method of any of claims 1-9 to accomplish FTL management.
CN201811154190.7A 2018-09-30 2018-09-30 FTL provided caching Active CN110968527B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811154190.7A CN110968527B (en) 2018-09-30 2018-09-30 FTL provided caching


Publications (2)

Publication Number Publication Date
CN110968527A true CN110968527A (en) 2020-04-07
CN110968527B CN110968527B (en) 2024-05-28

Family

ID=70028672

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811154190.7A Active CN110968527B (en) 2018-09-30 2018-09-30 FTL provided caching

Country Status (1)

Country Link
CN (1) CN110968527B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080195801A1 (en) * 2007-02-13 2008-08-14 Cheon Won-Moon Method for operating buffer cache of storage device including flash memory
EP2423819A1 (en) * 2008-02-04 2012-02-29 Apple Inc. Memory mapping techniques
CN102279712A (en) * 2011-08-10 2011-12-14 北京百度网讯科技有限公司 Storage control method, system and device applied to network storage system
CN103425600A (en) * 2013-08-23 2013-12-04 中国人民解放军国防科学技术大学 Address mapping method for flash translation layer of solid state drive
US20170024326A1 (en) * 2015-07-22 2017-01-26 CNEX-Labs, Inc. Method and Apparatus for Caching Flash Translation Layer (FTL) Table
US20170109089A1 (en) * 2015-10-16 2017-04-20 CNEXLABS, Inc. a Delaware Corporation Method and Apparatus for Providing Hybrid Mode to Access SSD Drive
CN106502584A (en) * 2016-10-13 2017-03-15 记忆科技(深圳)有限公司 A kind of method of the utilization rate for improving solid state hard disc write buffer
CN108255420A (en) * 2017-12-22 2018-07-06 深圳忆联信息系统有限公司 A kind of solid state disk buffer memory management method and solid state disk

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113076189A (en) * 2020-04-17 2021-07-06 北京忆芯科技有限公司 Data processing system with multiple data paths and virtual electronic device constructed using multiple data paths
CN112559388A (en) * 2020-12-14 2021-03-26 杭州宏杉科技股份有限公司 Data caching method and device
CN112559388B (en) * 2020-12-14 2022-07-12 杭州宏杉科技股份有限公司 Data caching method and device

Also Published As

Publication number Publication date
CN110968527B (en) 2024-05-28

Similar Documents

Publication Publication Date Title
CN106448737B (en) Method and device for reading flash memory data and solid state drive
US9378131B2 (en) Non-volatile storage addressing using multiple tables
CN108595349B (en) Address translation method and device for mass storage device
CN107797759B (en) Method, device and driver for accessing cache information
CN107797934B (en) Method for processing de-allocation command and storage device
CN108228470B (en) Method and equipment for processing write command for writing data into NVM (non-volatile memory)
CN108614668B (en) KV model-based data access method and solid-state storage device
CN111512290A (en) File page table management techniques
KR20210028729A (en) Logical vs. physical table fragments
CN110968527B (en) FTL provided caching
CN108614671B (en) Key-data access method based on namespace and solid-state storage device
CN111352865B (en) Write caching for memory controllers
CN110096452B (en) Nonvolatile random access memory and method for providing the same
CN110865945B (en) Extended address space for memory devices
CN111290974A (en) Cache elimination method for storage device and storage device
CN111290975A (en) Method for processing read command and pre-read command by using unified cache and storage device thereof
CN110968520B (en) Multi-stream storage device based on unified cache architecture
WO2018041258A1 (en) Method for processing de-allocation command, and storage device
CN109840219B (en) Address translation system and method for mass solid state storage device
CN110968525B (en) FTL provided cache, optimization method and storage device thereof
CN109960667B (en) Address translation method and device for large-capacity solid-state storage device
CN110532199B (en) Pre-reading method and memory controller thereof
CN110297596B (en) Memory device with wide operating temperature range
CN110968525A (en) Cache provided by FTL (flash translation layer), optimization method thereof and storage device
CN108614669B (en) Key-data access method for solving hash collision and solid-state storage device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100192 room A302, building B-2, Dongsheng Science Park, Zhongguancun, 66 xixiaokou Road, Haidian District, Beijing

Applicant after: Beijing yihengchuangyuan Technology Co.,Ltd.

Address before: 100192 room A302, building B-2, Dongsheng Science Park, Zhongguancun, 66 xixiaokou Road, Haidian District, Beijing

Applicant before: BEIJING MEMBLAZE TECHNOLOGY Co.,Ltd.

GR01 Patent grant