CN107797759B - Method, device and driver for accessing cache information - Google Patents


Info

Publication number
CN107797759B
CN107797759B (granted publication of application CN201610819400.4A)
Authority
CN
China
Prior art keywords
command, logical address, read, write
Legal status
Active
Application number
CN201610819400.4A
Other languages
Chinese (zh)
Other versions
CN107797759A
Inventor
路向峰
孙清涛
Current Assignee
Beijing Memblaze Technology Co Ltd
Original Assignee
Beijing Memblaze Technology Co Ltd
Application filed by Beijing Memblaze Technology Co Ltd
Publication of CN107797759A (application publication)
Application granted
Publication of CN107797759B (granted publication)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0602: Interfaces specifically adapted to achieve a particular effect
    • G06F 3/061: Improving I/O performance
    • G06F 3/0611: Improving I/O performance in relation to response time
    • G06F 3/0628: Interfaces making use of a particular technique
    • G06F 3/0655: Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F 3/0659: Command handling arrangements, e.g. command buffers, queues, command scheduling

Abstract

Methods, apparatuses, and drivers for accessing cached information are disclosed. The disclosed method for accessing cached information comprises: acquiring a first IO command; determining, according to a logical address of the first IO command, whether the first IO command is associated with a second IO command in a first IO command set, wherein the first IO command set corresponds to a first cache line of the cache; and marking the association relationship between the first IO command and the second IO command. The disclosed scheme helps reduce IO command processing delay, thereby improving the speed and efficiency of IO command processing.

Description

Method, device and driver for accessing cache information
Technical Field
The invention relates to the field of storage, in particular to a technology for reducing IO command processing delay by using a front-end cache in a solid state disk.
Background
FIG. 1 is a block diagram of a storage device. The solid-state storage device 102 is coupled to a host to provide storage capabilities to the host. The host and the solid-state storage device 102 may be coupled in various ways, including but not limited to SATA (Serial Advanced Technology Attachment), SCSI (Small Computer System Interface), SAS (Serial Attached SCSI), IDE (Integrated Drive Electronics), USB (Universal Serial Bus), PCIe (Peripheral Component Interconnect Express), NVMe (NVM Express), Ethernet, Fibre Channel, or a wireless communication network. The host may be an information processing device capable of communicating with the storage device in the manners described above, such as a personal computer, tablet, server, portable computer, network switch, router, cellular telephone, or personal digital assistant. The storage device 102 includes an interface 103, a control component 104, one or more NVM (Non-Volatile Memory) chips 105, and a DRAM (Dynamic Random Access Memory) 110. NAND flash, phase change memory, FeRAM, MRAM, etc. are common NVMs. The interface 103 may be adapted to exchange data with the host by means such as SATA, IDE, USB, PCIe, NVMe, SAS, Ethernet, or Fibre Channel. The control component 104 is used to control data transfer among the interface 103, the NVM chips 105, and the DRAM 110, and is also used for memory management, mapping of host logical addresses to flash physical addresses, wear leveling, bad block management, and the like. The control component 104 can be implemented in a variety of ways, including software, hardware, firmware, or a combination thereof, for example in the form of an FPGA (Field-Programmable Gate Array), an ASIC (Application-Specific Integrated Circuit), or a combination thereof.
The control component 104 may also include a processor or controller in which software is executed to manipulate the hardware of the control component 104 to process IO commands. The control component 104 is also coupled to the DRAM 110 and can access its data. The DRAM may store the FTL table and/or cached IO command data. The control component 104 includes a flash interface controller (also referred to as a flash channel controller). The flash interface controller is coupled to the NVM chip 105, issues commands to the NVM chip 105 in a manner conforming to the interface protocol of the NVM chip 105 to operate it, and receives the command execution results output by the NVM chip 105. The interface protocol of the NVM chip 105 includes well-known interface protocols or standards such as "Toggle" and "ONFI".
A memory target (Target) is one or more logical units (Logic Units) sharing a chip enable (CE) signal within a NAND flash package. Each logical unit has a logical unit number (LUN). One or more dies (Die) may be included within a NAND flash package. Typically, a logical unit corresponds to a single die. A logical unit may include a plurality of planes (Planes). Multiple planes within a logical unit may be accessed in parallel, while multiple logical units within a NAND flash chip may execute commands and report status independently of each other. The meanings of target, logical unit, LUN, and plane are provided in the "Open NAND Flash Interface Specification (Revision 3.0)", available from http://www.micron.com//media/Documents/Products/Other%20Documents/ONFI3_0gold.ashx, which is part of the prior art.
Data is typically stored and read on a storage medium on a page-by-page basis. And data is erased in blocks. A block contains a plurality of pages. Pages on the storage medium (referred to as physical pages) have a fixed size, e.g., 17664 bytes. Physical pages may also have other sizes.
In the solid-state storage device, mapping information from logical addresses to physical addresses is maintained using an FTL (Flash Translation Layer). The logical addresses constitute the storage space of the solid-state storage device as perceived by upper-level software such as an operating system. The physical address is an address for accessing a physical memory location of the solid-state storage device. Address mapping may also be implemented in the prior art using an intermediate address form, e.g., mapping the logical address to an intermediate address, which in turn is further mapped to a physical address.
A table structure storing mapping information from logical addresses to physical addresses is called an FTL table. FTL tables are important metadata in solid state storage devices. Usually, the data entry of the FTL table records the address mapping relationship in the unit of data page in the solid-state storage device.
The FTL table includes a plurality of FTL table entries (or table entries). In one embodiment, each FTL table entry records a correspondence relationship between a logical page address and a physical page. In another example, each FTL table entry records the correspondence between consecutive logical page addresses and consecutive physical pages. In another embodiment, each FTL table entry records the corresponding relationship between the logical block address and the physical block address. In still another embodiment, the FTL table records the mapping relationship between logical block addresses and physical block addresses, and/or the mapping relationship between logical page addresses and physical page addresses.
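As an illustration of the FTL table entries described above, the mapping from logical page addresses to physical pages can be sketched as follows. This is a minimal model for exposition only, not the patent's implementation; all names are hypothetical:

```python
# Hypothetical sketch of an FTL table mapping logical page addresses (LPA)
# to physical page addresses (PPA); names are illustrative only.
class FTLTable:
    def __init__(self):
        self._entries = {}  # LPA -> PPA

    def update(self, lpa, ppa):
        # Record (or overwrite) the mapping for one logical page.
        self._entries[lpa] = ppa

    def translate(self, lpa):
        # Return the physical page for a logical page, or None if unmapped.
        return self._entries.get(lpa)

ftl = FTLTable()
ftl.update(100, 0x4A20)   # logical page 100 maps to physical page 0x4A20
assert ftl.translate(100) == 0x4A20
assert ftl.translate(99) is None
```

An entry recording a range of consecutive pages, as in the other embodiments above, would simply replace the single-page key with a (start, length) key.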
A volatile write cache (Volatile Write Cache) is defined in the NVMe standard. However, the NVMe standard does not define how the volatile write cache is implemented. In solid-state storage devices, there is also a demand to further reduce IO command processing delay.
Disclosure of Invention
The invention aims to provide a write-command cache for a solid-state storage device, to reduce the execution delay of IO commands by using the cache, and to improve the speed and efficiency of IO command execution.
According to a first aspect of the present invention, there is provided a method for accessing cached information, comprising: obtaining an IO command, wherein the IO command comprises a first logic address; judging whether the IO command hits a first cache line of a cache according to the first logic address; data is retrieved from the first cache line in response to the IO command.
According to a second aspect of the present invention, there is provided a method for accessing cached information, comprising: acquiring a first IO command; determining, according to a logical address of the first IO command, whether the first IO command is associated with a second IO command in a first IO command set, wherein the first IO command set corresponds to a first cache line of the cache; and marking the association relationship between the first IO command and the second IO command.
According to an embodiment of the second aspect of the invention, further comprising: adding the first IO command to the first set of IO commands.
According to an embodiment of the second aspect of the present invention, if the logical address of the second IO command includes the logical address of the first IO command, the second IO command is a write command, and the first IO command is a read command, the first IO command is associated with the second IO command.
According to an embodiment of the second aspect of the present invention, if the logical address of the second IO command includes the logical address of the first IO command, the second IO command is a prefetch command, and the first IO command is a read command, the first IO command is associated with the second IO command.
According to an embodiment of the second aspect of the present invention, the logical address of the second IO command including the logical address of the first IO command comprises: the logical address of the second IO command being the same as the logical address of the first IO command; or the logical address of the first IO command being a portion of the logical address of the second IO command.
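The "includes" relation between logical addresses can be sketched as follows, under the assumption that a logical address is modeled as a (start, length) range; this is an illustrative reading, not the patent's definition:

```python
def contains(addr_a, addr_b):
    """Return True if logical address range addr_a includes addr_b.

    Each address is a hypothetical (start, length) tuple; an identical range
    also counts as inclusion, matching the rule in the text.
    """
    start_a, len_a = addr_a
    start_b, len_b = addr_b
    return start_a <= start_b and start_b + len_b <= start_a + len_a

assert contains((100, 4), (100, 4))      # same logical address
assert contains((100, 4), (101, 2))      # a portion of the larger address
assert not contains((100, 4), (103, 2))  # extends past the end: not included
```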
According to an embodiment of the second aspect of the invention, further comprising: and if the first IO command is associated with the second IO command, responding to the completion of the execution of the second IO command, and acquiring data from a first cache line to respond to the first IO command.
According to an embodiment of the second aspect of the present invention, if the first IO command is associated with the second IO command, after the second IO command is executed, the first IO command is preferentially executed.
According to an embodiment of the second aspect of the invention, further comprising: removing the second IO command and all IO commands associated with the second IO command from the first set of IO commands after execution of the second IO command and all IO commands associated with the second IO command is complete.
According to an embodiment of the second aspect of the present invention, if the first IO command is not associated with any IO command in the first IO command set, the first IO command is added to the first IO command set, so that the first IO command is executed last among the IO commands in the first IO command set.
According to an embodiment of the second aspect of the present invention, the first IO command set is searched for a second IO command associated with the first IO command.
According to an embodiment of the second aspect of the present invention, data targeted by IO commands having the same logical address is cached only in a cache line corresponding to the same logical address.
According to an embodiment of the second aspect of the invention, further comprising: fetching a third IO command from the first IO command set; and if the third IO command is a write command, in response to the data of the third IO command being written into the first cache line, indicating to the host that the third IO command has been executed completely.
According to an embodiment of the second aspect of the invention, further comprising: if a fourth IO command having an association relation with the third IO command exists in the first IO command set, acquiring data from the first cache line as a response to the fourth IO command; and removing the third IO command and the fourth IO command from the first IO command set after the fourth IO command is processed.
According to a third aspect of the present invention, there is provided an apparatus for accessing cached information, comprising: the IO command acquisition module is used for acquiring a first IO command; a cache association detection module, configured to determine whether the first IO command is associated with a second IO command in a first IO command set according to a logical address of the first IO command, where the first IO command set corresponds to a first cache line of the cache; and the marking module is used for marking the association relationship between the first IO command and the second IO command.
According to a fourth aspect of the present invention, there is provided a method of accessing cached information, comprising: acquiring a first write command; determining whether the first write command is associated with a second write command in a first IO command set according to a logical address of the first write command, wherein the first IO command set corresponds to a first cache line of the cache; merging the first write command with the second write command.
According to an embodiment of the fourth aspect of the invention, further comprising: adding the first write command to the first set of IO commands.
According to an embodiment of the fourth aspect of the present invention, if the data targeted by the first write command and the data targeted by the second write command have the same logical address, the first write command is associated with the second write command.
According to an embodiment of the fourth aspect of the present invention, if the first write command and the second write command target data having the same logical address, the data of the first write command or of the second write command is taken as the data to be written.
According to an embodiment of the fourth aspect of the present invention, if the first write command and the second write command access different portions of the same logical address range, the data targeted by the first write command or the second write command is merged, and the merged data is used as the data to be written.
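The two merge cases above (identical range: either command's data may be taken as the data to be written; different portions of the same range: the data are combined) can be sketched as follows. Byte offsets within a single cache line are an assumption made for illustration:

```python
def merge_writes(line_size, first, second):
    """Merge two write commands targeting the same cache line.

    Each command is a hypothetical (offset, data: bytes) pair; `second` is
    the later command, so its bytes overwrite `first` where they overlap.
    Returns the merged cache-line content as a list of byte values
    (None marks bytes not written by either command).
    """
    line = [None] * line_size
    for off, data in (first, second):
        for i, b in enumerate(data):
            line[off + i] = b
    return line

# Identical range: the later command's data is taken as the data to be written.
assert bytes(merge_writes(4, (0, b"AAAA"), (0, b"BBBB"))) == b"BBBB"
# Different portions of the same logical address range are combined.
assert bytes(merge_writes(4, (0, b"AA"), (2, b"CC"))) == b"AACC"
```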
According to one embodiment of the fourth aspect of the present invention, the first write command and the second write command are adjacent.
According to an embodiment of the fourth aspect of the present invention, if the first write command is associated with the second write command, the first write command is preferentially executed after the second write command is executed.
According to an embodiment of the fourth aspect of the present invention, after the second write command and all IO commands associated with the second write command are executed, the second write command and all IO commands associated with the second write command are removed from the first IO command set.
According to an embodiment of the fourth aspect of the present invention, if the first write command is not associated with any write command in the first IO command set, the first write command is added to the first IO command set, so that the first write command is executed last among the IO commands in the first IO command set.
According to an embodiment of the fourth aspect of the present invention, the first IO command set is searched for a second write command associated with the first write command.
According to an embodiment of the fourth aspect of the present invention, data targeted by IO commands having the same logical address is cached only in a cache line corresponding to the same logical address.
According to an embodiment of the fourth aspect of the invention, further comprising: fetching a third write command from the first IO command set; and, in response to the data of the third write command being written into the first cache line, indicating to the host that the third write command has been executed completely.
According to an embodiment of the fourth aspect of the invention, further comprising: if a fourth write command which has an association relation with the third write command exists in the first IO command set, acquiring data from the first cache line as a response to the fourth write command; and removing the third write command and the fourth write command from the first IO command set after the fourth write command processing is completed.
According to a fifth aspect of the present invention, there is provided an apparatus for accessing cached information, comprising: the IO command acquisition module is used for acquiring a first write command; a cache association detection module, configured to determine whether the first write command is associated with a second write command in a first IO command set according to a logical address of the first write command, where the first IO command set corresponds to a first cache line of the cache; and the merging module is used for merging the first write command and the second write command.
According to a sixth aspect of the present invention, there is provided a solid state drive comprising: one or more processors; a memory; a program stored in the memory, which when executed by the one or more processors, causes the solid state drive to perform the method as described above.
According to a seventh aspect of the present invention, there is provided a computer-readable storage medium storing a program which, when executed by an apparatus, causes the apparatus to perform the method described above.
Drawings
FIG. 1 shows a block diagram of a prior art storage device;
FIG. 2 illustrates a block diagram of a control component of a storage device according to an embodiment of the invention;
FIG. 3 is a diagram illustrating the composition of a front-end cache according to an embodiment of the invention;
FIG. 4 shows a flow diagram of a method of accessing cached information, according to an embodiment of the invention;
FIG. 5 is a diagram illustrating the composition of a front-end cache according to yet another embodiment of the invention;
FIG. 6 illustrates a schematic diagram of an IO command set in accordance with an embodiment of the present invention;
FIG. 7 illustrates a correspondence between IO command sets and cache lines according to an embodiment of the invention;
FIG. 8A is a flow diagram of adding a command to a set of IO commands in accordance with an embodiment of the present invention;
FIG. 8B is a flow diagram of a process for fetching a command from an IO command set in accordance with an embodiment of the present invention;
FIG. 9 is a diagram illustrating an association relationship between IO commands, according to an embodiment of the invention;
FIG. 10 is a flow diagram of adding commands to a set of IO commands in accordance with yet another embodiment of the present invention;
FIG. 11 is a diagram illustrating an association relationship between IO commands according to yet another embodiment of the present invention; and
FIG. 12 is a flow diagram of adding commands to an IO command set in accordance with another embodiment of the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the terms "first," "second," and the like in this disclosure are used merely for convenience in referring to objects, and are not intended to limit the number and/or order.
FIG. 2 illustrates a block diagram of a control component of a storage device according to an embodiment of the invention. The control unit 104 includes a host interface 210, a front-end processing module 220, a flash management module 230, and a back-end processing module 240.
The host interface 210 is used to exchange commands and data with the host. In one example, the host and the storage device communicate via the NVMe/PCIe protocol; the host interface 210 parses PCIe protocol data packets, extracts NVMe protocol commands, and returns the processing results of the NVMe protocol commands to the host. The flash management (FTL) module 230 converts the logical address of a flash access command into a physical address and manages the flash memory, providing services such as wear leveling and garbage collection. The back-end processing module 240 accesses the one or more NVM chips according to the physical address. Processing before the FTL is referred to as front-end processing, and processing after the FTL is referred to as back-end processing. The control component 104 is also coupled to an external memory (e.g., RAM) 260. A portion of the space of the memory 260 is used as a front-end cache (front-end cache 265), which the front-end processing module 220 accesses. Optionally, a front-end cache module 225 is provided within the control component 104 for use as the front-end cache.
Fig. 3 shows a schematic diagram of the composition of a front-end cache according to an embodiment of the invention. As shown in fig. 3, front-end cache 300 includes a plurality of cache lines (see fig. 3, cache line 310, cache line 320, cache line 330, and cache line 340). Each cache line includes metadata and data. The metadata of the cache line records the corresponding logical address of the cache line. The front-end processing module 220 determines whether the front-end cache hits by comparing the logical address of the IO command with the logical address recorded in the metadata. The metadata of the cache line may also record information such as the status of the cache line. The data part of the cache line stores data corresponding to the IO command, and for a read command, the data part of the cache line records data obtained from the NVM chip, and for a write command, the data part of the cache line records data sent by the host and to be written into the NVM chip. Thus, the front-end cache 300 according to embodiments of the present invention may be used not only as a volatile write cache that supports accelerated write command processing as defined in the NVMe protocol, but also as a cache to accelerate read operations.
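A cache line with a metadata part (logical address and state) and a data part, and the hit check performed by comparing logical addresses against that metadata, might be modeled as below; the field names and state values are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CacheLine:
    lba: Optional[int] = None   # metadata: logical address cached by this line
    state: str = "empty"        # metadata: cache line state, e.g. "empty", "dirty"
    data: bytes = b""           # data: host data to be written, or data read from NVM

def lookup(cache_lines, lba):
    # A hit means the IO command's logical address matches a line's metadata.
    for line in cache_lines:
        if line.lba == lba:
            return line
    return None

cache = [CacheLine(lba=100, state="dirty", data=b"data1"), CacheLine()]
assert lookup(cache, 100).data == b"data1"
assert lookup(cache, 99) is None
```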
Fig. 4 shows a flow diagram of a method of accessing cached information according to an embodiment of the invention. In an embodiment according to the invention, an IO command is obtained (410). The IO command carries the logical address to be accessed. Whether the IO command hits the cache is judged according to the logical address of the acquired IO command (420); for example, the logical address of the IO command is compared with the logical address recorded in the metadata of each cache line of the cache to determine whether the command hits. If the logical address of the IO command hits the cache, data is retrieved from the hit cache line as a response to the IO command (430).
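The flow of FIG. 4 (obtain an IO command at 410, judge the hit at 420 by comparing logical addresses, and respond from the hit cache line at 430) can be sketched as follows, with `read_from_nvm` as a hypothetical stand-in for the miss path through the FTL:

```python
def handle_read(cache, lba, read_from_nvm):
    """Serve a read IO command from the hit cache line if possible (430),
    otherwise via the supplied NVM-read fallback (miss path)."""
    for line_lba, data in cache.items():   # step 420: compare logical addresses
        if line_lba == lba:
            return data                    # step 430: respond from the cache line
    return read_from_nvm(lba)              # miss: go through the FTL/NVM path

cache = {100: b"data1", 101: b"data2"}
assert handle_read(cache, 100, lambda lba: b"nvm") == b"data1"
assert handle_read(cache, 98, lambda lba: b"nvm") == b"nvm"
```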
It should be understood that the term "access" as described above includes the basic operations of reading, writing, etc. information and any other operations that interact with information in the cache. The term "information" as used herein may include any content stored in a cache line, including metadata and data corresponding to the metadata. Obviously, the information here also includes logical addresses, state of cache lines, etc.
In the embodiment of the present invention, "obtaining an IO command" may be implemented in various ways. For example, an IO command is received from the host, for example, a read command and/or a write command is received, or a prefetch command is generated by the control unit 104 (see fig. 2).
In an embodiment consistent with the invention, a "hit" indicates that the logical address included in the IO command matches the logical address stored in the cache line, such that the IO command may be determined to hit the cache line.
Example A
By way of example, the front-end processing module 220 (see fig. 2) splits an NVMe command into IO commands having fixed-size data (e.g., 512 bytes, 2KB, or 4KB), and the data portion of a cache line can accommodate the data of one IO command. Each IO command also indicates a logical address. For example, one NVMe command indicates writing 4KB of data to each of the logical addresses LBA 100-LBA 102. The NVMe command is split into 3 IO commands, where one IO command indicates writing 4KB of data to logical address LBA 100. The front-end processing module 220 allocates a cache line 320 from the front-end cache 300 (see FIG. 3), fills the metadata portion of cache line 320 with the logical address (see LBA 100, FIG. 3) or a portion thereof, and fills the data portion of cache line 320 with the 4KB of data provided by the host to be written to logical address LBA 100 (see FIG. 3, data 1). Optionally, the state of cache line 320 is also marked in its metadata portion to indicate that cache line 320 has been written with data. Similarly, the front-end processing module 220 also allocates a cache line 330 from the front-end cache 300 to record the writing of 4KB of data to logical address LBA 101 (see fig. 3, data 2 filled in cache line 330), and allocates a cache line 340 to record the writing of 4KB of data to logical address LBA 102 (fig. 3, data 3 filled in cache line 340).
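The splitting of one NVMe command into fixed-size per-LBA IO commands, as in the example above, might look like the sketch below (a 4KB unit is assumed; the function name is hypothetical):

```python
def split_nvme_write(start_lba, data, unit=4096):
    """Split one NVMe write command into per-LBA IO commands of `unit` bytes.

    Returns a list of (lba, data_chunk) pairs, one per IO command, each of
    which would then be recorded in its own cache line.
    """
    assert len(data) % unit == 0, "data length must be a multiple of the unit"
    return [(start_lba + i, data[i * unit:(i + 1) * unit])
            for i in range(len(data) // unit)]

# One NVMe command writing 3 x 4KB starting at LBA 100 yields 3 IO commands.
cmds = split_nvme_write(100, b"\x00" * (3 * 4096))
assert [lba for lba, _ in cmds] == [100, 101, 102]
assert all(len(chunk) == 4096 for _, chunk in cmds)
```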
Optionally, all 3 IO commands corresponding to the NVMe command are written into the front-end cache 300, and a message is further sent to the host to indicate that the NVMe command processing is completed. The access speed of the front-end cache 300 is much higher than that of the NVM chip 105, so that after the NVMe command is split into IO commands and filled into a cache line of the front-end cache, a message is sent to the host to indicate that the NVMe command is processed completely, and the processing delay of the NVMe command is greatly reduced.
Next, the front-end processing module receives an NVMe read command instructing data to be read from logical addresses LBA 98-LBA 101. The NVMe command is split into 4 read IO commands: reading 4KB of data from each of logical addresses LBA 98, LBA 99, LBA 100, and LBA 101. When the front-end processing module processes a read IO command, it checks whether the read IO command hits the front-end cache. A hit is determined by comparing the LBA address to be read by the read IO command with the logical addresses recorded in the front-end cache 300. For example, for the IO commands accessing logical addresses LBA 98 and LBA 99, these two logical addresses are not recorded in the front-end cache 300, so the two IO commands miss the front-end cache. The front-end processing module 220 sends these two IO commands to the FTL module 230 to obtain the physical addresses and accesses the NVM chip to obtain the data to be read. For the IO commands with logical addresses LBA 100 and LBA 101, the metadata portions of cache line 320 and cache line 330 of the front-end cache 300 record LBA 100 and LBA 101, so these two IO commands hit the front-end cache 300. In response, the front-end processing module 220 accesses the hit cache line 320 and cache line 330, retrieves the cached data 1 and data 2, and uses data 1 and data 2 as responses to the IO commands accessing logical addresses LBA 100 and LBA 101. Since the access speed of the front-end cache 300 is much higher than that of the NVM chip 105, retrieving data from a hit cache line as the response to a read IO command greatly reduces the processing delay of the IO command.
Further optionally, for the IO commands accessing logical addresses LBA 98 and LBA 99, although they miss the front-end cache 300 and the data is read from the NVM chip 105, the data read from the NVM chip 105 is filled into the front-end cache 300 in association with logical addresses LBA 98 and LBA 99, so that the data can be read from the front-end cache 300 the next time logical addresses LBA 98 and LBA 99 are read.
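This optional fill-on-miss behavior can be sketched as follows; the cache is modeled as a simple logical-address-to-data map for illustration:

```python
def read_with_fill(cache, lba, read_from_nvm):
    """On a miss, read from NVM and fill the front-end cache in association
    with the logical address, so the next read of the same LBA hits
    (illustrative read-allocate policy)."""
    if lba in cache:
        return cache[lba]
    data = read_from_nvm(lba)
    cache[lba] = data          # associate the fetched data with the LBA
    return data

cache = {}
calls = []
def nvm(lba):
    calls.append(lba)
    return b"nvm-%d" % lba

assert read_with_fill(cache, 98, nvm) == b"nvm-98"  # miss: reads from NVM
assert read_with_fill(cache, 98, nvm) == b"nvm-98"  # second read hits the cache
assert calls == [98]                                # NVM was touched only once
```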
Example B
In embodiment B, the front-end processing module 220 populates the front-end cache based on the write IO command and the prefetch command. For example, in response to receiving the NVMe write command, the front-end processing module 220 breaks the NVMe write command into one or more write IO commands, allocates a cache line from the front-end cache 300, and records the logical address and data of the write IO command in the cache line. Alternatively, if the cache line allocation from the front-end cache 300 fails, the write IO command is sent to the FTL module 230 without being recorded in the front-end cache 300.
The front-end processing module 220 also generates prefetch commands, reads data from the NVM chip 105 according to a prefetch command, and fills the read data and the corresponding logical address into the front-end cache 300. The front-end processing module 220 generates prefetch commands based on a variety of policies. For example, a prefetch command is generated according to the working state of the solid-state storage device: when the solid-state storage device is powered on, a prefetch command for the operating system image of the host is generated.
In another example, the orderliness of NVMe read commands is predicted, such as generating a prefetch command to logical addresses LBA 102-LBA 105 in response to receiving an NVMe read command to logical addresses LBA 98-LBA 101.
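A sequential-prediction policy of this kind (a read of LBA 98-LBA 101 triggering a prefetch of LBA 102-LBA 105) might be sketched as follows; the same-size following window is an assumed policy, not the patent's only option:

```python
def predict_prefetch(start_lba, count):
    """Given a sequential NVMe read of `count` LBAs starting at `start_lba`,
    predict the next range to prefetch (hypothetical policy: an equally
    sized window immediately following the read)."""
    return list(range(start_lba + count, start_lba + 2 * count))

# A read of LBA 98-LBA 101 generates a prefetch of LBA 102-LBA 105.
assert predict_prefetch(98, 4) == [102, 103, 104, 105]
```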
Fig. 5 is a diagram illustrating the composition of a front-end cache according to embodiment B of the present invention. Referring to fig. 5, in response to receiving an NVMe write command to write data 0 to LBA 50, cache line 510 of the front-end cache 500 is filled. In response to receiving the NVMe read command for logical addresses LBA 98-LBA 101, the front-end processing module 220 splits the NVMe read command into a plurality of read IO commands. When a read IO command hits the front-end cache 500, the data required by the read IO command is obtained from the front-end cache; when a read IO command misses the front-end cache 500, the read IO command is sent to the FTL module 230. The front-end cache 500 is not filled regardless of whether the read IO commands hit the front-end cache 500. However, the front-end processing module 220 generates a prefetch command, prefetches data from logical addresses LBA 102-LBA 105, and fills cache lines 520, 530, 540, and 550 of the front-end cache 500 with logical addresses LBA 102-LBA 105 and the corresponding prefetched data (data 3, data 4, data 5, and data 6). The resulting state of the front-end cache 500 is shown in fig. 5. Next, if a read IO command accessing logical address LBA 50 or LBA 102-LBA 105 is received, it will hit the front-end cache 500.
FIG. 6 shows a schematic diagram of an IO command set in accordance with an embodiment of the invention. The command set includes a plurality of IO commands. The command set shown in fig. 6 includes a command to read logical address 100, i.e., read LBA 100 (680). The set also includes an IO command to write LBA 104 (670), an IO command to write LBA 102 (660), an IO command to read LBA 104-1 (650), an IO command to read LBA 104-2 (640), an IO command to prefetch LBA 104 (630), an IO command to read LBA 100-1 (620), and an IO command to write LBA 100-2 (610). Write LBA 104 represents a write to logical address 104, while read LBA 104-1 and read LBA 104-2 represent reads of the first portion and the second portion, respectively, of the data at logical address 104. Prefetch LBA 104 represents a prefetch operation for logical address 104, and write LBA 100-2 represents a write to the second portion of logical address 100.
Whenever an IO command is received, the received IO command is added to the set of IO commands. There are many ways of organizing the set of IO commands. In the example of FIG. 6, the received IO commands are organized as queues in the IO command set in a first-in-first-out manner. The newly received IO command is added to the tail of the set of IO commands (see FIG. 6, IO command (610) to write LBA100-2 is at the tail of the set of IO commands), while the command is fetched from the head of the set of IO commands (see FIG. 6, IO command (680) to read LBA100 is at the head of the set of IO commands) for processing.
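The first-in-first-out organization described above can be sketched with a double-ended queue. The command labels follow fig. 6; the data structure itself is an illustrative assumption.

```python
from collections import deque

# Minimal sketch of the FIFO IO command set: newly received IO
# commands join the tail, and commands are fetched from the head
# for processing.
io_command_set = deque(["read LBA 100"])       # 680, current head
for cmd in ["write LBA 104", "write LBA 102", "read LBA 104-1",
            "read LBA 104-2", "prefetch LBA 104", "read LBA 100-1",
            "write LBA 100-2"]:
    io_command_set.append(cmd)                 # new commands go to the tail

head = io_command_set.popleft()                # fetched for processing
```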
The IO command set is associated with one cache line of the front-end cache.
Embodiment C
FIG. 7 shows an association relationship between an IO command set and a cache line according to embodiment C of the present invention.
As shown in fig. 7, front-end cache 700 includes a plurality of cache lines: cache line 710, cache line 720, cache line 730, and cache line 740. The cache lines and logical addresses are mapped in a direct-mapped manner: each cache line corresponds to a predetermined plurality of logical addresses, and the data of a logical address is cached only in the cache line corresponding to that logical address. Alternatively, in another example, the cache lines and logical addresses are mapped in a multi-way set-associative manner.
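One concrete way to realize the direct mapping above is to index the cache line by the logical address modulo the number of lines. The modulo rule is an assumption made for illustration; the description only states that each line covers a predetermined set of logical addresses.

```python
# Hedged sketch of direct mapping with the 4 cache lines of fig. 7:
# each logical address maps to exactly one cache line.
NUM_CACHE_LINES = 4

def cache_line_index(lba):
    return lba % NUM_CACHE_LINES

# Under this rule, LBA 100 and LBA 104 fall on the same cache line,
# while LBA 101 and LBA 105 fall together on a different one -
# matching the mapping used in the example below.
```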
With continued reference to FIG. 7, a corresponding set of IO commands is provided for each cache line. IO command set 760 corresponds to cache line 710, IO command set 762 corresponds to cache line 720, IO command set 764 corresponds to cache line 730, and IO command set 766 corresponds to cache line 740. When an IO command is received, a corresponding cache line is determined according to the logic address of the IO command, and the IO command is filled into an IO command set corresponding to the cache line.
For example, front-end processing module 220 (see fig. 2) generates a plurality of IO commands, including a write command to update logical address LBA 100, a read command to read LBA 100, a read command to read LBA 104, a prefetch command to prefetch LBA 101, a read command to read part 1 of LBA 101 (LBA 101-1), a read command to read part 2 of LBA 101 (LBA 101-2), and a write command to write LBA 105. Since LBA 100 and LBA 104 are mapped to cache line 710, and LBA 101 and LBA 105 are mapped to cache line 720, the IO commands accessing LBA 100 and LBA 104 are filled into IO command set 760, and the IO commands accessing LBA 101 and LBA 105 are filled into IO command set 762.
The front-end processing module 220 is also responsible for processing the set of IO commands. For example, in response to fetching the write command that updates logical address LBA 100 from the IO command set 760, the corresponding cache line 710 is filled with logical address LBA 100 and the data to be written. Optionally, the write command is also sent to the FTL module 230, and a message indicating that write command processing is complete is sent to the host. Next, in response to fetching the read command that reads logical address LBA 100 from the IO command set 760, it is identified that cache line 710 stores the logical address of the read command (LBA 100), i.e., the read command hits cache line 710, and data is fetched from cache line 710 as the response to the read command. Next, in response to fetching the read command that reads logical address LBA 104 from the IO command set 760, it is identified that cache line 710 does not store the logical address of the read command (LBA 104), i.e., the read command misses cache line 710. In this case, the read command is sent to the FTL module 230 to retrieve the data to be read by accessing the NVM chip.
As another example, in response to fetching the prefetch command for logical address LBA 101 from the IO command set 762, the prefetch command is sent to the FTL module, and the result of reading the NVM chip 105 is filled into cache line 720 together with logical address LBA 101. Next, in response to fetching the read command for part 1 of logical address LBA 101 (LBA 101-1) from the IO command set 762, it is identified that cache line 720 stores the logical address of the read command (LBA 101), i.e., the read command hits cache line 720, and data is fetched from cache line 720 as the response to the read command accessing LBA 101-1. Next, in response to fetching the read command for part 2 of logical address LBA 101 (LBA 101-2) from the IO command set 762, it is likewise identified that the read command hits cache line 720, and data is fetched from cache line 720 as the response to the read command accessing LBA 101-2. Next, in response to fetching the write command that updates logical address LBA 105 from the IO command set 762, cache line 720 is updated with the write command; optionally, the write command is also sent to the FTL module 230 and a message indicating that write command processing is complete is sent to the host.
As another example, in response to fetching the read command that reads logical address LBA 102 from the IO command set 764, the read command is sent to the FTL module 230 to obtain the data to be read by accessing the NVM chip. Next, in response to fetching the write command that updates part 1 of logical address LBA 102 (LBA 102-1) from the IO command set 764, the next pending command in the IO command set 764 is also checked. Since the next command to be processed is a write command that updates part 2 of logical address LBA 102 (LBA 102-2), the two write commands are merged: logical address LBA 102 is filled into cache line 730, the data of the two write commands in the IO command set 764 is written into cache line 730, and the two write commands are merged into one new write command and forwarded to the FTL module to write the data to the NVM chip 105.
FIG. 8A is a flow diagram of adding a command to a set of IO commands in accordance with an embodiment of the present invention. In the embodiment of FIG. 8A, a first IO command to be processed is obtained (810), its logical address is determined, and the cache line and IO command set corresponding to the first IO command are identified. In an embodiment according to the invention, one IO command set is provided for each cache line. The IO command set is traversed to determine whether the first IO command is associated with a second IO command of the set (820). If a second IO command having an association relationship with the first IO command exists in the IO command set, the association relationship between the first IO command and the second IO command is marked in the IO command set (830). If there is no second IO command in the set that has an association with the first IO command, the first IO command is added to the IO command set (840), for example, to its tail.
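The fig. 8A flow can be sketched as follows. This is a simplified model: commands are plain dictionaries, and the association predicate covers only the same-address write/prefetch-before-read rule discussed later; all names are illustrative.

```python
# Sketch of steps 810-840: traverse the IO command set; if an
# association exists, mark it (830); otherwise append the new
# command to the tail of the set (840).

def is_associated(first, second):
    # A later read depends on an earlier write or prefetch of the
    # same logical address (one of the rules described in the text).
    return (first["op"] == "read"
            and second["op"] in ("write", "prefetch")
            and second["lba"] == first["lba"])

def add_to_command_set(command_set, first_cmd):
    for second_cmd in command_set:                                     # step 820
        if is_associated(first_cmd, second_cmd):
            second_cmd.setdefault("dependents", []).append(first_cmd)  # step 830
            return
    command_set.append(first_cmd)                                      # step 840

cmd_set = [{"op": "prefetch", "lba": 100}]
add_to_command_set(cmd_set, {"op": "read", "lba": 100})   # associated, marked
add_to_command_set(cmd_set, {"op": "write", "lba": 108})  # unrelated, to tail
```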
FIG. 8B is a flow diagram of a process for fetching a command from an IO command set in accordance with an embodiment of the present invention. In the embodiment of fig. 8B, to process an IO command in the IO command set, a first IO command is obtained from the IO command set and processed (850). The first IO command may be the command at the head of the IO command set, i.e., the command that was earliest added to the set. In response to the first IO command being processed, it is checked whether there is a second IO command in the set that is associated with the first IO command (860). If such a second IO command exists, it is responded to with the data of the cache line corresponding to the IO command set (870), and the processed second IO command is removed from the set. If a plurality of IO commands in the set are associated with the first IO command, all of them can be responded to from the data of the cache line. After all IO commands associated with the first IO command have been processed, the first IO command and all IO commands associated with it are removed from the IO command set. If no second IO command associated with the first IO command exists in the set, the first IO command is removed from the IO command set, and the next IO command in the set is processed.
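The fig. 8B flow can be sketched in the same simplified model: process the head of the set, answer its dependents from the cache line, and remove the head together with them. Structures and names are illustrative assumptions.

```python
# Sketch of steps 850-870: pop the head command, apply its effect to
# the cache line, then respond to every dependent command using the
# cache line's data. Dependents are removed along with the head.

def process_head(command_set, cache_line):
    first = command_set.pop(0)                      # step 850
    if first["op"] in ("write", "prefetch"):
        cache_line[first["lba"]] = first.get("data")
    responses = []
    for dep in first.get("dependents", []):         # steps 860-870
        responses.append((dep["op"], cache_line[dep["lba"]]))
    return responses

cache_line = {}
cmd_set = [{"op": "write", "lba": 100, "data": "d0",
            "dependents": [{"op": "read", "lba": 100}]},
           {"op": "read", "lba": 104}]
resp = process_head(cmd_set, cache_line)
```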
There are a number of ways to determine whether there is an association between IO commands. For example, for a preceding IO command and a following IO command having the same logical address: if the preceding IO command is a write command and the following IO command is a read command, the following IO command is associated with the preceding IO command; if the preceding IO command is a prefetch command and the following IO command is a read command, the following IO command is associated with the preceding IO command. Optionally, the preceding and following IO commands need not be immediately adjacent.
In another example, according to another embodiment, if the logical address range of the preceding IO command includes the logical address range of the following IO command, then: if the preceding IO command is a prefetch command and the following IO command is a read command, the following IO command is associated with the preceding IO command; and if the preceding IO command is a write command and the following IO command is a read command, the following IO command is likewise associated with the preceding IO command. For example, a preceding IO command prefetches or updates a 4KB space starting from LBA 100, while a following IO command reads any 1KB portion of that 4KB space. In this case, there is also an association relationship between these two IO commands.
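The range-based rule above can be sketched as a small predicate. Ranges are modeled as (start, length) pairs in bytes; the byte-granular model, the 4KB sector size, and all names are illustrative assumptions.

```python
# Sketch: a later read command is associated with an earlier write or
# prefetch command whose logical address range contains the read's range.

def range_contains(outer, inner):
    o_start, o_len = outer
    i_start, i_len = inner
    return o_start <= i_start and i_start + i_len <= o_start + o_len

def is_associated(earlier, later):
    return (later["op"] == "read"
            and earlier["op"] in ("write", "prefetch")
            and range_contains(earlier["range"], later["range"]))

# The earlier command prefetches the 4KB space starting at LBA 100;
# the later command reads a 1KB portion inside that space.
prefetch = {"op": "prefetch", "range": (100 * 4096, 4096)}
read_1kb = {"op": "read", "range": (100 * 4096 + 1024, 1024)}
```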
FIG. 9 shows an association relationship between IO commands according to an embodiment of the present invention. Referring to FIG. 9, front-end cache 900 includes a plurality of cache lines: cache line 910, cache line 920, cache line 930, and cache line 940. The cache line may include data therein.
With continued reference to FIG. 9, the IO commands corresponding to each cache line are added to an IO command set; in FIG. 9, IO command set 960 corresponds to cache line 910. In general, a newly received IO command corresponding to cache line 910 is added to the tail of IO command set 960, while IO commands are fetched from the head of IO command set 960 for execution. In set 960, the head is the prefetch command for logical address LBA 100, the tail is the read command for logical address LBA 104, and the command following the prefetch command for logical address LBA 100 is the read command for logical address LBA 104.
As shown in FIG. 9, a read command for portion 1 of LBA 100 (read LBA 100-1) is received. According to the logical address LBA 100 of the read command, it is determined that the read command corresponds to cache line 910, and accordingly the IO command set 960 is traversed to find the IO command on which the read command (read LBA 100-1) depends. In IO command set 960, execution of the command to prefetch LBA 100 will produce the data required by the read command to read LBA 100-1; thus the read command depends on, or is associated with, the prefetch command for LBA 100. In set 960-2, the read command to read LBA 100-1 is associated with the prefetch command to prefetch LBA 100 to mark this dependency, and the command following the prefetch command for logical address LBA 100 is still the read command for logical address LBA 104. Next, a read command for portion 2 of LBA 100 (read LBA 100-2) is received. By traversing the IO command set 960, the read command for portion 2 of LBA 100 (LBA 100-2) is found to depend on the prefetch command for LBA 100. In IO command set 960-2, the read command to read LBA 100-2 is likewise marked as depending on, or associated with, the prefetch command to prefetch LBA 100. At this point, the structure of the IO command set corresponding to cache line 910 is shown by IO command set 960-2. Optionally, upon adding the read command for LBA 100-2 to IO command set 960-2, since the read command for LBA 100-1 is already associated with the prefetch command for LBA 100, the read command for LBA 100-2 is recorded in IO command set 960-2 as associated with the read command for LBA 100-1, to indicate that the read command for LBA 100-2 was added to IO command set 960-2 later than the read command for LBA 100-1.
If the logical address of a received IO command has no correspondence with the logical address of any IO command in the IO command set, the received IO command is considered unrelated to any IO command in the set. The newly received IO command is then added to the tail of the IO command set, so that it is executed last among the IO commands currently in the set.
For example, next, a write command is received to update the LBA 108. Because there is no IO command in the IO command set 960-2 that has an association relationship with the write command to update the LBA 108, the write command to update the LBA 108 is inserted into the tail of the IO command set 960-2, and the IO command set 960-4 is obtained.
In an embodiment according to the present invention, the process of fetching an IO command from the IO command set corresponding to a cache line and processing it may be performed simultaneously with the process of adding new IO commands to the set. Commands are fetched from the head of the IO command set. Taking IO command set 960-4 as an example, the head of the set is the prefetch command for LBA 100. The prefetch command is processed, filling the prefetched data into cache line 910 corresponding to IO command set 960-4. Next, since the read command to read LBA 100-1 and the read command to read LBA 100-2 depend on the prefetch command for LBA 100, both read commands are responded to with the data cached in cache line 910, so that they are executed preferentially, immediately after the prefetch command for LBA 100.
After all IO commands that depend on the prefetch command for LBA 100 have been processed, the prefetch command and the IO commands that depend on it are removed from IO command set 960-4. Next, the read command to read LBA 104 becomes the head of the IO command set and is processed. Since the read command to read LBA 104 is not depended on by any IO command, after its processing is complete, it is removed from the head of set 960-4 so that the other IO commands of set 960-4 can be processed. Next, the write command to write LBA 108 becomes the head of IO command set 960-4. If, while this write command is being processed, a read command to read LBA 108 is received, the new read command is marked in set 960-4 as associated with the write command to write LBA 108, and after the write command is processed, data is obtained from cache line 910 to respond to the read command to read LBA 108.
In an embodiment according to the present invention, each set of IO commands corresponds to a particular cache line, in other words, each IO command in the set of IO commands is associated with a cache line corresponding to the set of IO commands.
In a further embodiment according to the present invention, to further reduce the latency of IO command processing, when a write command in the IO command set is processed, completion of the write command is indicated to the host as soon as the data of the write command has been written to the cache line. Although at this time the data has only been written into the cache line and not yet into the NVM chip, the completion message is sent to the host in advance, without waiting for the data to reach the NVM chip. This advantageously reduces the latency of write command execution.
In the above embodiment, a newly received IO command may have an association relationship with a write command or a prefetch command in the IO command set when the received IO command is a read command. Thus, if the received IO command is not a read command, optionally, it is added directly to the tail of the IO command set. According to another embodiment, when the newly received IO command is a write command, it can also have an association relationship with an IO command in the IO command set.
FIG. 10 is a flow diagram of adding commands to a set of IO commands in accordance with yet another embodiment of the present invention. In the embodiment of fig. 10, a method of accessing cached information, comprising: obtaining a first write command (1010); determining whether the first write command is associated with a second write command of a first set of IO commands according to a logical address of the first write command, wherein the first set of IO commands corresponds to one cache line of a cache (1020); and merging the first write command with a second write command in the set of IO commands (1030).
The case where two write commands have an association relationship is described below with reference to fig. 11.
Referring to fig. 11, front-end cache 1100 includes a plurality of cache lines: cache line 1110, cache line 1120, cache line 1130, and cache line 1140. The cache lines include data. The IO commands corresponding to each cache line are organized into an IO command set. In FIG. 11, IO command set 1162 (1162-0, 1162-1, 1162-2, and 1162-3) is the IO command set composed of the IO commands corresponding to cache line 1120. Generally, a newly received IO command corresponding to cache line 1120 is added to the tail of IO command set 1162, while IO commands are fetched from the head of IO command set 1162 for execution. As an example, IO commands accessing logical addresses LBA 101, LBA 105, and LBA 109 correspond to cache line 1120.
In FIG. 11, reference numerals 1162-0, 1162-1, 1162-3, 1162-4, 1162-5, and 1162-6 indicate the states of set 1162 at different points in time. In IO command set 1162-0, the head is the read command to read logical address LBA 105, the tail is the write command to update logical address LBA 109 (1170), and the command following the read command for logical address LBA 105 is the write command to update logical address LBA 109 (1170). The new IO command currently received is another write command to update logical address LBA 109 (1172). In set 1162-0, the read command to read logical address LBA 109 is associated with, or dependent on, the write command to update logical address LBA 109 (1170).
In response to receiving the write command (1172) to update logical address LBA 109, the IO command set 1162-0 is traversed to find the write command (1170) to update logical address LBA 109 in the set. Since the write command (1172) and the write command (1170) access the same logical address, the write command (1172) is merged with the write command (1170); the merged write command (1174) is denoted write LBA 109' (IO command set 1162-1). There are different ways to merge write commands. If the write command (1170) and the write command (1172) update the same logical address, then to merge them, the data of the later write command (1172) is used as the data to be written by the merged write command (1174), and the logical address of either write command is used as the logical address of the write command (1174). If the write command (1170) and the write command (1172) update different parts of LBA 109, the result of applying the two writes to LBA 109 in succession is used as the data to be written by the merged write command (1174), and the logical address written by the write command (1174) is the union of the logical addresses updated by the write commands (1170) and (1172). Merging write commands reduces the number of write commands to be processed and improves the processing efficiency of the solid-state storage device.
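The two merge cases described above can be sketched by modeling each write command as a dictionary of updates keyed by the (part of the) logical address it touches. This keyed model and all names are illustrative simplifications.

```python
# Sketch of write-command merging: where the two writes overlap, the
# later write's data wins; the merged command covers the union of
# both logical address ranges.

def merge_writes(earlier, later):
    merged = dict(earlier)
    merged.update(later)   # later data overrides overlapping keys
    return merged

# Case 1: both writes update the same logical address -> later data wins.
same = merge_writes({109: "old"}, {109: "new"})

# Case 2: the writes update different parts of LBA 109 -> the merged
# command carries the union of both updates.
parts = merge_writes({"109-1": "a"}, {"109-2": "b"})
```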
Note that, referring to FIG. 11, in IO command set 1162-0 the read command to read LBA 109 is associated with the write command (1170) to update LBA 109, while in IO command set 1162-1 the read command to read LBA 109 is associated with the merged write command (1174). Thus, when the write command (1174) is processed and the read command to read LBA 109 is responded to with the data from cache line 1120, a different result will be read. In some cases, such a processing result complies with the storage protocol, or is what the user expects.
There are a number of ways to incorporate write commands that update the same logical address. Referring to FIG. 11, the manner in which write commands are merged for another embodiment is indicated in connection with the IO command sets 1162-3, 1162-4, 1162-5, and 1162-6.
In IO command set 1162-3, the head is the read command to read logical address LBA 105, and the tail is the write command (1182) to update logical address LBA 109. The command following the read command for logical address LBA 105 is the write command (1180) to update logical address LBA 109, and the command following the write command (1180) is another write command (1182) to update logical address LBA 109. In set 1162-3, the read command to read logical address LBA 109 is associated with, or dependent on, the write command (1180) to update logical address LBA 109.
Next, a prefetch IO command for logical address LBA 101 is received. Since LBA 101, the logical address of the prefetch IO command, corresponds to cache line 1120, set 1162 is traversed to find an IO command on which the prefetch IO command depends. In IO command set 1162-3, there is no such IO command, so the prefetch IO command is added to the tail of set 1162-3 (see IO command set 1162-4) as the command following the write command that updates logical address LBA 109. Next, a write command to update logical address LBA 105 is received. This write command does not depend on, and is not associated with, any IO command, so it is added to the tail of set 1162-4. Next, a read command for part 1 of logical address LBA 101 (LBA 101-1) is received; set 1162 is traversed, and the read command is found to depend on, or be associated with, the prefetch IO command for LBA 101, so in set 1162-4 the read command for LBA 101-1 is associated with the prefetch IO command for LBA 101. Next, a read command for part 2 of logical address LBA 101 (LBA 101-2) is received; set 1162-4 is traversed, and the read command is found to depend on the prefetch IO command for LBA 101, so in set 1162-4 the read command for LBA 101-2 is associated with the prefetch IO command for LBA 101. In FIG. 11, reference numeral 1162-4 indicates the state of the IO command set at this time.
It should be noted that the process of fetching and processing an IO command from the IO command set corresponding to the cache line may be performed simultaneously with the process of adding a new IO command to the IO command set.
Further, as shown in FIG. 11, taking set 1162-4 as an example, the head of the IO command set is the read command to read logical address LBA 105. The read command is processed, and the write command (1180) to update logical address LBA 109 becomes the head of set 1162 (1162-5). The write command (1180) is processed next, filling cache line 1120 with the data to be written. The read command to read logical address LBA 109 is identified as depending on, or associated with, the write command (1180); it is responded to with the data in cache line 1120 and removed from the IO command set. Next, the command following the write command (1180) is identified as the write command (1182); since the write command (1180) and the write command (1182) update the same logical address LBA 109, the two write commands are merged. Based on the merged write command, cache line 1120 is updated and the data is written to the NVM chip.
In fig. 11, the two write commands (1180 and 1182) shown in the IO command set 1162-5 are adjacent in time, but the invention is not limited thereto, and the two write commands may also be non-adjacent in time, that is, the two non-adjacent write commands with the same logical address are merged, and the merged data is used as the data to be written.
With continued reference to FIG. 11, after the write command (1180) is merged with the write command (1182) and the data is written to the NVM chip, the merged write command is removed from IO command set 1162-5, and the prefetch IO command for LBA 101 becomes the head of the IO command set (1162-6). The prefetch IO command for LBA 101 is executed, filling the prefetched data into cache line 1120. The read commands for part 1 of logical address LBA 101 (LBA 101-1) and part 2 of logical address LBA 101 (LBA 101-2) are identified as depending on the prefetch IO command for logical address LBA 101, and both are responded to with the data in cache line 1120.
FIG. 12 is a flow diagram of adding commands to an IO command set in accordance with another embodiment of the present invention. As seen in the description above, some read commands in the IO command set miss the cache line. In the embodiment of FIG. 12, a prefetch command is generated for read commands that miss the cache line in a set of IO commands, either periodically or under specified conditions (e.g., when the IO command set grows larger than a specified size, or in response to an indication by the user) (1210). Preferably, multiple read commands accessing the same logical address are identified in the IO command set, and a single prefetch command is generated for these read commands. The generated prefetch command is then added to the IO command set, the structure of the set is adjusted, and the read commands served by the generated prefetch command are associated with it (1220). Thus, when the prefetch command is executed, the read commands associated with it can be identified and responded to with the data of the cache line. In this way, the number of reads of the NVM chip is reduced, and the operating efficiency of the solid-state storage device is improved.
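The fig. 12 flow can be sketched under the same simplified command model used earlier: scan the set for read commands that miss the cache, emit one prefetch command per distinct missed logical address, and attach the missed reads to it. All structures and names are illustrative assumptions.

```python
# Sketch of steps 1210-1220: generate prefetch commands for missed
# reads, with one prefetch serving every read of the same logical
# address, so the NVM chip is read only once per missed address.

def generate_prefetches(command_set, cached_lbas):
    missed = {}
    for cmd in command_set:
        if cmd["op"] == "read" and cmd["lba"] not in cached_lbas:
            missed.setdefault(cmd["lba"], []).append(cmd)     # step 1210
    prefetches = []
    for lba, reads in missed.items():
        prefetches.append({"op": "prefetch", "lba": lba,
                           "dependents": reads})              # step 1220
    return prefetches

cmd_set = [{"op": "read", "lba": 104}, {"op": "read", "lba": 104},
           {"op": "read", "lba": 100}]
new_cmds = generate_prefetches(cmd_set, cached_lbas={100})
```

Here both reads of LBA 104 miss and share one generated prefetch command, while the read of LBA 100 hits the cache and needs none.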
The methods and apparatus of the present invention may be implemented in hardware, software, firmware, or any combination thereof. The hardware may include digital circuitry, analog circuitry, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), and so forth. The software may include computer readable programs which, when executed by a computer, implement the methods of the present invention.
For example, the present invention may be embodied as a solid state drive, which may include: one or more processors; a memory; a program stored in the memory, which when executed by the one or more processors, causes the solid state drive to perform the method as described above.
The software of the invention may also be stored in a computer readable storage medium, such as a hard disk, an optical disk, etc., which stores a program that, when executed by an apparatus, causes the apparatus to perform the method described above.
The foregoing description is merely exemplary rather than exhaustive of the present invention, and those skilled in the art may add, delete, modify, replace, etc. the above methods, apparatuses, devices, modules, etc. without departing from the spirit and scope of the present invention.

Claims (9)

1. A method of accessing cached information, comprising:
acquiring a first IO command;
determining a first command set corresponding to the first IO command according to the logic address of the first IO command;
judging whether the first IO command is associated with a second IO command in a first IO command set according to the logical address of the first IO command, wherein the first IO command set corresponds to the cached first cache line, and the second IO command is located in the first IO command set;
marking the incidence relation between the first IO command and the second IO command;
if the first IO command is associated with the second IO command, preferentially executing the first IO command after the second IO command is executed; if the first IO command is not associated with any IO command in the first set of IO commands, adding the first IO command to the first set of IO commands, such that the first IO command is executed last among the IO commands in the first set of IO commands.
2. The method of claim 1, wherein,
if the logical address of the second IO command includes the logical address of the first IO command, the second IO command is a write command, and the first IO command is a read command, then the first IO command is associated with the second IO command.
3. The method of claim 1 or 2, wherein
if the logical address of the second IO command includes the logical address of the first IO command, the second IO command is a prefetch command, and the first IO command is a read command, then the first IO command is associated with the second IO command.
4. The method of claim 1 or 2, further comprising:
if the first IO command is associated with the second IO command, in response to completion of execution of the second IO command, acquiring data from the first cache line to respond to the first IO command.
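A minimal sketch of claim 4, under assumed names (`on_write_complete`, `off`, `length`, `after` are all hypothetical): once the associated write finishes filling the first cache line, each read marked as associated with it is answered directly from that cache line rather than from the storage medium.

```python
# Illustrative sketch only; field names are assumptions for this example.
from types import SimpleNamespace

def on_write_complete(write_cmd, cache_line, pending_reads):
    # Answer every read associated with the completed write from the
    # cache line the write just filled.
    results = []
    for rd in pending_reads:
        if rd.after is write_cmd:
            results.append((rd, bytes(cache_line[rd.off:rd.off + rd.length])))
    return results

# Example: a 4-byte read at offset 2 is served from the cached write data.
line = bytearray(b"ABCDEFGH")
w = SimpleNamespace(op="write")
r = SimpleNamespace(op="read", off=2, length=4, after=w)
assert on_write_complete(w, line, [r]) == [(r, b"CDEF")]
```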
5. The method of claim 1 or 2, further comprising:
removing the second IO command and all IO commands associated with the second IO command from the first IO command set after execution of the second IO command and all IO commands associated with it is complete.
6. The method of claim 1 or 2, wherein
data targeted by IO commands having the same logical address is cached only in a cache line corresponding to that logical address.
7. The method of claim 1 or 2, further comprising:
fetching a third IO command from the first IO command set; and
if the third IO command is a write command, in response to the data of the third IO command being written into the first cache line, indicating to the host that the third IO command has been executed completely.
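A sketch of the early write completion in claim 7, with assumed names (`execute_write`, `off`, `payload`): the write is reported to the host as complete as soon as its data lands in the first cache line, without waiting for the data to reach the underlying medium.

```python
# Illustrative sketch only; names are assumptions for this example.
from types import SimpleNamespace

def execute_write(cmd, cache_line, completions):
    # Copy the write payload into the cache line, then immediately
    # indicate completion to the host (write-back style completion).
    cache_line[cmd.off:cmd.off + len(cmd.payload)] = cmd.payload
    completions.append(cmd)

line = bytearray(8)
done = []
cmd = SimpleNamespace(op="write", off=0, payload=b"HI")
execute_write(cmd, line, done)
assert done == [cmd] and line[:2] == b"HI"
```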
8. A solid state drive, comprising:
one or more processors;
a memory;
a program stored in the memory, which when executed by the one or more processors, causes the solid state drive to perform the method of any of claims 1-7.
9. An apparatus for accessing cached information, comprising:
an IO command acquisition module configured to acquire a first IO command;
a command set determining module configured to determine, according to the logical address of the first IO command, a first IO command set corresponding to the first IO command;
a cache association detection module configured to determine, according to the logical address of the first IO command, whether the first IO command is associated with a second IO command in the first IO command set, wherein the first IO command set corresponds to a first cache line of the cache, and the second IO command is located in the first IO command set;
a marking module configured to mark the association between the first IO command and the second IO command; and
an execution module configured to, if the first IO command is associated with the second IO command, execute the first IO command preferentially after execution of the second IO command is complete; and if the first IO command is not associated with any IO command in the first IO command set, add the first IO command to the first IO command set, such that the first IO command is executed last among the IO commands in the first IO command set.
CN201610819400.4A 2016-09-05 2016-09-12 Method, device and driver for accessing cache information Active CN107797759B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2016108051570 2016-09-05
CN201610805157 2016-09-05

Publications (2)

Publication Number Publication Date
CN107797759A CN107797759A (en) 2018-03-13
CN107797759B true CN107797759B (en) 2021-05-18

Family

ID=61529567

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610819400.4A Active CN107797759B (en) 2016-09-05 2016-09-12 Method, device and driver for accessing cache information

Country Status (1)

Country Link
CN (1) CN107797759B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102523327B1 (en) * 2018-03-19 2023-04-20 에스케이하이닉스 주식회사 Memory controller and memory system having the same
CN108519858B (en) * 2018-03-22 2021-06-08 雷科防务(西安)控制技术研究院有限公司 Memory chip hardware hit method
CN108897491B (en) * 2018-05-30 2021-07-23 郑州云海信息技术有限公司 Heterogeneous hybrid memory quick access optimization method and system
CN110580227B (en) * 2018-06-07 2024-04-12 北京忆恒创源科技股份有限公司 Adaptive NVM command generation method and device
CN108874685B (en) * 2018-06-21 2021-10-29 郑州云海信息技术有限公司 Data processing method of solid state disk and solid state disk
KR102596964B1 (en) * 2018-07-31 2023-11-03 에스케이하이닉스 주식회사 Data storage device capable of changing map cache buffer size
CN110941571B (en) * 2018-09-05 2022-03-01 合肥沛睿微电子股份有限公司 Flash memory controller and related access method and electronic device
CN111290975A (en) * 2018-12-07 2020-06-16 北京忆恒创源科技有限公司 Method for processing read command and pre-read command by using unified cache and storage device thereof

Citations (1)

Publication number Priority date Publication date Assignee Title
CN1585934A (en) * 2001-11-12 2005-02-23 英特尔公司 Method and apparatus for read launch optimizations in memory interconnect

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
US9003159B2 (en) * 2009-10-05 2015-04-07 Marvell World Trade Ltd. Data caching in non-volatile memory
JP2011090460A (en) * 2009-10-21 2011-05-06 Toshiba Corp Data storage device and method of controlling the same
CN102053914B (en) * 2009-10-30 2013-07-31 慧荣科技股份有限公司 Memory device and data access method for memory unit
JP5296041B2 (en) * 2010-12-15 2013-09-25 株式会社東芝 Memory system and memory system control method
CN102103548B (en) * 2011-02-22 2015-06-10 中兴通讯股份有限公司 Method and device for increasing read-write rate of double data rate synchronous dynamic random access memory
CN102681952B (en) * 2012-05-12 2015-02-18 北京忆恒创源科技有限公司 Method for writing data into memory equipment and memory equipment
CN103473184B (en) * 2013-08-01 2016-08-10 记忆科技(深圳)有限公司 The caching method of file system and system
CN104407933B (en) * 2014-10-31 2018-10-02 华为技术有限公司 A kind of backup method and device of data


Also Published As

Publication number Publication date
CN107797759A (en) 2018-03-13

Similar Documents

Publication Publication Date Title
CN107797759B (en) Method, device and driver for accessing cache information
CN107797760B (en) Method and device for accessing cache information and solid-state drive
CN106448737B (en) Method and device for reading flash memory data and solid state drive
US20110231598A1 (en) Memory system and controller
CN109164976B (en) Optimizing storage device performance using write caching
CN111061655B (en) Address translation method and device for storage device
TW201314452A (en) System and method to buffer data
CN103543955A (en) Method and system for reading cache with solid state disk as equipment and solid state disk
CN108228483B (en) Method and apparatus for processing atomic write commands
JP7030942B2 (en) Memory device and its control method
CN111625482B (en) Sequential flow detection method and device
CN111352865B (en) Write caching for memory controllers
CN106502584B (en) A method of improving the utilization rate of solid state hard disk write buffer
CN110515861B (en) Memory device for processing flash command and method thereof
CN111290975A (en) Method for processing read command and pre-read command by using unified cache and storage device thereof
CN111290974A (en) Cache elimination method for storage device and storage device
CN110968527A (en) FTL provided caching
CN115993930A (en) System, method and apparatus for in-order access to data in block modification memory
CN113254363A (en) Non-volatile memory controller with partial logical to physical address translation table
CN109960667B (en) Address translation method and device for large-capacity solid-state storage device
JP6378111B2 (en) Information processing apparatus and program
CN110532199B (en) Pre-reading method and memory controller thereof
CN112947845A (en) Thermal data identification method and storage device thereof
CN109840219B (en) Address translation system and method for mass solid state storage device
CN110580227B (en) Adaptive NVM command generation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address

Address after: 100192 room A302 / 303 / 305 / 306 / 307, 3rd floor, building B-2, Dongsheng Science Park, Zhongguancun, 66 xixiaokou Road, Haidian District, Beijing

Patentee after: Beijing yihengchuangyuan Technology Co.,Ltd.

Address before: 100192 room A302 / 303 / 305 / 306 / 307, 3rd floor, B-2, Zhongguancun Dongsheng Science Park, 66 xixiaokou Road, Haidian District, Beijing

Patentee before: MEMBLAZE TECHNOLOGY (BEIJING) Co.,Ltd.
