CN117032594B - Read command scheduling method, processing method, device and storage equipment

Read command scheduling method, processing method, device and storage equipment

Info

Publication number
CN117032594B
CN117032594B (application CN202311297584.9A)
Authority
CN
China
Prior art keywords
command
read
read command
linked list
cache
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311297584.9A
Other languages
Chinese (zh)
Other versions
CN117032594A (en)
Inventor
蔡述楠
孙清涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Memblaze Technology Co Ltd
Original Assignee
Beijing Memblaze Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Memblaze Technology Co Ltd
Priority to CN202311297584.9A
Publication of CN117032594A
Application granted
Publication of CN117032594B
Status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0602 Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061 Improving I/O performance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/901 Indexing; Data structures therefor; Storage structures
    • G06F16/9024 Graphs; Linked lists
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656 Data buffering arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671 In-line storage system
    • G06F3/0673 Single storage device
    • G06F3/0679 Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for program control, e.g. control units
    • G06F9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5011 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
    • G06F9/5022 Mechanisms to release resources

Abstract

The disclosure relates to a read command scheduling method, a read command processing method, a device, and a storage device. In the disclosed embodiments, a command received in pre-read mode is identified as either a read command or a read cache command. When a first preset condition is satisfied, if the command is identified as a read command, the command manager processes it; when a second preset condition is satisfied, if the command is identified as a read cache command, the command manager assigns it to the data manager for processing. Through this cooperation, the command manager and the data manager apply different scheduling strategies to read commands and read cache commands, which reduces read command processing latency in pre-read scenarios, improves system response speed, and improves sequential read performance in sequential read scenarios.

Description

Read command scheduling method, processing method, device and storage equipment
Technical Field
The embodiments of the disclosure relate to the field of storage technology, and in particular to a read command scheduling method, a read command processing method, a device, and a storage device.
Background
With the development of disk technology, the Solid State Drive (SSD) has gradually replaced the mechanical Hard Disk Drive (HDD), improving data storage stability and data read efficiency.
When a host accesses an SSD, a read IO (Input/Output) must first read data from the NAND flash memory into the cache and then from the cache to the host. Processed serially, these two steps incur a long delay. If the read IOs are sequential (for example, the addresses accessed by a series of IO commands are contiguous or nearly contiguous, so that the commands belong to the same sequential stream), data can be read from the NAND flash memory into the cache in advance by prediction, achieving the effect of parallel processing.
Since the pre-read data is already in the cache, when the host issues a read request the SSD can serve it directly from the cache, without spending the time and resources needed to read the data from the NAND flash memory.
However, if the pre-read data is insufficient to satisfy the next read request, the SSD must read the data from the NAND flash memory and wait out the access time. This waiting increases read latency, which is the read command delay problem in SSD pre-read scenarios. Likewise, if the contiguously read data region in a pre-read request is large, or the read workload is heavy, the SSD's sequential pre-reading may fail and cause read delays: when the amount of contiguously read data is very large, controller cache size limits may prevent the SSD from staging all of the pre-read data, in which case the SSD has to read data from the NAND flash memory and read latency occurs.
Disclosure of Invention
At least one embodiment of the present disclosure provides a read command scheduling method, a processing method, a device, a storage device, and a medium, so as to reduce read command delay in a pre-read scenario and improve sequential read performance.
In a first aspect, an embodiment of the present disclosure provides a method for scheduling a read command, including:
in response to identifying a read command or a read cache command in the pre-read mode: when a first preset condition is met, if the command is identified as a read command, the command manager processes the read command;
when a second preset condition is met, if the command is identified as a read cache command, the command manager assigns the read cache command to the data manager for processing;
wherein the first preset condition and the second preset condition are determined according to different IO command queue depths and/or IO data block sizes.
In some embodiments, prior to identifying the read command or the read cache command in the read-ahead mode, the read command scheduling method further comprises:
receiving a read command in a pre-read mode, and generating a request descriptor corresponding to the read command, wherein the request descriptor is used for describing relevant information of the read command; the related information comprises a starting position of an address accessed by a read command and a data block size accessed by the read command;
Identifying the read command or the read cache command in the read-ahead mode includes:
determining, based on the starting position, whether the read command hits the read cache;
if the read command does not hit the read cache, identifying it as a read command;
if the read command hits the read cache, identifying it as a read cache command.
In some embodiments, when the first preset condition is met, if the command is identified as a read command, the command manager processing the read command includes:
if the queue depth of the IO command maintained by the command manager is smaller than or equal to a preset first depth, or the size of a data block accessed by the read command is smaller than the size of a preset data block and the queue depth of the IO command maintained by the command manager is smaller than or equal to a preset second depth, the command manager processes the read command; wherein the first depth is less than the second depth.
In some embodiments, when the second preset condition is met, if a read cache command is identified, the command manager assigns the read cache command to the data manager for processing, including:
if the queue depth of the IO command maintained by the command manager is smaller than or equal to a preset third depth, the command manager distributes the read cache command to the data manager for processing; wherein the third depth is greater than the second depth.
In some embodiments, the command manager assigns the read cache command to the data manager for processing, including:
and selecting the data manager with the least number of the processed read commands from the plurality of data managers to process the read commands, or adopting a preset load balancing strategy to select one data manager to process the read commands.
In a second aspect, an embodiment of the present disclosure further provides a method for processing a read command, where a linked list is maintained in advance, the linked list is used to associate multiple cache addresses, and each linked list item in the linked list includes a cache interval and a pointer to the next associated linked list item. The method includes:
in response to receiving a read command scheduled based on the read command scheduling method provided by any embodiment of the first aspect, determining a first data block size accessed by the read command and a second data block size accessed by a previous read command;
if the size of the first data block is different from that of the second data block, adjusting a linked list item pointer at the tail part corresponding to the read command and a linked list item pointer at the tail part corresponding to the last command so that the linked list can cache the data accessed by the read command;
based on the starting position of the address accessed by the read command and the first data block size, the data accessed by the read command is cached in a linked list.
In some embodiments, determining a first data block size for read command access includes:
if a read command is received, the read command is analyzed to obtain the first data block size accessed by the read command;
if a request descriptor is received, a first data block size accessed by a read command included in the request descriptor is extracted.
In some embodiments, adjusting the linked list item pointer of the tail corresponding to the read command and the linked list item pointer of the tail corresponding to the previous command includes:
disconnecting the linked list item pointer at the tail part corresponding to the read command, and pointing the linked list item pointer at the tail part corresponding to the last command to the next adjacent linked list item.
In a third aspect, an embodiment of the present disclosure further proposes a read command scheduling apparatus, including:
the first unit is used for responding to the read command or the read cache command in the pre-read mode, and when the first preset condition is met, if the read command is identified, the command manager processes the read command;
the second unit is used for distributing the read cache command to the data manager for processing if the read cache command is identified when the second preset condition is met;
the first preset condition and the second preset condition are determined according to different queue depths of IO commands and/or IO data block sizes.
In a fourth aspect, an embodiment of the present disclosure further provides a read command processing apparatus, where the apparatus maintains in advance a linked list, where the linked list is used to associate a plurality of cache addresses, each linked list item in the linked list includes a cache interval and a linked list item pointer associated next, and the apparatus includes:
a determining unit, configured to determine, in response to receiving a read command scheduled based on the read command scheduling method according to any one of the embodiments of the first aspect, a first data block size accessed by the read command and a second data block size accessed by a previous read command;
the adjusting unit is used for adjusting the linked list item pointer of the tail corresponding to the read command and the linked list item pointer of the tail corresponding to the last command if the size of the first data block is different from that of the second data block so that the linked list can buffer the data accessed by the read command;
and the processing unit is used for caching the data accessed by the read command into a linked list based on the starting position of the address accessed by the read command and the first data block size.
In a fifth aspect, embodiments of the present disclosure further propose an electronic device, including a memory, a processor and a computer program stored on the memory, wherein the processor executes the computer program to implement the steps of the read command scheduling method as provided in any embodiment of the first aspect or the steps of the read command processing method as provided in any embodiment of the second aspect.
In a sixth aspect, embodiments of the present disclosure further provide a storage device, including a control unit and an NVM chip, where the control unit performs the steps of the read command scheduling method as provided in any embodiment of the first aspect or the steps of the read command processing method as provided in any embodiment of the second aspect.
In a seventh aspect, embodiments of the present disclosure further provide a computer-readable storage medium, where the computer-readable storage medium stores a program or instructions that cause a computer to perform the steps of the read command scheduling method as provided by any embodiment of the first aspect or the steps of the read command processing method as provided by any embodiment of the second aspect.
In an eighth aspect, the disclosed embodiments further provide a computer program product, wherein the computer program product comprises a computer program stored in a computer readable storage medium, from which at least one processor of the computer reads and executes the computer program, such that the computer performs the steps of the read command scheduling method as provided by any of the embodiments of the first aspect or the steps of the read command processing method as provided by any of the embodiments of the second aspect.
It can be seen that, in at least one embodiment of the present disclosure, a command identified in the pre-read mode is handled as follows: when a first preset condition is satisfied, if it is identified as a read command, the command manager processes it; when a second preset condition is satisfied, if it is identified as a read cache command, the command manager assigns it to the data manager for processing. Through this cooperation, the command manager and the data manager apply different scheduling strategies to read commands and read cache commands, which reduces read command processing latency in pre-read scenarios, improves system response speed, and improves sequential read performance in sequential read scenarios.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings used in the embodiments or in the description of the prior art are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present disclosure, and that those of ordinary skill in the art may obtain other drawings from them without inventive effort.
Fig. 1 is a flow chart of a read command scheduling method according to an embodiment of the disclosure;
FIG. 2 is a flow chart of a method for processing a read command according to an embodiment of the disclosure;
FIG. 3 is a schematic diagram of adjusting linked list item pointers provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a read command scheduling apparatus according to an embodiment of the disclosure;
FIG. 5 is a schematic diagram of a read command processing apparatus according to an embodiment of the disclosure;
fig. 6 is an exemplary block diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present disclosure are described below clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are some, but not all, of the embodiments of the present disclosure. Based on the embodiments of this disclosure, all other embodiments obtained by a person skilled in the art without inventive effort fall within the scope of protection of this disclosure.
The following detailed description is provided to assist the reader in obtaining a thorough understanding of the methods, apparatus, and/or systems described herein. However, various changes, modifications, and equivalents of the methods, apparatuses, and/or systems described herein will be apparent after an understanding of the present disclosure. For example, the order of operations described herein is merely an example and is not limited to those set forth herein, but rather may be altered as would be apparent after an understanding of the disclosure, except for operations that must occur in a specific order. Furthermore, descriptions of features known after understanding the present disclosure may be omitted for added clarity and conciseness.
The features described herein may be embodied in different forms and should not be construed as limited to the examples described herein. Rather, the examples described herein have been provided to illustrate only some of the many possible ways in which the methods, devices, and/or systems described herein may be implemented that will be apparent upon an understanding of the present disclosure.
The terminology used herein is for the purpose of describing various examples only and is not intended to limit the disclosure. Singular forms are intended to include plural forms as well, unless the context clearly indicates otherwise. The terms "comprises," "comprising," and "having" specify the presence of stated features, amounts, operations, components, elements, and/or combinations thereof, but do not preclude the presence or addition of one or more other features, amounts, operations, components, elements, and/or combinations thereof.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. Unless explicitly so defined herein, terms (such as those defined in a general dictionary) should be construed to have meanings consistent with their meanings in the context of the relevant art and the present disclosure, and should not be interpreted in an idealized or overly formal sense. The use of the term "may" herein with respect to an example or embodiment (e.g., with respect to what the example or embodiment may include or implement) indicates that at least one example or embodiment includes or implements such a feature, while not all examples are so limited.
Fig. 1 is a schematic flow chart of a read command scheduling method provided by an embodiment of the present disclosure. The execution body of the read command scheduling method is an electronic device, including but not limited to a storage device (for example, a solid state drive or a flash memory device), a smart phone, a palmtop computer, a tablet computer, a wearable device with a display screen, a desktop computer, a notebook computer, an all-in-one machine, a smart home device, a server, and so on. The server may be an independent server, a cluster of multiple servers, or a combination of a locally built server and a cloud-hosted server.
In some embodiments, a Command Manager (CM) and a Data Manager (DM) cooperate during execution of the read command scheduling method, so that read commands and read cache commands are processed under different scheduling strategies. The CM is implemented as a component of the control unit within a storage device (e.g., a solid state drive).
As shown in fig. 1, the read command scheduling method may include, but is not limited to, steps 101 and 102:
in step 101, in response to identifying the read command or the read cache command in the pre-read mode, when the first preset condition is satisfied, if the read command is identified, the command manager processes the read command.
In this embodiment, in response to receiving a read command sent by a host in pre-read mode, the command manager CM parses the read command to obtain information such as the (logical) address accessed by the read command and the Block Size (BS) it accesses, and then generates a request descriptor for the read command based on the parsed information. The request descriptor describes information related to the read command, including but not limited to the starting position of the accessed address and the accessed data block size BS. In some embodiments, the request descriptor may also include an identification (ID) of the read command. One possible layout of such a descriptor is sketched below.
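The following C struct is a hypothetical illustration of that layout; the field names and widths are assumptions, not the patent's definitions.

```c
/* Illustrative request descriptor; field names and widths are assumed. */
#include <stdint.h>

typedef struct {
    uint32_t cmd_id;      /* identification (ID) of the read command */
    uint64_t start_lba;   /* starting position of the accessed address */
    uint32_t block_size;  /* data block size (BS) accessed by the command */
} request_descriptor_t;
```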
In this embodiment, one implementation of identifying the read command or the read cache command in the read-ahead mode is as follows: the command manager CM determines, based on the starting position, whether the read command hits the read cache; if the read command does not hit the read cache, it is identified as a read command; if it hits the read cache, it is identified as a read cache command.
For the read cache, the control unit in the storage device may maintain a mapping between the index of each cache unit in the read cache and the logical address (LBA) of the data that cache unit holds, abbreviated as the cache-unit-index-to-LBA mapping. The command manager CM determines whether the starting position of the address accessed by the read command is recorded in this mapping. If it is, the CM can determine from the mapping the cache unit index to which the address maps; that is, the data the read command is to access is stored in the corresponding cache unit, so the data is present in the read cache and the read command hits the read cache. If the starting position is not recorded in the mapping, the data the read command is to access is not in the read cache, i.e., the read command misses the read cache.
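As an illustration of this hit check, the following C sketch assumes a small lookup over the cache-unit-index-to-LBA mapping; every name in it (lba_t, cmd_kind_t, cache_map_lookup, the toy cached_lba table) is a hypothetical stand-in rather than the patent's definition.

```c
/* Minimal sketch of the hit check described above; all names are assumed. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t lba_t;
typedef enum { CMD_READ, CMD_READ_CACHE } cmd_kind_t;

/* Toy stand-in for the cache-unit-index-to-LBA mapping maintained by the
 * control unit: entry i means cache unit i holds the data at cached_lba[i]. */
static const lba_t cached_lba[] = { 0x1000, 0x1008, 0x1010, 0x1018 };

static bool cache_map_lookup(lba_t start_lba, size_t *cache_unit_idx)
{
    for (size_t i = 0; i < sizeof cached_lba / sizeof cached_lba[0]; i++) {
        if (cached_lba[i] == start_lba) {
            *cache_unit_idx = i;
            return true;
        }
    }
    return false;
}

static cmd_kind_t identify_command(lba_t start_lba)
{
    size_t idx;
    if (cache_map_lookup(start_lba, &idx))
        return CMD_READ_CACHE;  /* pre-read data already in the read cache */
    return CMD_READ;            /* miss: data must come from NAND flash */
}
```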
Scenario in which the read command misses the read cache:
in this embodiment, if the command manager CM determines that the read command does not hit the read cache, it identifies the command as a read command and processes it directly when the first preset condition is satisfied. The first preset condition is determined according to different IO command Queue Depths (QD) and/or IO data block sizes, where QD indicates the number of read commands waiting to be processed.
If the queue depth of the IO command maintained by the command manager CM is smaller than or equal to a preset first depth, or the size of a data block accessed by the read command is smaller than the preset data block size and the queue depth of the IO command maintained by the command manager CM is smaller than or equal to a preset second depth, the command manager CM directly processes the read command; wherein the first depth is less than the second depth.
For example, when the read command misses the read cache, if QD is less than or equal to 2 (the first depth), or BS < 16K (the preset data block size) and QD is less than or equal to 4 (the second depth), the command manager CM processes the read command directly; that is, only the CM handles the read command and no data manager DM is scheduled. Each core (CPU) in the NAND flash memory controller is provided with its own DM, so not scheduling a DM removes the time needed to dispatch the read command to one or more cores, reducing the inter-core delay of read command processing, shortening the command's waiting time, and improving the overall speed of read command processing.
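The decision just described fits in a few lines of C. This sketch uses the example thresholds from the text (first depth 2, second depth 4, 16K preset block size); the macro and function names are illustrative assumptions.

```c
/* Sketch of the first preset condition; thresholds follow the example
 * above, and the names are illustrative rather than the patent's. */
#include <stdbool.h>
#include <stdint.h>

#define FIRST_DEPTH     2u              /* preset first depth */
#define SECOND_DEPTH    4u              /* preset second depth */
#define SMALL_BS_BYTES  (16u * 1024u)   /* preset data block size (16K) */

/* True when the CM should process the read command itself, without
 * scheduling a data manager. qd is the CM's IO command queue depth,
 * bs the block size accessed by the read command, in bytes. */
static bool cm_handles_read_directly(uint32_t qd, uint32_t bs)
{
    return qd <= FIRST_DEPTH || (bs < SMALL_BS_BYTES && qd <= SECOND_DEPTH);
}
```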
In step 102, when the second preset condition is satisfied, if the read cache command is identified, the command manager allocates the read cache command to the data manager for processing.
Scenario in which the read command hits the read cache:
in this embodiment, if the command manager CM determines that the read command hits the read cache, it identifies the command as a read cache command, and when the second preset condition is satisfied, the CM assigns the read cache command to the data manager DM for processing. The second preset condition is likewise determined according to the IO command queue depth and/or the IO data block size.
If the queue depth of the IO command maintained by the command manager CM is smaller than or equal to a preset third depth, the command manager CM distributes the read cache command to the data manager DM for processing; wherein the third depth is greater than the second depth.
For example, if QD is less than or equal to 32 (the third depth), the command manager CM assigns the read cache command to the data manager DM for processing; that is, only the DM processes the read cache command and the CM does not, which reduces the CM's command processing overhead and, in turn, the command's processing delay.
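Continuing the sketch, the second preset condition reduces to a single queue depth check against the third depth; the 32 is the text's example value, and the names are again assumptions.

```c
/* Sketch of the second preset condition; 32 follows the example above. */
#include <stdbool.h>
#include <stdint.h>

#define THIRD_DEPTH  32u   /* preset third depth, greater than the second */

/* True when the CM should hand the read cache command to a data manager. */
static bool cm_dispatches_to_dm(uint32_t qd)
{
    return qd <= THIRD_DEPTH;
}
```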
In some embodiments, the command manager CM may directly assign the read cache command to the data manager DM; the DM processes the read cache command, determines the data it accesses, and feeds that data back to the CM.
In some embodiments, rather than the read cache command itself, the command manager CM may assign the corresponding request descriptor to the data manager DM, so that the DM determines the data accessed by the read cache command directly from the command information carried in the descriptor and feeds the data back to the CM. So that the DM knows which read cache command a descriptor corresponds to, the descriptor also carries the command's identification (ID); so that the CM knows which command the returned data belongs to, the DM attaches that ID when feeding back the read data.
In some embodiments, from a plurality of data managers DM (each DM corresponding to a CPU in the NAND flash memory controller), the DM currently processing the fewest read commands is selected to process the read command, or one DM is selected using a preset load balancing policy; such policies are common in the art and are not described further here. A minimal sketch of the first policy follows.
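This C sketch selects the DM with the fewest in-flight read commands; the dm_t type and its field are assumptions, not the patent's definitions.

```c
/* Minimal "fewest in-flight read commands" selection; names are assumed. */
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint32_t inflight_reads;  /* read commands this DM is processing */
} dm_t;

/* Returns the index of the least-loaded DM among dms[0..n-1], n >= 1. */
static size_t pick_least_loaded_dm(const dm_t *dms, size_t n)
{
    size_t best = 0;
    for (size_t i = 1; i < n; i++)
        if (dms[i].inflight_reads < dms[best].inflight_reads)
            best = i;
    return best;
}
```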
On the basis of the above embodiments, fig. 2 is a schematic flow chart of a command processing method provided by an embodiment of the present disclosure. The execution body of the command processing method is an electronic device, including but not limited to a storage device (for example, a solid state drive or a flash memory device), a smart phone, a palmtop computer, a tablet computer, a wearable device with a display screen, a desktop computer, a notebook computer, an all-in-one machine, a smart home device, a server, and so on. The server may be an independent server, a cluster of multiple servers, or a combination of a locally built server and a cloud-hosted server.
In some embodiments, the execution subject of the command processing method is a command manager CM or a data manager DM; the CM and DM are each components of the control unit within the storage device.
The execution body of the command processing method maintains a linked list in advance. The linked list is used to associate a plurality of cache addresses (e.g., DDR addresses), and each linked list item includes a cache interval (e.g., a DDR interval) and a pointer to the next associated item. In some embodiments, the linked list items are shared by read and write commands, with most of the items residing in large, slower storage in order to support highly concurrent write commands.
In some embodiments, the cache intervals in the linked list are kept ordered, which makes access friendly. A cache interval can be used by only one command at a time. One possible layout of a linked list item is sketched below.
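The following C struct is one hypothetical way to lay out such a linked list item; the field names and widths are assumptions, not the patent's definitions.

```c
/* Illustrative linked list item: a cache (DDR) interval plus a pointer
 * to the next associated item. Field names are assumed. */
#include <stdint.h>

typedef struct list_item {
    uint64_t          ddr_base;  /* start of the cache (DDR) interval */
    uint32_t          ddr_len;   /* length of the cache interval in bytes */
    struct list_item *next;      /* next associated item, NULL at the tail */
} list_item_t;
```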
As shown in fig. 2, the command processing method may include, but is not limited to, steps 201 to 203:
in step 201, in response to receiving a read command scheduled based on a read command scheduling method, a first data block size accessed by the read command and a second data block size accessed by a previous read command are determined.
In this embodiment, if a read command is received, the read command is parsed to obtain the size of a first data block accessed by the read command, and an address (logical address) accessed by the read command can also be obtained; if a request descriptor is received, a first data block size accessed by a read command included in the request descriptor is extracted. The second data block size accessed by the last read command may be looked up from the history.
In step 202, if the first data block size is different from the second data block size, the linked list item pointer of the tail corresponding to the read command and the linked list item pointer of the tail corresponding to the last command are adjusted so that the linked list can buffer the data accessed by the read command.
After the linked list has been allocated and released many times, the positions of its items become disordered, access is no longer friendly, and the list is slower to process. In this embodiment, the linked list item pointers are therefore optimized based on the first data block size and the second data block size, improving linked list processing efficiency, reducing processing delay, and improving read performance.
If the first data block size is the same as the second data block size, the linked list item pointer is maintained unchanged. In a sequential read scenario, the host may send similarly sized read commands for a short period of time, so the linked list item pointer may remain unchanged during this period of time.
If the first data block size differs from the second data block size, the tail linked list item pointer corresponding to the read command and the tail pointer corresponding to the previous command are adjusted so that the linked list can cache the data accessed by the read command in sequentially arranged cache intervals; sequentially arranged intervals improve linked list processing efficiency and reduce the read latency of the data accessed by the read command.
In step 203, the data accessed by the read command is cached in a linked list based on the starting location of the address accessed by the read command and the first data block size.
In this embodiment, if the DM processes the read command, the DM fetches the data accessed by the read command from the nonvolatile flash memory, based on the starting position of the accessed address and the first data block size, and caches it in the linked list. The nonvolatile flash memory is, for example, NAND flash, a common NVM (Non-Volatile Memory). The DM is also responsible for rate control of data transfers, ECC (Error Correcting Code) handling, and decompression, ensuring that data is transferred at a steady speed and preventing read/write speed mismatches.
In this embodiment, if the CM processes the read command, the CM obtains the data accessed by the read command from the pre-read data stored in the cache unit based on the start position of the address accessed by the read command and the first data block size, and caches the data in the linked list.
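Putting the two paths together, the following C sketch shows how step 203 might stage a read command's data into the linked list's cache intervals, interval by interval. list_item_t repeats the illustrative layout sketched earlier, and fetch_data is a toy stub standing in for the fetch from NAND flash (DM path) or from the pre-read cache units (CM path).

```c
/* Sketch of step 203 under the assumptions stated above. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef struct list_item {
    uint64_t          ddr_base;
    uint32_t          ddr_len;
    struct list_item *next;
} list_item_t;

/* Toy stub: a real device would DMA len bytes starting at lba into dst. */
static void fetch_data(uint64_t lba, void *dst, uint32_t len)
{
    (void)lba;
    memset(dst, 0, len);
}

/* Walk the list and fill each cache interval in order until first_bs
 * bytes starting at start_lba have been staged. */
static void cache_read_data(list_item_t *head, uint64_t start_lba,
                            uint32_t first_bs)
{
    uint64_t lba = start_lba;
    uint32_t remaining = first_bs;

    for (list_item_t *it = head; it != NULL && remaining > 0; it = it->next) {
        uint32_t chunk = remaining < it->ddr_len ? remaining : it->ddr_len;
        fetch_data(lba, (void *)(uintptr_t)it->ddr_base, chunk);
        lba += chunk;
        remaining -= chunk;
    }
}
```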
In this embodiment, the dispatcher of the read command is a CM, and the CM returns the data cached in the linked list to the sender (e.g., host) of the read command.
In some embodiments, one implementation of "adjusting the linked list item pointer of the tail corresponding to the read command and the linked list item pointer of the tail corresponding to the previous command" in step 202 is:
disconnecting the linked list item pointer at the tail part corresponding to the read command, and pointing the linked list item pointer at the tail part corresponding to the last command to the next adjacent linked list item.
For example, fig. 3 is a schematic diagram of adjusting a linked list item pointer provided in an embodiment of the present disclosure. In fig. 3, the linked list includes 8 linked list items, denoted A through H, where the pointer of A is the head pointer (Head), the pointer of H is the tail pointer (Tail), and each arrow indicates the item a pointer points to. After a period of use, the positions of the items become disordered; for example, the tail pointer changes from H to D, so that D's pointer is empty and no longer points to E.
In fig. 3, if the first data block size accessed by the received read command is smaller than the second data block size accessed by the previous read command, for example the first corresponds to A through C and the second to A through D, then the pointer of the tail (C) corresponding to the received read command is disconnected from D, and the pointer of the tail (D) corresponding to the previous command is pointed at the next adjacent item (E).
In fig. 3, if the first data block size accessed by the received read command is larger than the second data block size accessed by the previous read command, for example the first corresponds to A through G and the second to A through D, then the pointer of the tail (G) corresponding to the received read command is disconnected from H, and the pointer of the tail (D) corresponding to the previous command is pointed at the next adjacent item (E).
Therefore, in this embodiment, disconnecting the tail linked list item pointer corresponding to the read command and pointing the previous command's tail pointer at the next adjacent item changes the pointers with the minimum number of operations, so that the linked list caches the data accessed by the read command in sequentially arranged cache intervals; sequentially arranged intervals improve linked list processing efficiency and reduce the read latency of the data accessed by the read command. A sketch of this adjustment follows.
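The adjustment itself touches only two pointers. In this sketch, new_tail, prev_tail and next_adjacent correspond to C (or G), D and E in fig. 3; list_item_t repeats the illustrative layout from earlier, trimmed to the pointer that matters here.

```c
/* Sketch of the tail adjustment described above; names are assumed. */
#include <stddef.h>

typedef struct list_item {
    struct list_item *next;   /* only the pointer matters for this step */
} list_item_t;

static void adjust_tail_pointers(list_item_t *new_tail,
                                 list_item_t *prev_tail,
                                 list_item_t *next_adjacent)
{
    new_tail->next = NULL;            /* disconnect the new tail's pointer */
    prev_tail->next = next_adjacent;  /* previous tail points at the next
                                         adjacent item, e.g. D -> E */
}
```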
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but those skilled in the art will appreciate that the disclosed embodiments are not limited by the described order of actions, since some steps may occur in other orders or concurrently. In addition, those skilled in the art will appreciate that the embodiments described in the specification are all optional embodiments.
Fig. 4 is a schematic diagram of a read command scheduling device provided in an embodiment of the present disclosure, where the read command scheduling device provided in the embodiment of the present disclosure may execute a processing flow provided in each embodiment of a read command scheduling method, and as shown in fig. 4, the read command scheduling device includes, but is not limited to: a first unit 41 and a second unit 42. The functions of each unit are described as follows:
a first unit 41, configured to respond to the read command or the read cache command identified in the pre-read mode, and when a first preset condition is satisfied, if the read command is identified, the command manager processes the read command;
a second unit 42, configured to, when a second preset condition is satisfied, if the read cache command is identified, allocate the read cache command to the data manager for processing;
The first preset condition and the second preset condition are determined according to different queue depths of IO commands and/or IO data block sizes.
In some embodiments, the read command scheduling device further includes a generating unit, configured to receive a read command in a pre-read mode, and generate a request descriptor corresponding to the read command, where the request descriptor is used to describe related information of the read command; the related information comprises a starting position of an address accessed by a read command and a data block size accessed by the read command;
correspondingly, the identifying, by the first unit 41, of the read command or the read cache command in the read-ahead mode includes:
determining whether the read command hits in the read cache based on the starting location;
if the read command does not hit the read cache, identifying the read command as the read command;
if the read command hits the read cache, it is identified as a read cache command.
In some embodiments, for the first unit 41, when the first preset condition is satisfied, if the command is identified as a read command, the command manager processing the read command includes:
if the queue depth of the IO command maintained by the command manager is smaller than or equal to a preset first depth, or the size of a data block accessed by the read command is smaller than the size of a preset data block and the queue depth of the IO command maintained by the command manager is smaller than or equal to a preset second depth, the command manager processes the read command; wherein the first depth is less than the second depth.
In some embodiments, the second unit 42 is configured to, if the queue depth of the IO command maintained by the command manager is less than or equal to a preset third depth, allocate the read cache command to the data manager for processing; wherein the third depth is greater than the second depth.
In some embodiments, the command manager assigns the read cache command to the data manager for processing, including:
and selecting the data manager with the least number of the processed read commands from the plurality of data managers to process the read commands, or adopting a preset load balancing strategy to select one data manager to process the read commands.
The details of the embodiments of the read command scheduling device refer to the embodiments of the read command scheduling method, and are not repeated.
Fig. 5 is a schematic diagram of a read command processing apparatus according to an embodiment of the present disclosure, where the read command processing apparatus maintains a linked list in advance, the linked list is used to associate multiple cache addresses, and each linked list item in the linked list includes a cache interval and a linked list item pointer associated next. The read command processing apparatus provided in the embodiments of the present disclosure may execute the processing flow provided in each embodiment of the read command processing method, as shown in fig. 5, where the read command processing apparatus includes, but is not limited to: a determining unit 51, an adjusting unit 52 and a processing unit 53. The functions of each unit are described as follows:
A determining unit 51, configured to determine, in response to receiving a read command scheduled based on a read command scheduling method, a first data block size accessed by the read command and a second data block size accessed by a previous read command;
the adjusting unit 52 is configured to adjust, if the first data block size is different from the second data block size, a linked list item pointer at the tail corresponding to the read command and a linked list item pointer at the tail corresponding to the previous command, so that the linked list can cache data accessed by the read command;
the processing unit 53 is configured to buffer the data accessed by the read command into the linked list based on the starting location of the address accessed by the read command and the first data block size.
In some embodiments, determining unit 51 determines the first data block size accessed by the read command, including:
if a read command is received, the read command is analyzed to obtain the first data block size accessed by the read command;
if a request descriptor is received, a first data block size accessed by a read command included in the request descriptor is extracted.
In some embodiments, the adjusting unit 52 adjusts the linked list item pointer of the tail corresponding to the read command and the linked list item pointer of the tail corresponding to the previous command, including:
disconnecting the linked list item pointer at the tail part corresponding to the read command, and pointing the linked list item pointer at the tail part corresponding to the last command to the next adjacent linked list item.
Details of the embodiments of the read command processing apparatus refer to the embodiments of the read command processing method, and are not repeated.
An embodiment of the present disclosure also provides a storage device (or solid state storage device, etc.), including a control unit and an NVM (Non-Volatile Memory) chip; the control unit executes the read command scheduling method or the read command processing method.
Fig. 6 is an exemplary block diagram of an electronic device provided by an embodiment of the present disclosure. As shown in fig. 6, the electronic device includes: a memory 61, a processor 62 and a computer program stored on said memory 61. It is to be understood that the memory 61 in the present embodiment may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memories.
In some embodiments, memory 61 stores the following elements, executable modules or data structures, or a subset thereof, or an extended set thereof: an operating system and application programs.
The operating system includes various system programs, such as a framework layer, a core library layer, a driving layer, and the like, and is used for realizing various basic tasks and processing hardware-based tasks. Applications, including various applications such as Media players (Media players), browsers (browses), etc., are used to implement various application tasks. A program implementing the read command scheduling method or the read command processing method provided by the embodiments of the present disclosure may be included in an application program.
In the embodiment of the present disclosure, the at least one processor 62 is configured to execute the read command scheduling method or the steps of the embodiments of the read command processing method provided in the embodiment of the present disclosure by calling a program or an instruction stored in the at least one memory 61, specifically, a program or an instruction stored in an application program.
The read command scheduling method or the read command processing method provided by the embodiments of the present disclosure may be applied to the processor 62 or implemented by the processor 62. The processor 62 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuitry in hardware in the processor 62 or by instructions in the form of software. The processor 62 described above may be a general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The steps of the read command scheduling method or the read command processing method provided by the embodiments of the present disclosure may be embodied directly as being completed by a hardware decoding processor, or as being completed by a combination of hardware and software modules in a decoding processor. The software modules may be located in a storage medium well known in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or registers. The storage medium is located in the memory 61, and the processor 62 reads the information in the memory 61 and completes the steps of the method in combination with its hardware.
The embodiments of the present disclosure further provide a computer readable storage medium storing a program or instructions that cause a computer to perform steps such as the read command scheduling method or the read command processing method in each embodiment, and for avoiding repetition of the description, the description will not be repeated here. Wherein the computer readable storage medium may be a non-transitory computer readable storage medium.
The disclosed embodiments also provide a computer program product comprising a computer program stored in a computer readable storage medium, which may be a non-transitory computer readable storage medium. At least one processor of the computer reads and executes the computer program from the computer-readable storage medium, so that the computer performs steps such as the read command scheduling method or the read command processing method embodiments, which are not described herein in detail for the sake of avoiding repetition of the description.
The apparatus or device embodiments described above are merely illustrative, in which the unit modules illustrated as separate components may or may not be physically separate, and the components shown as unit modules may or may not be physical units, may be located in one place, or may be distributed over multiple network module units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
From the above description of embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by means of software plus a general purpose hardware platform, or by hardware alone. Based on this understanding, the foregoing technical solution, or the part of it contributing to the related art, may be embodied in the form of a software product stored in a computer readable storage medium (such as ROM/RAM, a magnetic disk, or an optical disk) and including several instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute the method of each embodiment or certain parts of the embodiments.
Finally, it should be noted that the above embodiments are merely illustrative of the technical solutions of the present disclosure, not limiting. The technical features of the above embodiments, or of different embodiments, may be combined under the idea of the present disclosure, the steps may be implemented in any order, and many other variations of the different aspects of the present disclosure exist that are not presented in detail for the sake of brevity. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical schemes described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents, and that such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present disclosure.

Claims (10)

1. A method of read command scheduling, the method comprising:
responding to the read command or the read cache command identified in the pre-read mode, and when a first preset condition is met, if the read command is identified, processing the read command by a command manager, wherein the command manager does not distribute the read command to a data manager, and is used for reducing inter-core delay of command processing;
When a second preset condition is met, if the read cache command is identified, the command manager distributes the read cache command to the data manager for processing, so that the command processing overhead of the command manager is reduced;
the first preset condition and the second preset condition are determined according to different queue depths of IO commands and/or IO data block sizes, and the queue depth in the first preset condition is smaller than the queue depth in the second preset condition.
2. The method of claim 1, wherein prior to said responding to identifying the read command or the read cache command in the pre-read mode, the method further comprises:
receiving a read command in a pre-read mode, and generating a request descriptor corresponding to the read command, wherein the request descriptor is used for describing relevant information of the read command; the related information comprises a starting position of an address accessed by the read command and a data block size accessed by the read command;
the responding to identifying the read command or the read cache command in the pre-read mode comprises:
determining whether the read command hits in a read cache based on the starting location;
if the read command does not hit the read cache, identifying the read command as a read command;
And if the read command hits the read cache, identifying the read command as a read cache command.
3. The method of claim 2, wherein the command manager processes the read command if the read command is identified when the first preset condition is satisfied comprises:
if the queue depth of the IO command maintained by the command manager is smaller than or equal to a preset first depth, or the size of a data block accessed by the read command is smaller than the size of a preset data block and the queue depth of the IO command maintained by the command manager is smaller than or equal to a preset second depth, the command manager processes the read command; wherein the first depth is less than the second depth.
4. A method according to claim 3, wherein the command manager assigns a read cache command to the data manager for processing if the read cache command is identified when the second preset condition is met, comprising:
if the queue depth of the IO command maintained by the command manager is smaller than or equal to a preset third depth, the command manager distributes the read cache command to the data manager for processing; wherein the third depth is greater than the second depth.
5. The method of claim 4, wherein the command manager assigns read cache commands to data managers for processing, comprising:
and selecting the data manager with the least number of the processed read commands from the plurality of data managers to process the read commands, or adopting a preset load balancing strategy to select one data manager to process the read commands.
6. A method for processing a read command, wherein a linked list is maintained in advance, the linked list is used for associating a plurality of cache addresses, each linked list item in the linked list comprises a cache interval and a linked list item pointer of the next association, the method comprises:
in response to receiving a read command scheduled based on the read command scheduling method of any of claims 1 to 5, determining a first data block size accessed by the read command and a second data block size accessed by a previous read command;
if the size of the first data block is different from that of the second data block, adjusting a linked list item pointer at the tail part corresponding to the read command and a linked list item pointer at the tail part corresponding to the last command so that the linked list can cache the data accessed by the read command;
and caching the data accessed by the read command into the linked list based on the starting position of the address accessed by the read command and the size of the first data block.
7. The method of claim 6, wherein the determining the first data block size accessed by the read command comprises:
if the read command is received, the read command is analyzed to obtain the first data block size accessed by the read command;
and if the request descriptor is received, extracting the first data block size accessed by the read command included in the request descriptor.
8. The method of claim 6, wherein the adjusting the linked list item pointer of the tail corresponding to the read command and the linked list item pointer of the tail corresponding to the last command comprises:
disconnecting the linked list item pointer at the tail part corresponding to the read command, and pointing the linked list item pointer at the tail part corresponding to the previous command to the next adjacent linked list item.
9. A read command processing apparatus, wherein the apparatus maintains in advance a linked list for associating a plurality of cache addresses, each linked list item in the linked list including a cache interval and a next associated linked list item pointer, the apparatus comprising:
a determining unit configured to determine a first data block size accessed by the read command and a second data block size accessed by a previous read command in response to receiving a read command scheduled based on the read command scheduling method of any one of claims 1 to 5;
The adjusting unit is used for adjusting a linked list item pointer at the tail part corresponding to the read command and a linked list item pointer at the tail part corresponding to the last command if the size of the first data block is different from the size of the second data block so that the linked list can cache the data accessed by the read command;
and the processing unit is used for caching the data accessed by the read command into the linked list based on the starting position of the address accessed by the read command and the first data block size.
10. A storage device, comprising: a control unit and an NVM chip, the control unit performing the steps of the read command scheduling method according to any one of claims 1 to 5 or the steps of the read command processing method according to any one of claims 6 to 8.
CN202311297584.9A 2023-10-09 2023-10-09 Read command scheduling method, processing method, device and storage equipment Active CN117032594B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311297584.9A CN117032594B (en) 2023-10-09 2023-10-09 Read command scheduling method, processing method, device and storage equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311297584.9A CN117032594B (en) 2023-10-09 2023-10-09 Read command scheduling method, processing method, device and storage equipment

Publications (2)

Publication Number Publication Date
CN117032594A CN117032594A (en) 2023-11-10
CN117032594B true CN117032594B (en) 2024-01-23

Family

ID=88637564

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311297584.9A Active CN117032594B (en) 2023-10-09 2023-10-09 Read command scheduling method, processing method, device and storage equipment

Country Status (1)

Country Link
CN (1) CN117032594B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022134741A1 (en) * 2020-12-25 2022-06-30 深圳大普微电子科技有限公司 Reread command processing method, flash memory controller, and solid-state drive
CN116610262A (en) * 2023-05-31 2023-08-18 苏州忆联信息系统有限公司 Method, device, equipment and medium for reducing SSD sequential reading delay


Also Published As

Publication number Publication date
CN117032594A (en) 2023-11-10


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant