CN113535345A - Method and device for constructing downlink data path - Google Patents

Method and device for constructing downlink data path

Info

Publication number
CN113535345A
Authority
CN
China
Prior art keywords
dtu
processing unit
task processing
channel
command
Prior art date
Legal status
Pending
Application number
CN202010306299.9A
Other languages
Chinese (zh)
Inventor
贾舒
孙通
程雪
Current Assignee
Beijing Starblaze Technology Co ltd
Original Assignee
Beijing Starblaze Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Starblaze Technology Co ltd filed Critical Beijing Starblaze Technology Co ltd
Priority to CN202010306299.9A priority Critical patent/CN113535345A/en
Publication of CN113535345A publication Critical patent/CN113535345A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/30 Monitoring
    • G06F 11/3003 Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F 11/3017 Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is implementing multitasking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • G06F 9/544 Buffers; Shared memory; Pipes

Abstract

The application provides a method and a device for constructing a downlink data path, wherein the method comprises the following steps: creating at least one task processing unit; creating at least one channel; and associating the at least one task processing unit with the at least one channel. The constructed downlink data path comprises the at least one task processing unit and the at least one channel; a first task processing unit in the at least one task processing unit acquires a DTU from a channel associated with the first task processing unit, wherein the DTU carries a sub-command, the first task processing unit is any one of the at least one task processing unit, and a second task processing unit is a task processing unit in the at least one task processing unit other than the first task processing unit; and the first task processing unit processes the sub-command and, after the sub-command is processed, fills the DTU into a channel associated with the first task processing unit, so that the second task processing unit acquires the DTU from the channel.

Description

Method and device for constructing downlink data path
Technical Field
The present application relates to a storage technology, and in particular, to a method and an apparatus for constructing a downlink data path.
Background
In some applications, the processor handles large-scale concurrent tasks. For example, an embedded processor in a network device or a storage device processes multiple network packets or IO commands concurrently.
Desktop and server CPUs run an operating system, which schedules the processes and/or threads running on the CPU by time slicing and/or preemption to process tasks, so that the user need not intervene much in switching between processes/threads. The operating system selects an appropriate process/thread for scheduling so as to make full use of the CPU's computing power. In embedded CPUs, however, resources such as memory and CPU processing power are limited, and the tasks processed are of a special nature, for example large-scale, concurrent, and relatively simple. Moreover, some embedded systems have strict performance requirements, especially on task processing latency, to which the operating systems of the prior art are difficult to adapt.
To improve task processing performance, a task is typically divided into multiple stages (or sub-tasks), where for a single task the stages are processed sequentially, while multiple tasks may be processed concurrently.
In Chinese patent application Nos. 201811095364.7, 201811160925.7, and 201910253885.9, a signal-slot based task scheduling scheme is provided to handle a large number of concurrent IO commands and to ensure the overall quality of service of multiple IO commands.
FIG. 1A is a schematic diagram of task scheduling.
In fig. 1A, the direction from left to right is the direction in which time elapses. Also shown are a plurality of tasks (1-1, 2-1, 3-1, 1-2, 2-2 and 3-2) being processed, wherein in the reference numerals structured "a-b", the preceding symbol a indicates a task and the following symbol b indicates a subtask included in the task. Fig. 1A shows 3 tasks processed in time sequence, each task comprising 2 sub-tasks.
The solid arrows indicate the temporal order of processing a plurality of tasks, and the dashed arrows indicate the logical order of processing of the tasks. For example, taking task 1 as an example, it is required to process its subtask 1-1 (task 1-1) first, and then to process its subtask 1-2 (task 1-2). Still by way of example, referring to FIG. 1A, after sub-task 1-1 is processed, sub-task 2-1 and sub-task 3-1 are scheduled for execution to improve task processing parallelism, then it is identified that the conditions for executing sub-task 1-2 are met, and after sub-task 3-1 is processed, sub-task 1-2 is scheduled for execution.
On a processor, a task (or sub-task) is processed by executing a code segment. A single CPU (or CPU core) processes only a single task at any one time. Illustratively, as shown in FIG. 1A, for the multiple tasks to be processed, the code segment for sub-task 1-1 is executed first, followed in turn by the code segments for sub-tasks 2-1, 3-1, 1-2, 2-2, and 3-2. Alternatively, the logical order of task processing is indicated in the code segments of the respective tasks (or sub-tasks). For example, the logical order includes that sub-task 1-2 is to be processed after sub-task 1-1. As yet another example, the code segment for sub-task 1-1 indicates that the code segment to be processed next in logical order is the one for sub-task 1-2.
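The interleaving shown in FIG. 1A can be sketched with a minimal round-robin scheduler. This is purely an illustrative sketch under assumed names, not the patent's scheduling scheme:

```python
from collections import deque

def schedule(tasks):
    """Round-robin across per-task queues: sub-tasks of one task keep their
    logical order, while sub-tasks of different tasks are interleaved."""
    queues = [deque(subtasks) for subtasks in tasks]
    order = []
    while any(queues):
        for q in queues:
            if q:
                order.append(q.popleft())
    return order

# Three tasks of two sub-tasks each, mirroring FIG. 1A.
print(schedule([["1-1", "1-2"], ["2-1", "2-2"], ["3-1", "3-2"]]))
# → ['1-1', '2-1', '3-1', '1-2', '2-2', '3-2']
```

The result reproduces the execution order listed above: each task's sub-task 2 runs only after its sub-task 1, yet the three tasks progress concurrently.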
FIG. 1B is a block diagram of a task processing system.
Referring to fig. 1B, the task processing system includes two parts: software and hardware. The hardware includes, for example, one or more CPUs that run the software, and other hardware resources for handling related tasks (e.g., memory, codecs, interfaces, accelerators, interrupt controllers, DMA units, etc.).
A code segment of software running on the CPU is referred to as a task processing unit. The task processing system includes a plurality of task processing units. Each task processing unit processes the same or different tasks. For example, task processing unit 0 processes a first sub-task of a task (e.g., sub-tasks 1-1, 2-1, and 3-1), while task processing unit 1, 2, and 3 process a second sub-task of the task (e.g., sub-tasks 1-2, 2-2, and 3-2).
The task processing system further comprises a software-implemented task management unit for scheduling one of the task processing units to run on hardware.
The resources required by the task processing unit include, for example, a cache unit, a descriptor (or context) of the processing task, and the like.
A storage device provides storage capabilities for a host coupled to it. The host and the storage device may be coupled in various ways, including but not limited to SATA (Serial Advanced Technology Attachment), SCSI (Small Computer System Interface), SAS (Serial Attached SCSI), IDE (Integrated Drive Electronics), USB (Universal Serial Bus), PCIe (Peripheral Component Interconnect Express), NVMe (NVM Express), Ethernet, Fibre Channel, a wireless communication network, etc. The host may be an information processing device capable of communicating with the storage device in the manner described above, such as a personal computer, tablet, server, portable computer, network switch, router, cellular telephone, or personal digital assistant. The storage device processes IO commands. IO commands include, for example, read commands, write commands, or other IO commands. The storage device includes an interface, a control unit, one or more NVM chips, and optionally a DRAM (Dynamic Random Access Memory). The control unit of the storage device includes one or more CPUs; a CPU runs software or firmware, as a task processing unit, to process IO commands.
The storage device also splits the IO command into one or more subcommands for processing. Each sub-command has a relatively consistent specification, e.g. accessing the same size address range, so that the task processing unit processing the sub-command can be implemented in a simpler manner.
Disclosure of Invention
It is desirable to provide storage devices of multiple specifications, for example with different storage capacities, different capabilities, and/or differentiated functionality. Thus, different design versions are provided for storage devices of different specifications. These different designs also share common parts, and it is desirable to reuse existing technical efforts, to efficiently combine and integrate components developed by multiple members of a team, and to easily extend the design to provide enhanced functionality and/or performance.
In order to solve the technical problem of how to provide different design versions for storage devices with different specifications in the prior art, according to a first aspect of the present application, a method for constructing a first downlink data path according to the first aspect of the present application is provided, including: creating at least one task processing unit; creating at least one channel; associating the at least one task processing unit with the at least one channel.
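A minimal sketch of the first construction method follows; all class, function, and channel names here are hypothetical illustrations, not the patent's implementation:

```python
class Channel:
    """A channel that DTUs flow through (hypothetical minimal form)."""
    def __init__(self, name):
        self.name = name
        self.dtus = []

class TaskProcessingUnit:
    """A software unit with inbound and outbound channel associations."""
    def __init__(self, name):
        self.name = name
        self.in_channels = []
        self.out_channels = []

def build_downlink_path(unit_names, channel_names, links):
    """Create at least one task processing unit and at least one channel,
    then associate them; `links` maps a unit name to its
    (inbound channel names, outbound channel names) pair."""
    units = {n: TaskProcessingUnit(n) for n in unit_names}
    channels = {n: Channel(n) for n in channel_names}
    for uname, (ins, outs) in links.items():
        units[uname].in_channels = [channels[c] for c in ins]
        units[uname].out_channels = [channels[c] for c in outs]
    return units, channels

units, channels = build_downlink_path(
    ["address_translation", "data_assembly"],
    ["ch1", "ch2"],
    {"address_translation": (["ch1"], ["ch2"]),
     "data_assembly": (["ch2"], [])})
print(units["address_translation"].out_channels[0].name)  # → ch2
```

Two units sharing a channel object is what couples them: the first unit's outbound channel is the second unit's inbound channel.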
According to the first downlink data path construction method of the first aspect of the present application, there is provided a second downlink data path construction method of the first aspect of the present application, wherein the task processing unit includes an inbound interface, an outbound interface, and a DTU processing module; the method further comprises the following steps: the inbound interface obtains a DTU from its associated channel; the outbound interface adds DTUs to its associated channel; the DTU processing module extracts the sub-command from the DTU acquired through the inbound interface, processes the sub-command, and adds the DTU carrying the processed sub-command to its associated channel through the outbound interface.
According to the first or second downlink data path constructing method of the first aspect of the present application, there is provided a third downlink data path constructing method of the first aspect of the present application, where a channel includes a DTU list and a plurality of functions of an operation DTU list, the DTU list includes a container that accommodates one or more DTUs, and the functions of the operation DTU list at least include a first Push function and a Pop function; when the first Push function is called, adding at least one DTU to the DTU list; and when the Pop function is called, acquiring at least one DTU from the DTU list.
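The channel of the third method can be sketched as a FIFO DTU list plus Push and Pop functions. This is an illustrative sketch; the `deque` container and the method names are assumptions:

```python
from collections import deque

class Channel:
    """A channel: a DTU list and the functions that operate on it."""
    def __init__(self):
        self._dtu_list = deque()      # container holding one or more DTUs

    def push(self, *dtus):            # the first Push function
        self._dtu_list.extend(dtus)   # add at least one DTU to the list

    def pop(self):                    # the Pop function
        return self._dtu_list.popleft() if self._dtu_list else None

ch = Channel()
ch.push({"sub_command": "read", "lba": 0},
        {"sub_command": "read", "lba": 8})
print(ch.pop()["lba"])  # → 0  (DTUs leave in the order they were added)
```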
According to the third downlink data path construction method of the first aspect of the present application, there is provided the fourth downlink data path construction method of the first aspect of the present application, where the channel further includes a direct forwarding unit, and the functions of the multiple operation DTU lists further include a second Push function; and in response to the second Push function being called, the direct forwarding unit provides the DTU obtained by the second Push function to the task processing unit.
According to the fourth downlink data path constructing method of the first aspect of the present application, there is provided the fifth downlink data path constructing method of the first aspect of the present application, wherein the creating at least one channel further includes: creating a direct forwarding unit of a channel and setting a destination index; and the direct forwarding unit calls the function associated with the task processing unit to provide the DTU acquired from the second Push function to the task processing unit.
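The direct forwarding unit of the fourth and fifth methods might look like the following sketch, where the second Push bypasses the DTU list and calls a function associated with the destination task processing unit. The `push2` and `set_destination` names, and the use of a callable in place of a destination index, are assumptions:

```python
class DirectForwardChannel:
    """A channel whose second Push hands the DTU straight to the destination
    task processing unit instead of buffering it in the DTU list."""
    def __init__(self):
        self._dtu_list = []
        self._destination = None       # set when the channel is created

    def set_destination(self, handler):
        self._destination = handler    # stands in for the destination index

    def push(self, dtu):               # first Push: buffer in the DTU list
        self._dtu_list.append(dtu)

    def push2(self, dtu):              # second Push: direct forwarding
        self._destination(dtu)

received = []
ch = DirectForwardChannel()
ch.set_destination(received.append)    # the associated unit's entry function
ch.push2({"sub_command": "write"})
print(len(ch._dtu_list), len(received))  # → 0 1
```

The second Push thus trades buffering for latency: the DTU reaches the destination unit immediately, without waiting in the channel.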
According to one of the first to fifth downstream data path constructing methods of the first aspect of the present application, there is provided the sixth downstream data path constructing method of the first aspect of the present application, wherein the channel acquires the DTU from one or more task processing units associated therewith, and provides the DTU to only one task processing unit associated therewith.
According to one of the first to fourth downlink data path constructing methods of the first aspect of the present application, there is provided a seventh downlink data path constructing method of the first aspect of the present application, where the DTU is a message unit that carries a sub-command and/or a sub-command context.
According to one of the first to seventh downlink data path constructing methods of the first aspect of the present application, there is provided the eighth downlink data path constructing method of the first aspect of the present application, wherein the downlink data path further includes at least one resource manager; the resource manager manages the use of specified resources; the method further includes associating at least one task processing unit with at least one resource manager, such that the task processing unit accesses the resource through the resource manager associated with it.
According to the eighth downlink data path constructing method of the first aspect of the present application, there is provided the ninth downlink data path constructing method of the first aspect of the present application, further including: associating one or more channels with one or more resource managers, wherein a monitoring function of a resource manager is registered with a channel; in response to the channel being added with the DTU, the registered monitor function is called to access the resource managed by the resource manager corresponding to the called monitor function.
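The ninth method's monitor-function mechanism can be illustrated as follows: a resource manager registers a monitor function with a channel, and the channel invokes that function whenever a DTU is added. All names, and the choice of cache slots as the managed resource, are assumptions:

```python
class ResourceManager:
    """Manages allocation and recovery of a fixed pool of a specified
    resource (cache slots here, purely for illustration)."""
    def __init__(self, total):
        self.free = total

    def allocate(self):
        assert self.free > 0, "resource exhausted"
        self.free -= 1

    def release(self):
        self.free += 1

class MonitoredChannel:
    """A channel that invokes registered monitor functions on each Push."""
    def __init__(self):
        self._dtu_list = []
        self._monitors = []

    def register_monitor(self, fn):
        self._monitors.append(fn)

    def push(self, dtu):
        self._dtu_list.append(dtu)
        for fn in self._monitors:     # called in response to the added DTU
            fn(dtu)

rm = ResourceManager(total=4)
ch = MonitoredChannel()
ch.register_monitor(lambda dtu: rm.allocate())  # access the managed resource
ch.push({"sub_command": "read"})
print(rm.free)  # → 3
```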
According to the eighth or ninth downlink data path constructing method of the first aspect of the present application, there is provided the tenth downlink data path constructing method of the first aspect of the present application, further comprising: the resource manager manages allocation or recovery of the specified resources; the resource manager manages the state of the specified resource.
According to one of the first to tenth downstream data path constructing methods of the first aspect of the present application, there is provided the eleventh downstream data path constructing method of the first aspect of the present application, wherein the number of channels is equal to or less than the number of task processing units.
According to one of the first to eleventh methods for constructing a downstream data path of the first aspect of the present application, there is provided the twelfth method for constructing a downstream data path of the first aspect of the present application, wherein the associating the at least one task processing unit with the at least one channel includes: setting one or more channels associated with an inbound interface of each task processing unit of the downstream channel; and setting a channel associated with the outbound interface of each task processing unit of the downstream channel.
According to one of the first to twelfth downstream data path construction methods of the first aspect of the present application, there is provided the thirteenth downstream data path construction method of the first aspect of the present application, wherein the task processing unit is a software unit that can be scheduled, and the channel cannot be scheduled.
According to one of the first to thirteenth downlink data path constructing methods of the first aspect of the present application, there is provided the fourteenth downlink data path constructing method of the first aspect of the present application, associating the at least one task processing unit with the at least one channel, including: associating a task processing unit with two or more channels; and/or associating a task processing unit with a channel.
According to one of the first to fourteenth downlink data path constructing methods of the first aspect of the present application, there is provided the fifteenth downlink data path constructing method of the first aspect of the present application, wherein the at least one task processing unit that is created includes a first task processing unit and a second task processing unit, and the first task processing unit and the second task processing unit are task processing units that process the same or different tasks.
According to a fifteenth downstream data path construction method of the first aspect of the present application, there is provided the sixteenth downstream data path construction method of the first aspect of the present application, wherein the at least one task processing unit includes a first task processing unit that processes an address translation task, a second task processing unit that processes a data assembly task, and a third task processing unit that processes a garbage collection task; the created at least one channel comprises a first channel, a second channel and a third channel; an inbound interface of the first task processing unit is associated with the first channel and a third channel; the outbound interface of the first task processing unit is associated with the second channel; the inbound interface of the second task processing unit is associated with the second channel; an outbound interface of the third task processing unit is associated with the third channel.
According to a sixteenth downstream datapath construction method of a first aspect of the present application, there is provided the seventeenth downstream datapath construction method of the first aspect of the present application, the first task processing unit being associated with a resource manager that manages address mapping table resources; the second task processing unit is associated with a resource manager that manages accelerator resources; the third task processing unit is associated with a resource manager that manages storage medium resources.
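Under assumed unit, channel, and resource names, the wiring of the sixteenth and seventeenth methods can be written down as a configuration fragment:

```python
# All unit, channel, and resource-manager names below are assumed.
path = {
    "channels": ["ch1", "ch2", "ch3"],
    "units": {
        "address_translation": {"in": ["ch1", "ch3"], "out": ["ch2"],
                                "resources": "address_mapping_table"},
        "data_assembly":       {"in": ["ch2"],        "out": [],
                                "resources": "accelerator"},
        "garbage_collection":  {"in": [],             "out": ["ch3"],
                                "resources": "storage_medium"},
    },
}
# The garbage-collection unit's outbound third channel feeds the
# address-translation unit's inbound side, as in the sixteenth method.
print("ch3" in path["units"]["address_translation"]["in"])  # → True
```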
According to one of the fifteenth to seventeenth downstream datapath construction methods of the first aspect of the present application, there is provided the eighteenth downstream datapath construction method of the first aspect of the present application, wherein the at least one task processing unit further includes a fourth task processing unit that processes an address translation task; the created at least one channel also comprises a fourth channel and a fifth channel; an inbound interface of the fourth task processing unit is associated with the fourth channel; the outbound interface of the fourth task processing unit is associated with the fifth channel; the inbound interface of the second task processing unit is associated with the fifth channel.
According to the eighteenth downstream data path construction method of the first aspect of the present application, there is provided the nineteenth downstream data path construction method of the first aspect of the present application, the at least one created channel further includes a sixth channel; an inbound interface of the fourth task processing unit is associated with the sixth channel; the outbound interface of the third task processing unit is associated with the sixth channel.
According to one of the methods for constructing the first to nineteenth downlink data paths of the first aspect of the present application, there is provided the method for constructing the twentieth downlink data path of the first aspect of the present application, further comprising associating the downlink data path with a command transmission unit and a sub-command processing unit; the command transmission unit splits the IO command into one or more subcommands, acquires a DTU bearing subcommand, and delivers the DTU bearing the subcommand to a task processing unit coupled with the DTU; the sub-command processing unit accesses the storage medium according to the sub-command carried by the DTU.
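The command transmission unit's splitting step can be sketched as below, assuming a uniform 4 KiB sub-command granularity; the granularity and the names are assumptions, since the patent only requires sub-commands of a relatively consistent specification:

```python
SUB_SIZE = 4096  # assumed uniform sub-command granularity (4 KiB)

def split_io_command(offset, length):
    """Split one IO command into sub-commands of a relatively consistent
    specification: each covers at most SUB_SIZE bytes and does not cross a
    SUB_SIZE-aligned boundary."""
    subs = []
    end = offset + length
    while offset < end:
        n = min(SUB_SIZE - offset % SUB_SIZE, end - offset)
        subs.append({"offset": offset, "length": n})
        offset += n
    return subs

# A 10 KiB command starting 1 KiB past an aligned boundary → 3 sub-commands.
print([s["length"] for s in split_io_command(1024, 10240)])
# → [3072, 4096, 3072]
```

Each resulting sub-command addresses at most one aligned SUB_SIZE range, which is what lets the downstream task processing units stay simple.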
According to the twentieth downlink data path construction method of the first aspect of the present application, there is provided the twenty-first downlink data path construction method of the first aspect of the present application, wherein a first task processing unit of the at least one task processing unit is associated with the command transmission unit, the first task processing unit is directly connected to the command transmission unit or coupled to the command transmission unit through a channel, and the first task processing unit is the first of the at least one task processing unit to process the sub-command; and a second task processing unit of the at least one task processing unit is associated with the sub-command processing unit, wherein the second task processing unit is directly connected to the sub-command processing unit, and the second task processing unit is the last of the at least one task processing unit to process the sub-command.
According to a second aspect of the present application, there is provided a first information processing apparatus according to the second aspect of the present application, comprising a memory, a processor and a program stored on the memory and executable on the processor, the processor implementing the method according to any one of the above first aspects when executing the program.
According to a third aspect of the present application, there is provided a construction apparatus of a first downstream data path according to the third aspect of the present application, comprising: a first creating unit for creating at least one task processing unit; a second creating unit for creating at least one channel; an associating unit for associating the at least one task processing unit with the at least one channel.
According to a fourth aspect of the present application, there is provided a first downstream data path according to the fourth aspect of the present application, comprising: at least one task processing unit and at least one channel; a first task processing unit in at least one task processing unit acquires a DTU from a channel associated with the first task processing unit, wherein the DTU carries a sub-command, the first task processing unit is any one task processing unit in the at least one task processing unit, and a second task processing unit is a task processing unit in the at least one task processing unit except the first task processing unit; and the first task processing unit processes the sub-command and fills the DTU into a channel associated with the first task processing unit after the sub-command is processed, so that the second task processing unit acquires the DTU from the channel.
According to the first downlink data path of the fourth aspect of the present application, there is provided the second downlink data path of the fourth aspect of the present application, wherein the first task processing unit obtains the DTU from the first channel associated with itself, and after the sub-command processing is completed, fills the DTU into the second channel associated with itself, so that the second task processing unit obtains the DTU from the second channel; or, the first task processing unit acquires the DTU from a first channel associated with the first task processing unit, and fills the DTU into the first channel after the sub-command processing is completed, so that the second task processing unit acquires the DTU from the first channel; wherein the first channel and the second channel are both associated with the first task processing unit, and the first channel and the second channel are different channels.
According to the first or second downstream data path of the fourth aspect of the present application, there is provided a third downstream data path of the fourth aspect of the present application, the first task processing unit comprising an inbound interface, an outbound interface and a DTU processing module; the DTU processing module acquires the DTU from a channel associated with the first task processing unit through an inbound interface; the DTU processing module acquires a sub-command from the DTU, processes the sub-command and loads the processed sub-command to the DTU; the DTU processing module adds the DTU to a channel associated with the first task processing unit through an outbound interface.
According to one of the first to third downstream data paths of the fourth aspect of the present application, there is provided the fourth downstream data path of the fourth aspect of the present application, wherein each of the at least one channel includes a DTU list and a plurality of functions of an operational DTU list, the DTU list is used to accommodate a plurality of DTUs, and the functions of the operational DTU list at least include a first Push function and a Pop function; the task processing unit adds at least one DTU in the DTU list by calling a first Push function; and the task processing unit acquires at least one DTU from the DTU list by calling the Pop function.
According to a fourth downlink data path of the fourth aspect of the present application, there is provided a fifth downlink data path of the fourth aspect of the present application, each channel further includes a direct forwarding unit, and the functions of the multiple operation DTU lists further include a second Push function; when the first task processing unit calls the second Push function, the direct forwarding unit acquires the DTU from the second Push function, and provides the DTU to the second task processing unit.
According to one of the first to fifth downstream data paths of the fourth aspect of the present application, there is provided a sixth downstream data path according to the fourth aspect of the present application, the downstream data path further comprising at least one resource manager; the resource manager manages the use of the specified resources; at least one task processing unit is associated with at least one resource manager such that the task processing unit accesses the resource through its associated resource manager.
According to a sixth downstream data path of the fourth aspect of the present application, there is provided the seventh downstream data path of the fourth aspect of the present application, wherein the resource manager manages multiple types of resources, and the multiple types of resources at least include cache resources, address mapping table resources, computing resources, and/or storage medium resources.
According to a seventh downstream data path of the fourth aspect of the present application, there is provided the eighth downstream data path of the fourth aspect of the present application, wherein one resource manager manages a plurality of types of resources; alternatively, each of the plurality of resource managers manages one type of resource, and different resource managers manage different types of resources.
According to a seventh or eighth downstream data path of the fourth aspect of the present application, there is provided a ninth downstream data path of the fourth aspect of the present application, wherein the inbound interface of the task processing unit calls a Pop function of a channel associated with itself to obtain the DTU; and the outbound interface of the task processing unit calls a Push function of the channel associated with the outbound interface so as to add the DTU into the channel.
According to one of the first to ninth downstream data paths of the fourth aspect of the present application, there is provided the tenth downstream data path of the fourth aspect of the present application, wherein the first task processing unit further comprises a callback function; the first task processing unit writes a callback function index indicating the callback function in the DTU before adding the DTU into a channel; and calling the callback function through the callback function index so as to request a resource manager to release the resource.
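The callback-function-index mechanism of the tenth downstream data path can be sketched with a callback table: the DTU carries an integer index rather than a function reference, and a downstream unit invokes the callback through that index to request the resource manager to release a resource. All names here are assumptions:

```python
# The DTU carries an integer callback index rather than a function pointer.
callbacks = []

def register_callback(fn):
    callbacks.append(fn)
    return len(callbacks) - 1            # the callback function index

class CacheManager:
    """Stand-in resource manager holding one cache unit."""
    def __init__(self):
        self.held = 1
    def release(self):
        self.held -= 1

cache = CacheManager()
idx = register_callback(cache.release)   # releases the resource when invoked

# The first task processing unit writes the index before pushing the DTU:
dtu = {"sub_command": "read", "callback_index": idx}
# Later, a downstream unit invokes the callback through the index:
callbacks[dtu["callback_index"]]()
print(cache.held)  # → 0
```

Passing an index instead of a pointer keeps the DTU a plain message unit, which matters when the units run on different CPUs or address spaces.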
According to one of the first to tenth downlink data paths of the fourth aspect of the present application, an eleventh downlink data path of the fourth aspect of the present application is provided, wherein the plurality of task processing units are associated with a third channel, and if the plurality of task processing units call the first Push function of the third channel, the plurality of DTUs from the plurality of task processing units are added to the DTU list in the third channel, and the third channel is any one of the at least one channel.
According to one of the first to eleventh downstream data paths of the fourth aspect of the present application, there is provided the twelfth downstream data path of the fourth aspect of the present application, a plurality of channels being associated with the first task processing unit, the first task processing unit acquiring DTUs from the plurality of channels according to priorities of the plurality of channels; alternatively, the first task processing unit polls a plurality of channels to acquire the DTUs from the plurality of channels.
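The two acquisition policies of the twelfth downstream data path, priority order and polling, can be sketched as follows (function names assumed):

```python
from collections import deque

def pop_by_priority(channels):
    """Take the next DTU from the highest-priority non-empty channel."""
    for ch in channels:                  # channels listed in priority order
        if ch:
            return ch.popleft()
    return None

def pop_polling(channels, start):
    """Poll channels round-robin from `start`; return (dtu, next_start)
    so no channel is starved across repeated calls."""
    n = len(channels)
    for i in range(n):
        ch = channels[(start + i) % n]
        if ch:
            return ch.popleft(), (start + i + 1) % n
    return None, start

high, low = deque(["h1", "h2"]), deque(["l1"])
print(pop_by_priority([high, low]))           # → h1
dtu, nxt = pop_polling([high, low], start=0)  # takes h2; next poll starts at low
print(dtu, nxt)                               # → h2 1
```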
According to one of the first to twelfth downstream data paths of the fourth aspect of the present application, there is provided a thirteenth downstream data path of the fourth aspect of the present application, where the downstream data path includes a third task processing unit, a fourth task processing unit, a fifth task processing unit, a sixth task processing unit, a plurality of channels, and a resource manager, the third task processing unit processes a cache task, the fourth task processing unit processes an address translation task, the fifth task processing unit processes a data assembly task, and the sixth task processing unit processes a garbage collection task; the third task processing unit acquires a DTU from the third channel or the seventh channel, executes the cache task, and adds the DTU carrying the execution result of the cache task to the fourth channel; the fourth task processing unit acquires a DTU from the fourth channel, executes an address conversion task, and adds the DTU carrying an execution result of the address conversion task to a fifth channel; the fifth task processing unit acquires the DTU from the fifth channel, executes a data assembly task, and sends the DTU carrying the execution result of the data assembly task to the sub-command processing unit; and the sixth task processing unit acquires the DTU from the sixth channel, executes the garbage collection task, and adds the DTU carrying the execution result of the garbage collection task to the seventh channel.
According to a twelfth downstream data path of the fourth aspect of the present application, there is provided a fourteenth downstream data path of the fourth aspect of the present application, where the downstream data path includes a third task processing unit, a fourth task processing unit, a fifth task processing unit, a sixth task processing unit, a seventh task processing unit, an eighth task processing unit, a plurality of channels, and a resource manager; the third task processing unit and the fourth task processing unit process a cache task, the fifth task processing unit and the sixth task processing unit process an address translation task, the seventh task processing unit processes a data assembly task, and the eighth task processing unit processes a garbage collection task; the third task processing unit acquires a DTU from the third channel, executes a cache task, and adds the DTU carrying the execution result of the cache task to the fifth channel; the fourth task processing unit acquires a DTU from the fourth channel, executes the cache task, and adds the DTU carrying the execution result of the cache task to the sixth channel; the fifth task processing unit acquires a DTU from the fifth channel and/or the tenth channel, executes an address conversion task, and adds the DTU carrying an execution result of the address conversion task to a seventh channel; the sixth task processing unit acquires a DTU from the sixth channel and/or the eleventh channel, executes an address conversion task, and adds the DTU carrying an execution result of the address conversion task to the eighth channel; the seventh task processing unit acquires the DTU from the seventh channel and/or the eighth channel, executes a data assembly task, and sends the DTU carrying the execution result of the data assembly task to the sub-command processing unit; and the eighth task processing unit acquires the DTU from the ninth channel, executes the garbage collection task, and adds the DTU carrying the execution result of the garbage collection task to the tenth channel and/or the eleventh channel.
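The staged structure described above (cache, then address translation, then data assembly, each pair of stages linked by a channel) can be sketched as a simple queue pipeline. This is a rough illustration only; the task names follow the text but the data layout of a DTU here is a hypothetical dictionary.

```python
from collections import deque

def make_stage(task_name, out_channel):
    """Build a pipeline stage: drain DTUs from an input channel, record
    this stage's execution result in each DTU, and pass it downstream."""
    def stage(in_channel):
        while in_channel:
            dtu = in_channel.popleft()
            dtu["results"].append(task_name)  # stand-in for real task work
            out_channel.append(dtu)
    return stage

# Channels linking the stages (numbering loosely follows the text).
ch4, ch5, ch_out = deque(), deque(), deque()
cache = make_stage("cache", ch4)
translate = make_stage("address_translation", ch5)
assemble = make_stage("data_assembly", ch_out)

# One DTU carrying a sub-command enters at the head of the pipeline.
ch3 = deque([{"subcommand": "read-0", "results": []}])
cache(ch3)
translate(ch4)
assemble(ch5)
# ch_out now holds the DTU, with results recorded in pipeline order.
```

Because stages communicate only through channels, each task processing unit can be scheduled independently, which matches the claim that the task processing units are schedulable.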
According to a fifth aspect of the present application, there is provided a method for constructing a first uplink data path according to the fifth aspect of the present application, including: writing one or more callback function indexes into a Data Transmission Unit (DTU) in one or more task processing units of a downlink data path to construct an uplink data path for the DTU; wherein, the uplink data path is formed by one or more callback functions indicated by the one or more callback function indexes, and the DTU carries a subcommand; and responding to the completion of the sub-command processing, calling a callback function indicated by one or more callback function indexes recorded in the DTU, so as to return the processing result of the sub-command through the uplink data path.
According to the method for constructing the first uplink data path of the fifth aspect of the present application, there is provided the method for constructing the second uplink data path of the fifth aspect of the present application, wherein the one or more callback function indexes in the DTU are ordered, and the callback functions indicated by the one or more callback function indexes in the DTU that are called in order constitute the uplink data path, wherein the order of calling the callback functions indicated by the one or more callback function indexes in the DTU is an order reverse to an order of writing the one or more callback function indexes to the DTU in constructing the uplink data path.
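The reverse-order callback scheme above amounts to treating the callback function indexes in the DTU as a stack: each task processing unit pushes an index on the way down, and completion pops and calls them in LIFO order. A minimal Python sketch, with a hypothetical callback table and callback names:

```python
class DTU:
    """Minimal DTU sketch: callback indexes are appended while the
    sub-command travels down the downlink data path."""
    def __init__(self, subcommand):
        self.subcommand = subcommand
        self.callback_indexes = []

trace = []

# Hypothetical callback table: an index recorded in the DTU names a function here.
CALLBACK_TABLE = {
    0: lambda dtu: trace.append("release_cache_resource"),
    1: lambda dtu: trace.append("release_translation_resource"),
}

def complete(dtu):
    """Uplink data path: call callbacks in reverse of the order they were
    written, so the last stage's cleanup runs first."""
    for idx in reversed(dtu.callback_indexes):
        CALLBACK_TABLE[idx](dtu)

dtu = DTU("write-7")
dtu.callback_indexes.append(0)  # written by an earlier (cache) stage
dtu.callback_indexes.append(1)  # written by a later (translation) stage
complete(dtu)
# trace == ["release_translation_resource", "release_cache_resource"]
```

The LIFO unwinding mirrors how nested resource acquisitions are normally released: resources taken last are freed first.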
According to the method for constructing the first or second uplink data path of the fifth aspect of the present application, there is provided the method for constructing the third uplink data path of the fifth aspect of the present application, wherein when the first task processing unit processes the sub-command, the first task processing unit writes a first callback function index into the DTU; the callback function indicated by the first callback function index is used for releasing a first resource allocated by the first task processing unit for processing the sub-command, and the first task processing unit is a task processing unit in the downlink data path.
According to one of the methods for constructing the first to third uplink data paths of the fifth aspect of the present application, there is provided the method for constructing the fourth uplink data path of the fifth aspect of the present application, wherein when the second task processing unit processes the subcommand, the second task processing unit writes a second callback function index into the DTU; the callback function indicated by the second callback function index is used for releasing the first resource allocated by the first task processing unit for processing the subcommand, and the second task processing unit is a task processing unit in the downlink data path.
According to one of the methods for constructing the first to fourth uplink data paths of the fifth aspect of the present application, there is provided the method for constructing the fifth uplink data path of the fifth aspect of the present application, wherein the downlink data path includes a plurality of task processing units, and each task processing unit of the plurality of task processing units writes a callback function index into the DTU in a process of processing the subcommand; the subcommands are obtained by each task processing unit from the DTU, and callback functions written into the DTU by each task processing unit are the same or different.
According to one of the methods for constructing the first to fifth uplink data paths of the fifth aspect of the present application, there is provided the method for constructing the sixth uplink data path of the fifth aspect of the present application, wherein the uplink data path further includes a monitoring unit, and the monitoring unit is a first task processing unit of the uplink data path; the monitoring unit monitors and identifies whether the sub-command has been processed; and in response to completion of the sub-command processing, the monitoring unit acquires the DTU indicating the processing result of the sub-command.
According to a sixth uplink data path constructing method of a fifth aspect of the present application, there is provided the seventh uplink data path constructing method of the fifth aspect of the present application, wherein a last task processing unit of the downlink data path sends the DTU to a sub-command processing unit, and the sub-command processing unit buffers the DTU, processes the sub-command indicated by the DTU, and provides a processing result of the sub-command to the monitoring unit.
According to one of the methods of constructing the first to seventh uplink data paths of the fifth aspect of the present application, there is provided the method of constructing the eighth uplink data path of the fifth aspect of the present application, wherein the one or more task processing units are schedulable.
According to one of the methods for constructing the first to eighth uplink data paths of the fifth aspect of the present application, there is provided the method for constructing the ninth uplink data path of the fifth aspect of the present application, wherein a callback function called last in the uplink data path sends the DTU to a command transmission unit; the command transmission unit acquires the processing result of the sub-command according to the indication of the DTU; the command transmission unit returns the processing result of one or more sub-commands to a command issuer, the command comprising the one or more sub-commands; and the command transmission unit releases the DTU.
According to the method for constructing the ninth uplink data path of the fifth aspect of the present application, there is provided the method for constructing the tenth uplink data path of the fifth aspect of the present application, wherein, in response to receiving a command, the command transmission unit splits the command into one or more sub-commands, allocates a DTU to each sub-command, and indicates the corresponding sub-command in the allocated DTU.
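The split-and-merge behavior of the command transmission unit (split a command into sub-commands, allocate a DTU per sub-command, and later combine the per-sub-command results into one command result) can be sketched as below. This is an illustrative assumption: the 4 KiB split granularity, field names, and class name are hypothetical, not from the application.

```python
import itertools

class CommandTransmissionUnit:
    """Sketch: split an incoming command into fixed-size sub-commands,
    allocate one DTU per sub-command, and merge per-sub-command results
    back into a single command result."""

    SPLIT_UNIT = 4096  # hypothetical sub-command granularity (bytes)

    def __init__(self):
        self._dtu_ids = itertools.count()
        self.pending = {}  # command id -> expected count and collected results

    def split(self, cmd_id, offset, length):
        dtus = []
        for off in range(offset, offset + length, self.SPLIT_UNIT):
            dtus.append({
                "dtu_id": next(self._dtu_ids),
                "cmd_id": cmd_id,
                "offset": off,
                "length": min(self.SPLIT_UNIT, offset + length - off),
            })
        self.pending[cmd_id] = {"expected": len(dtus), "results": []}
        return dtus

    def on_subcommand_done(self, dtu, result):
        entry = self.pending[dtu["cmd_id"]]
        entry["results"].append(result)
        if len(entry["results"]) == entry["expected"]:
            # All sub-commands done: combine into the command result.
            return entry["results"]
        return None  # still waiting for sibling sub-commands
```

A 10000-byte command would thus be split into three sub-commands (4096 + 4096 + 1808), and the command result is only reported once all three DTUs have returned through the uplink data path.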
According to a sixth aspect of the present application, there is provided the first information processing apparatus according to the sixth aspect of the present application, comprising a memory, a processor, and a program stored on the memory and executable on the processor, the processor implementing any one of the methods of the fifth aspect when executing the program.
According to a seventh aspect of the present application, there is provided a device for constructing a first uplink data path according to the seventh aspect of the present application, including: a callback function index generating unit, configured to write one or more callback function indexes into a Data Transmission Unit (DTU) in one or more task processing units of a downlink data path, so as to construct an uplink data path for the DTU; wherein the DTU carries a subcommand, one task processing unit in the downlink data path acquires and processes the subcommand indicated in the DTU and provides the DTU to another task processing unit in the downlink data path; and a callback function index calling unit, configured to, in response to the completion of the processing of the subcommand, call one or more callback functions recorded in the DTU in sequence, so as to return the processing result of the subcommand through the uplink data path.
According to an eighth aspect of the present application, there is provided a first data processing system according to the eighth aspect of the present application, comprising an uplink data path and a downlink data path; the downlink data path comprises one or more task processing units; wherein a first task processing unit of the one or more task processing units is coupled with the command transmission unit, and a second task processing unit of the one or more task processing units is coupled with the sub-command processing unit; the downlink data path processes a DTU provided by the command transmission unit, and constructs the uplink data path for the DTU in the process of processing the DTU; and in response to the sub-command processing unit completing the processing of the sub-command, the command transmission unit acquires the processing result of the sub-command through the uplink data path.
According to a first data processing system of an eighth aspect of the present application, there is provided the second data processing system of the eighth aspect of the present application, the upstream data path further comprising a monitoring unit, the sub-command processing unit being coupled with the monitoring unit; the monitoring unit acquires a sub-command processing completion instruction from the sub-command processing unit; in response to the sub-command processing completion indication, the monitoring unit calls a callback function indicated by a Data Transmission Unit (DTU) to provide the processing result of the sub-command to the command transmission unit through the uplink data path.
According to the first or second data processing system of the eighth aspect of the present application, there is provided the third data processing system of the eighth aspect of the present application, wherein the monitoring unit acquires the DTU according to the sub-command processing completion instruction; acquiring at least one callback function index from the DTU; wherein the callback function index indicates a callback function, and the at least one callback function index is written into the DTU when one or more task processing units of the downlink data path process subcommands; wherein the at least one callback function constitutes the upstream data path.
According to a third data processing system of the eighth aspect of the present application, there is provided the fourth data processing system of the eighth aspect of the present application, wherein the monitoring unit calls a callback function indicated by the at least one callback function index.
According to a fourth data processing system of the eighth aspect of the present application, there is provided the fifth data processing system of the eighth aspect of the present application, wherein the monitoring unit sequentially calls the at least one callback function corresponding to the at least one callback function index in a forward order according to a writing order of the at least one callback function index.
According to a fourth data processing system of the eighth aspect of the present application, there is provided the sixth data processing system of the eighth aspect of the present application, wherein the monitoring unit sequentially calls the at least one callback function corresponding to the at least one callback function index in reverse order according to a writing order of the at least one callback function index.
According to one of the fourth to sixth data processing systems of the eighth aspect of the present application, there is provided the seventh data processing system of the eighth aspect of the present application, wherein, in response to the at least one callback function being called, a first callback function of the at least one callback function is executed to send the processing result of the sub-command to the command transmission unit; the first callback function is a callback function written into the DTU when the first task processing unit processes the sub-command.
According to a seventh data processing system of the eighth aspect of the present application, there is provided the eighth data processing system of the eighth aspect of the present application, wherein the first callback function is executed to release the first resource.
According to a seventh or eighth data processing system of the eighth aspect of the present application, there is provided a ninth data processing system according to the eighth aspect of the present application, the data processing system further comprising a resource manager; when the first callback function is executed, it requests the resource manager to release a first resource, the first resource being allocated by the resource manager at the request of the first task processing unit when the first task processing unit processes the subcommand; and in response to the request to release the first resource, the resource manager releases the first resource.
According to one of the fourth to ninth data processing systems of the eighth aspect of the present application, there is provided the tenth data processing system of the eighth aspect of the present application, wherein, in response to the at least one callback function being called, a second callback function of the at least one callback function is executed to request the resource manager to release a second resource, the second resource being allocated by the resource manager at the request of the second task processing unit when the second task processing unit processes a subcommand; and in response to the request to release the second resource, the resource manager releases the second resource.
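The allocate/release protocol between task processing units and the resource manager described above can be sketched as a small bookkeeping class. This is an illustrative assumption; method and owner names are hypothetical, and "instances of a resource" are modeled as a simple counter.

```python
class ResourceManager:
    """Sketch of a resource manager: task processing units request
    resource instances while processing a sub-command, and the matching
    callbacks on the uplink data path release them on completion."""

    def __init__(self, total):
        self.free = total   # available resource instances
        self.held = {}      # owner -> count of held instances

    def allocate(self, owner, n=1):
        if self.free < n:
            return False  # caller must retry or stall its channel
        self.free -= n
        self.held[owner] = self.held.get(owner, 0) + n
        return True

    def release(self, owner, n=1):
        # Called from a callback on the uplink data path.
        assert self.held.get(owner, 0) >= n, "releasing an unheld resource"
        self.held[owner] -= n
        self.free += n
```

Recording the release as a callback index in the DTU, rather than releasing immediately, ensures the resource stays held for exactly as long as the sub-command is in flight.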
According to a tenth data processing system of the eighth aspect of the present application, there is provided the eleventh data processing system of the eighth aspect of the present application, wherein the monitoring unit calls the first callback function after calling the second callback function.
According to one of the first to eleventh data processing systems of the eighth aspect of the present application, there is provided the twelfth data processing system of the eighth aspect of the present application, wherein the monitoring unit polls the sub-command processing unit to acquire the sub-command processing completion indication.
According to one of the first to twelfth data processing systems of the eighth aspect of the present application, there is provided the thirteenth data processing system of the eighth aspect of the present application, wherein the at least one task processing unit is a processing unit that can be scheduled.
According to a ninth aspect of the present application, there is provided a first data processing system according to the ninth aspect of the present application, comprising a command transmission unit and a plurality of data paths; each data path comprises a downlink data path and an uplink data path; the downlink data path comprises one or more task processing units and is coupled with the command transmission unit; the command transmission unit allocates a DTU to each of one or more first sub-commands to be processed, so that the DTU carries the sub-command, and provides the DTU carrying the sub-command to the downlink data path of a first data path of the plurality of data paths; wherein the first sub-command is associated with a command to access a first device, and the first data path corresponds to the first device.
A first data processing system according to a ninth aspect of the present application provides a second data processing system according to the ninth aspect of the present application, a first task processing unit of a downstream data path being coupled with the command transmission unit; the command transmission unit provides the DTU carrying the first sub-command to a first task processing unit of a downstream data path of the first data path.
According to the first or second data processing system of the ninth aspect of the present application, there is provided the third data processing system of the ninth aspect of the present application, wherein the command transmission unit allocates a DTU to carry the sub-command for the one or more second sub-commands to be processed, and provides the DTU carrying the second sub-command to the first task processing unit of the downstream data path of the second data path; wherein the second sub-command is associated with a command to access a second device, and a second datapath corresponds to the second device.
According to one of the first to third data processing systems of the ninth aspect of the present application, there is provided the fourth data processing system of the ninth aspect of the present application, wherein the plurality of data paths each correspond to a device to be accessed by a command; the command transmission unit determines, according to the device to be accessed by the command, the data path to which the sub-command associated with the command is to be provided.
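The device-based dispatch just described (one data path per device, with the command transmission unit choosing the path by the command's target device) can be sketched as a lookup table. The class and field names here are hypothetical illustrations, not from the application.

```python
class Router:
    """Sketch of the command transmission unit's dispatch step: each data
    path corresponds to one device, and a sub-command is routed to the
    downlink channel of the path whose device the command targets."""

    def __init__(self):
        self.paths = {}  # device id -> downlink channel (a plain list here)

    def add_path(self, device, channel):
        self.paths[device] = channel

    def dispatch(self, command):
        # Pick the data path by the device the command accesses, and
        # enqueue a DTU (sketched as a dict) on its downlink channel.
        channel = self.paths[command["device"]]
        channel.append({"subcommand": command["op"]})
        return channel
```

Routing by device keeps each path's traffic independent, so, for example, an NVM-backed path and a RAM-backed path never contend for the same channels.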
According to one of the first to fourth data processing systems of the ninth aspect of the present application, there is provided the fifth data processing system of the ninth aspect of the present application, the downstream data path being a downstream data path as described in any one of the above-mentioned fourth aspects; and/or the upstream data path is the upstream data path according to any of the above eighth aspects.
According to one of the first to fifth data processing systems of the ninth aspect of the present application, there is provided the sixth data processing system of the ninth aspect of the present application, further comprising a first sub-command processing unit; a downstream data path of a first plurality of data paths of the plurality of data paths is coupled to the first sub-command processing unit; and the last task processing unit of the downlink data path of each of the first plurality of data paths sends the DTU to the first sub-command processing unit, and the first sub-command processing unit buffers the DTU received from the downlink data path of each of the first plurality of data paths and processes the sub-command indicated by the received DTU.
According to a sixth data processing system of the ninth aspect of the present application, there is provided the seventh data processing system of the ninth aspect of the present application, the first sub-command processing unit being coupled to the monitoring unit of the upstream data path of each of the first plurality of data paths; the monitoring unit acquires a processing result of the sub command from the first sub command processing unit; the monitoring unit acquires a DTU bearing the processed sub-command, acquires at least one callback function index from the DTU of the processed sub-command, and calls a callback function indicated by the at least one callback function index.
According to a sixth or seventh data processing system of the ninth aspect of the present application, there is provided the eighth data processing system of the ninth aspect of the present application, the first sub-command processing unit being coupled to one or more first NVM chips; and/or each of the first plurality of data paths is associated with one of a plurality of namespaces.
According to an eighth data processing system of the ninth aspect of the present application, there is provided the ninth data processing system of the ninth aspect of the present application, wherein each of the first plurality of data paths is coupled to a first resource manager, and the task processing units of the downstream data path of each of the first plurality of data paths access the first resource through the first resource manager.
According to a ninth data processing system of the ninth aspect of the present application, there is provided the tenth data processing system of the ninth aspect of the present application, the first resource being associated with the one or more first NVM chips.
According to a tenth data processing system of the ninth aspect of the present application, there is provided the eleventh data processing system of the ninth aspect of the present application, wherein the first plurality of data paths are each coupled to one of the plurality of second resource managers, and the task processing units of the downstream data paths of each of the first plurality of data paths access the second resource through the coupled second resource manager.
According to one of the sixth to eleventh data processing systems of the ninth aspect of the present application, there is provided the twelfth data processing system of the ninth aspect of the present application, the first sub-command processing unit being coupled to one or more first NVM chips; and/or each of the first plurality of data paths is associated with one of: a storage device conforming to the SATA protocol, a storage device conforming to an open-channel protocol, a storage device conforming to a key-value (K-V) storage protocol, a storage device conforming to the NVMe protocol, and/or one of a plurality of namespaces conforming to the NVMe protocol.
According to one of the sixth to twelfth data processing systems of the ninth aspect of the present application, there is provided the thirteenth data processing system of the ninth aspect of the present application, further comprising a second sub-command processing unit; a downstream data path of a third data path of the plurality of data paths is coupled to the second sub-command processing unit; a downlink data path of the third data path sends the DTU to the second sub-command processing unit, and the second sub-command processing unit processes a received third sub-command indicated by the DTU; wherein the third sub-command is associated with a command to access a third device, and a third data path corresponds to the third device.
According to a thirteenth data processing system of the ninth aspect of the present application, there is provided the fourteenth data processing system of the ninth aspect of the present application, the second sub-command processing unit being coupled to one or more random access memory chips; the third data path is associated with a non-volatile memory device.
According to a fourteenth data processing system of the ninth aspect of the present application, there is provided the fifteenth data processing system of the ninth aspect of the present application, wherein the second sub-command processing unit obtains a processing result of the third sub-command, obtains the DTU carrying the processed third sub-command, and provides the DTU carrying the processed third sub-command to the command transmission unit through an uplink data path of the third data path.
According to one of the sixth to fifteenth data processing systems of the ninth aspect of the present application, there is provided the sixteenth data processing system of the ninth aspect of the present application, wherein the command transmission unit provides the sub-command associated with a management command to the first task processing unit of a fourth data path of the plurality of data paths.
According to one of the first to sixteenth data processing systems of the ninth aspect of the present application, there is provided the seventeenth data processing system of the ninth aspect of the present application, wherein the plurality of data paths are each coupled to a first resource manager that manages the first resource, the first resource manager handling conflicts among the plurality of data paths in using multiple instances of the first resource; and/or each of the plurality of data paths is coupled to one of a plurality of second resource managers that manage the second resource, each second resource manager exclusively occupying one or more instances of the second resource.
According to a tenth aspect of the present application, there is provided a first data processing method according to the tenth aspect of the present application, applied to a data processing system, the data processing system including a command transmission unit, a plurality of data paths, and at least one sub-command processing unit, each data path including an uplink data path and a downlink data path, the method including: the command transmission unit allocates a DTU to each of one or more first sub-commands so that the DTU carries the sub-command, and provides the DTU carrying the sub-command to the downlink data path of a first data path of the plurality of data paths; wherein the first sub-command is associated with a command to access a first device, and the first data path corresponds to the first device.
According to a first data processing method of a tenth aspect of the present application, there is provided the second data processing method of the tenth aspect of the present application, wherein the command transmission unit provides the DTU carrying the first sub-command to the first task processing unit of the downstream data path of the first data path.
According to the first or second data processing method of the tenth aspect of the present application, there is provided the third data processing method of the tenth aspect of the present application, wherein the command transmission unit allocates a DTU to carry a sub-command for one or more second sub-commands to be processed, and provides the DTU carrying the second sub-command to the first task processing unit of the downstream data path of the second data path; wherein the second sub-command is associated with a command to access a second device, and a second datapath corresponds to the second device.
According to one of the first to third data processing methods of the tenth aspect of the present application, there is provided the fourth data processing method of the tenth aspect of the present application, wherein the plurality of data paths each correspond to a device to be accessed by a command; the method further comprises the following steps: the command transmission unit determines, according to the device to be accessed by the command, the data path to which the sub-command associated with the command is to be provided.
According to one of the first to fourth data processing methods of the tenth aspect of the present application, there is provided the fifth data processing method of the tenth aspect of the present application, wherein the downstream data path is the downstream data path as set forth in any one of the above-mentioned fourth aspects; and/or the upstream data path is the upstream data path of any of the above eighth aspects.
According to one of the first to fifth data processing methods of the tenth aspect of the present application, there is provided the sixth data processing method of the tenth aspect of the present application, wherein the downlink data paths of a first plurality of the plurality of data paths are each coupled to a first sub-command processing unit, the method further comprising: the downlink data paths of the first plurality of data paths each send DTUs to the first sub-command processing unit; and the first sub-command processing unit buffers the plurality of DTUs sent by the first plurality of data paths and processes the plurality of sub-commands indicated by the DTUs.
According to a sixth data processing method of a tenth aspect of the present application, there is provided the seventh data processing method of the tenth aspect of the present application, the data processing system further comprising a monitoring unit, the first sub-command processing unit being coupled with the monitoring unit, the method further comprising: the monitoring unit acquires a processing result of the sub-command from at least one sub-command processing unit; the monitoring unit acquires a DTU bearing the processed sub-command, acquires at least one callback function index from the DTU of the processed sub-command, and calls a callback function indicated by the at least one callback function index.
According to one of the first to seventh data processing methods of the tenth aspect of the present application, there is provided the eighth data processing method according to the tenth aspect of the present application, the data processing system further comprising a resource manager; each of the first plurality of data paths is coupled to a first resource manager; the method further comprises the following steps: the task processing unit of the downstream data path of each of the first plurality of data paths accesses the first resource through the first resource manager.
According to a seventh or eighth data processing method of the tenth aspect of the present application, there is provided the ninth data processing method of the tenth aspect of the present application, the method further comprising: after receiving, through the uplink data paths, the processing results of one or more subcommands, the command transmission unit combines the processing results of the one or more subcommands into the processing result of one command.
According to a ninth data processing method of the tenth aspect of the present application, there is provided the tenth data processing method of the tenth aspect of the present application, the method further comprising: after the command transmission unit receives the processing results of all the sub-commands associated with one command, the command transmission unit combines the processing results of all the sub-commands into the processing result of the command.
According to a sixth or seventh data processing method of the tenth aspect of the present application, there is provided the eleventh data processing method of the tenth aspect of the present application, the data processing system further comprising a second sub-command processing unit; a downstream data path of a third data path of the plurality of data paths is coupled to the second sub-command processing unit; the method further comprises the following steps: a downlink data path of the third data path sends the DTU to the second sub-command processing unit, and the second sub-command processing unit processes a received third sub-command indicated by the DTU; wherein the third sub-command is associated with a command to access a third device, and a third data path corresponds to the third device.
According to an eleventh data processing method of a tenth aspect of the present application, there is provided the twelfth data processing method of the tenth aspect of the present application, the second sub-command processing unit being coupled to one or more random access memory chips; the third data path is associated with a non-volatile memory device.
According to a twelfth data processing method of the tenth aspect of the present application, there is provided the thirteenth data processing method of the tenth aspect of the present application, the method further comprising: the second sub-command processing unit acquires the processing result of the third sub-command, loads the processed third sub-command into a DTU, and provides the carried third sub-command to the command transmission unit through the uplink data path of the third data path.
According to one of the first to thirteenth data processing methods of the tenth aspect of the present application, there is provided the fourteenth data processing method of the tenth aspect of the present application, wherein the plurality of data paths are each coupled to a first resource manager that manages the first resource, the first resource manager resolving conflicts when the plurality of data paths use instances of the first resource; and/or each of the plurality of data paths is coupled to one of a plurality of second resource managers that manage the second resource, each second resource manager exclusively managing one or more instances of the second resource.
According to an eleventh aspect of the present application, there is provided a first memory device according to the eleventh aspect of the present application, comprising at least one memory chip and at least one data path, each data path corresponding to one or more memory chips; in response to receiving a command, the memory device accesses a memory chip corresponding to one of the at least one data path through the one data path and completes operations indicated by the command, wherein the operations include a read operation, a write operation and an erase operation.
According to a first memory device of an eleventh aspect of the present application, there is provided the second memory device of the eleventh aspect of the present application, wherein the at least one data path includes at least one of a first-type path, a second-type path, and a third-type path, and the first-type path, the second-type path, and the third-type path each include at least one task processing unit that may be the same as or different from those of the other path types.
According to a second storage device of an eleventh aspect of the present application, there is provided the third storage device of the eleventh aspect of the present application, the at least one data path further includes a fourth type path, the fourth type path handling a management command.
According to a second storage device of an eleventh aspect of the present application, there is provided the fourth storage device of the eleventh aspect of the present application, each data path including a first task processing unit, a second task processing unit, and a third task processing unit, wherein the first task processing unit processes a cache task, and the second task processing unit processes an address translation task.
According to a second or fourth storage device of the eleventh aspect of the present application, there is provided the fifth storage device of the eleventh aspect of the present application, wherein the second type of path and the first type of path include a third task processing unit that processes a data assembling task.
According to a second, fourth or fifth storage device of the eleventh aspect of the present application, there is provided the sixth storage device of the eleventh aspect of the present application, wherein the first type of pathway includes a fourth task processing unit that processes a garbage collection task.
According to one of the first to sixth memory devices of the eleventh aspect of the present application, there is provided the seventh memory device of the eleventh aspect of the present application, wherein the memory chip corresponding to the third-type path is a RAM chip, and the memory chip corresponding to the first-type path or the second-type path is an NVM chip.
According to one of the first to seventh memory devices of the eleventh aspect of the present application, there is provided the eighth memory device according to the eleventh aspect of the present application, further comprising a command transmission unit; in response to receiving a command, the command transmission unit splits the command into at least one sub-command, and determines, according to identification information carried by the command, the type of data path to which the sub-commands split from the command are provided.
According to an eighth memory device of the eleventh aspect of the present application, there is provided the ninth memory device of the eleventh aspect of the present application, wherein the command transfer unit sends the at least one sub command to the first data path to access the first memory chip through the first data path; the identification information indicates that an object of operation is the first memory chip, and the first data path is a data path corresponding to the first memory chip.
According to an eighth or ninth memory device of the eleventh aspect of the present application, there is provided the tenth memory device of the eleventh aspect of the present application, wherein the command transmission unit determines the first memory chip to be accessed according to the identification information, and determines the first data path according to the correspondence between memory chips and data paths.
According to a first memory device of an eleventh aspect of the present application, there is provided the eleventh memory device of the eleventh aspect of the present application, including a plurality of command transfer units each corresponding to one data path, and a command assigning unit; in response to receiving a command, the command distribution unit determines a first command transmission unit according to identification information carried by the command, the first command transmission unit corresponds to a first data path, the identification information indicates that an object of operation is the first memory chip, and the first data path is a data path corresponding to the first memory chip; the first command transmission unit splits a command into at least one sub-command and sends the at least one sub-command to the first data path to access the first memory chip through the first data path.
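The dispatch behavior described in the eighth through eleventh memory devices above — splitting a command into sub-commands and routing them, by the identification information, to the data path corresponding to the addressed memory chip — can be sketched as follows. This is a minimal Python illustration; the function name, the dictionary-based chip-to-path mapping, and the fixed sub-command size are assumptions for the sketch, not details from the application:

```python
def dispatch(command, chip_to_path, subcommand_size=4096):
    """Split a command into fixed-size sub-commands and tag each one with
    the data path of the memory chip named by the identification info."""
    # identification information -> target chip -> corresponding data path
    path = chip_to_path[command["chip_id"]]
    subs = []
    offset, remaining = command["offset"], command["length"]
    while remaining > 0:
        n = min(subcommand_size, remaining)
        subs.append({"offset": offset, "length": n, "path": path})
        offset += n
        remaining -= n
    return subs
```

For example, a 10000-byte command addressed to chip 0 would be split into three sub-commands, all routed to the same data path.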
According to a twelfth aspect of the present application, there is provided a first data path according to the twelfth aspect of the present application, comprising a channel, at least one task processing unit, and at least one resource manager, the at least one resource manager managing different types of resources; when processing a sub-command, the at least one task processing unit requests the at least one resource manager to allocate resources; in response to acquiring the resources, the at least one task processing unit allocates the acquired resources to a DTU (Data Transfer Unit), wherein the DTU is acquired from the channel by the at least one task processing unit and carries the sub-command; in response to completion of use of the resources allocated to the DTU, the at least one task processing unit releases the resources to the at least one resource manager.
According to the first data path of the twelfth aspect of the present application, there is provided the second data path of the twelfth aspect of the present application, wherein, when processing the sub-command, the at least one task processing unit determines, according to the type of the sub-command, whether to request resources from the at least one resource manager; if not, it does not request resources from the at least one resource manager.
According to the first or second data path of the twelfth aspect of the present application, there is provided the third data path of the twelfth aspect of the present application, wherein, when the at least one task processing unit allocates a plurality of resources to the DTU, the at least one task processing unit releases the resources to the at least one resource manager after all of the plurality of resources have been used; in response to the resources being released, the at least one resource manager reclaims the plurality of resources.
According to a twelfth aspect of the present application, there is provided a fourth data path according to the twelfth aspect of the present application, wherein each task processing unit of the at least one task processing unit releases at least one resource allocated to the DTU; or the first task processing unit in the at least one task processing unit releases at least one resource allocated to the DTU by the second task processing unit of the at least one task processing unit.
According to one of the first to fourth data paths of the twelfth aspect of the present application, there is provided the fifth data path of the twelfth aspect of the present application, according to the type of the resource, each task processing unit allocates a resource of one type to the DTU, and each task processing unit releases at least one resource of its corresponding resource type.
According to a twelfth aspect of the present application there is provided a sixth data path according to the twelfth aspect of the present application, the data path comprising one resource manager, the resource manager managing at least one type of resource; in response to a resource allocation request, the one resource manager allocates at least one resource and provides it to at least one task processing unit; wherein, when more than one resource is allocated, the allocated resources are of different types.
According to the sixth data path of the twelfth aspect of the present application, there is provided the seventh data path of the twelfth aspect of the present application, the data path comprising a plurality of resource managers that manage a plurality of types of resources; in response to a resource allocation request, the plurality of resource managers allocate and provide a plurality of resources of different types to the plurality of task processing units.
According to one of the first to seventh data paths of the twelfth aspect of the present application, there is provided the eighth data path of the twelfth aspect of the present application, wherein when at least one task processing unit allocates the obtained resource to the DTU, the at least one task processing unit further writes a callback function index into the DTU, and the callback function index points to a callback function in the at least one task processing unit.
According to an eighth data path of the twelfth aspect of the present application, there is provided a ninth data path according to the twelfth aspect of the present application, wherein the resources are released to the at least one resource manager when the callback function is called.
According to one of the first to ninth data paths of the twelfth aspect of the present application, there is provided the tenth data path of the twelfth aspect of the present application, wherein the at least one task processing unit sends a resource release request to the at least one resource manager during the downlink transmission or the uplink transmission of the data.
According to one of the first to tenth data paths of the twelfth aspect of the present application, there is provided the eleventh data path of the twelfth aspect of the present application, wherein, after the at least one task processing unit acquires the DTU from the channel, it determines whether use of the resources allocated to the DTU is complete; if so, the at least one task processing unit releases the resources to the at least one resource manager.
According to an eleventh data path of the twelfth aspect of the present application, there is provided the twelfth data path of the twelfth aspect of the present application, wherein, if it is determined that all of the plurality of resources allocated to the DTU have been used, the at least one task processing unit releases the plurality of resources sequentially, in either the order in which they were allocated or the reverse of that order.
According to one of the first to twelfth data paths of the twelfth aspect of the present application, there is provided a thirteenth data path of the twelfth aspect of the present application, the channel including a monitoring unit; the monitoring unit monitors the occurrence of adding or removing a DTU to or from the channel on which it is located and notifies one or more resource managers in response.
According to a thirteenth data path of the twelfth aspect of the present application, there is provided the fourteenth data path of the twelfth aspect of the present application, wherein the monitoring unit records monitoring functions with which the one or more resource managers are registered; in response to a DTU being added to or taken out of the channel, the monitoring unit calls the registered monitoring functions to notify the one or more resource managers.
According to a thirteenth or fourteenth data path of the twelfth aspect of the present application, there is provided a fifteenth data path of the twelfth aspect of the present application, wherein the monitoring unit notifies the resource manager to request allocation of resources for the DTU; and/or the monitoring unit notifies the resource manager that the state of the resources it manages has changed.
According to a thirteenth aspect of the present application, there is provided a first resource management method according to the thirteenth aspect of the present application, applied to a task processing unit, the method including: acquiring a DTU from a channel, wherein the DTU carries a sub-command; requesting allocation of resources to at least one resource manager while processing the subcommand; in response to obtaining resources from a resource manager, allocating the obtained resources to the DTU; in response to completion of use of the resources allocated to the DTU, releasing the resources to the at least one resource manager.
According to a first resource management method of a thirteenth aspect of the present application, there is provided the second resource management method of the thirteenth aspect of the present application, before requesting allocation of resources from the at least one resource manager, the method further comprising: determining, according to the type of the sub-command, whether to request resource allocation; and if not, requesting no resource allocation.
According to the first or second resource management method of the thirteenth aspect of the present application, there is provided the third resource management method of the thirteenth aspect of the present application, wherein requesting allocation of resources from at least one resource manager when processing the sub-command comprises: requesting resources from one resource manager so that the one resource manager allocates one resource; or requesting resources from one resource manager so that the one resource manager allocates a plurality of resources; or requesting resources from a plurality of resource managers so that the plurality of resource managers allocate a plurality of resources; wherein the plurality of resource managers allocating a plurality of resources comprises: each of the plurality of resource managers allocates one resource, and each of the plurality of resources is of a different type.
According to one of the first to third resource management methods of the thirteenth aspect of the present application, there is provided the fourth resource management method of the thirteenth aspect of the present application, which, in response to obtaining a resource from a resource manager, allocates the obtained resource to the DTU, including: upon obtaining a resource from a resource manager, allocating the resource to the DTU; or when obtaining a plurality of resources from the resource manager, allocating the plurality of resources to the DTU according to the allocation request sequence.
According to a fourth resource management method of a thirteenth aspect of the present application, there is provided the fifth resource management method of the thirteenth aspect of the present application, wherein, when allocating resources to the DTU, a callback function index is further written into the DTU, including: writing at least one callback function index into the DTU when allocating a plurality of resources to the DTU; wherein writing a plurality of callback function indexes into the DTU comprises: writing the callback function indexes in either the order in which the resources were allocated or the reverse of that order.
According to one of the first to fifth resource management methods of the thirteenth aspect of the present application, there is provided the sixth resource management method of the thirteenth aspect of the present application, before releasing the resources to the at least one resource manager in response to completion of use of the resources allocated to the DTU, the method further comprising: determining whether the resources allocated to the DTU have been fully used; and if so, concluding that use of the resources allocated to the DTU is complete.
According to a fifth or sixth resource management method of a thirteenth aspect of the present application, there is provided the seventh resource management method of the thirteenth aspect of the present application, wherein releasing the resources to the at least one resource manager in response to completion of use of the resources allocated to the DTU comprises: calling at least one callback function through the at least one callback function index in the DTU so as to release at least one resource; when a plurality of callback functions are called, they are called sequentially, in either the order in which the plurality of resources were allocated or the reverse of that order.
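The callback-index mechanism described in the fifth through seventh resource management methods above can be sketched in Python. This is a minimal illustration under stated assumptions — the table-of-callbacks representation, the function names, and the resource names are all hypothetical, not from the application:

```python
class DTU:
    """Minimal DTU: records callback-function indexes in allocation order."""
    def __init__(self):
        self.callback_indexes = []

callback_table = {}   # index -> release function, registered by task processing units
released = []         # records the order in which resources are released

def allocate_resource(dtu, index, release_fn):
    """Allocate one resource to the DTU and record its callback index."""
    callback_table[index] = release_fn
    dtu.callback_indexes.append(index)     # written in allocation order

def release_resources(dtu, reverse=True):
    """Call the recorded callbacks in forward or reverse allocation order."""
    order = list(dtu.callback_indexes)
    if reverse:
        order.reverse()                    # reverse of the allocation order
    for idx in order:
        callback_table[idx]()              # each callback releases one resource
    dtu.callback_indexes.clear()
```

With two resources allocated in the order cache-then-FTL, a reverse-order release frees the FTL entry first and the cache buffer second.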
According to a fourteenth aspect of the present application, there is provided a first information processing apparatus according to the fourteenth aspect of the present application, comprising a memory, a processor and a program stored on the memory and executable on the processor, the processor implementing the method according to any one of the above-mentioned thirteenth aspects when executing the program.
According to a fifteenth aspect of the present application, there is provided a first resource management method according to the fifteenth aspect of the present application, applied to a resource manager, the method comprising: in response to at least one task processing unit requesting allocation of resources, allocating resources to the at least one task processing unit; and in response to a resource release request from the at least one task processing unit, reclaiming the allocated resources.
According to a first resource management method of a fifteenth aspect of the present application, there is provided the second resource management method of the fifteenth aspect of the present application, wherein allocating resources to at least one task processing unit in response to the at least one task processing unit requesting allocation of resources comprises: in response to one task processing unit requesting one resource, allocating one resource to the task processing unit; or in response to one task processing unit requesting resources once, allocating a plurality of resources of different types to the task processing unit; or in response to a plurality of task processing units requesting a plurality of resources, allocating a plurality of resources of the same or different types to the plurality of task processing units, each of the plurality of task processing units obtaining at least one resource.
According to the first or second resource management method of the fifteenth aspect of the present application, there is provided the third resource management method of the fifteenth aspect of the present application, wherein reclaiming the allocated resources in response to at least one task processing unit releasing resources comprises: in response to one task processing unit releasing one resource, reclaiming the allocated resource; or in response to one task processing unit releasing resources once, reclaiming a plurality of allocated resources of different types; or in response to a plurality of task processing units releasing a plurality of resources, reclaiming the allocated plurality of resources, the plurality of resources being of the same or different types.
According to a sixteenth aspect of the present application, there is provided the first information processing apparatus according to the sixteenth aspect of the present application, comprising a memory, a processor and a program stored on the memory and executable on the processor, the processor implementing the method according to any one of the above-mentioned fifteenth aspects when executing the program.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments described in the present application, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
FIG. 1A is a diagram illustrating task scheduling in the prior art;
FIG. 1B is a block diagram of a prior art task processing system;
FIG. 2A is a block diagram of a task processing system provided by an embodiment of the present application;
fig. 2B is a block diagram of a task processing unit provided in an embodiment of the present application;
fig. 2C is a schematic flowchart of processing a DTU according to an embodiment of the present application;
FIG. 2D is a block diagram of a channel provided by an embodiment of the present application;
fig. 3A is a schematic diagram of a downlink data path according to an embodiment of the present application;
fig. 3B is a schematic flowchart of constructing a downlink data path according to an embodiment of the present application;
fig. 3C is a schematic diagram of another downlink data path provided in the embodiment of the present application;
fig. 4A is a schematic diagram of yet another downlink data path according to an embodiment of the present application;
fig. 4B is a schematic diagram of yet another downlink data path according to an embodiment of the present application;
fig. 5A is a schematic diagram of an uplink data path according to an embodiment of the present application;
fig. 5B is a schematic diagram of another uplink data path according to an embodiment of the present application;
fig. 5C is a schematic flowchart of constructing an uplink data path according to an embodiment of the present application;
fig. 5D is a schematic diagram of a DTU provided in an embodiment of the present application;
FIG. 6A is a diagram illustrating resource management provided by an embodiment of the present application;
FIG. 6B is a diagram illustrating another example of resource management provided by the present application;
FIG. 6C is a diagram illustrating further resource management provided by an embodiment of the present application;
fig. 7 is a block diagram of a storage device according to an embodiment of the present application;
FIG. 8A is a block diagram of yet another memory device provided by an embodiment of the present application;
fig. 8B is a block diagram of another storage device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
FIG. 2A illustrates a block diagram of a task processing system for a storage device according to an embodiment of the present application.
The CPU of the control component of the storage device runs software (also referred to as firmware). The software comprises a scheduler, task processing units, channels, and resource managers. The scheduler schedules the execution of the task processing units. A task processing unit is a thread, process, task, or other software unit of an operating system that can be scheduled by the scheduler.
The task processing units, channels, and resource managers each include one or more instances, 4 task processing units (210, 212, 214, and 216), 4 channels (220, 222, 224, and 226), and 2 resource managers (240 and 245) are illustrated in FIG. 2A.
The channels are used for communication between the task processing units. The message unit carried by a channel is called a Data Transfer Unit (DTU). For example, the DTUs correspond one-to-one to the sub-commands: a DTU is allocated for each sub-command to carry the context related to the sub-command and to track the sub-command's processing and result. A DTU is an instance of a data structure, such as an encapsulated record. The sub-commands, for example, each access a memory region of the same size. The channel includes, for example, a DTU list to accommodate one or more DTUs.
The channel also includes functions that operate on DTUs, such as a Push function that adds a DTU to the DTU list and/or a Pop function that takes a DTU from the DTU list. Optionally, custom callback functions may also be registered with the channel. Callback functions registered with the channel are used, for example, to call services provided by the resource manager.
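The channel just described — a DTU list with Push/Pop functions and optionally registered callbacks — can be sketched as follows. This is a minimal Python illustration, not the firmware itself; the class and method names are assumptions, and the callbacks here simply observe push/pop events (as a stand-in for hooks that invoke resource-manager services):

```python
from collections import deque

class Channel:
    """A DTU list plus Push/Pop functions, with optional registered callbacks."""
    def __init__(self):
        self._dtu_list = deque()   # holds pending DTUs in FIFO order
        self._callbacks = []       # custom callbacks registered with the channel

    def register_callback(self, fn):
        self._callbacks.append(fn)

    def push(self, dtu):
        """Add a DTU to the tail of the DTU list and notify callbacks."""
        self._dtu_list.append(dtu)
        for fn in self._callbacks:
            fn("push", dtu)

    def pop(self):
        """Take a DTU from the head of the DTU list, or None if empty."""
        if not self._dtu_list:
            return None
        dtu = self._dtu_list.popleft()
        for fn in self._callbacks:
            fn("pop", dtu)
        return dtu
```

A task processing unit's outbound interface would call `push`, and the next unit's inbound interface would call `pop`, with the registered callbacks firing on each operation.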
According to embodiments of the application, a channel instance may be bound to a task processing unit. The channel receives DTUs from one or more bound task processing units and provides DTUs to only one bound task processing unit.
According to embodiments of the application, only task processing units can use a channel; the scheduler cannot invoke or schedule a channel instance. The index of the channel instance bound to a task processing unit is recorded in the task processing unit, so that the task processing unit can operate on the channel instance through, for example, its Push/Pop functions.
Channel instances are bound between task processing unit instances, so that a task processing unit obtains a DTU from its bound channel, processes the sub-command carried by the DTU, and adds the processed sub-command, through the DTU, to other channels. Because the task processing unit instances are coupled only through the channels, they are decoupled from one another and execute concurrently and asynchronously. The execution progress and current state of one task processing unit instance does not affect (e.g., block) the execution of other task processing unit instances. If the control component includes multiple CPUs, the CPUs can process the task processing unit instances in parallel.
The resource manager manages the use of specified resources, including allocation, release, and/or reclamation. The resource manager is called only by the task processing units and/or the channels, and is not invoked or scheduled by the scheduler. According to one embodiment, resources shared by multiple task processing units in the task processing system (e.g., the FTL table, which records the mapping from logical addresses of the storage device to physical addresses of the NVM chips, or caches) are managed by a resource manager. A task processing unit accesses a specified resource through the designated resource manager; if accesses to the specified resource by multiple task processing unit instances conflict, the resource manager resolves the conflict by locking, queuing, or similar means.
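A resource manager for a pool of interchangeable resource instances (e.g., cache buffers), resolving concurrent access by locking, might look like the following. This is a hedged sketch: the application does not specify this interface, and the class name, the `None` return for an exhausted pool, and the use of a single lock are assumptions for illustration:

```python
import threading

class ResourceManager:
    """Manages allocation, release, and reclamation of a fixed resource pool.

    A lock serializes concurrent allocate/release calls from different
    task processing unit instances, resolving conflicts as described."""
    def __init__(self, instances):
        self._free = list(instances)
        self._lock = threading.Lock()

    def allocate(self):
        with self._lock:  # conflicting accesses are serialized here
            # None signals exhaustion; the caller would wait or queue
            return self._free.pop() if self._free else None

    def release(self, resource):
        with self._lock:
            self._free.append(resource)  # reclaim the instance into the pool
```

Queuing of unsatisfied requests (the other conflict-resolution strategy mentioned above) is omitted here for brevity.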
FIG. 2B illustrates a block diagram of a task processing unit according to an embodiment of the present application. FIG. 2C illustrates a flowchart implemented by a task processing unit according to an embodiment of the present application.
Referring to fig. 2B, the task processing unit includes an inbound interface, an outbound interface, a DTU processing module, and optionally one or more callback functions.
The inbound interface obtains DTUs from a channel bound to the task processing unit, for example by calling the Pop function of the bound channel to take a DTU from its DTU list. The outbound interface adds DTUs to a channel bound to the task processing unit, for example by calling the Push function of the bound channel to add a DTU to its DTU list. The DTU processing module extracts the sub-command from a DTU obtained through the inbound interface, processes the sub-command, and loads the processed sub-command into the DTU to be added to a channel through the outbound interface. The DTU processing module optionally adds the indexes of one or more callback functions to the DTU to construct the upstream data path; embodiments of constructing the upstream data path are described in detail later.
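The pop-process-push structure of a task processing unit can be sketched as below, with plain deques standing in for channels. All names are illustrative assumptions; the `process_fn` parameter models the stage-specific processing (e.g., allocating a cache or translating an address) that each unit supplies:

```python
from collections import deque

class TaskProcessingUnit:
    """One pipeline stage: the inbound interface pops a DTU from its bound
    channel, the DTU processing module runs the stage-specific function,
    and the outbound interface pushes the DTU to the next channel."""
    def __init__(self, inbound, outbound, process_fn):
        self.inbound = inbound        # bound channel (a plain deque here)
        self.outbound = outbound      # channel feeding the next unit
        self.process_fn = process_fn  # stage-specific sub-command processing

    def step(self):
        if not self.inbound:
            return False              # nothing pending; does not block other units
        dtu = self.inbound.popleft()  # Pop from the bound channel's DTU list
        self.process_fn(dtu)
        self.outbound.append(dtu)     # Push to the next channel
        return True

# Two decoupled stages connected by channels: each DTU is a dict here.
ch_in, ch_mid, ch_out = deque(), deque(), deque()
cache_stage = TaskProcessingUnit(ch_in, ch_mid, lambda d: d["log"].append("cache"))
xlate_stage = TaskProcessingUnit(ch_mid, ch_out, lambda d: d["log"].append("translate"))
```

Because each unit only touches its own channels, the two stages can run concurrently without blocking each other, which is the decoupling described above.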
Optionally, the task processing unit comprises two or more inbound interfaces, and/or two or more outbound interfaces. Each inbound interface is coupled to one of the channels. Each outbound interface is coupled to one of the lanes.
According to embodiments of the present application, the task processing system provides templates for task processing units, channels, and resource managers, and a user (e.g., a programmer) builds a task processing unit by copying a template and adding the required code to the copy. Taking a task processing unit as an example, its reusable parts (the inbound interface, outbound interface, and DTU processing module) are provided by the template; what must be supplied are the channel instance to which the inbound interface is coupled, the channel instance to which the outbound interface is coupled, and the function the task processing unit applies when processing a sub-command (e.g., allocating a cache for the sub-command, or querying the physical address the sub-command is to access). Therefore, when constructing a task processing unit, one only needs to implement the unit's specific function, without worrying about how to acquire the sub-commands to be processed or how to process sub-commands concurrently.
Fig. 2C shows a flowchart of the DTU processing unit processing the DTU.
Referring to fig. 2C, the DTU processing unit (see also fig. 2B) obtains DTUs (260) from the channels to which the inbound interfaces of its task processing unit are coupled. The DTU processing unit polls each inbound interface for pending DTUs. Optionally, each of the task processing unit's inbound interfaces has a priority, and the DTU processing unit obtains DTUs from the inbound interfaces according to priority.
The acquired DTU carries the subcommand. The DTU processing unit obtains the sub-command from the DTU and processes the sub-command according to the content of the sub-command (262). Optionally, the DTU processing unit of each task processing unit implements a specific phase of processing of the sub-command. The processing of the sub-commands by the plurality of task processing units may be the same or different. For example, two task processing units process the same phase of a sub-command, so that the two task processing units process two sub-commands in parallel. As yet another example, two task processing units process different phases of a sub-command, such that the two task processing units process the same sub-command sequentially.
The sub-command processed by the DTU processing unit is still carried in the DTU. The DTU processing unit sends (264) the DTU carrying the completed sub-command through an outbound interface. The channel to which the outbound interface is coupled is in turn coupled to other task processing units. For example, the DTU processing unit selects one of the outbound interfaces to send the DTU, depending on the processing that the sub-command carried by the DTU is to undergo next. Optionally, the plurality of outbound interfaces of the task processing unit each have a priority, and the DTU processing unit sends DTUs through the outbound interfaces according to priority.
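By way of example, the processing flow of fig. 2C (obtain a DTU at 260, process its sub-command at 262, forward it at 264) may be sketched as follows. This is an illustrative sketch only; the names `Channel`, `TaskProcessingUnit`, and `process_subcommand` are assumptions for illustration, not identifiers from the present application.

```python
from collections import deque

class Channel:
    """Minimal channel holding a DTU list (see fig. 2D)."""
    def __init__(self):
        self._dtus = deque()
    def push(self, dtu):
        self._dtus.append(dtu)
    def pop(self):
        return self._dtus.popleft() if self._dtus else None

class TaskProcessingUnit:
    def __init__(self, inbound, outbound, process_subcommand):
        # inbound/outbound: channels ordered from highest to lowest priority
        self.inbound = inbound
        self.outbound = outbound
        self.process_subcommand = process_subcommand  # unit-specific phase

    def step(self):
        # (260) poll the inbound interfaces in priority order for a pending DTU
        for channel in self.inbound:
            dtu = channel.pop()
            if dtu is not None:
                # (262) process the sub-command carried by the DTU
                self.process_subcommand(dtu)
                # (264) forward the DTU through an outbound interface; for
                # simplicity the first outbound channel is always chosen here
                self.outbound[0].push(dtu)
                return True
        return False
```

The unit-specific function (e.g., cache allocation or address lookup) is the only part the programmer supplies; polling and forwarding come from the template.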
FIG. 2D illustrates a block diagram of a channel in accordance with an embodiment of the present application.
The channel includes a DTU list and a plurality of functions (270, 272, 274) for operating the DTU list. The DTU list holds a plurality of DTUs. A task processing unit coupled to the channel calls the Push function (270) (through its outbound interface) to add a DTU to the DTU list, and calls the Pop function (274) (through its inbound interface) to fetch a DTU from the DTU list.
A long time may elapse from when a DTU is added to the DTU list to when it is taken out. For example, if the DTU list is implemented as a queue, a DTU is added to the tail of the queue and is not taken from the queue until it becomes the head. Meanwhile, some of the sub-commands represented by DTUs are intended to be processed with low latency. Thus, optionally, the channel also includes a Push function (272), which is different from the Push function (270). A task processing unit coupled to the channel invokes the Push function (272) to provide a DTU to the channel; in response, the direct forwarding unit of the channel retrieves the DTU from the Push function (272), calls the function indicated by the destination index, and provides the retrieved DTU to that function to complete delivery of the DTU. Thus, calling the Push function (272) of the channel causes the DTU to be provided to and immediately processed by the function indicated by the destination index, reducing the time the DTU stays in the channel. The function indicated by the destination index is a function of the task processing unit that receives the DTU from the channel. The Push function (272), the direct forwarding unit, and the destination index therefore form a channel for rapidly processing the DTU.
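By way of illustration, the channel of fig. 2D may be sketched as follows: Push (270) and Pop (274) operate the DTU list, while Push (272) bypasses the list and hands the DTU directly to the function indicated by the destination index. The class and method names are assumptions, not identifiers from the present application.

```python
from collections import deque

class ChannelWithDirectForwarding:
    def __init__(self, destination=None):
        self._dtu_list = deque()          # DTU list; storage is provided at instantiation
        self._destination = destination   # function indicated by the destination index

    def push(self, dtu):                  # Push function (270): add to tail of the DTU list
        self._dtu_list.append(dtu)

    def pop(self):                        # Pop function (274): take from head of the DTU list
        return self._dtu_list.popleft() if self._dtu_list else None

    def push_direct(self, dtu):           # Push function (272) + direct forwarding unit
        # The DTU never enters the DTU list; it is handed straight to the
        # destination function, minimizing the time it stays in the channel.
        self._destination(dtu)
```

A latency-sensitive DTU delivered via `push_direct` is processed immediately by the consumer's function, while ordinary DTUs wait their turn in the list.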
The DTU list of the channel, the plurality of functions for operating the DTU list, and the direct forwarding unit are its reusable parts, provided by the template. When a channel is instantiated, storage space is provided for the DTU list to accommodate DTUs. Optionally, a destination index is also set to indicate to the direct forwarding unit of the channel the function that receives the DTU.
Optionally, the channel further comprises a monitoring unit. The monitoring unit monitors operations (e.g., add and/or remove) of the DTU list and/or monitors direct forwarding operations to the DTU.
According to an alternative embodiment, one or more resource managers monitor, by means of the monitoring unit, the addition of DTUs to the DTU list by the Push function (270), so that a resource manager can learn in time that resources associated with a DTU are used, or allocate the required resources for the DTU.
By way of example, the resource manager registers a monitoring function with the monitoring unit. The registered one or more monitoring functions are invoked in response to the Push function (270) being invoked. The DTU added to the channel by the Push function (270), or the parameters it carries, are taken as parameters of the monitoring function.
Still by way of example, a DTU added to a channel and waiting for another task processing unit to process requires some kind of resource, the allocation of which requires some time. The monitoring function applies for such resources from the corresponding resource manager in response to the DTU being added to the channel, thereby advancing the resource application operation appropriately. When a task processing unit acquires a DTU from a channel, such resources have already been allocated to the DTU, thereby reducing the delay in processing the DTU (hiding the resource allocation time).
As yet another example, a resource manager needs to monitor the status of a certain resource. For example, in addition to the "idle" and "used" states, a storage medium resource storing data may become invalid due to an update and be in the "invalid" state. The resource manager maintains the state of the storage medium: the transition from the "used" state to the "idle" state is triggered by a resource release operation, and the transition from the "idle" state to the "used" state is triggered by a resource allocation operation. According to an embodiment of the application, the resource manager learns of the transition from the "used" state to the "invalid" state by monitoring the operation of the Push function (270) of the channel or by monitoring the operation of the DTU list. For example, the monitoring function obtains the logical address and the physical address corresponding to a DTU carrying a write sub-command, and records the status of the storage medium corresponding to the physical address as "invalid". The task processing unit or another party thus does not have to additionally inform the resource manager of such a status change in another way.
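A minimal sketch of this monitoring mechanism follows: resource managers register monitoring functions with the channel's monitoring unit, and each registered function is invoked, with the DTU as its parameter, whenever the Push function adds a DTU to the list. All names (e.g., `MonitoredChannel`, `preallocate_cache`) are illustrative assumptions.

```python
from collections import deque

class MonitoredChannel:
    def __init__(self):
        self._dtu_list = deque()
        self._monitors = []               # monitoring functions registered by resource managers

    def register_monitor(self, fn):
        self._monitors.append(fn)

    def push(self, dtu):                  # Push function (270)
        self._dtu_list.append(dtu)
        for fn in self._monitors:         # notify resource managers immediately
            fn(dtu)

    def pop(self):                        # Pop function (274)
        return self._dtu_list.popleft() if self._dtu_list else None

# Hypothetical monitoring function: a resource manager pre-allocates a cache
# for the DTU as soon as it is pushed, hiding the allocation latency before
# the consuming task processing unit fetches the DTU.
def preallocate_cache(dtu):
    dtu["cache"] = "allocated"
```

By the time the consumer pops the DTU, the resource it needs has already been allocated, which is the latency-hiding effect described above.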
Fig. 3A shows a schematic diagram of a downstream data path according to an embodiment of the present application.
According to an embodiment of the present application, the data path includes a downlink data path and an uplink data path. The downlink data path is used for processing the subcommands of the IO commands, and the uplink data path is used for collecting and delivering the processing results of the subcommands.
The downstream data path includes task processing units (310, 312, 314, 316) and channels (320, 322, 324) that connect the task processing units and pass DTUs between them.
A channel connects two or more task processing units and transmits DTUs unidirectionally. The task processing units connected by a channel are thus in a producer-consumer relationship with respect to the DTU. Referring to fig. 3A, the task processing unit 310 provides DTUs to the task processing unit 314 through the channel 320, acting as a producer, and the task processing unit 314 is a consumer of the DTUs produced by the task processing unit 310. The task processing unit 312 also provides DTUs to the task processing unit 314 via the channel 320. According to embodiments of the present application, one channel may be coupled to one or more task processing units that are DTU producers (e.g., task processing units 310 and 312 relative to channel 320), but to only one task processing unit that is a DTU consumer (e.g., task processing unit 314 relative to channel 320). A task processing unit is capable of sending DTUs to and receiving DTUs from a plurality of channels. For example, the task processing unit 312 sends DTUs to two channels (320 and 324), and the task processing unit 316 receives DTUs from two channels (322 and 324).
In an alternative embodiment, in response to a channel having a pending DTU, the scheduler schedules for execution, or places in a schedulable state, the task processing unit coupled to the channel as a consumer. After the task processing unit is scheduled for execution, it obtains the DTU from the channel coupled to it and processes the DTU.
To construct a downstream data path, one or more task processing units are created, and one or more channels are created. Each created channel is bound to the created task processing units, indicating whether a task processing unit fetches DTUs from the channel or delivers DTUs through it.
The construction of the downlink data path is performed when the task processing system is initialized. Optionally, during the operation of the task processing system, the downstream data path is constructed, or the constructed downstream data path is changed.
Fig. 3B shows a flow chart for constructing the downstream data path.
To construct the downstream data path, one or more task processing units of the downstream data path are created (340), and one or more channels are created (350). By way of example, to construct the downstream data path shown in FIG. 3A, the task processing units (310, 312, 314, and 316) and the channels (320, 322, and 324) are created.
The created channels are bound to the inbound/outbound interfaces of the task processing units in accordance with the specified coupling relationship (360). Still by way of example, to construct the downstream data path illustrated in FIG. 3A, the channel 320 is bound to the outbound interface of the task processing unit 310 and the outbound interface of the task processing unit 312. The channel 320 is also bound to the inbound interface of the task processing unit 314. The channel 322 is bound to the outbound interface of the task processing unit 314 and the inbound interface of the task processing unit 316. The channel 324 is bound to the outbound interface of the task processing unit 312 and the inbound interface of the task processing unit 316.
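The construction flow of fig. 3B applied to the topology of fig. 3A (create units at 340, create channels at 350, bind at 360) may be sketched as follows. The classes and binding methods are hypothetical placeholders for illustration only.

```python
class Channel: ...

class TaskProcessingUnit:
    def __init__(self, name):
        self.name, self.inbound, self.outbound = name, [], []
    def bind_inbound(self, ch): self.inbound.append(ch)
    def bind_outbound(self, ch): self.outbound.append(ch)

# (340) create the task processing units of the downstream data path
u310, u312, u314, u316 = (TaskProcessingUnit(n) for n in ("310", "312", "314", "316"))
# (350) create the channels
ch320, ch322, ch324 = Channel(), Channel(), Channel()
# (360) bind channels per the specified coupling relationship of fig. 3A
u310.bind_outbound(ch320); u312.bind_outbound(ch320)   # two producers on channel 320
u314.bind_inbound(ch320)                               # but only one consumer
u314.bind_outbound(ch322); u316.bind_inbound(ch322)
u312.bind_outbound(ch324); u316.bind_inbound(ch324)
```

Because the topology is declared in one place, adding a parallel unit (as in fig. 4B) or rerouting a channel is a small, localized change.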
Fig. 3C shows a schematic diagram of a downstream data path according to yet another embodiment of the present application.
The cache management unit 380, the address mapping unit 382, and the data assembly unit 384 are task processing units that implement different processing functions for sub-commands. Fig. 3C shows a downstream data path 300 comprising a plurality of channels (370, 372, and 374), a plurality of task processing units (the cache management unit 380, the address mapping unit 382, and the data assembly unit 384), and a plurality of resource managers (390, 392, and 394). The downstream data path shown in fig. 3C is used to implement the functionality of a memory device.
The memory device further includes a command transmission unit 302 and a sub-command processing unit 304. Optionally, the command transmission unit 302 exchanges IO commands with the host according to a specified storage protocol. The command transmission unit splits an IO command into one or more sub-commands, uses allocated DTUs to carry the sub-commands, and delivers the DTUs to the downstream data path 300. The downstream data path 300 performs one or more stages of processing on the sub-commands and finally delivers the DTUs to the sub-command processing unit 304. The sub-command processing unit 304 converts the sub-commands carried by the DTUs into commands for accessing the storage medium. By way of example, the sub-command processing unit 304 is a media interface controller. The sub-command processing unit 304 also queries the storage medium for the result of processing a sub-command and delivers the processing result to the command transmission unit 302 through the upstream data path. If necessary, the command transmission unit 302 collects the processing results of all the sub-commands split from the same IO command and indicates to the host that the IO command processing is completed.
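The splitting of an IO command into DTU-carried sub-commands may be sketched as follows. The fixed 4 KB sub-command granularity, the field names, and the function name are all illustrative assumptions; the present application does not prescribe a particular split granularity.

```python
SUB_SIZE = 4096  # assumed sub-command granularity, for illustration only

def split_io_command(cmd_id, start_lba_bytes, length_bytes):
    """Split one IO command into sub-commands, each carried by a DTU."""
    dtus = []
    offset = 0
    while offset < length_bytes:
        chunk = min(SUB_SIZE, length_bytes - offset)
        dtus.append({
            "cmd_id": cmd_id,                  # identifies the parent IO command
            "lba": start_lba_bytes + offset,   # logical address of this sub-command
            "length": chunk,
            "callback_stack": [],              # to be filled by downstream units
        })
        offset += chunk
    return dtus
```

Each returned DTU then enters the downstream data path 300 (e.g., via channel 370), and the `cmd_id` field lets the command transmission unit later collect all results belonging to the same IO command.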
By way of example, the command transmission unit 302 adds a DTU to the channel 370, and the DTU carrying the sub-command is provided to the cache management unit 380 through the channel 370. The cache management unit 380 allocates a cache for the sub-command and moves the data accessed by the sub-command to the allocated cache. The cache management unit 380 is associated with the resource manager 390. The resource manager 390 manages cache resources, e.g., manages allocation and release of cache resources. The cache management unit 380 requests allocation of a cache from the resource manager 390 in response to the sub-command carried by the DTU acquired from the channel 370. The cache allocated for the sub-command is also recorded in the DTU carrying the sub-command. In response to moving the data accessed by the sub-command to the cache, the cache management unit 380 completes its processing of the sub-command and sends the DTU carrying the sub-command to the channel 372.
Optionally, the command transmission unit 302 processes IO commands complying with a plurality of storage protocols, including, for example, SAS/SATA protocols, open channel (OpenChannel) protocols, Key-Value (Key-Value) storage protocols, NVMe protocols, and/or the like.
The address mapping unit 382 obtains the DTU from the channel 372 and obtains the sub-command carried by the DTU. The address mapping unit 382 assigns a physical address to the sub-command and establishes a mapping from the logical address to be accessed by the sub-command to the physical address. The address mapping unit is associated with the resource manager 392. The resource manager 392 manages an address mapping table, in which the mapping relationships of all logical addresses of the storage device to physical addresses are recorded. In response to the DTU, the address mapping unit 382 requests the resource manager 392 to allocate an entry associated with the logical address accessed by the sub-command carried by the DTU, the association between the logical address and a physical address being recorded in the entry. The entry allocated for the sub-command is also recorded in the DTU carrying the sub-command. In response to obtaining the physical address accessed by the sub-command, the address mapping unit 382 completes its processing of the sub-command and sends the DTU carrying the sub-command to the channel 374.
The data assembly unit 384 obtains the DTU from the channel 374 and obtains the sub-command carried by the DTU. The data assembly unit 384 assembles the data to be accessed by the sub-command to generate a command to write the data to the storage medium. The data assembly unit 384 is associated with the resource manager 394. The resource manager 394 manages accelerators (referred to as "XOR units") for XOR computation. In response to the DTU, the data assembly unit 384 requests the resource manager 394 to allocate an XOR unit to it.
The data assembly unit 384 operates the sub-command processing unit 304 to write the assembled data to the storage medium. Optionally, according to the record in the DTU, the data assembly unit 384 also releases, to the resource managers (390, 392, and/or 394), the one or more resources allocated to the DTU.
Fig. 4A illustrates a schematic diagram of a downstream data path according to yet another embodiment of the present application.
The cache management unit 410, the address mapping unit 412, the data assembly unit 414, the log unit 416, and the garbage collection unit 418 are task processing units with different capabilities. The downstream data path shown in fig. 4A includes a plurality of channels (420, 421, 422, 424, 426, and 428) and a plurality of task processing units (the cache management unit 410, the address mapping unit 412, the data assembly unit 414, the log unit 416, and the garbage collection unit 418). The downstream data path shown in fig. 4A is used to implement the functions of a memory device. The memory device further includes a command transmission unit 402 and a sub-command processing unit 404.
By way of example, the command transmission unit 402 adds a DTU to the channel 420, and the DTU carrying the sub-command is provided to the cache management unit 410 through the channel 420. The DTU processed by the cache management unit 410 is added to the channel 422. The address mapping unit 412 obtains the DTU from the channel 422 and adds the processed DTU to the channel 424. The data assembly unit 414 obtains the DTU from the channel 424 and accesses the storage medium through the sub-command processing unit 404 according to the sub-command carried by the DTU.
The data assembly unit 414 also generates a DTU and adds it to the channel 426. The log unit 416 acquires the DTU from the channel 426, generates a log to be recorded according to the DTU, and adds the processed DTU to the channel 428. The data assembly unit 414 also obtains the DTU from the channel 428 and writes the log to the storage medium according to the sub-command carried by the DTU.
The garbage collection unit 418 generates a DTU indicating a sub-command of the garbage collection operation and adds it to the channel 421. The cache management unit also obtains the DTU from the channel 421 and processes the subcommands therein.
Fig. 4B illustrates a schematic diagram of a downstream data path according to still another embodiment of the present application.
The downstream data path shown in fig. 4B includes a plurality of channels (420, 421, 422, 424, 426, 428, 431, 430, 432, and 434) and a plurality of task processing units (the cache management unit 410, the cache management unit 411, the address mapping unit 412, the address mapping unit 413, the data assembling unit 414, the log unit 416, and the garbage collection unit 418).
In contrast to the downstream data path shown in fig. 4A, the downstream data path shown in fig. 4B includes two cache management units and two address mapping units. The cache management unit 411 operates in parallel with the cache management unit 410. The address mapping unit 413 works in parallel with the address mapping unit 412. Thus, the downstream data path shown in fig. 4B enables parallel processing of a plurality of sub-commands provided by the command transmission unit 402, thereby enhancing the sub-command processing capability.
By way of example, the command transmission unit 402 adds a DTU to the channel 420, and the DTU carrying the sub-command is provided to the cache management unit 410 through the channel 420. The DTU processed by the cache management unit 410 is added to the channel 422. The address mapping unit 412 obtains the DTU from the channel 422 and adds the processed DTU to the channel 424. The data assembly unit 414 obtains the DTU from the channel 424 and accesses the storage medium through the sub-command processing unit 404 according to the sub-command carried by the DTU.
The data assembly unit 414 also generates a DTU and adds it to the channel 426. The log unit 416 acquires the DTU from the channel 426, generates a log to be recorded according to the DTU, and adds the processed DTU to the channel 428. The data assembly unit 414 also obtains the DTU from the channel 428 and writes the log to the storage medium according to the sub-command carried by the DTU.
The garbage collection unit 418 generates a DTU indicating a sub-command of the garbage collection operation and adds it to the channel 421. The cache management unit also obtains the DTU from the channel 421 and processes the subcommands therein.
According to the embodiments of the application, convenience is provided for enhancing the processing capacity of the task processing system. Referring to fig. 4B, task processing capability is enhanced by providing, for the downstream data path, a plurality of parallel task processing units with channels coupled to them. Since the task processing units are schedulable, as the number of processor cores or threads increases, for example, in the control unit of the memory device, an increased number of task processing units on the downstream data path makes use of the added processor cores or threads, so that they are conveniently and efficiently utilized for processing more sub-commands in parallel. In some cases, it is difficult to find the best partitioning of processing stages and the optimal resource allocation because the workloads of the sub-command processing stages differ; for example, the cache management and address mapping stages carry heavier workloads than log management. According to the embodiments of the application, adjusting the downstream data path is simple, and configurations with different numbers of task processing units and/or channels can be tested by adjusting the downstream data path, so as to conveniently find the optimal or a better downstream data path structure.
A DTU carries one or more sub-commands, and the downstream data path allocates one or more resources for processing the DTU. After the processing of the sub-commands carried by the DTU is completed, the various resources allocated to the DTU are released, and the processing results corresponding to the sub-commands are delivered. Even for the same type of sub-command, there are multiple outcomes such as processing success or failure. Thus, each DTU may need a different resource release pattern and/or a different pattern of identifying and delivering processing results, and different DTUs therefore need to be handled through different uplink data paths.
According to the embodiment of the application, in the process of processing the DTUs through the downlink data path, the uplink data path is constructed for each DTU, and after the sub-commands carried by the DTU are processed, the DTU is processed through the constructed uplink data path.
Fig. 5A illustrates a schematic diagram of an upstream data path according to an embodiment of the present application.
Referring to fig. 5A, the downstream data path includes, for example, a plurality of task processing units (510, 520, and 530), which further include one or more callback functions (512, 522, and 532). In the example of fig. 5A, the task processing unit 510 includes the callback function 512, the task processing unit 520 includes the callback function 522, and the task processing unit 530 includes the callback function 532. The DTUs processed by the task processing unit 510 are provided to the task processing unit 520 through a channel (indicated simply by an arrow), and the DTUs processed by the task processing unit 520 are provided to the task processing unit 530 through a channel.
When processing a DTU, a task processing unit of the downstream data path records the indexes of one or more of its own callback functions in the DTU being processed. Therefore, when the sub-command carried by a DTU has been processed, one or more callback function indexes are obtained from the DTU and the callback functions are called, so that the DTU is processed by the uplink data path. These callback functions thus constitute the uplink data path of the DTU, or a portion thereof.
In the example of fig. 5A, the task processing unit 530 is the last task processing unit of the downstream data path. It submits the sub-commands carried by DTU 542 to a sub-command processing unit (not shown). The sub-command processing unit buffers the DTU 542, processes the sub-command indicated by the DTU 542, and supplies the processing result of the sub-command to the monitoring unit 550.
The monitoring unit 550 monitors whether processing of the sub-command is completed. In response to completion of the sub-command processing, the monitoring unit 550 acquires the DTU 542 indicating the processing result of the sub-command. For example, the monitoring unit 550 receives a sub-command processing completion indication transmitted from the sub-command processing unit and determines that the sub-command processing is completed. In another example, the monitoring unit 550 polls the sub-command processing unit for a sub-command completion indication and determines that the sub-command processing is completed. In response, the monitoring unit 550 obtains the DTU 542 carrying the processed sub-command, obtains one or more callback function indexes (e.g., of the callback functions 512, 522, and 532) from the DTU 542, and calls the callback functions indicated by the callback function indexes in a specified order. For example, the callback function 532 releases the resource allocated by the task processing unit 530 to the DTU 542, the callback function 522 releases the resource allocated by the task processing unit 520 to the DTU 542, and the callback function 512 releases the resource allocated by the task processing unit 510 to the DTU 542. The callback function 512 also provides the DTU 542 to a command transmission unit (not shown). The callback function 512 is the last callback function called in the upstream data path. The command transmission unit acquires the processing result of the sub-command as indicated by the DTU 542. The command transmission unit also releases the DTU 542 so that it can be used to carry other sub-commands and be provided to the downstream data path.
Optionally, the command transmission unit further merges the processing results of a plurality of sub-commands that originate from the same command. In response to completion of processing of all the sub-commands generated from the same command, the command transmission unit returns the processing result of the command to the command issuer.
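The merging of sub-command results may be sketched as follows: the command transmission unit tracks, per parent command, how many sub-commands remain, and reports the command complete only when the last result arrives. The class and method names are hypothetical.

```python
class CommandTransmissionUnit:
    def __init__(self):
        self._pending = {}    # cmd_id -> count of sub-commands still in flight
        self.completed = []   # commands whose results were returned to the issuer

    def issue(self, cmd_id, num_subcommands):
        # the IO command was split into num_subcommands sub-commands
        self._pending[cmd_id] = num_subcommands

    def on_subcommand_result(self, cmd_id):
        # called from the last callback of the upstream data path
        self._pending[cmd_id] -= 1
        if self._pending[cmd_id] == 0:
            del self._pending[cmd_id]
            self.completed.append(cmd_id)   # report completion to the command issuer
```

A real implementation would also merge per-sub-command status (success/failure) into the command result; the counter alone shows the completion-gating idea.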
Thus, in the example of fig. 5A, logically, the callback function 532, the callback function 522, and the callback function 512 sequentially process the DTU 542 (indicated by a dotted arrow), and the monitoring unit 550 and the callback functions (512, 522, and 532) constitute the upstream data path of the DTU 542.
Therefore, according to the embodiments of the present application, one or more callback function indexes are recorded in each DTU, and the monitoring unit 550 calls the corresponding callback functions in a specified order according to the callback function indexes in the DTU to be processed, so that an uplink data path dedicated to each DTU is constructed, and each DTU is processed using its constructed uplink data path, thereby providing a different processing mode for each DTU in the uplink data path.
FIG. 5B illustrates a schematic diagram of an upstream data path according to yet another embodiment of the present application.
In the example of fig. 5B, the downstream data path includes three task processing units, namely the cache management unit 515, the address mapping unit 525, and the data assembly unit 535 (see also fig. 4A and 4B). The downstream data path also includes a resource manager that manages cache resources, a resource manager that manages mapping table resources, and a resource manager that manages accelerator resources (the resource managers are not shown; only the managed resources are shown).
The cache management unit 515 obtains a cache allocation unit from the cache resources for processing the DTU 547 and allocates the cache to the DTU 547. The address mapping unit 525 allocates a mapping table resource to the DTU 547 (e.g., locks an entry of the mapping table) for recording the address of the storage medium that carries the written data. The data assembly unit 535 allocates accelerator resources to the DTU 547 (for computing check data for the written data) and submits the sub-command carried by the DTU 547 to a sub-command processing unit (e.g., a media interface controller) (not shown).
By way of example, while processing the DTU 547, the cache management unit 515 records the index of the callback function 517 in the DTU 547, the address mapping unit 525 records the index of the callback function 527 in the DTU 547, and the data assembly unit 535 records the index of the callback function 537 in the DTU 547. The callback function 517 is used, for example, to release the cache resources allocated to the DTU 547. The callback function 527 is used, for example, to write the storage medium address allocated to the DTU 547 into a mapping table entry and unlock the entry. The callback function 537 is used, for example, to release the accelerator assigned to the DTU 547.
The monitoring unit 555 polls the sub-command processing unit to learn that the sub-command corresponding to the DTU 547 has been processed. The monitoring unit 555 acquires from the DTU 547 the indexes of the callback functions (517, 527, and 537) recorded therein and calls these callback functions.
By way of example, the callback functions (517, 527, and 537) use the DTU, or a variable recorded by the DTU, as a parameter to process the DTU. Still by way of example, the monitoring unit 555 calls the callback functions 537, 527, and 517 in that order. The order in which the callback functions are called is, for example, the reverse of the order in which their indexes were added to the DTU 547. In effect, each task processing unit adds the index of its callback function to the DTU 547 as if pushing onto a stack, and the monitoring unit 555 obtains the callback function indexes from the DTU 547 as if popping from a stack and calls the corresponding callback functions.
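The callback-index stack described above can be sketched as follows: each downstream unit pushes its callback index onto the DTU, and the monitoring unit pops the indexes and calls the callbacks in reverse (last-in, first-out) order. All names, and the use of a dict as a callback-index table, are illustrative assumptions.

```python
callbacks = {}   # callback function index -> callback function

def make_dtu():
    return {"callback_stack": [], "log": []}

def unit_process(dtu, index, fn):
    """A downstream task processing unit registers its callback and
    records the callback function index in the DTU (push)."""
    callbacks[index] = fn
    dtu["callback_stack"].append(index)

def complete_dtu(dtu):
    """Called by the monitoring unit once the sub-command is finished:
    pop the indexes in reverse order of writing and invoke each callback,
    forming the per-DTU upstream data path."""
    while dtu["callback_stack"]:
        index = dtu["callback_stack"].pop()
        callbacks[index](dtu)
```

With units 515, 525, and 535 pushing indexes "517", "527", and "537" in turn, completion invokes the callbacks in the order 537, 527, 517, matching the example above.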
Fig. 5C shows a flow chart of constructing an upstream data path according to an embodiment of the present application.
One of the task processing units in the downstream data path writes one or more callback function indexes into the DTU (570) and provides the DTU to another task processing unit in the downstream data path. The other task processing unit also writes one or more callback function indexes into the DTU (572). As one or more callback function indexes are written into the DTU by one or more task processing units, the callback functions indicated by those indexes form the uplink data path for processing the DTU.
After the sub-command carried by the DTU is processed by the sub-command processing unit, all the callback function indexes recorded in the DTU are obtained, and the callback functions indicated by them are called in sequence (574), so that the DTU is processed through the uplink data path.
Optionally, the task processing unit selects the callback function index to record in the DTU according to the processing it performs on the sub-command carried by the DTU or the resources it allocates to the sub-command; the callback function corresponding to the index is preset in the task processing unit. For example, it selects the index of a callback function that releases the allocated resource, or the index of a callback function that handles the case where execution of the sub-command fails.
Optionally, one task processing unit adds one or more callback function indexes to the DTU. Still optionally, one or more task processing units add no callback function index to the DTUs they process. Thus, the number of task processing units in the downstream data path may be greater than, equal to, or less than the number of task processing units participating in the upstream data path. For example, the downstream data path has 5 task processing units in total, but while these 5 task processing units process a DTU, only the cache management unit and the address mapping unit write a callback function index into the DTU when processing the sub-command. The DTU then has two callback function indexes in total, so that the uplink data path for the DTU includes only two callback functions.
For example, after the cache management unit 515 writes the index of the callback function 517 into the DTU 547, the callback function 517 of the cache management unit 515 becomes a part of the uplink data path 505; that is, the cache management unit 515 is a part of both the downlink data path and the uplink data path. When the callback function 517 is called through its index, executing the callback function 517 releases the cache resource that was requested from the resource manager for the DTU 547.
In yet another example, the cache management unit 515 does not write the index of the callback function 517 into the DTU 547 when processing the sub-command. When the address mapping unit 525 processes the DTU 547, it writes the index of the callback function 527 into the DTU 547, and the callback function 527 is used to release the cache resource allocated to the DTU 547 by the cache management unit 515. After the DTU 547 is processed, the callback function 527 is called through the index recorded in the DTU 547, and the cache resource is released. Optionally, identification information of the allocated cache resource is also recorded in the DTU 547, and when the cache resource is released, the specific cache resource is indicated by the identification information.
In some embodiments, the downstream data path includes a plurality of task processing units, and each of them writes a callback function index into the DTU while processing the sub-command carried by the DTU. The callback functions indicated by the indexes written by the respective task processing units may be the same or different. In one example, the cache management unit 515 requests allocation of cache resources when processing the DTU, while the address mapping unit 525 and the data assembly unit 535 do not; the callback function index written to the DTU by the cache management unit 515 therefore differs from those written by the address mapping unit 525 and the data assembly unit 535. Still optionally, the callback function indexes written by the address mapping unit 525 and the data assembly unit 535 are the same.
The processing result of the sub-command is returned to the command transfer unit through the callback functions of the upstream data path.
Optionally, the one or more callback function indexes in the DTU 547 are ordered, and the callback functions indicated by them are called in sequence. The callback functions called in sequence constitute the upstream data path for that DTU. By way of example, the order in which the callback functions are called is the reverse of the order in which their indexes were written to the DTU, thereby building the upstream data path.
Fig. 5D shows a schematic diagram of a DTU.
As shown in fig. 5D, a callback function list is recorded in the DTU, including callback function index A, callback function index B, and callback function index C. By way of example, callback function index A is written by the cache management unit 515, callback function index B by the address mapping unit 525, and callback function index C by the data assembly unit 535. The indexes in the callback function list are ordered: in fig. 5D, an index written to the DTU earlier appears further to the left in the list. Optionally, in the upstream data path, the three callback functions are called in the reverse of the writing order, that is, in the order of callback function index C, callback function index B, and callback function index A.
In some embodiments, after acquiring the DTU, the monitoring unit calls one or more callback functions through the callback function indexes in the callback function list. For example, the monitoring unit obtains the last index in the list, callback function index C, and through it calls callback function C1. After callback function C1 finishes, the monitoring unit calls callback function B1 through callback function index B, and after callback function B1 finishes, it calls callback function A1 through callback function index A. By way of example, callback function A1 is the last callback function of the upstream data path; it returns the DTU, carrying the processing result of the subcommand, to the command transfer unit.
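The callback mechanism described above can be sketched in a few lines. This is an illustrative model only, not the patent's implementation: the names `Dtu`, `CALLBACK_TABLE`, and `monitor_complete` are assumptions, and the callbacks merely record their invocation order instead of releasing real resources.

```python
# Downstream units append callback-function indexes to a DTU; the
# monitoring unit later invokes the indicated callbacks in reverse order.
CALLBACK_TABLE = {}   # callback function index -> callback function
calls = []            # records invocation order for demonstration

def register(index):
    def deco(fn):
        CALLBACK_TABLE[index] = fn
        return fn
    return deco

@register("A")
def callback_a1(dtu):
    calls.append("A1")   # e.g. release cache resource, return result

@register("B")
def callback_b1(dtu):
    calls.append("B1")   # e.g. release mapping-table resource

@register("C")
def callback_c1(dtu):
    calls.append("C1")   # e.g. release accelerator resource

class Dtu:
    def __init__(self, subcommand):
        self.subcommand = subcommand
        self.callback_indexes = []   # ordered: earlier writer first

# Downstream: units write their callback indexes in processing order,
# e.g. cache management, then address mapping, then data assembly.
dtu = Dtu(subcommand="read LBA 0x100")
for index in ("A", "B", "C"):
    dtu.callback_indexes.append(index)

# Upstream: the monitoring unit calls the callbacks in reverse order.
def monitor_complete(dtu):
    for index in reversed(dtu.callback_indexes):
        CALLBACK_TABLE[index](dtu)

monitor_complete(dtu)
print(calls)   # callback C1 runs first, callback A1 last
```

Note how the reverse traversal makes the callback list itself act as the upstream data path, without any separate routing structure.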
FIG. 6A illustrates a schematic diagram of resource management according to an embodiment of the application.
The task processing unit acquires a DTU from the channel and processes it. While processing the DTU, it requests resources from a resource manager for processing the sub-command carried by the DTU. The task processing unit records the identifier of the allocated resource in the DTU to indicate that the DTU currently occupies that resource. It also records a callback function index in the DTU; the callback function indicated by that index releases the resource when executed. The task processing unit then provides the processed DTU, through a channel, to another task processing unit or to the sub-command processing unit.
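A minimal sketch of this per-unit flow, under assumed names (`Channel`, `ResourceManager`, `process` are all illustrative, not from the patent): pop a DTU from the inbound channel, request a resource, record the resource identifier and a release-callback index in the DTU, and push the DTU to the outbound channel.

```python
from collections import deque

class Channel:
    """A channel holding DTUs, with Push/Pop-style operations."""
    def __init__(self):
        self._dtus = deque()
    def push(self, dtu):
        self._dtus.append(dtu)
    def pop(self):
        return self._dtus.popleft() if self._dtus else None

class ResourceManager:
    """Hands out resource-instance identifiers."""
    def __init__(self, name):
        self.name, self._next = name, 0
    def allocate(self):
        self._next += 1
        return f"{self.name}-{self._next}"

def process(inbound, outbound, manager, callback_index):
    dtu = inbound.pop()
    rid = manager.allocate()
    dtu["resources"].append(rid)             # DTU now occupies rid
    dtu["callbacks"].append(callback_index)  # index to release rid later
    outbound.push(dtu)

cin, cout = Channel(), Channel()
cin.push({"subcommand": "write", "resources": [], "callbacks": []})
process(cin, cout, ResourceManager("cache"), "A")
result = cout.pop()
print(result)
```

After processing, the DTU carries both the occupied resource identifier and the callback index through which that resource will eventually be released.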
FIG. 6B illustrates a schematic diagram of resource management according to yet another embodiment of the present application.
The downstream data path includes two task processing units (610, 612), two resource managers (620, 622). In fig. 6B, DTU 640, DTU 642 and DTU 644 show different phases of the same DTU. DTU 640 is illustrated as DTU 642 after being processed by task processing unit 610, and DTU 642 is illustrated as DTU 644 after being processed by task processing unit 612.
When the task processing unit 610 processes the DTU 640, it requests the resource manager 620 to allocate resource A. For example, resource A represents a cache resource. The task processing unit 610 provides the DTU 642 to the task processing unit 612 and records the allocated resource A and the index of callback function A1 in the DTU 642. Callback function A1, when executed, releases resource A to the resource manager.
When the task processing unit 612 processes the DTU 642, it requests the resource manager 622 to allocate resource B. For example, resource B represents an accelerator resource. The task processing unit 612 generates the DTU 644 and records the allocated resource B and the index of callback function B1 in it. Callback function B1, when executed, releases resource B to the resource manager. Thus, of the records in the DTU 644, the allocated resource A and the index of callback function A1 were added by the task processing unit 610, while the allocated resource B and the index of callback function B1 were added by the task processing unit 612.
FIG. 6C illustrates a schematic diagram of resource management according to yet another embodiment of the present application.
The downstream data path includes three task processing units (650, 652 and 654), two resource managers (660, 662). In fig. 6C, DTUs 670, 672 and 674 show different stages of the same DTU. And DTUs 670 and 680 represent different DTUs. DTU 680 and DTU 682 represent different phases of the same DTU. DTU 670 is illustrated as DTU 672 after being processed by task processing unit 650, and DTU 672 is illustrated as DTU 674 after being processed by task processing unit 652. DTU 680 is shown as DTU 682 after being processed by task processing unit 654.
When the task processing unit 650 processes the DTU 670, it requests the resource manager 660 to allocate resource A. For example, resource A represents a cache resource. The task processing unit 650 provides the DTU 672 to the task processing unit 652 and records the allocated resource A in the DTU 672. The task processing unit 650 also records a callback function index in the DTU 672, but, in this example, the callback function indicated by that index is not the one that releases resource A.
When the task processing unit 654 processes the DTU 680, it requests the resource manager 660 to allocate resource A'. Resource A' and resource A are homogeneous resources (e.g., cache resources), but they represent different instances of the same type of resource. The task processing unit 654 records the allocated resource A' and a callback function index in the DTU 682; the callback function indicated by that index is not used to release resource A' when executed.
The resource manager manages the allocation of resources. For example, resource manager 660 ensures that a given instance of a resource (e.g., resource A) is not allocated to both DTU 672 and DTU 682 at the same time. For example, resource manager 660 maintains a lock for each resource instance to ensure that an instance is assigned to only one DTU. The resource manager also manages the release of resource instances. Thus, the downstream data path of a task processing system according to embodiments of the present application may include a plurality of task processing units having the same function and/or using the same kind of resource, with those task processing units requesting the resource through the same resource manager.
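The per-instance lock just mentioned can be sketched as follows. This is a simplified single-host model under assumed names (`ResourceManager`, `allocate`, `release`), using Python's `threading.Lock` to stand in for whatever locking primitive a real controller would use.

```python
import threading

class ResourceManager:
    """Guarantees each resource instance is held by at most one DTU."""
    def __init__(self, instances):
        self._locks = {i: threading.Lock() for i in instances}
        self._owner = {}   # instance -> DTU currently holding it

    def allocate(self, dtu_id):
        for inst, lock in self._locks.items():
            if lock.acquire(blocking=False):   # first free instance wins
                self._owner[inst] = dtu_id
                return inst
        return None                            # all instances busy

    def release(self, inst):
        self._owner.pop(inst, None)
        self._locks[inst].release()

mgr = ResourceManager(["A", "A'"])
a = mgr.allocate("DTU-672")   # obtains instance "A"
b = mgr.allocate("DTU-682")   # obtains the distinct instance "A'"
c = mgr.allocate("DTU-690")   # both instances held: allocation fails
mgr.release(a)                # e.g. via a callback on the upstream path
d = mgr.allocate("DTU-690")   # instance "A" is available again
```

The failed third allocation illustrates why a lock per instance, rather than per resource type, is what keeps two DTUs from sharing one instance.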
Still referring to fig. 6C, the callback function that the task processing unit 650 adds to the DTU 672 is not the one that releases the resource A it requested for the DTU 672. This means that the allocation and release of the same resource need not be handled by the same task processing unit (another task processing unit may handle the release), which provides flexibility in task processing. It will be appreciated that having the same task processing unit release the resources it allocated is also feasible.
When the task processing unit 652 processes the DTU 672, it requests the resource manager 662 to allocate resource B. The task processing unit 652 generates the DTU 674 and records the allocated resource B and the index of callback function B1 in the DTU 674. Callback function B1, when executed, releases resource B to the resource manager. The task processing unit 652 also records the index of callback function A1 in the DTU 674. Callback function A1, when executed, releases resource A to the resource manager.
FIG. 7 illustrates a block diagram of a storage device constructed with a task processing system according to an embodiment of the present application.
The task processing system according to the embodiment of the present application is used to construct a storage device and is implemented by, for example, a control section of the storage device.
The task processing system comprises a command transmission unit, a data path and a sub-command processing unit. The sub-command processing unit is coupled to the storage medium.
The command transmission unit exchanges IO commands with the host according to a specified storage protocol. It splits an IO command into one or more subcommands, allocates a DTU to carry each subcommand, and delivers the DTU to the data path.
The data path includes a downstream data path and an upstream data path. The downstream data path performs one or more stages of processing on the subcommands and finally delivers the DTUs to the sub-command processing unit. The sub-command processing unit converts the sub-commands carried by the DTUs into commands for accessing the storage medium. By way of example, the sub-command processing unit is a media interface controller. The sub-command processing unit also queries the storage medium to obtain the processing results of the commands that access the storage medium. The upstream data path delivers the processing result of each subcommand to the command transmission unit; as an example, the upstream data path polls the sub-command processing unit to obtain the results. The command transmission unit collects the processing results of all subcommands split from the same IO command and, once they have all been processed, provides the processing result of the IO command to the host.
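The split-then-collect behaviour can be sketched as a small scatter/gather bookkeeper. All names here (`CommandTransferUnit`, `split`, `on_subcommand_done`, the offset-based sub-command identifiers) are illustrative assumptions, not taken from the patent.

```python
class CommandTransferUnit:
    """Splits IO commands and reports completion only when every
    subcommand's result has returned over the upstream path."""
    def __init__(self):
        self._pending = {}   # io_id -> set of outstanding subcommands

    def split(self, io_id, length, chunk):
        subs = [(io_id, off) for off in range(0, length, chunk)]
        self._pending[io_id] = set(subs)
        return subs          # each would be carried by its own DTU

    def on_subcommand_done(self, sub):
        io_id, _ = sub
        self._pending[io_id].discard(sub)
        if not self._pending[io_id]:        # last subcommand finished
            del self._pending[io_id]
            return f"IO {io_id} complete"   # result reported to host
        return None                         # still waiting on others

ctu = CommandTransferUnit()
subs = ctu.split("io-1", length=12, chunk=4)   # three subcommands
results = [ctu.on_subcommand_done(s) for s in subs]
print(results)   # completion fires only on the final subcommand
```

Only the last `on_subcommand_done` call produces a host-visible result, mirroring the rule that the IO command completes only after all of its subcommands do.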
According to embodiments of the present application, facilities are provided for supporting virtualization in a storage device. For example, the NVMe protocol defines a NameSpace (abbreviated NS). A namespace exposes a virtual storage device or logical storage device to the hosts that access the storage device; thus, by providing multiple namespaces on a single control component, each namespace provides a virtualized storage device to the host. As yet another example, a single control component of a storage device simultaneously provides multiple virtual storage devices accessed through different storage protocols, such as a storage device that supports the NVMe protocol, the Open Channel (OpenChannel) protocol, and/or the SATA protocol.
Resources are also allocated to each virtual storage device. For example, it is advantageous for cache resources, storage media resources, and accelerator resources to be shared among the virtual storage devices, while mapping table resources are exclusive to each virtual storage device.
Embodiments of the present application facilitate meeting these varied requirements.
FIG. 8A illustrates a block diagram of a storage device constructed with a task processing system according to yet another embodiment of the present application.
According to the embodiment of FIG. 8A, the storage device exposes multiple namespaces (denoted as NS0, NS1, NS2, and NS3, respectively) of, for example, the NVMe protocol to hosts. The host gains access to the virtual storage devices provided by each namespace according to the NVMe protocol.
The task processing system according to the embodiment of the present application is used to construct the storage device illustrated in fig. 8A, and is implemented by a control section such as a storage device.
Examples of the task processing system include a command transmission unit, a plurality of data paths (810, 812, 814, and 816), and a sub-command processing unit. The sub-command processing unit is coupled to the storage medium. Each of the plurality of data paths (810, 812, 814, and 816) provides one of the namespaces. By way of example, data path 810 provides namespace NS0, data path 812 provides namespace NS1, data path 814 provides namespace NS2, and data path 816 provides namespace NS3.
The command transmission unit splits the IO command provided by the host into subcommands and, according to the namespace accessed by the IO command, provides the subcommands to the data path corresponding to that namespace. For example, if an IO command accesses namespace NS2, the command transmission unit provides all the subcommands split from the IO command to data path 814, and data path 814 processes all IO commands accessing namespace NS2.
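Per-namespace routing reduces to a lookup table from namespace to data path. The following is a hedged sketch with invented names (`DataPath`, `dispatch`, the dict of paths); the real dispatch would of course carry DTUs rather than plain strings.

```python
class DataPath:
    """Stands in for one replicated data path serving one namespace."""
    def __init__(self, name):
        self.name, self.received = name, []
    def submit(self, dtu):
        self.received.append(dtu)

# One data path per namespace, mirroring fig. 8A (810..816 <-> NS0..NS3).
paths = {ns: DataPath(ns) for ns in ("NS0", "NS1", "NS2", "NS3")}

def dispatch(io_command):
    # All subcommands of one IO command go to the single path that
    # serves the namespace the IO command addresses.
    path = paths[io_command["namespace"]]
    for sub in io_command["subcommands"]:
        path.submit(sub)

dispatch({"namespace": "NS2", "subcommands": ["sub0", "sub1"]})
```

Because the mapping is total and static, adding a namespace amounts to replicating a data path and adding one table entry.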
Therefore, according to embodiments of the present application, replicating the data path conveniently enables the storage device to provide the function of multiple namespaces.
Optionally, each namespace is provided with exclusive mapping table resources, while other resources (storage media resources, accelerator resources, etc.) are shared. Thus, the task processing system provides, for example, 4 resource managers for mapping table resources; each is coupled to one of the data paths and manages only the mapping table resources of the namespace associated with that data path, thereby achieving effective isolation of mapping table resources between namespaces. The task processing system also provides each data path with its own resource managers for other types of resources. Taking the storage medium resource manager as an example, each storage medium resource manager coupled to a data path manages, for example, all storage media of the storage device, so that each data path can use any available storage medium of the storage device to carry the data written by a sub-command; this improves the utilization of storage medium resources and facilitates global wear leveling of the storage device.
It will be appreciated that the data paths, resource managers, and the various managed resources may have a variety of other correspondences in embodiments consistent with the present application. For example, each data path may be provided with exclusive storage media resources to mitigate the mutual impact between data paths. Importantly, in a task processing system according to the present application, a data path can be easily replicated, a resource manager can be easily coupled to a data path, and the resources of the storage device can be easily managed by resource managers. This accelerates the development of new functions of the storage device.
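The wiring described above, exclusive mapping-table managers but a shared media manager, can be sketched as follows. Class and field names (`MappingTableManager`, `MediaManager`, `data_paths`) are assumptions for illustration only.

```python
class MappingTableManager:
    """Exclusive to one data path: isolates namespaces' mapping tables."""
    def __init__(self, namespace):
        self.namespace = namespace

class MediaManager:
    """Shared by all data paths: any path may use any available media,
    which improves utilization and enables global wear leveling."""
    def __init__(self, block_count):
        self.free_blocks = set(range(block_count))
    def allocate(self):
        return self.free_blocks.pop()

shared_media = MediaManager(block_count=8)
data_paths = {
    ns: {"mapping_mgr": MappingTableManager(ns), "media_mgr": shared_media}
    for ns in ("NS0", "NS1", "NS2", "NS3")
}

# Four distinct mapping-table managers, but one shared media pool.
distinct_mapping = len({id(p["mapping_mgr"]) for p in data_paths.values()})
shared_pool = len({id(p["media_mgr"]) for p in data_paths.values()})
```

Swapping the shared `MediaManager` for one instance per path would implement the alternative correspondence, exclusive media per path, with no other structural change.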
FIG. 8B illustrates a block diagram of a storage device constructed with a task processing system according to yet another embodiment of the present application.
According to the embodiment of fig. 8B, the storage device exposes the functions of various devices to the host, such as an NVMe storage device, an Open Channel (OC) storage device, an accelerator with specified functions, an MCTP (Management Component Transport Protocol) endpoint, the administration queue (AdminQueue) of an NVMe device, and the like. The host gains access to the functionality provided by each device according to different protocols.
The task processing system according to the embodiment of the present application is used to construct the storage device illustrated in fig. 8B, and is implemented by a control section such as a storage device.
Examples of the task processing system include a command transmission unit, a plurality of data paths (820, 822, 824, and 826), and a plurality of sub-command processing units (830, 836, and 839). The sub-command processing unit 830 is coupled to a storage medium. The sub-command processing unit 839 is coupled to an accelerator (e.g., an accelerator that performs encryption/decryption calculations according to the AES/SM4 standard).
By way of example, the data path 820 provides NVMe storage device functionality, processing IO commands of the NVMe protocol; the data path 822 provides OC storage device functionality and handles IO commands of the OC protocol. The data path 824 handles management commands of the MCTP protocol or management commands of the NVMe protocol. Data path 826 processes access requests to the accelerator.
The command transmission unit transmits each command to the corresponding data path according to the protocol the command uses. Optionally, for an IO command, the first task processing unit of data path 820 and/or data path 822 performs the split from IO command to subcommands. The IO commands access the storage media of the storage device. Data path 820 and data path 822 are serviced by the single sub-command processing unit 830, so that all storage media of the storage device can be used by both the NVMe storage device provided by data path 820 and the OC storage device provided by data path 822, thereby improving the utilization of the storage media.
Still optionally, the command transmission unit splits an IO command into subcommands, allocates DTUs to them, and provides the subcommands to the data path corresponding to their protocol. The command transmission unit also allocates DTUs for management commands and for commands accessing the accelerator. The DTUs carry the various commands to be processed by one or more task processing units in the data path. By way of example, the command transmission unit need not split a management command and/or a command accessing the accelerator into subcommands; the DTU carries the management command and/or accelerator command whole.
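Protocol-based dispatch, like the namespace case, reduces to a routing table. This sketch uses invented labels (`ROUTES`, the protocol keys, the path numbers as strings) to illustrate the idea; the real command transmission unit would also decide per protocol whether to split the command first.

```python
ROUTES = {                 # protocol -> data path (labels from fig. 8B)
    "nvme_io": "820",      # NVMe IO commands
    "oc_io": "822",        # Open Channel IO commands
    "mctp_admin": "824",   # MCTP / NVMe management commands
    "accelerator": "826",  # direct accelerator access
}

def dispatch(command):
    # Management and accelerator commands travel whole in a single DTU;
    # IO commands would be split into subcommands before this point.
    dtu = {"payload": command["body"]}
    return ROUTES[command["protocol"]], dtu

path, dtu = dispatch({"protocol": "mctp_admin", "body": "query-temp"})
```

A new virtual device then needs only a new data path and one more routing entry, which is the replication property the embodiment emphasizes.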
Management commands (e.g., conforming to the MCTP protocol or the NVMe protocol), which query or set the state of the device, are processed by the data path 824 and the sub-command processing unit 836. When a management command queries, for example, the available space of the storage device, the sub-command processing unit 836 obtains the usage status of the storage media from memory, without involving dedicated hardware. When a management command queries, for example, the device temperature, the sub-command processing unit 836 obtains temperature information from a coupled temperature sensor (not shown).
The accelerator to which the sub-command processing unit 839 is coupled is, for example, an accelerator that performs encryption/decryption calculations according to the AES/SM4 standard. Optionally, the resource managers of data path 820 and data path 822 encapsulate the accelerator as a resource used by the task processing units of data paths 820/822. The data path 826 and sub-command processing unit 839 additionally expose the accelerator as a device that provides the related service, so that the host can use the accelerator directly.
Therefore, according to embodiments of the present application, creating multiple kinds of data paths conveniently enables the storage device to provide the functions of multiple virtual devices.
Although the present application has been described with reference to examples, which are intended to be illustrative only and not to be limiting of the application, changes, additions and/or deletions may be made to the embodiments without departing from the scope of the application.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method for constructing a downlink data path is characterized by comprising the following steps:
creating at least one task processing unit;
creating at least one channel;
associating the at least one task processing unit with the at least one channel.
2. The method of claim 1, wherein the task processing unit includes an inbound interface, an outbound interface, and a DTU processing module; the method further comprises the following steps:
the inbound interface obtains a DTU from its associated channel;
the outbound interface adds DTUs to the channels associated with the outbound interface;
the DTU processing module extracts the sub-commands from the DTUs acquired through the inbound interface, processes the sub-commands, and adds the DTUs carrying the processed sub-commands to the channels associated with the DTUs through the outbound interface.
3. The method of claim 1 or 2, wherein a channel comprises a DTU list and a plurality of functions for operating on the DTU list; the DTU list comprises a container holding one or more DTUs, and the functions include at least a first Push function and a Pop function; when the first Push function is called, at least one DTU is added to the DTU list; and when the Pop function is called, at least one DTU is acquired from the DTU list.
4. The method according to any one of claims 1 to 3,
wherein the channel obtains the DTU from one or more of its associated task processing units and provides the DTU to only one of its associated task processing units.
5. The method of any of claims 1-4, wherein the associating the at least one task processing unit with the at least one channel comprises:
setting one or more channels associated with an inbound interface of each task processing unit of the downstream channel;
and setting a channel associated with the outbound interface of each task processing unit of the downstream channel.
6. A downlink data path, comprising: at least one task processing unit and at least one channel; wherein:
a first task processing unit in the at least one task processing unit acquires a DTU from a channel associated with the first task processing unit, wherein the DTU carries a sub-command, the first task processing unit is any one task processing unit in the at least one task processing unit, and a second task processing unit is one task processing unit in the at least one task processing unit except the first task processing unit;
and the first task processing unit processes the sub-command and fills the DTU into a channel associated with the first task processing unit after the sub-command is processed, so that the second task processing unit acquires the DTU from the channel.
7. The downstream data path of claim 6, wherein the first task processing unit obtains the DTU from a first channel associated with itself and, after the sub-command processing is completed, fills the DTU into a second channel associated with itself, so that the second task processing unit obtains the DTU from the second channel; alternatively,
the first task processing unit acquires the DTU from a first channel associated with the first task processing unit, and fills the DTU into the first channel after the sub-command processing is completed, so that the second task processing unit acquires the DTU from the first channel; wherein the first channel and the second channel are both associated with the first task processing unit, and the first channel and the second channel are different channels.
8. The downstream data path of claim 6 or 7, wherein each of the at least one channel comprises a DTU list for accommodating a plurality of DTUs and a plurality of functions for operating on the DTU list, the functions including at least a first Push function and a Pop function; wherein:
the task processing unit adds at least one DTU in the DTU list by calling a first Push function;
and the task processing unit acquires at least one DTU from the DTU list by calling the Pop function.
9. The downstream data path of any of claims 6-8, wherein the downstream data path further comprises at least one resource manager;
the resource manager manages the use of the specified resources;
at least one task processing unit is associated with at least one resource manager such that the task processing unit accesses the resource through its associated resource manager.
10. An information processing apparatus comprising a memory, a processor and a program stored on the memory and executable on the processor, characterized in that the processor implements the method according to one of claims 1 to 5 when executing the program.
CN202010306299.9A 2020-04-17 2020-04-17 Method and device for constructing downlink data path Pending CN113535345A (en)
