CN107870779B - Scheduling method and device

Info

Publication number
CN107870779B
Authority
CN
China
Prior art keywords
register
command
processed
priority
event
Prior art date
Legal status
Active
Application number
CN201610861793.5A
Other languages
Chinese (zh)
Other versions
CN107870779A (en)
Inventor
王祎磊
Current Assignee
Beijing Starblaze Technology Co ltd
Original Assignee
Beijing Starblaze Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Starblaze Technology Co ltd
Priority to CN201610861793.5A
Priority to CN202311576115.0A
Publication of CN107870779A
Application granted
Publication of CN107870779B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/30: Arrangements for executing machine instructions, e.g. instruction decode
    • G06F9/30098: Register arrangements
    • G06F9/30141: Implementation provisions of register files, e.g. ports

Abstract

The invention discloses a scheduling method and device. The disclosed scheduling method comprises the following steps: selecting a first register having a first value from a high-priority group of a register set, the first register corresponding to a first event to be processed; and scheduling a first thread corresponding to the first register to process the first event to be processed. The scheduling device comprises a command queue, a microinstruction memory, a microinstruction execution unit, a register set, a scheduler, and an NVM media interface. The register set is used to indicate the events to be processed and their priorities; the scheduler schedules threads according to the registers in the register set; and the microinstruction execution unit receives instructions from the scheduler to execute the scheduled threads.

Description

Scheduling method and device
Technical Field
The present invention relates to the field of storage technologies, and in particular, to a scheduling method and apparatus.
Background
NVM (Non-Volatile Memory) provides storage that retains its contents without power. FIG. 1 is a block diagram of a solid state storage device 102 coupled to a host to provide storage capacity for the host. The host and the solid state storage device 102 may be coupled in a variety of ways, including but not limited to SATA (Serial Advanced Technology Attachment), SCSI (Small Computer System Interface), SAS (Serial Attached SCSI), IDE (Integrated Drive Electronics), USB (Universal Serial Bus), PCIe (Peripheral Component Interconnect Express), NVMe (NVM Express), Ethernet, Fibre Channel, or a wireless communication network. The host may be any information processing device capable of communicating with the storage device in these ways, such as a personal computer, tablet, server, portable computer, network switch, router, cellular telephone, or personal digital assistant. The solid state storage device 102 includes an interface 103, a control unit 104, one or more NVM chips 105, and a DRAM (Dynamic Random Access Memory) 110.
Common NVM types include NAND flash memory, phase change memory, FeRAM (Ferroelectric RAM), MRAM (Magnetoresistive Random Access Memory), and RRAM (Resistive Random Access Memory).
The interface 103 may be adapted to exchange data with the host by way of, for example, SATA, IDE, USB, PCIe, NVMe, SAS, Ethernet, or Fibre Channel.
The control unit 104 controls data transfer between the interface 103, the NVM chips 105, and the DRAM 110, and also handles storage management, mapping of host logical addresses to flash physical addresses, wear leveling, bad block management, and the like. The control unit 104 can be implemented in various combinations of software, hardware, and firmware; for example, it may take the form of an FPGA (Field-Programmable Gate Array), an ASIC (Application-Specific Integrated Circuit), or a combination thereof. The control unit 104 may also include a processor or controller that executes software to manipulate the hardware of the control unit 104 and process IO (Input/Output) commands. The control unit 104 may further be coupled to the DRAM 110 and access its data; FTL tables and/or cached data of IO commands may be stored in the DRAM.
The control unit 104 includes a flash interface controller (also called a flash channel controller). The flash interface controller is coupled to the NVM chips 105, issues commands to an NVM chip 105 in a manner conforming to the chip's interface protocol so as to operate it, and receives the command execution results output from the NVM chip 105. The interface protocols of NVM chip 105 include well-known protocols or standards such as "Toggle" and "ONFI".
A storage Target is one or more Logic Units that share a Chip Enable (CE) signal within an NVM chip 105 package; each logic unit has a Logic Unit Number (LUN). One or more dies (Die) may be included within a NAND flash package. Typically, a logic unit corresponds to a single die. A logic unit may include multiple planes (Plane). Multiple planes within a logic unit may be accessed in parallel, while multiple logic units within a NAND flash memory chip may execute commands and report status independently of each other. The "Open NAND Flash Interface Specification (Revision 3.2)", available at http://www.onfi.org/~/media/ONFI/specs/ONFI_3_2%20Gold.pdf, defines the meanings of target, logic unit, LUN, and plane, and also specifies the commands for operating NVM chips.
Chinese patent application publication No. CN1414468A provides a scheme for processing CPU (Central Processing Unit) instructions by executing microinstruction sequences. When the CPU is to process a specific instruction, conversion logic converts the instruction into a corresponding microinstruction sequence, and the function of the instruction is realized by executing that sequence. The microinstruction sequence, or a template for it, is stored in a ROM (Read-Only Memory). While converting a specific instruction into a microinstruction sequence, the template can be filled in so that the sequence corresponds to that instruction.
In addition, methods and apparatus for executing microinstructions in flash interface controllers are provided in Chinese patent applications CN201610009789.6 and CN201510253428.1.
An NVM controller is typically coupled to multiple NVM chips, and those chips contain multiple LUNs (logic units) or dies that can respond to NVM commands and be accessed in parallel. Because multiple NVM commands can be pending on each LUN or die, the NVM controller must schedule many NVM commands, or many microinstruction sequences that generate NVM commands, in order to keep multiple NVM commands in process or pending at the same time.
In the prior art, multiple command queues are provided, each indicating a different priority: commands received from a high-priority queue are processed first, and commands received from a low-priority queue are processed later. However, the scheduling policy is not adjusted based on the execution state of the current command or microinstruction sequence, so configuration flexibility is poor.
Disclosure of Invention
The invention aims to provide a scheduling method and a scheduling apparatus that schedule based on the execution state of the current command or scheduled event.
A first aspect of the present invention provides a scheduling method, the scheduling method comprising:
selecting a first register having a first value from the set of registers; the first register corresponds to a first event to be processed; and scheduling a first thread corresponding to the first register to process the first event to be processed.
With reference to the first aspect of the present invention, in a first possible implementation manner, the register set includes a plurality of registers, each register having a first value corresponds to one event to be processed, registers in the register set are organized as rows and columns, a same column is used to indicate threads operating a same resource, and registers in a same row belong to a same priority group.
With reference to the first aspect of the present invention and the first possible implementation manner thereof, in a second possible implementation manner, the thread scheduling method further includes: in response to completion of the first pending event processing, a register in the register set corresponding to the first pending event is modified to a second value to indicate completion of the first pending event processing.
With reference to the first aspect of the present invention and the first possible implementation manner thereof, in a third possible implementation manner, the thread scheduling method further includes: in response to completion of the first to-be-processed event processing, modifying a register in the register set corresponding to the first to-be-processed event to a second value to indicate completion of the first to-be-processed event processing; and setting a register corresponding to the second pending command in the register set to a first value to indicate the second pending event.
With reference to the third possible implementation manner of the first aspect of the present invention, in a fourth possible implementation manner, the first event to be processed indicates a data transmission stage of the write command, and the second event to be processed indicates a result query stage of the write command; and the priority group of the register corresponding to the first event to be processed has a lower priority than the priority group of the register corresponding to the second event to be processed.
With reference to the first aspect, the first to the fourth possible implementation manners of the present invention, in a fifth possible implementation manner, the thread scheduling method further includes: selecting a high priority group indicating events to be processed according to the priority of the priority group; and selecting a first register having a first value from the high priority group in a round robin fashion.
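The selection described in this implementation manner can be sketched as follows. This is an illustrative Python model, not the patent's implementation; the data layout (one list of 0/1 register values per priority group, ordered from highest to lowest priority) and the per-group round-robin pointers are assumptions made for the sketch.

```python
# Minimal sketch (assumption, not from the patent): pick the highest-priority
# group that has any register set to the "first value" (1, pending), then
# choose one set register from that group in round-robin order.

def select_register(groups, rr_pointers):
    """groups: lists of 0/1 register values, ordered high to low priority.
    rr_pointers: per-group round-robin start index, updated on selection."""
    for g, registers in enumerate(groups):
        if 1 not in registers:
            continue  # no pending event in this priority group
        n = len(registers)
        start = rr_pointers[g]
        for offset in range(n):
            i = (start + offset) % n
            if registers[i] == 1:
                rr_pointers[g] = (i + 1) % n  # resume after chosen register
                return g, i
    return None  # nothing pending in any group
```

With `groups = [[0, 0, 0], [0, 1, 1]]`, repeated calls rotate over the two set registers of the lower group, since the higher group is empty.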
With reference to the first aspect, the first to the fifth possible implementation manners of the present invention, in a sixth possible implementation manner, the thread scheduling method further includes: acquiring a command to be processed from a command queue, and setting a first priority for the command to be processed; and setting a register corresponding to the resource operated by the command to be processed to a first value from the priority group having the first priority according to the resource operated by the command to be processed.
With reference to the sixth possible implementation manner of the first aspect of the present invention, in a seventh possible implementation manner, the first priority is determined according to a type of the command to be processed and/or a command queue for acquiring the command to be processed.
With reference to the sixth or seventh possible implementation manner of the first aspect of the present invention, in an eighth possible implementation manner, if the command to be processed is a write command, a first priority is set for the write command, and a register corresponding to a resource operated by the write command is set to a first value from a priority group having the first priority according to the resource operated by the write command; responding to the completion of the data transmission stage of the write command, and setting a second priority for the result inquiry stage of the write command; setting a register corresponding to a resource operated by a write command to a first value from a priority group having a second priority in accordance with the resource operated by the write command; and the second priority is higher than the first priority.
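The two-phase write handling above can be sketched as a pair of register updates; this is a hedged Python model whose row indices and function names are illustrative only, not taken from the patent.

```python
# Illustrative sketch (assumption): a write command's data-transfer stage is
# queued at a lower priority, and on its completion the result-query stage is
# raised into a higher-priority group. Row 0 is the higher-priority group.
LOW, HIGH = 1, 0

def start_write(register_set, lun):
    # Data-transfer stage of the write waits in the lower-priority group.
    register_set[LOW][lun] = 1

def finish_data_transfer(register_set, lun):
    # Clear the low-priority register and mark the result-query stage
    # pending in the higher-priority group.
    register_set[LOW][lun] = 0
    register_set[HIGH][lun] = 1
```

The design point this mirrors is that a result query is short and unblocks the LUN, so it is worth serving before newly arrived data transfers.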
With reference to the seventh or eighth possible implementation manner of the first aspect of the present invention, in a ninth possible implementation manner, the type of the command to be processed is updated and/or a correspondence relationship between a command queue and a priority group of the command to be processed is obtained.
In the scheduling method of the first aspect of the present invention, each register corresponds to a thread, the register set includes priority groups with different priorities, and a register having the first value indicates an event waiting to be scheduled. The priorities of the groups and the values of the registers thus together map the scheduling state of the events to be processed.
A second aspect of the present invention provides a scheduling apparatus comprising a command queue, a microinstruction memory, a microinstruction execution unit, a register set, a scheduler, and an NVM media interface; wherein the command queue is used for receiving commands from a user or an upper system; the micro instruction execution unit operates the NVM medium interface to process the command to be processed through an execution thread, wherein the thread is a micro instruction sequence which can be executed; the register group is used for indicating the event to be processed and the priority of the event to be processed; the scheduler is used for scheduling the threads according to the registers in the register group; the micro instruction execution unit receives an instruction from the scheduler to execute the scheduled thread.
With reference to the second aspect of the present invention, in a first possible implementation manner, the register set includes a plurality of registers, each register having a first value indicates one event to be processed, registers in the register set are organized as rows and columns, registers in a same column are used to indicate threads operating a same resource, and registers in a same row belong to a same priority group.
With reference to the second aspect of the present invention, in a second possible implementation manner, the scheduling apparatus further includes a context memory, where the context memory is used to save an execution state of the thread; when execution of the thread is suspended, the state of the thread is saved to the context memory, and when execution of the thread is resumed, the state of the thread is resumed from the context memory.
With reference to the second aspect, the first or the second possible implementation manner of the present invention, in a third possible implementation manner, the scheduling device further includes a mask register set, configured to indicate whether the event to be processed indicated in the register set needs to be processed.
With reference to the second aspect, the first to the third possible implementation manners of the present invention, in a fourth possible implementation manner, the scheduling device further includes a mapper, where the mapper selects a first priority group with a first priority according to a to-be-processed command in the command queue; and setting a register corresponding to the resource operated by the command to be processed to a first value from a first priority group having a first priority according to the resource operated by the command to be processed.
With reference to the second aspect, the first to fourth possible implementation manners of the present invention, in a fifth possible implementation manner, the scheduler selects a high priority group indicating an event to be processed, selects a first register having a first value from the high priority group in a round robin manner, and schedules a thread for processing the event to be processed indicated by the first register.
With reference to the fifth possible implementation manner of the second aspect of the present invention, in a sixth possible implementation manner, if no event to be processed is indicated in the high priority group, the scheduler selects the second register with the first value from the low priority group in a round robin manner, and schedules a thread for processing the event to be processed indicated by the second register.
With reference to the second aspect, the first to the sixth possible implementation manners of the present invention, in a seventh possible implementation manner, in response to a command to be processed of the command queue being a write command, a register of a resource accessed by the write command in a first priority group having a first priority is set to a first value; after the micro instruction execution unit completes the data transmission stage of the write command, setting a register corresponding to the resource accessed by the write command in a second priority group with a second priority as a first value; wherein the second priority is higher than the first priority.
With reference to the second aspect, the first to the seventh possible implementation manners of the present invention, in an eighth possible implementation manner, the resource is a logic unit of the NVM.
A third aspect of the present invention provides an IO command scheduling method, including:
selecting a first register having a first value from a high priority group of register sets; the first register corresponds to a first command to be processed; and processing the first command to be processed.
With reference to the third aspect of the present invention, in a first possible implementation manner, the register set includes a plurality of registers, each register having a first value corresponds to a to-be-processed command, registers in the register set are organized as rows and columns, registers in a same column are used to indicate to operate on to-be-processed commands of a same resource, and registers in a same row belong to a same priority group.
With reference to the third aspect of the present invention and the first possible implementation manner thereof, in a second possible implementation manner, the command scheduling method further includes: in response to completion of the first pending command processing, a register in the register set corresponding to the first pending command is modified to a second value to indicate completion of the first pending command processing.
With reference to the third aspect of the present invention and the first possible implementation manner thereof, in a third possible implementation manner, the command scheduling method further includes: in response to the first stage processing of the first command to be processed being completed, modifying a register in the register set corresponding to the first command to be processed to a second value to indicate the first command to be processed or the first stage processing of the first command to be processed being completed; and setting a register of the second stage corresponding to the first command to be processed in the register set to a first value to indicate that the second stage of the first command to be processed is waiting for processing.
With reference to the third possible implementation manner of the third aspect of the present invention, in a fourth possible implementation manner, the first command to be processed is a write command, the first stage of the first command to be processed is a data transmission stage, and the second stage of the first command to be processed is a result query stage; and the priority group of the register corresponding to the second stage of the first command to be processed has a lower priority than the priority group of the register corresponding to the first stage of the first command to be processed.
With reference to the third aspect, the first to fourth possible implementation manners of the present invention, in a fifth possible implementation manner, the command scheduling method further includes: selecting a high priority group indicating commands to be processed according to the priorities of the priority groups; and selecting a first register having a first value from the high priority group in a round robin fashion.
With reference to the third aspect, the first to the fifth possible implementation manners of the present invention, in a sixth possible implementation manner, the command scheduling method further includes: after a specified number of commands corresponding to the registers of the first priority group are executed, the priority of the first priority group is lowered.
With reference to the third aspect, the first to the fifth possible implementation manners of the present invention, in a seventh possible implementation manner, the command scheduling method further includes: if the second priority group is not scheduled for a long time, the priority of the second priority group is increased.
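The two adjustment rules above (demote a group after a specified number of its commands have executed; promote a group that has not been scheduled for a long time) can be sketched together. This is an illustrative Python model; the thresholds and the list-based priority order are assumptions, not values from the patent.

```python
# Hedged sketch of dynamic priority adjustment (names/thresholds assumed):
# demote an over-served group, promote a starved one.

def adjust_priorities(order, sched_counts, wait_counts,
                      demote_after=8, promote_after=16):
    """order: group ids from highest to lowest priority (mutated in place)."""
    head = order[0]
    if sched_counts.get(head, 0) >= demote_after:
        # The top group has been scheduled enough times: move it to the back.
        order.append(order.pop(0))
        sched_counts[head] = 0
    for g in order[1:]:
        if wait_counts.get(g, 0) >= promote_after:
            # A group waited too long without being scheduled: move it to
            # the front so it cannot be starved indefinitely.
            order.remove(g)
            order.insert(0, g)
            wait_counts[g] = 0
            break
    return order
```

Together the two rules give a simple anti-starvation guarantee on top of strict priority scheduling.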
With reference to the third aspect, the first to seventh possible implementation manners of the present invention, in an eighth possible implementation manner, the command scheduling method further includes: acquiring a command to be processed from a command queue, and setting a first priority for the command to be processed; and setting a register corresponding to the resource operated by the command to be processed to a first value from the priority group having the first priority according to the resource operated by the command to be processed.
With reference to the eighth possible implementation manner of the third aspect of the present invention, in a ninth possible implementation manner, the first priority is determined according to a type of the command to be processed and/or a command queue for acquiring the command to be processed.
With reference to the eighth or ninth possible implementation manner of the third aspect of the present invention, in a tenth possible implementation manner, if the command to be processed is a write command, a first priority is set for the write command; responding to the completion of the data transmission stage of the write command, and setting a second priority for the result inquiry stage of the write command; and the second priority is higher than the first priority.
With reference to the ninth or tenth possible implementation manner of the third aspect of the present invention, in an eleventh possible implementation manner, the command scheduling method further includes: updating the type of the command to be processed and/or obtaining the corresponding relation between the command queue and the priority group of the command to be processed.
With reference to the third aspect, the first to tenth possible implementation manners of the present invention, the command is an NVM interface command, and the resource is a logic unit of the NVM.
A fourth aspect of the invention provides an NVM interface controller, the NVM interface controller comprising: a command queue, a register set, a scheduler, and an NVM command processing unit, wherein the command queue is configured to receive commands from a user or an upper system; the register group is used for indicating the dispatching priority of the command; the scheduler is used for scheduling the command according to the register in the register group and indicating the command to be processed; and the NVM command processing unit receives the instruction of the scheduler and processes the command to be processed.
With reference to the fourth aspect of the present invention, in a first possible implementation manner, the register set includes a plurality of registers, each register having a first value corresponds to a command to be processed, registers in the register set are organized as rows and columns, registers in a same column are used to indicate the command to be processed that operates the same resource, and registers in a same row belong to a same priority group.
With reference to the fourth aspect of the present invention, in a second possible implementation manner, the scheduler selects a high priority group indicating a command to be processed according to the priority of the priority group; and selecting a first register having a first value from the high priority group in a round robin manner and scheduling execution of a pending command corresponding to the first register.
With reference to the fourth aspect, the first or the second possible implementation manner of the present invention, in a third possible implementation manner, after the scheduler schedules a specified number of commands corresponding to the registers of the first priority group, the priority of the first priority group is reduced.
With reference to the fourth aspect, the first to third possible implementation manners of the present invention, in a fourth possible implementation manner, if the second priority group is not scheduled for a long time, the scheduler increases the priority of the second priority group.
With reference to the fourth aspect, the first to fourth possible implementation manners of the present invention, in a fifth possible implementation manner, the NVM interface controller further includes a mapper, where the mapper maps the command to be processed to the first priority according to a type of the command to be processed in the command queue; and setting a register corresponding to the resource operated by the command to be processed to a first value from the priority group having the first priority according to the resource operated by the command to be processed.
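The mapper described in this implementation manner can be sketched as a type-to-group table plus a register update. This is an illustrative Python model; the command types and their group assignments are assumptions for the sketch, not taken from the patent.

```python
# Hedged sketch of the mapper (table contents assumed): map a pending
# command to a priority group by its type, then set the register of the
# LUN (resource) it operates on to the first value.
TYPE_TO_GROUP = {"read": 0, "erase": 1, "write": 2}  # row index, 0 = highest

def map_command(register_set, cmd_type, lun):
    group = TYPE_TO_GROUP[cmd_type]
    register_set[group][lun] = 1  # mark a pending command for this LUN
    return group
```

Updating the mapping rule, as the sixth possible implementation manner allows, would amount to rewriting entries of the table.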
With reference to the fourth aspect, the first to the fifth possible implementation manners of the present invention, in a sixth possible implementation manner, a mapping rule of the mapper may be updated.
With reference to the fourth aspect, the first to the fifth possible implementation manners of the present invention, in a seventh possible implementation manner, after the NVM command processing unit completes processing the first command, a register corresponding to the first command in the register set is set to a second value.
With reference to the fourth aspect, the first to the seventh possible implementation manners of the present invention, in an eighth possible implementation manner, a first priority is set for a write command to be processed; after the NVM command processing unit completes processing the data transmission stage of the write command, a second priority is set for the result query stage of the write command, and the second priority is higher than the first priority.
With reference to the fourth aspect, the first to eighth possible implementation manners of the present invention, in a ninth possible implementation manner, the NVM command processing unit is coupled to the NVM chip and accesses the NVM chip according to a pending command.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the technical solutions in the prior art, the drawings needed in the description of the embodiments are briefly introduced below. The drawings described below obviously show only some embodiments of the present invention, and a person skilled in the art may derive other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of a prior art solid state storage device;
FIG. 2 is a schematic diagram of an NVM interface controller according to a first embodiment of the present invention; and
fig. 3 is a schematic diagram of an NVM interface controller according to a second embodiment of the present invention.
Detailed Description
The embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are evidently only some, not all, embodiments of the invention. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the invention.
Example 1
Fig. 2 is a block diagram of an NVM interface controller in the control unit (see also FIG. 1, control unit 104) of a solid-state storage device according to a first embodiment of the invention. The NVM interface controller includes a command queue 210, a register set 220, a scheduler 230, and an NVM command processing unit 240.
The command queue 210 is used to receive commands from a user or an upper-layer system. These commands may include read, write, delete, and mark-as-invalid commands; commands to read NVM chip status or to read/set NVM chip features; and user-defined commands. The command queue 210 may be implemented by a memory, a first-in-first-out (FIFO) memory, a register file, or the like.
The NVM command processing unit 240 retrieves commands from the command queue 210 and, according to each command, sends NVM interface commands conforming to the NVM chip interface standard to the NVM chips, or receives data or status from the NVM chips. The NVM command processing unit 240 is coupled to a plurality of NVM chips; illustratively, in FIG. 2 it is coupled to 4 NVM chips through 2 channels, each NVM chip including 2 LUNs, with LUN0-LUN3 provided on the NVM chips of channel 1 and LUN4-LUN7 on the NVM chips of channel 2.
Register set 220 includes a plurality of registers. For purposes of illustration, the registers in register set 220 are organized as rows and columns, with the registers of each column serving the same LUN and the registers of each row belonging to the same priority group. In FIG. 2, a value of 1 in any register of column 1 (counting from the left) of register set 220 indicates that there is a command to be scheduled that accesses LUN0; a value of 1 in any register of column 2 indicates a command to be scheduled that accesses LUN1; and similarly, a value of 1 in any register of column 8 indicates a command to be scheduled that accesses LUN7. Row 1 of register set 220 is priority group P1, row 2 is priority group P2, and row 3 is priority group P3.
Alternatively, each column of register set 220 is for the same die or NVM chip.
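The row/column organization just described can be sketched as a small bit matrix. This is an illustrative model only; the class name `RegisterSet` and the 0-based indices are invented here for exposition, not taken from the patent:

```python
NUM_GROUPS = 3   # rows: priority groups P1 (highest) .. P3 (lowest)
NUM_LUNS = 8     # columns: LUN0 .. LUN7

class RegisterSet:
    """Bit matrix: bits[row][col] == 1 means there is a pending
    command for the LUN of that column at the priority of that row."""
    def __init__(self):
        self.bits = [[0] * NUM_LUNS for _ in range(NUM_GROUPS)]

    def set(self, group, lun):
        self.bits[group][lun] = 1

    def clear(self, group, lun):
        self.bits[group][lun] = 0

# As in Fig. 2: a read command for LUN0 is recorded in row 1 (P1),
# a write command for LUN0 in row 3 (P3); rows are 0-indexed here.
regs = RegisterSet()
regs.set(0, 0)  # read  -> priority group P1, LUN0
regs.set(2, 0)  # write -> priority group P3, LUN0
```

Each column can equally stand for a die or an NVM chip, per the alternative above.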
When a pending command is obtained from the command queue 210, a register in register set 220 is set according to the command's type and the LUN it accesses. As shown in fig. 2, in response to a read command accessing LUN0 being fetched from command queue 210, the register at row 1, column 1 of register set 220 is set to 1. Next, a write command accessing LUN0 is obtained from command queue 210, and the register at row 3, column 1 of register set 220 is set to 1, to indicate that the write command has a lower scheduling priority than the previous read command.
Alternatively, different scheduling priorities may be set for commands accessing the NVM depending on the operating state of the solid-state storage device (power up, power down, low power consumption, normal, etc.).
Optionally, the command obtained from the command queue 210 also includes a priority indication, and the command is further provided with a scheduling priority according to the priority indication of the command.
Still optionally, the write command includes two processing stages: a data transfer stage and a result query stage. As shown in fig. 2, upon receipt of a write command, to indicate the pending data transfer stage, the register at row 3, column 1 of register set 220 is set to 1; after the data transfer stage is completed, the row 3, column 1 register of register set 220 is set to 0 and the row 2, column 1 register is set to 1, to indicate that the scheduling priority of the result query stage of the write command is higher than that of its data transfer stage.
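The phase promotion described above — a write command's indication moving from priority group P3 (data transfer) to P2 (result query) — can be sketched as follows. The helper names are hypothetical, and group indices are 0-based (0 = P1, 1 = P2, 2 = P3):

```python
def start_write(bits, lun):
    # Data transfer stage of a write command waits in P3 (row index 2).
    bits[2][lun] = 1

def finish_data_transfer(bits, lun):
    # Promote the command: clear the P3 bit and set the P2 bit so the
    # result query stage is scheduled at a higher priority.
    bits[2][lun] = 0
    bits[1][lun] = 1

bits = [[0] * 8 for _ in range(3)]   # 3 priority groups x 8 LUNs
start_write(bits, lun=0)
finish_data_transfer(bits, lun=0)
```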
In an embodiment according to the invention, priority group P1 has the highest priority, priority group P2 has the middle priority, and priority group P3 has the lowest priority. When any one of the registers of priority group P1 indicates a pending command, priority group P1 is selected for scheduling; a lower priority group is selected only if there is no pending command in any higher priority group.
After selecting the priority group, the scheduler selects a command to be processed from the priority group. For example, a pending command to access one of the LUNs is selected in a round robin fashion.
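The two-level selection just described — highest non-empty priority group first, then round robin over LUNs within it — might look like the sketch below; `select_pending` and the `rr_ptr` round-robin pointer are illustrative names, not the patent's circuit:

```python
def select_pending(bits, rr_ptr):
    """Return (group, lun) of the pending command to schedule, or None.
    The highest-priority group with any set bit wins; within it, LUNs
    are scanned round-robin starting just after the last choice."""
    num_luns = len(bits[0])
    for group, row in enumerate(bits):        # row 0 = P1 = highest
        if any(row):
            for step in range(1, num_luns + 1):
                lun = (rr_ptr + step) % num_luns
                if row[lun]:
                    return group, lun
    return None

bits = [[0] * 8 for _ in range(3)]
bits[2][3] = 1          # e.g. an erase command on LUN3 waits in P3
bits[0][5] = 1          # a read command on LUN5 waits in P1
choice = select_pending(bits, rr_ptr=0)   # P1 outranks P3: (0, 5)
```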
Optionally, a variety of scheduling policies may be employed for selecting the priority group and/or for selecting the pending command within the selected priority group, for example round robin, weighted round robin, priority-based scheduling, highest-response-ratio scheduling, etc.
Scheduler 230 selects a pending command according to register set 220 and indicates the pending command to the NVM command processing unit 240. The NVM command processing unit 240 processes the pending command as indicated by the scheduler. After the command processing is complete, or the command's stage processing is complete, the registers in register set 220 are modified to indicate that the command or command stage processing is complete.
For example, in one scenario, a read command and an erase command accessing LUN0 are obtained from command queue 210. To process the read command, the register corresponding to LUN0 in priority group P1 is set to 1; to process the erase command, the register corresponding to LUN0 in priority group P3 is set to 1. The scheduler 230 thus first schedules the read command indicated by the register corresponding to LUN0 in priority group P1. After the read command processing is completed, the register corresponding to LUN0 in priority group P1 is cleared. Next, the scheduler 230 selects the erase command indicated by the register corresponding to LUN0 in priority group P3. This reduces the blocking of read commands by erase commands, which have long processing delays, reducing the average processing delay of read commands.
As yet another example, in one scenario there are a large number of write commands and a small number of read commands in the command queue 210. A pending read command or write command is retrieved from the command queue 210, and the register of priority group P2 corresponding to the LUN to be accessed by the pending command is set to 1. Alternatively, if the register indicating that LUN (e.g., LUN2) in priority group P2 has already been set (e.g., set to 1, indicating that a pending command belonging to priority group P2 already exists on LUN2), the register indicating LUN2 in priority group P3 is set instead. The write command includes two processing stages: a data transfer stage and a result query stage. The read command likewise includes two stages: a command transfer stage and a result query stage. When the command transfer stage of a read command is completed, the read command's indication in the register set is cleared, and in priority group P1 the register corresponding to the LUN accessed by the read command is set, so that the result query stage of the read command has high priority. When the data transfer stage of a write command is completed, the write command's indication in the register set is cleared, and in priority group P3 the register corresponding to the LUN accessed by the write command is set, so that the result query stage of the write command has low priority. Thus the probability that the result query stage of a read command is blocked by the data transfer of a write command is reduced, reducing the average processing delay of read commands.
In yet another example, a pending command is retrieved from the command queue 210. If the pending command is a write command, the register of priority group P1 is set to 1 according to the LUN to be accessed by the write command. If the pending command is a read command, the register of priority group P2 is set to 1 according to the LUN to be accessed by the read command. When the data transfer stage of the write command is completed, the write command's indication in the register set is cleared, and in priority group P3 the register corresponding to the LUN accessed by the write command is set, so that the result query stage of the write command has low priority. When the command transfer stage of the read command is completed, the read command's indication in the register set is cleared, and in priority group P1 the register corresponding to the LUN accessed by the read command is set, so that the data transfer stage of the read command has high priority.
Embodiment Two
Fig. 3 is a block diagram of an NVM interface controller of a control section (see also fig. 1, control section 104) of a solid-state storage device of a second embodiment of the invention. The NVM interface controller generates commands to operate the NVM chip by processing the microinstructions. To enable processing of micro instructions, the NVM interface controller includes a micro instruction execution unit 310, a command queue 320, an NVM media interface 330, a micro instruction memory 340, a context memory 360, and/or general purpose registers 350.
The micro instruction memory 340 is used for storing micro instructions. The micro instruction execution unit 310 reads micro instructions from the micro instruction memory 340 and executes them. The micro instructions cause the micro instruction execution unit to issue commands to operate the NVM chips via NVM media interface 330. Illustratively, the commands include commands to read, program, erase, suspend, read a feature (feature), and/or set a feature. The micro instructions also cause the micro instruction execution unit 310 to obtain data read from the NVM chips through NVM media interface 330. One or more micro instructions correspond to one command operating the NVM chip. The micro instructions also include branch and jump micro instructions, which cause the micro instruction execution unit to change the order in which micro instructions are executed. In addition, yield (yield) micro instructions may be provided in a micro instruction sequence; when execution reaches a yield micro instruction, the micro instruction execution unit may schedule and execute other micro instruction sequences.
One or more micro instruction sequences may be stored in the micro instruction memory 340. By way of example, n micro instruction sequences, i.e., micro instruction sequence 1, micro instruction sequence 2, ..., micro instruction sequence n, are stored in the micro instruction memory 340 of fig. 3.
Each micro instruction sequence contains multiple micro instructions executed by the micro instruction execution unit 310. Each micro instruction sequence has its own execution state, so that the micro instruction execution unit 310 can suspend the sequence it is executing and select another micro instruction sequence to execute. When the micro instruction execution unit 310 suspends the executing micro instruction sequence, or when a yield micro instruction is executed, the execution state of the executing micro instruction sequence is saved; when the micro instruction execution unit resumes execution of that micro instruction sequence, the saved execution state is read back, thereby resuming its execution.
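The suspend/resume behavior around yield micro instructions can be illustrated with Python generators — a loose analogy only, not the controller's actual mechanism: each generator frame stands in for one micro instruction sequence's saved execution state, and every `yield` plays the role of a yield micro instruction.

```python
def micro_sequence(name, trace):
    trace.append(f"{name}: phase 1")
    yield                      # yield micro instruction: suspend here;
                               # execution state is preserved in the frame
    trace.append(f"{name}: phase 2")

trace = []
runnable = [micro_sequence("seq1", trace), micro_sequence("seq2", trace)]
# A trivial scheduler: resume each suspended sequence in turn until done.
while runnable:
    still_runnable = []
    for seq in runnable:
        try:
            next(seq)          # resume from the saved execution state
            still_runnable.append(seq)
        except StopIteration:  # sequence finished; drop it
            pass
    runnable = still_runnable
```

Interleaving the two sequences phase by phase mirrors how one execution unit multiplexes several sequences, each resuming exactly where it yielded.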
In one example, general purpose registers 350 are coupled to the micro instruction execution unit 310 for storing and providing the execution state of a micro instruction sequence. The execution state held by general purpose registers 350 includes a program counter (PC), general registers (GR), a physical address register, and/or timers, etc. The program counter indicates the address of the micro instruction currently being executed in the micro instruction sequence. The physical address register indicates the address of the NVM chip accessed by the micro instruction sequence.
In another example, the context memory 360 is used to hold the execution state of micro instruction sequences. The execution state held by the context memory 360 may include the contents of the general purpose registers 350. The execution state of one or more micro instruction sequences may be preserved in the context memory 360, and a micro instruction sequence whose state information is held there may be scheduled to resume execution. The micro instruction execution unit 310 resumes execution of a micro instruction sequence by restoring the state information corresponding to that sequence, stored in the context memory 360, to the general purpose registers 350. An executing micro instruction sequence is referred to as a thread. The same micro instruction sequence has its own execution state for each execution, so that multiple threads can be created from the same micro instruction sequence. In the context memory 360, an execution state is stored for each thread.
In addition, the micro instruction execution unit 310 may access the command queue 320. For example, when executing a micro instruction, the micro instruction execution unit 310 accesses the command queue 320 according to the micro instruction.
When a command in the command queue 320 is processed, the micro instruction sequence corresponding to the command is acquired and executed by the micro instruction execution unit 310 to complete the processing of the command. The conversion from a command in the command queue 320 to a micro instruction sequence may be accomplished by a conversion circuit (not shown), or by the micro instruction execution unit 310 itself. When fetching a micro instruction sequence, the sequence may be populated or adapted based on the command in the command queue 320, so as to adapt the micro instruction sequence to that command. As another example, a micro instruction sequence controls the micro instruction execution unit 310 to access and process commands in the command queue 320, and the corresponding micro instruction sequence is selected for execution according to the commands in the command queue 320.
In an embodiment according to the invention, threads are created or used based on the LUNs to be accessed. For example, thread 1 is used to access LUN 1 and/or thread 2 is used to access LUN 2. In one example, the context memory 360 can accommodate the same number of threads as the number of LUNs coupled to the NVM interface controller of FIG. 3; a thread is allocated or reserved for each LUN, and when a request for a LUN is processed, the thread corresponding to that LUN is scheduled. In another example, the context memory 360 accommodates fewer threads than the number of LUNs coupled to the NVM interface controller of FIG. 3; when a command accessing a LUN is processed, the command is processed using the thread already assigned to that LUN, or a new thread is assigned.
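The second case — a thread bound to a LUN on first use and reused afterwards — can be sketched as below. The `make_context` helper and the dictionary are invented stand-ins for thread allocation and the context memory, not the patent's hardware:

```python
def make_context(lun):
    # Hypothetical stand-in for allocating a thread context slot.
    return {"lun": lun, "pc": 0}

threads = {}

def thread_for(lun):
    """Reuse the thread already assigned to this LUN, or assign a new one."""
    if lun not in threads:
        threads[lun] = make_context(lun)
    return threads[lun]

t = thread_for(2)   # first command for LUN2: a new thread is assigned
```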
A LUN cache (not shown) is provided to store data read from or written to a LUN. A LUN cache is provided for each thread; its size corresponds to the page size of the NVM chip, and providing a larger LUN cache is advantageous for improved performance.
In another example, the LUN cache is provided by DRAM (Dynamic Random Access Memory) or other memory external to the NVM interface controller of FIG. 3.
Threads may be scheduled. The NVM interface controller also includes a register set 370, which includes a plurality of registers. For purposes of illustration, the registers in register set 370 are organized into rows and columns, with the registers of each column serving the same LUN, and the registers of each row belonging to the same priority group. By way of example, LUNs are in one-to-one correspondence with threads, and each thread is dedicated to handling commands that access its corresponding LUN. In FIG. 3, each register in column 1 (counting from the left) of register set 370 whose value is 1 indicates that there is an event to be processed that accesses LUN 0; each register in column 2 whose value is 1 indicates that there is a pending event accessing LUN 1. Similarly, each register in column 8 whose value is 1 indicates that there is a pending event accessing LUN 7. Events have a variety of meanings. In one example, an event is generated from a command to be processed in the command queue 320; a command may correspond to one or more events. For example, an event may indicate a pending command to read or set a feature (feature) of the NVM chip. Or one event may indicate the data transfer stage of a pending write command or the command transfer stage of a read command, while another event indicates the result query stage of a read or write command.
Row 1 of register set 370 is priority group P1; row 2 is priority group P2; and row 3 is priority group P3.
Alternatively, each column of register set 370 is for the same die or NVM chip.
In an embodiment according to the invention, threads are scheduled for processing events. The thread corresponding to the LUN is scheduled according to the accessed LUN indicated by the pending event.
When a pending command is obtained from the command queue, a thread is created or scheduled to process the command, and a register in the register set is set to indicate that there is an event to be processed, according to the command queue from which the command was obtained, the command type, and/or the LUN accessed. For example, a read command accessing LUN0 is obtained from the command queue; to indicate that the read command is pending, the register at row 1, column 1 of the register set is set to 1. Next, a write command accessing LUN0 is obtained from the command queue; to indicate that the write command is pending, the register at row 3, column 1 of the register set is set to 1, the priority of row 3 being lower than the priority of row 1.
Optionally, the write command includes two processing stages: a data transfer stage and a result query stage. Taking a write command accessing LUN0 as an example, to indicate a pending data transfer stage, the register at row 3, column 1 of register set 370 is set to 1; after the data transfer stage is completed, the row 3, column 1 register of register set 370 is set to 0 and the row 2, column 1 register is set to 1, to indicate that the scheduling priority of the result query stage is higher than that of the data transfer stage.
Scheduler 380 schedules threads according to register set 370 to process the pending events indicated by the register set. In the example of fig. 3, scheduler 380 first selects a priority group; by way of example, the priority group that has a pending event and has the highest priority is selected. For example, priority group P1 has the highest priority while priority group P3 has the lowest. When any one of the registers of priority group P1 indicates a pending event, priority group P1 is selected for scheduling; a lower priority group is selected only when there is no pending event in priority group P1. Optionally, the priority of a priority group may be temporarily adjusted, or the priorities may be inverted, when priority groups are assigned. After selecting the priority group, the scheduler 380 selects a pending event from it; for example, a pending event accessing one of the LUNs is selected in a round robin fashion.
Alternatively, the selection of a priority group, and/or the selection of the thread to be executed within the selected priority group, may employ various scheduling policies, such as round robin, weighted round robin, priority-based scheduling, highest-response-ratio scheduling, and the like.
Based on the selected pending event, the scheduler 380 schedules a thread to process it; for example, according to the LUN accessed by the event, the thread corresponding to that LUN is scheduled to process the pending event selected by the scheduler 380. The scheduler 380 also indicates the scheduled thread to the micro instruction execution unit 310.
Optionally, the scheduler 380 also indicates the scheduled pending event to the micro instruction execution unit 310. When there are multiple pending events on a LUN, indicating to the micro instruction execution unit 310 which pending event was scheduled is advantageous. Optionally, the scheduled thread itself identifies the pending event with the highest priority on its corresponding LUN and processes that event.
The micro instruction execution unit 310 executes the threads to be executed as directed by the scheduler 380. After a thread's execution is complete, or the thread's stage processing is complete, the registers in register set 370 are modified to indicate that the thread or thread stage processing is complete.
To begin executing a thread, the micro instruction execution unit 310 obtains the thread's context from the context memory 360 and, depending on the thread context, restores the values of the general purpose registers 350 used by the thread or switches the register window of the context memory 360. When the thread's stage processing is complete (e.g., the thread has executed a yield micro instruction), the micro instruction execution unit 310 saves the context of the current thread in the context memory 360 and obtains the next thread to be executed from the scheduler 380.
As an example, in one scenario there are a large number of write commands and a small number of read commands in the command queue 320, and pending read commands or write commands are obtained from the command queue 320. For each LUN, a thread is created or resumed to handle the read commands and/or write commands accessing that LUN. Depending on the LUN a command accesses, the register of priority group P2 is set to 1 for a read command and the register of priority group P3 is set to 1 for a write command. The read command includes two stages: a command transfer stage and a result query stage. The write command includes two processing stages: a data transfer stage and a result query stage. After each stage is completed, the thread suspends its execution by executing a yield micro instruction. When the command transfer stage of a thread processing a read command is completed, the corresponding indication in register set 370 is cleared, and in priority group P1 the register corresponding to the LUN accessed by the thread is set (to 1), so that the result query stage of the read command has high priority. When the data transfer stage of a thread processing a write command is completed, the corresponding indication in register set 370 is cleared, and in priority group P3 the register corresponding to the LUN accessed by the thread is set, so that the result query stage of the write command has low priority. Thus the probability that the result query stage of a thread processing a read command is blocked by a thread processing a write command is reduced, reducing the average processing delay of read commands.
In a further embodiment, command queue 320 includes N command queues, e.g., N=2: a high priority queue and a low priority queue. The register set 370 includes M priority groups, where M is an integer and M > N. Commands in the N command queues are mapped to register set 370 by a mapper (not shown in fig. 3).
The mapping rules may be adjusted or configured. Illustratively, a mapper (not shown in FIG. 3) may map commands of the high priority queue to priority group P1 and commands of the low priority queue to priority group P3, while mapping read commands of the low priority queue and write commands of the high priority queue to priority group P2.
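That illustrative mapping rule (N = 2 queues onto M = 3 priority groups) can be written out directly; the function name and the "high"/"low" queue labels are assumptions for exposition:

```python
def map_to_group(queue, command):
    """Map (queue, command type) to a priority group, following the
    illustrative rule above: high-priority reads to P1, low-priority
    writes to P3, and the remaining two cases to P2."""
    if queue == "high":
        return "P2" if command == "write" else "P1"
    else:  # low priority queue
        return "P2" if command == "read" else "P3"
```

Because the rule is a plain function of queue and command type, it can be reconfigured without touching the register set itself.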
Optionally, the NVM interface controller further includes a mask register set to indicate whether the pending events indicated in register set 370 need to be scheduled. In one example, a mask register (not shown) includes 3 bits, one for each priority group of register set 370; the priority group corresponding to a bit is allowed to be scheduled when that bit is set, and is not scheduled when the bit is cleared. In another example, the mask register includes 8 bits, one for each LUN, and only when a bit of the mask register is set is a command or thread accessing the corresponding LUN dispatched. In yet another example, the mask register includes the same number of registers as register set 370, with bits of the mask register corresponding one-to-one to bits in register set 370; each bit of the mask register indicates whether the command or event indicated by the corresponding register in register set 370 should be processed.
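The last variant — one mask bit per register — amounts to a bitwise AND of the mask over the register set. A minimal sketch, with invented names and a toy 2x2 size:

```python
def schedulable(bits, mask):
    """A pending bit is visible to the scheduler only when the
    corresponding mask bit is also set (bitwise AND per register)."""
    return [[b & m for b, m in zip(bit_row, mask_row)]
            for bit_row, mask_row in zip(bits, mask)]

bits = [[1, 0], [1, 1]]   # pending events (2 priority groups x 2 LUNs)
mask = [[1, 1], [0, 1]]   # the second group's LUN0 bit is masked off
vis = schedulable(bits, mask)
```

The scheduler would then run its usual group/round-robin selection over `vis` instead of the raw register set.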
In the above embodiments, the scheduled object is a thread or a command to be processed. More generally, the scheduled object may take various forms, such as a process, a task, an instruction sequence, and so on. In the above embodiments, the resource operated on by the scheduled object is a LUN, a die, or an NVM chip, and each column of register set 370 indicates one of the resources operated on by scheduled objects. More generally, the resources operated on by scheduled objects may take other forms, such as queues, memory areas, etc.
It will be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, respectively, can be implemented by various means including computer program instructions. These computer program instructions may be loaded onto a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions which execute on the computer or other programmable data processing apparatus create means for implementing the functions specified in the flowchart block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including computer-readable instructions for implementing the function specified in the flowchart block or blocks. The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process, such that the instructions which execute on the computer or other programmable apparatus provide operations for implementing the functions specified in the flowchart block or blocks.
Accordingly, blocks of the block diagrams and flowchart illustrations support combinations of means for performing the specified functions, combinations of operations for performing the specified functions and program instruction means for performing the specified functions. It will also be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, can be implemented by special purpose hardware-based computer systems that perform the specified functions or operations, or combinations of special purpose hardware and computer instructions.
Although the present invention has been described with reference to examples, which are intended for purposes of illustration only and not to be limiting of the invention, variations, additions and/or deletions to the embodiments may be made without departing from the scope of the invention.
Many modifications and other embodiments of the inventions set forth herein will come to mind to one skilled in the art to which these embodiments pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the inventions are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.

Claims (8)

1. A scheduling method, comprising:
selecting, from the register set, a first register having a first value according to the resource to be operated on and the priority to which the register belongs, the first register corresponding to a first event to be processed;
scheduling a first thread corresponding to the first register to process a first event to be processed; wherein,
the register set comprises a plurality of registers, each register having the first value corresponding to an event to be processed; the registers in the register set are organized into rows and columns, registers in the same column are used for indicating threads operating the same resource, and registers in the same row belong to the same priority group;
and selecting a register corresponding to the event to be processed from the register group according to the priority set for the event to be processed and the operation resource indicated by the event to be processed, and setting the value of the register as a first value.
2. The method as recited in claim 1, further comprising:
in response to completion of the first pending event processing, a register in the register set corresponding to the first pending event is modified to a second value to indicate completion of the first pending event processing.
3. The method according to claim 1 or 2, further comprising:
Selecting a high priority group indicating events to be processed according to the priority of the priority group; and
from the high priority group, a first register having a first value is selected in a round robin fashion.
4. The method according to claim 1 or 2, further comprising:
acquiring a command to be processed from a command queue, and setting a first priority for the command to be processed; and
the register corresponding to the resource operated by the pending command is set to a first value from the priority group having the first priority in accordance with the resource operated by the pending command.
5. A scheduling apparatus, comprising a command queue, a micro instruction memory, a micro instruction execution unit, a register set, a scheduler, and an NVM media interface; wherein the command queue is used for receiving commands from a user or an upper-layer system; and the micro instruction execution unit operates the NVM media interface to process a pending command by executing a thread, wherein the thread is an executable micro instruction sequence;
the register group is used for indicating the event to be processed and the priority of the event to be processed; the scheduler is used for scheduling the threads according to the registers in the register group; the micro instruction execution unit receives the instruction of the scheduler and executes the scheduled thread; the register group comprises a plurality of registers, each register with a first value indicates an event to be processed, the registers in the register group are organized into rows and columns, the registers in the same column are used for indicating threads operating the same resource, and the registers in the same row belong to the same priority group.
6. The scheduling apparatus of claim 5, further comprising a mask register set to indicate whether a pending event indicated in the register set needs to be processed.
7. The scheduling apparatus of claim 5 or 6, further comprising a mapper for selecting a first priority group of the register groups having a first priority according to a command to be processed in the command queue; and setting a register corresponding to the resource operated by the command to be processed to a first value from a first priority group having a first priority according to the resource operated by the command to be processed.
8. The scheduling apparatus according to claim 5 or 6, wherein the scheduler selects a high priority group indicating pending events, selects a first register having a first value from the high priority group in a round robin fashion, and schedules a thread to process the pending event indicated by the first register.
CN201610861793.5A 2016-09-28 2016-09-28 Scheduling method and device Active CN107870779B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201610861793.5A CN107870779B (en) 2016-09-28 2016-09-28 Scheduling method and device
CN202311576115.0A CN117555598A (en) 2016-09-28 2016-09-28 Scheduling method and device


Publications (2)

Publication Number Publication Date
CN107870779A CN107870779A (en) 2018-04-03
CN107870779B true CN107870779B (en) 2023-12-12

Family

ID=61761684


Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110554833B (en) * 2018-05-31 2023-09-19 北京忆芯科技有限公司 Parallel processing IO commands in a memory device
CN113254179B (en) * 2021-06-03 2022-03-01 核工业理化工程研究院 Job scheduling method, system, terminal and storage medium based on high response ratio
CN114546294B (en) * 2022-04-22 2022-07-22 苏州浪潮智能科技有限公司 Solid state disk reading method, system and related components

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5524250A (en) * 1991-08-23 1996-06-04 Silicon Graphics, Inc. Central processing unit for processing a plurality of threads using dedicated general purpose registers and masque register for providing access to the registers
US6360243B1 (en) * 1998-03-10 2002-03-19 Motorola, Inc. Method, device and article of manufacture for implementing a real-time task scheduling accelerator
CN1851652A (en) * 2006-05-23 2006-10-25 浙江大学 Method for realizing process priority-level round robin scheduling for embedded SRAM operating system
CN1975663A (en) * 2005-11-30 2007-06-06 国际商业机器公司 Apparatus having asymmetric hardware multithreading support for different threads
WO2008036852A1 (en) * 2006-09-21 2008-03-27 Qualcomm Incorporated Graphics processors with parallel scheduling and execution of threads
CN101796487A (en) * 2007-08-10 2010-08-04 内特可力亚斯系统股份有限公司 Virtual queue processing circuit and task processor
CN102968289A (en) * 2011-08-30 2013-03-13 苹果公司 High priority command queue for peripheral component
CN103699437A (en) * 2013-12-20 2014-04-02 华为技术有限公司 Resource scheduling method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090138683A1 (en) * 2007-11-28 2009-05-28 Capps Jr Louis B Dynamic instruction execution using distributed transaction priority registers
US20130179614A1 (en) * 2012-01-10 2013-07-11 Diarmuid P. Ross Command Abort to Reduce Latency in Flash Memory Access

Also Published As

Publication number Publication date
CN117555598A (en) 2024-02-13
CN107870779A (en) 2018-04-03

Similar Documents

Publication Publication Date Title
CN107870866B (en) IO command scheduling method and NVM interface controller
US20200356312A1 (en) Scheduling access commands for data storage devices
CN110088723B (en) System and method for processing and arbitrating commit and completion queues
CN110088725B (en) System and method for processing and arbitrating commit and completion queues
CN107885456B (en) Reducing conflicts for IO command access to NVM
US10156994B2 (en) Methods and systems to reduce SSD IO latency
EP3477461A1 (en) Devices and methods for data storage management
CN107305504B (en) Data storage device, control unit thereof and task sequencing method thereof
US10782915B2 (en) Device controller that schedules memory access to a host memory, and storage device including the same
US10437519B2 (en) Method and mobile terminal for processing write request
US10635349B2 (en) Storage device previously managing physical address to be allocated for write data
CN106951374B (en) Method for checking block page address and apparatus thereof
CN107870779B (en) Scheduling method and device
CN114661457A (en) Memory controller for managing QoS enforcement and migration between memories
TW201901454A (en) Methods for scheduling and executing commands in a flash memory and apparatuses using the same
US20220350655A1 (en) Controller and memory system having the same
US10089039B2 (en) Memory controller, memory device having the same, and memory control method
CN108628759B (en) Method and apparatus for out-of-order execution of NVM commands
CN108572932B (en) Multi-plane NVM command fusion method and device
CN107885667B (en) Method and apparatus for reducing read command processing delay
TW202004504A (en) Memory device, control method thereof and recording medium
US9245600B2 (en) Semiconductor device and operating method thereof
CN108345428B (en) Control intensive control system and method thereof
CN109144907B (en) Method for realizing quick reading and medium interface controller
CN111736779B (en) Method and device for optimizing execution of NVM interface command

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant