CN111755057A - Engine scheduling method, system and related device of channel - Google Patents

Engine scheduling method, system and related device of channel

Info

Publication number
CN111755057A
Authority
CN
China
Prior art keywords
engine
coding
channel
list
descriptor
Prior art date
Legal status
Granted
Application number
CN202010745287.6A
Other languages
Chinese (zh)
Other versions
CN111755057B (en)
Inventor
周永旺
Current Assignee
Beijing Inspur Data Technology Co Ltd
Original Assignee
Beijing Inspur Data Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Inspur Data Technology Co Ltd
Priority to CN202010745287.6A
Publication of CN111755057A
Application granted
Publication of CN111755057B
Legal status: Active

Classifications

    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11C STATIC STORES
    • G11C16/00 Erasable programmable read-only memories
    • G11C16/02 Erasable programmable read-only memories electrically programmable
    • G11C16/06 Auxiliary circuits, e.g. for writing into memory
    • G11C16/10 Programming or data input circuits

Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Memory System (AREA)

Abstract

The application provides a channel engine scheduling method, which comprises the following steps: acquiring a channel request; judging whether an engine list is empty; if not, allocating a corresponding coding engine to the channel request from the engine list, and recording the correspondence between the channel request and the coding engine using a descriptor; calculating a check code corresponding to the code while the coding engine processes the code contained in the channel request; and acquiring the descriptor, the check code and the coding engine status word, sending the check code to the corresponding channel according to the descriptor, and writing the coding engine back into the engine list when the coding engine status word indicates an idle state. The method improves the processing efficiency of channel requests, raises the utilization rate of the coding engines, and reduces hardware design cost. The application also provides a channel engine scheduling system, a computer-readable storage medium and an electronic device, which have the above beneficial effects.

Description

Engine scheduling method, system and related device of channel
Technical Field
The present application relates to the field of storage, and in particular, to a method, a system, and a related apparatus for scheduling an engine of a channel.
Background
At present, data-intensive enterprise workloads mainly rely on high-bandwidth NVMe SSDs (Non-Volatile Memory Express Solid State Drives) to achieve truly ultra-low latency and high performance. An NVMe SSD is composed of modules such as a controller and flash memory storage media. However, as each memory cell in the storage medium stores more and more bits, the raw bit error rate of the flash memory keeps rising. It therefore becomes necessary to use LDPC (Low-Density Parity-Check) error correction techniques based on hard decision and soft decision.
Generally, a flash memory controller has 8 channels, and each channel is connected to several flash memory storage media. For the SSD storage system as a whole, reducing the controller area and improving read/write performance are the most important implementation objectives. At present, most controllers use one LDPC encoder per channel to cope with the ever-increasing flash memory error rate, but each encoder consumes considerable resources, which easily increases the area and power consumption of the controller and greatly raises the hardware production cost.
Disclosure of Invention
The application aims to provide a channel engine scheduling method, system, computer-readable storage medium and electronic device that can reduce the hardware cost and power consumption of a flash memory controller.
In order to solve the above technical problem, the present application provides a channel engine scheduling method, the specific technical solution of which is as follows:
acquiring a channel request;
judging whether an engine list is empty;
if not, allocating a corresponding coding engine to the channel request from the engine list, and recording the correspondence between the channel request and the coding engine using a descriptor;
calculating a check code corresponding to the code while the coding engine processes the code contained in the channel request;
and acquiring the descriptor, the check code and the coding engine status word, sending the check code to the corresponding channel according to the descriptor, and writing the coding engine back into the engine list when the coding engine status word indicates an idle state.
Optionally, before allocating the corresponding coding engine to the channel request from the engine list, the method further includes:
assigning a priority to each coding engine in the engine list;
and the allocating the corresponding coding engine to the channel request from the engine list includes:
allocating the coding engine with the currently highest priority to the channel request from the engine list.
Optionally, the processing, by the coding engine, of the code contained in the channel request includes:
reading the code in the channel request, and writing the code into a first buffer of the coding engine;
injecting the descriptor into a trigger queue;
after the code has been completely written into the first buffer, pushing the descriptor from the trigger queue into a work queue;
and performing a ping-pong operation on the code using the first buffer and a second buffer of the coding engine while the descriptor is in the work queue.
Optionally, the assigning a priority to each coding engine in the engine list includes:
assigning a priority to each coding engine in the engine list according to the data processing efficiency of each coding engine.
Optionally, the acquiring the descriptor, the check code, and the coding engine status word includes:
acquiring the coding engine status word;
and when the coding engine status word indicates an idle state, acquiring the descriptor and the check code.
Optionally, the calculating the check code corresponding to the code includes:
calculating the check code corresponding to the code by using an LDPC algorithm.
The present application also provides a channel engine scheduling system, including:
an acquisition module, configured to acquire a channel request;
a judging module, configured to judge whether the engine list is empty;
an engine allocation module, configured to allocate a corresponding coding engine to the channel request from the engine list, and to record the correspondence between the channel request and the coding engine using a descriptor;
a check code calculation module, configured to calculate a check code corresponding to the code while the coding engine processes the code contained in the channel request;
and an engine recycling module, configured to acquire the descriptor, the check code and the coding engine status word, send the check code to the corresponding channel according to the descriptor, and write the coding engine back into the engine list when the coding engine status word indicates an idle state.
Optionally, the system further includes:
a priority assignment module, configured to assign a priority to each coding engine in the engine list;
the engine allocation module being specifically a module for allocating the coding engine with the currently highest priority to the channel request from the engine list.
The present application also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method set forth above.
The present application further provides an electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the method described above when calling the computer program in the memory.
The application provides a channel engine scheduling method, which comprises the following steps: acquiring a channel request; judging whether an engine list is empty; if not, allocating a corresponding coding engine to the channel request from the engine list, and recording the correspondence between the channel request and the coding engine using a descriptor; calculating a check code corresponding to the code while the coding engine processes the code contained in the channel request; and acquiring the descriptor, the check code and the coding engine status word, sending the check code to the corresponding channel according to the descriptor, and writing the coding engine back into the engine list when the coding engine status word indicates an idle state.
Because the number of channels is larger than the number of coding engines, it cannot be guaranteed that every channel request is handled by a coding engine immediately, and in the prior art the coding engines as a whole can only serve one channel request at a time. With the engine list, it is no longer necessary to keep all coding engines on standby when a channel request is received: if the engine list is not empty, an available coding engine is selected from it to handle the request, so the next channel request can be processed before the previous one has finished. This improves the processing efficiency of channel requests, raises the utilization rate of the coding engines, and reduces hardware design cost. The application further provides a channel engine scheduling system, a computer-readable storage medium and an electronic device, which have the above beneficial effects and are not described again here.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an engine scheduling method for a channel according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of an engine scheduling system of a channel according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present application.
Currently, a flash memory controller usually has 4 or 8 flash memory channels, while the encoder it uses, for example an LDPC encoder, usually has only 3 or 6 coding engines, so a one-to-one correspondence between flash memory channels and coding engines cannot be realized. As a result, all coding engines must be kept on standby when a channel request is received, and the next channel request can be executed only after the previous channel request has finished. Obviously, in this case the utilization rate of the coding engines is low and the coding efficiency is low. To solve this problem, the present application provides a channel engine scheduling method.
Referring to fig. 1, fig. 1 is a flowchart of a channel engine scheduling method according to an embodiment of the present application, where the method includes:
s101: acquiring a channel request;
the step is intended to obtain the channel request, and the specific manner of obtaining the channel request and what kind of channel request is adopted is not limited herein. In particular, the channel request may be issued by a flash memory controller.
S102: judging whether the engine list is empty; if not, proceeding to step S103;
This step aims to judge whether the engine list is empty. If the engine list is empty, no idle coding engine is available for any channel, and this step may be executed in a loop until the engine list is no longer empty.
S103: distributing a corresponding coding engine for the channel request from the engine list, and recording the corresponding relation between the channel request and the coding engine by using a descriptor;
this step needs to allocate a corresponding coding engine to the channel request from the engine list, and it should be noted that there may be a corresponding execution priority between the channel request and the coding engine, but there is no fixed execution relationship, that is, there is no channel request that must be encoded by the specified coding engine. Therefore, as long as there is an encoding engine in the engine list, the corresponding encoding engine can be allocated for the channel request. When there are multiple coding engines in the engine list, if there is no priority between coding engines, the coding engines can be executed in sequence according to the numbering order of the coding engines, and if there is priority between coding engines, the coding engines can be allocated in sequence according to the priority. It should be noted that the coding engines in the engine list are all available coding engines, and the coding engine that is performing the coding task is not in the engine list.
It is easy to understand that this embodiment requires the engine list to be configured by default before this step is performed, i.e. all coding engines are filled into the engine list at initialization for use in channel request allocation.
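For illustration only, the following minimal Python sketch shows how the initialized engine list and the allocation of step S103 could be organized. Python serves here as executable pseudocode; the patent targets a hardware controller, and names such as EngineScheduler and allocate are assumptions rather than part of the patent.

from collections import deque

class EngineScheduler:
    """Sketch of S102/S103: idle engines live in the engine list and are
    handed out to channel requests on a first-available basis."""

    def __init__(self, num_engines):
        # At initialization, every coding engine is written into the engine list.
        self.engine_list = deque(range(num_engines))
        self.descriptors = {}  # request_id -> engine_id (the descriptor mapping)

    def allocate(self, request_id):
        """S102: check the engine list; S103: allocate an engine and record
        the request-to-engine correspondence in a descriptor."""
        if not self.engine_list:      # list empty: no idle engine, caller retries later
            return None
        engine_id = self.engine_list.popleft()
        self.descriptors[request_id] = engine_id
        return engine_id

# Example: 6 coding engines shared by 8 channels (the mismatch described in the background).
scheduler = EngineScheduler(num_engines=6)
print(scheduler.allocate(request_id=0))   # -> 0 (that engine leaves the list)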
As a preferred implementation of this step, if priorities exist among the coding engines, each coding engine in the engine list needs to be assigned a priority before this step is executed. In that case, this step may allocate the coding engine with the currently highest priority to the channel request from the engine list. How to set the priorities is not limited here; they may be set according to the data processing efficiency of each coding engine. More specifically, when each coding engine performs coding, its data processing time may be recorded, and the amount of data it processes per unit time may be derived from the actual amount of coding; this throughput is taken as the data processing efficiency of the coding engine and used to determine its priority.
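The throughput-based priority described above can be illustrated with the following sketch; the per-engine statistics, the function name and the numbers are hypothetical.

def processing_efficiency(bytes_encoded, processing_seconds):
    """Data processed per unit time, used here as the priority key."""
    return bytes_encoded / processing_seconds

# Hypothetical per-engine statistics gathered while each engine was coding.
stats = {
    0: (4096 * 100, 0.020),   # engine 0: bytes encoded, seconds spent
    1: (4096 * 100, 0.025),
    2: (4096 * 100, 0.018),
}

# Higher throughput means higher priority; allocate from the front of this ordering.
priority_order = sorted(stats, key=lambda e: processing_efficiency(*stats[e]), reverse=True)
print(priority_order)   # -> [2, 0, 1]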
In addition, this step records the mapping between the channel request and the coding engine using a descriptor, i.e. records the mapping between the channel that has been allocated a coding engine and its corresponding coding engine. Of course, the correspondence among the channel, the coding engine and the check code may also be recorded afterwards. It should be noted that the channel in this application is the sender of the channel request; obviously, there is a one-to-one correspondence between a channel and its channel request, so the descriptor plays the same role whether it records the channel or the channel request.
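One possible shape for such a descriptor is sketched below; the field names are illustrative assumptions chosen to match the correspondences mentioned in the text (channel, request, engine, and later the check code).

from dataclasses import dataclass
from typing import Optional

@dataclass
class Descriptor:
    """Records which coding engine serves which channel request.

    Because a channel and its request correspond one-to-one, storing either
    identifies the other; the check code can be filled in after encoding."""
    channel_id: int
    request_id: int
    engine_id: int
    check_code: Optional[bytes] = None   # filled in once the engine finishes

desc = Descriptor(channel_id=3, request_id=42, engine_id=1)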
S104: calculating a check code corresponding to the code while the coding engine processes the code contained in the channel request;
This step aims to calculate the check code corresponding to the code. It should be noted that, since the check code is obtained from the code, and the code comes from the channel request, the channel request corresponding to the resulting check code can be determined from the descriptor. How the check code is calculated is not particularly limited; it may be calculated with an LDPC algorithm. LDPC is an ECC (Error Correcting Code) algorithm.
The coding engine processes the code in the channel request to obtain the check code; this is the conventional coding process performed by a coding engine and is not described again here. It should be noted that each channel request yields only one check code, but the check code is usually large in size.
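Purely to illustrate the idea of one check code per code word, the following toy uses a small systematic (7,4) linear block code; this is a stand-in for the much larger LDPC code a real controller would use, not the patent's encoder.

import numpy as np

# Parity-generation matrix of a small (7,4) systematic code over GF(2);
# a real controller would use a much larger LDPC parity-check matrix instead.
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]], dtype=np.uint8)

def check_code(data_bits: np.ndarray) -> np.ndarray:
    """Parity (check) bits for one code word: data_bits @ P over GF(2)."""
    return (data_bits @ P) % 2

data = np.array([1, 0, 1, 1], dtype=np.uint8)   # the "code" carried by a channel request
print(check_code(data))                          # -> [0 1 0]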
S105: acquiring the descriptor, the check code and the coding engine status word, sending the check code to the corresponding channel according to the descriptor, and writing the coding engine back into the engine list when the coding engine status word indicates an idle state.
When the coding engine is determined to be idle according to the coding engine status word, the coding engine needs to be written back into the engine list so that other channel requests can obtain it for coding. The frequency of acquiring the coding engine status word is not limited here: the status word may be polled periodically, polled aperiodically, or acquired only right after a channel request is received in order to determine whether the engine is currently idle and available. Meanwhile, after the coding engine has generated the check code, the engine's status word can be updated to indicate that the coding process is complete, so that the working state of each coding engine is updated in time and the processing efficiency of channel requests is improved.
The obtained check code needs to be returned to the corresponding channel according to the relationship between the channel request (or channel) and the coding engine recorded in the descriptor, thereby completing the coding process.
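A standalone sketch of this recycling step follows (all names and data shapes are illustrative assumptions): the check code is routed to the channel recorded in the descriptor, and the engine is written back into the engine list once its status word reads idle.

from collections import deque

IDLE, BUSY = 0, 1   # illustrative status-word values

def complete(request_id, check_code, descriptors, status_words, engine_list, channels):
    # Sketch of S105: deliver the check code and recycle the idle engine.
    channel_id, engine_id = descriptors[request_id]   # descriptor: request -> (channel, engine)
    channels[channel_id].append(check_code)           # return the check code to its channel
    if status_words[engine_id] == IDLE:               # the engine reports it has finished
        del descriptors[request_id]
        engine_list.append(engine_id)                 # the engine becomes allocatable again

# Tiny usage example with made-up state:
descriptors  = {42: (3, 1)}          # request 42 came from channel 3 and uses engine 1
status_words = {1: IDLE}
engine_list  = deque()
channels     = {3: []}
complete(42, b"\x0a\x0b", descriptors, status_words, engine_list, channels)
print(engine_list, channels[3])      # deque([1]) [b'\n\x0b']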
It should be noted that all of the above processes of this embodiment can be performed by a scheduling module or a scheduling thread in the system acting as the execution subject.
By adopting the engine list, it is not necessary to keep all coding engines on standby when a channel request is received; instead, an available coding engine is selected from the engine list for processing, so that the next channel request can be processed before the previous channel request has finished. This improves the processing efficiency of channel requests, raises the utilization rate of the coding engines, and reduces the hardware design cost.
On the basis of the above embodiment, as a preferred embodiment and in combination with the previous embodiment, the process by which the coding engine handles the code is further explained below using a three-step queue. It should be noted that each coding engine has two separate data buffers, a first buffer and a second buffer, and that the three-step queue consists of a trigger queue, a work queue and a completion queue. The coding engine may then perform the following steps:
S201: reading the code in the channel request, and writing the code into the first buffer of the coding engine;
S202: injecting the descriptor into the trigger queue;
S203: after the code has been completely written into the first buffer, pushing the descriptor from the trigger queue into the work queue;
S204: performing a ping-pong operation on the code using the first buffer and the second buffer of the coding engine while the descriptor is in the work queue.
In step S202, the descriptor is the descriptor of the previous embodiment; it also contains the request information corresponding to the channel request and serves as the execution status tag of the channel request within the three-step queue. The descriptor is injected into the trigger queue, which records the channel requests currently being handled. Once the code has been completely written into the first buffer, meaning that the coding engine is about to start data processing, the descriptor is pushed into the work queue, indicating that the coding engine is processing the code corresponding to the channel request. Thereafter, the coding engine performs a ping-pong operation on the code using the two buffers, which is a common data stream processing method in this field. While the descriptor is in the work queue, the check code corresponding to the code is calculated with an LDPC algorithm.
When the coding engine finishes execution, the descriptor can be pushed into the completion queue, and the code and check code also enter the completion queue. At this point, the flash memory controller reads the check code from the completion queue and determines which coding engine has become idle; once the idle coding engine is determined, the channel request is popped out of the three-step queue, completing the coding process for one channel request.
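The trigger/work/completion flow together with the ping-pong buffers might look as sketched below; the queue names are taken from the text, while the buffer size, the placeholder encode_chunk function and the dummy parity it returns are assumptions.

from collections import deque

trigger_q, work_q, completion_q = deque(), deque(), deque()

def encode_chunk(chunk):
    """Placeholder for the real LDPC computation on one buffer-sized chunk."""
    return sum(chunk) % 2   # dummy parity, not a real check code

def process_request(descriptor, code, buf_size=4):
    # S201/S202: the code starts filling the first buffer; the descriptor enters the trigger queue.
    trigger_q.append(descriptor)
    buffers = [code[:buf_size], []]          # first buffer filled, second buffer empty

    # S203: once the first buffer is full, the descriptor moves to the work queue.
    work_q.append(trigger_q.popleft())

    # S204: ping-pong - encode one buffer while the other one is being refilled.
    parities, remaining, active = [], code[buf_size:], 0
    while buffers[active]:
        buffers[1 - active] = remaining[:buf_size]       # refill the idle buffer
        remaining = remaining[buf_size:]
        parities.append(encode_chunk(buffers[active]))   # encode the active buffer
        buffers[active] = []
        active = 1 - active                              # swap buffer roles

    # Completion: the descriptor and the check code enter the completion queue.
    completion_q.append((work_q.popleft(), parities))
    return parities

print(process_request(descriptor={"request": 7, "engine": 2}, code=[1, 0, 1, 1, 0, 1, 1, 0, 1]))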
In the following, a channel engine scheduling system provided by an embodiment of the present application is introduced; the engine scheduling system described below and the channel engine scheduling method described above may be referred to in correspondence with each other.
Referring to fig. 2, fig. 2 is a schematic structural diagram of a channel engine scheduling system according to an embodiment of the present application. The present application further provides a channel engine scheduling system, including:
an obtaining module 100, configured to obtain a channel request;
a judging module 200, configured to judge whether the engine list is empty;
an engine allocation module 300, configured to allocate a corresponding coding engine to the channel request from the engine list, and to record the correspondence between the channel request and the coding engine using a descriptor;
a check code calculation module 400, configured to calculate a check code corresponding to the code while the coding engine processes the code contained in the channel request;
and an engine recycling module 500, configured to acquire the descriptor, the check code and the coding engine status word, send the check code to the corresponding channel according to the descriptor, and write the coding engine back into the engine list when the coding engine status word indicates an idle state.
On the basis of the above embodiment, as a preferred embodiment, the system further includes:
a priority assignment module, configured to assign a priority to each coding engine in the engine list;
the engine allocation module 300 being specifically a module for allocating the coding engine with the currently highest priority to the channel request from the engine list.
The present application also provides a computer-readable storage medium on which a computer program is stored; when executed, the computer program can implement the steps provided by the above embodiments. The storage medium may include various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The application further provides an electronic device, which may include a memory and a processor, where the memory stores a computer program, and the processor may implement the steps provided by the foregoing embodiments when calling the computer program in the memory. Of course, the electronic device may also include various network interfaces, power supplies, and the like.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts among the embodiments may be cross-referenced. The system provided by an embodiment is described relatively briefly because it corresponds to the method provided by an embodiment; for relevant details, refer to the description of the method.
The principles and embodiments of the present application are explained herein using specific examples, which are provided only to help understand the method and the core idea of the present application. It should be noted that, for those skilled in the art, it is possible to make several improvements and modifications to the present application without departing from the principle of the present application, and such improvements and modifications also fall within the scope of the claims of the present application.
It is further noted that, in the present specification, relational terms such as first and second, and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.

Claims (10)

1. A method for scheduling an engine for a channel, comprising:
acquiring a channel request;
judging whether an engine list is empty;
if not, allocating a corresponding coding engine to the channel request from the engine list, and recording the correspondence between the channel request and the coding engine using a descriptor;
calculating a check code corresponding to the code while the coding engine processes the code contained in the channel request;
and acquiring the descriptor, the check code and the coding engine status word, sending the check code to the corresponding channel according to the descriptor, and writing the coding engine back into the engine list when the coding engine status word indicates an idle state.
2. The engine scheduling method of claim 1, wherein before allocating the corresponding coding engine to the channel request from the engine list, the method further comprises:
assigning a priority to each coding engine in the engine list;
and the allocating the corresponding coding engine to the channel request from the engine list comprises:
allocating the coding engine with the currently highest priority to the channel request from the engine list.
3. The engine scheduling method of claim 1, wherein the processing, by the coding engine, of the code contained in the channel request comprises:
reading the code in the channel request, and writing the code into a first buffer of the coding engine;
injecting the descriptor into a trigger queue;
after the code has been completely written into the first buffer, pushing the descriptor from the trigger queue into a work queue;
and performing a ping-pong operation on the code using the first buffer and a second buffer of the coding engine while the descriptor is in the work queue.
4. The engine scheduling method of claim 2, wherein assigning a priority to each coding engine in the engine list comprises:
assigning a priority to each coding engine in the engine list according to the data processing efficiency of each coding engine.
5. The engine scheduling method of claim 1, wherein acquiring the descriptor, the check code, and the coding engine status word comprises:
acquiring the coding engine status word;
and when the coding engine status word indicates an idle state, acquiring the descriptor and the check code.
6. The engine scheduling method of claim 1, wherein calculating the check code corresponding to the code comprises:
calculating the check code corresponding to the code by using an LDPC algorithm.
7. A system for scheduling engines for channels, comprising:
an acquisition module, configured to acquire a channel request;
a judging module, configured to judge whether an engine list is empty;
an engine allocation module, configured to allocate a corresponding coding engine to the channel request from the engine list, and to record the correspondence between the channel request and the coding engine using a descriptor;
a check code calculation module, configured to calculate a check code corresponding to the code while the coding engine processes the code contained in the channel request;
and an engine recycling module, configured to acquire the descriptor, the check code and the coding engine status word, send the check code to the corresponding channel according to the descriptor, and write the coding engine back into the engine list when the coding engine status word indicates an idle state.
8. The engine scheduling system of claim 7, further comprising:
a priority assignment module, configured to assign a priority to each coding engine in the engine list;
the engine allocation module being specifically a module for allocating the coding engine with the currently highest priority to the channel request from the engine list.
9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
10. An electronic device, comprising a memory in which a computer program is stored and a processor that implements the steps of the method according to any one of claims 1 to 6 when calling the computer program in the memory.
CN202010745287.6A 2020-07-29 2020-07-29 Engine scheduling method, system and related device of channel Active CN111755057B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010745287.6A CN111755057B (en) 2020-07-29 2020-07-29 Engine scheduling method, system and related device of channel


Publications (2)

Publication Number Publication Date
CN111755057A 2020-10-09
CN111755057B CN111755057B (en) 2022-06-17

Family

ID=72712527

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010745287.6A Active CN111755057B (en) 2020-07-29 2020-07-29 Engine scheduling method, system and related device of channel

Country Status (1)

Country Link
CN (1) CN111755057B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7912997B1 (en) * 2008-03-27 2011-03-22 Xilinx, Inc. Direct memory access engine
CN102073603A (en) * 2009-11-24 2011-05-25 联发科技股份有限公司 Multi-channel memory apparatus and method for accessing multi-channel memory apparatus
US8559439B1 (en) * 2010-11-03 2013-10-15 Pmc-Sierra Us, Inc. Method and apparatus for queue ordering in a multi-engine processing system
US20140095737A1 (en) * 2010-11-03 2014-04-03 Pmc-Sierra Us, Inc Method and apparatus for a multi-engine descriptor controller
CN103870411A (en) * 2012-12-11 2014-06-18 三星电子株式会社 Memory controller and memory system including the same
US20190188134A1 (en) * 2017-12-14 2019-06-20 SK Hynix Inc. Memory system and operating method thereof
CN110968449A (en) * 2018-09-28 2020-04-07 方一信息科技(上海)有限公司 BCH ECC error correction resource sharing system and method for multichannel flash memory controller


Also Published As

Publication number Publication date
CN111755057B (en) 2022-06-17


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant