CN114546263B - Data storage method, system, equipment and medium - Google Patents

Data storage method, system, equipment and medium

Info

Publication number
CN114546263B
CN114546263B (application CN202210076086.0A)
Authority
CN
China
Prior art keywords
cache control
control block
data
storage unit
write
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210076086.0A
Other languages
Chinese (zh)
Other versions
CN114546263A (en)
Inventor
李子锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202210076086.0A priority Critical patent/CN114546263B/en
Publication of CN114546263A publication Critical patent/CN114546263A/en
Application granted granted Critical
Publication of CN114546263B publication Critical patent/CN114546263B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0604Improving or facilitating administration, e.g. storage management
    • G06F3/0607Improving or facilitating administration, e.g. storage management by facilitating the process of upgrading existing storage systems, e.g. for improving compatibility between host and storage device
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0638Organizing or formatting or addressing of data
    • G06F3/064Management of blocks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0628Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0655Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656Data buffering arrangements
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/0671In-line storage system
    • G06F3/0673Single storage device
    • G06F3/0679Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a data storage method comprising the following steps: determining the number and size of the cache control blocks in each data manager according to the stripe size, wherein each cache control block comprises multiple layers of storage units; constructing a plurality of cache control block groups; in response to receiving a data write request, writing the data carried in the request into the storage unit groups of a cache control block group sequentially, in order of storage-unit-group layer; in response to a cache control block group containing a cache control block whose storage units are written at every layer, sending, by the data manager, that cache control block to the write manager; placing, by the write manager, each received cache control block into a list corresponding to the data manager; and in response to the number of cache control blocks in the list reaching a preset number, writing that preset number of cache control blocks in the list into the LUNs of the stripe. The invention also discloses a system, a computer device, and a readable storage medium.

Description

Data storage method, system, equipment and medium
Technical Field
The present invention relates to the field of storage, and in particular, to a data storage method, system, device, and storage medium.
Background
An important item among the performance indexes of a solid state disk is sequential read performance. Owing to NAND characteristics, a reasonable data arrangement can exploit NAND read efficiency as fully as possible; since the rationality of the arrangement is determined in the write flow, the write flow affects how well sequential read performance can be exploited.
The main reason is as follows: sequential reads and writes use a relatively large BS (Block Size). Because one NAND LUN (Logical Unit Number) can process only one command at a time, if all the data of one BS read lies on the same LUN, the NAND processes the commands serially in queue order on that LUN; if the data of one BS read is spread across different LUNs, the NAND processes the commands on multiple LUNs simultaneously. Overall, parallel processing across multiple LUNs is more efficient than serial processing on a single LUN.
At present, sequential writes are organized by stripe: one stripe consists of 32 LUNs horizontally and 3 pages (storage units) vertically. The current arrangement writes page0, page1 and page2 of LUN0 first, then page0, page1 and page2 of LUN1, and so on, so the data of one BS is preferentially placed on the same LUN, and a sequential read necessarily reads that LUN serially.
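The effect of the two arrangements can be sketched with a small model (illustrative Python, not code from the patent; the 32-LUN, 3-page stripe figures come from the description above):

```python
STRIPE_LUNS = 32   # LUNs across one stripe (per the description)
PAGES_PER_LUN = 3  # page0..page2 per LUN within the stripe

def lun_major(i):
    """Current arrangement: page0..page2 of LUN0, then LUN1, ..."""
    return (i // PAGES_PER_LUN, i % PAGES_PER_LUN)  # (lun, page)

def page_major(i):
    """Pre-arranged order: page0 of every LUN first, then page1, ..."""
    return (i % STRIPE_LUNS, i // STRIPE_LUNS)      # (lun, page)

# A sequential read spanning 6 consecutive pages touches:
serial_luns = {lun_major(i)[0] for i in range(6)}    # only 2 LUNs, mostly serial
parallel_luns = {page_major(i)[0] for i in range(6)} # 6 LUNs, fully parallel
print(len(serial_luns), len(parallel_luns))  # 2 6
```

The page-major mapping is what lets one large-BS read be served by several LUNs concurrently.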
According to the transmission path of user data, the flow is generally as follows: after receiving a write command, the solid state disk first moves the data into DDR (Double Data Rate SDRAM) through a DM (Data Manager) module and builds a CCB (Cache Control Block) data structure; that is, the DM fills data into page0, page1 and page2 of a CCB, sends the CCB to a WM (Write Manager) module, then applies for the next CCB, and so on. A CCB is the structure combining the 3 pages of one LUN. After the WM receives a CCB, it obtains an available NAND physical address (PBA) and writes the CCB to NAND, and so on, completing the data write. Currently 8 DMs correspond to 1 WM, and the WM writes each CCB as it is received, so the WM receives the CCBs of the 8 DMs out of order, and the writes to NAND are therefore out of order.
Disclosure of Invention
In view of this, in order to overcome at least one aspect of the above problems, an embodiment of the present invention proposes a data storage method comprising the following steps:
determining the number and size of the cache control blocks in each data manager according to the stripe size, wherein each cache control block comprises multiple layers of storage units;
constructing a plurality of cache control block groups, wherein each cache control block group comprises the same number of cache control blocks and multiple layers of storage unit groups, each layer's storage unit group consisting of the corresponding layer's storage units of each cache control block;
in response to receiving a data write request, writing the data carried in the request into the storage unit groups of the cache control block group sequentially, in order of storage-unit-group layer;
in response to a cache control block group containing a cache control block whose storage units are written at every layer, sending, by the data manager, that cache control block to a write manager;
placing, by the write manager, each received cache control block into a list corresponding to the data manager; and
in response to the number of cache control blocks in the list reaching a preset number, writing that preset number of cache control blocks in the list into the LUNs of the stripe.
In some embodiments, writing the data carried in the write request into the storage unit groups of the corresponding layer in order of storage-unit-group layer, in response to receiving a data write request, further comprises:
in response to data remaining unwritten after the data carried in the write request has been written into the last-layer storage unit group of the current cache control block group, writing the remaining data into the storage unit groups of the next cache control block group sequentially, in order of storage-unit-group layer.
In some embodiments, writing the preset number of cache control blocks in the list into the LUNs of the stripe, in response to the number of cache control blocks in the list reaching the preset number, further comprises:
obtaining, by the write manager, an available physical address and writing the preset number of cache control blocks in the list into the LUNs corresponding to the available physical address.
In some embodiments, the method further comprises:
recording the logical address carried in the write request and the available physical address into an L2P table.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides a data storage system comprising:
an application module configured to determine the number and size of the cache control blocks in each data manager according to the stripe size, wherein each cache control block comprises multiple layers of storage units;
a construction module configured to construct a plurality of cache control block groups, wherein each cache control block group comprises the same number of cache control blocks and multiple layers of storage unit groups, each layer's storage unit group consisting of the corresponding layer's storage units of each cache control block;
a first write module configured to, in response to receiving a data write request, write the data carried in the request into the storage unit groups of the cache control block group sequentially, in order of storage-unit-group layer;
a sending module configured to, in response to a cache control block group containing a cache control block whose storage units are written at every layer, send that cache control block to a write manager;
a list module configured to cause the write manager to place the received cache control block into a list corresponding to the data manager; and
a second write module configured to, in response to the number of cache control blocks in the list reaching a preset number, write that preset number of cache control blocks in the list into the LUNs of the stripe.
In some embodiments, the first write module is further configured to:
in response to data remaining unwritten after the data carried in the write request has been written into the last-layer storage unit group of the current cache control block group, write the remaining data into the storage unit groups of the next cache control block group sequentially, in order of storage-unit-group layer.
In some embodiments, the second write module is further configured to:
cause the write manager to obtain an available physical address and write the preset number of cache control blocks in the list into the LUNs corresponding to the available physical address.
In some embodiments, the second write module is further configured to:
record the logical address carried in the write request and the available physical address into an L2P table.
Based on the same inventive concept, according to another aspect of the present invention, an embodiment of the present invention further provides a computer apparatus, including:
at least one processor; and
a memory storing a computer program executable on the processor, wherein the processor performs the steps of any one of the data storage methods described above when executing the program.
Based on the same inventive concept, according to another aspect of the present invention, there is also provided a computer-readable storage medium storing a computer program which, when executed by a processor, performs the steps of any of the data storage methods described above.
The invention has, among others, the following beneficial technical effect: the proposed scheme adopts data pre-arrangement and per-DM flow control. Data received from the host is pre-arranged in the pattern in which it will be written, and before the NAND write flow is initiated, flow control is applied to each DM separately, so that each DM's data is written contiguously, i.e., in the preset pattern, without interleaving among the DMs. This ensures that sequential reads proceed concurrently across LUNs and that NAND read performance is maximized.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the invention, and a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a flow chart of a data storage method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a data storage system according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a computer device according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
It should be noted that in the embodiments of the present invention the expressions "first" and "second" are used to distinguish two entities or parameters that share the same name; "first" and "second" are used only for convenience of expression and should not be construed as limiting the embodiments of the present invention, and this is not repeated in the following embodiments.
According to an aspect of the present invention, an embodiment of the present invention proposes a data storage method which, as shown in FIG. 1, may include the following steps:
S1, determining the number and size of the cache control blocks in each data manager according to the stripe size, wherein each cache control block comprises multiple layers of storage units;
S2, constructing a plurality of cache control block groups, wherein each cache control block group comprises the same number of cache control blocks and multiple layers of storage unit groups, each layer's storage unit group consisting of the corresponding layer's storage units of each cache control block;
S3, in response to receiving a data write request, writing the data carried in the request into the storage unit groups of the cache control block group sequentially, in order of storage-unit-group layer;
S4, in response to a cache control block group containing a cache control block whose storage units are written at every layer, sending, by the data manager, that cache control block to a write manager;
S5, placing, by the write manager, each received cache control block into a list corresponding to the data manager; and
S6, in response to the number of cache control blocks in the list reaching a preset number, writing that preset number of cache control blocks in the list into the LUNs of the stripe.
The scheme provided by the invention adopts data pre-arrangement and per-DM flow control: data received from the host is pre-arranged in the pattern in which it will be written, and before the NAND write flow is initiated, flow control is applied to each DM separately, so that each DM's data is written contiguously in the preset pattern without interleaving among the DMs, ensuring that sequential reads proceed concurrently across LUNs and that NAND read performance is maximized.
In some embodiments, in step S1 the number and size of the cache control blocks in each data manager are determined according to the stripe size, each cache control block comprising multiple layers of storage units. Specifically, the DM may apply for multiple CCBs in advance according to the stripe size; each CCB may comprise page0, page1 and page2, all pages being the same size. For example, a stripe comprises 32 LUNs, each LUN having a three-row, four-column structure, i.e., each LUN comprises 12 pages arranged in 3 rows and 4 columns. Thus each DM may apply for 32 CCBs.
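As a quick sanity check of this example's arithmetic (a sketch under one plausible reading, in which the 32 CCBs cover one 3-page column of each of the 32 LUNs; variable names are illustrative):

```python
stripe_luns = 32       # LUNs per stripe
rows, cols = 3, 4      # each LUN holds 12 pages in 3 rows x 4 columns
pages_per_ccb = rows   # one CCB bundles page0..page2 (one column) of one LUN

stripe_pages = stripe_luns * rows * cols  # total pages in one stripe
ccbs_per_column_pass = stripe_luns        # one CCB per LUN -> 32 CCBs per DM

# one column pass over all LUNs, repeated for each column, covers the stripe
assert ccbs_per_column_pass * pages_per_ccb * cols == stripe_pages
print(stripe_pages, ccbs_per_column_pass)  # 384 32
```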
In some embodiments, in step S2 a plurality of cache control block groups are constructed, each cache control block group comprising the same number of cache control blocks and multiple layers of storage unit groups, each layer's storage unit group consisting of the corresponding layer's storage units of each cache control block.
For example, every 3 CCBs may be formed into a CCB group, so that each CCB group comprises three layers of page groups, each layer's page group comprising the corresponding-layer pages of the 3 CCBs: the first-layer page group comprises page0 of the first, second and third CCBs; the second-layer page group comprises page1 of the first, second and third CCBs; and the third-layer page group comprises page2 of the first, second and third CCBs.
In some embodiments, in step S3, in response to receiving a data write request, the data carried in the request is written into the storage unit groups of the cache control block group sequentially, in order of storage-unit-group layer. Specifically, the DM may fill the first-layer page group, the second-layer page group and the third-layer page group of a CCB group in that order. Each time a CCB is completely filled with data it is sent to the WM, until all CCBs in the group are filled, after which filling continues with the next CCB group.
For example, data is filled into page0 of CCB0, CCB1 and CCB2, then page1 of CCB0, CCB1 and CCB2, then page2 of CCB0, CCB1 and CCB2, each CCB being sent to the WM as it completes, and so on.
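The layer-major fill-and-send behaviour described above can be simulated in a few lines (a sketch assuming a group of 3 CCBs with 3 pages each; not firmware code):

```python
from collections import defaultdict

GROUP, LAYERS = 3, 3  # CCBs per group, pages (layers) per CCB

def fill(n_pages):
    """Fill page slots in layer-major order; a CCB is 'sent' to the WM
    as soon as all of its layers are written."""
    order = [(ccb, layer) for layer in range(LAYERS) for ccb in range(GROUP)]
    filled, sent = defaultdict(int), []
    for ccb, layer in order[:n_pages]:
        filled[ccb] += 1
        if filled[ccb] == LAYERS:
            sent.append(ccb)
    return order[:n_pages], sent

order, sent = fill(9)
# page0 of CCB0..2, then page1, then page2; each CCB completes on its page2
print(sent)  # [0, 1, 2]
```

Calling `fill(7)` instead models a short write: only CCB0 completes, matching the partial-fill example discussed later in the description.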
In some embodiments, step S3, writing the data carried in the write request into the storage unit groups of the corresponding layer in order of storage-unit-group layer in response to receiving a data write request, further comprises:
in response to data remaining unwritten after the data carried in the write request has been written into the last-layer storage unit group of the current cache control block group, writing the remaining data into the storage unit groups of the next cache control block group sequentially, in order of storage-unit-group layer.
Specifically, when the last received write request is not large enough to fill the current CCB group, or one CCB group is not large enough to hold all the data of the write request, the remaining data continues to fill the next CCB group.
For example, if the data of a write request fills only the first- and second-layer page groups of the current CCB group plus the first page2 of the third layer, then only the first CCB of the current group (now full) is sent to the WM; when the next write request is received, page2 of the second and third CCBs of the current group is filled first, and the remaining data fills the next CCB group.
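A simplified sketch of how leftover data spills into the next CCB group (this model only fills whole groups in layer-major order and does not model resuming a partially filled group; all names are illustrative):

```python
GROUP, LAYERS = 3, 3  # CCBs per group, pages per CCB

def distribute(n_pages):
    """Assign n_pages of incoming data to (group, ccb, layer) slots in
    layer-major order, spilling leftovers into the next CCB group."""
    slots, group = [], 0
    while n_pages > 0:
        take = min(n_pages, GROUP * LAYERS)
        layout = [(group, ccb, layer)
                  for layer in range(LAYERS) for ccb in range(GROUP)]
        slots.extend(layout[:take])
        n_pages -= take
        group += 1
    return slots

slots = distribute(11)  # 9 pages fill group 0; 2 pages spill into group 1
print(slots[9:])  # [(1, 0, 0), (1, 1, 0)]
```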
In some embodiments, step S6, writing the preset number of cache control blocks in the list into the LUNs of the stripe in response to the number of cache control blocks in the list reaching the preset number, further comprises:
obtaining, by the write manager, an available physical address and writing the preset number of cache control blocks in the list into the LUNs corresponding to the available physical address.
Specifically, a LIST is created in the WM for each DM. After a CCB sent by a DM is received, it is placed in the corresponding LIST, and the LISTs are then traversed cyclically. When a LIST reaches the write condition, i.e., the number of CCBs in it reaches the threshold, the WM takes CCBs from that LIST in order and initiates the NAND write operation: it applies for a PBA (obtains an available physical address), initiates the NAND write, and writes the preset number of cache control blocks in the LIST into the LUNs corresponding to the available physical address. This ensures that while the WM is writing one DM's CCBs it is not interrupted by another DM's writes, so the data arrangement is not disordered.
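The per-DM list flow control described above can be sketched as follows (illustrative Python; class and method names are assumptions, and the threshold value is arbitrary):

```python
from collections import deque

class WriteManager:
    """Sketch: one list per DM; a DM's CCBs are flushed to NAND only when
    its list reaches the threshold, so one DM's writes are never
    interleaved with another's."""
    def __init__(self, n_dms=8, threshold=3):
        self.lists = [deque() for _ in range(n_dms)]
        self.threshold = threshold
        self.written = []  # NAND write log: (dm, ccb)

    def receive(self, dm, ccb):
        self.lists[dm].append(ccb)
        if len(self.lists[dm]) >= self.threshold:
            # write condition reached: drain this DM's list in order
            while self.lists[dm]:
                self.written.append((dm, self.lists[dm].popleft()))

wm = WriteManager()
for ccb in range(3):       # DM0 and DM1 interleave their sends
    wm.receive(0, ccb)
    wm.receive(1, ccb)
print(wm.written)  # [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (1, 2)]
```

Draining a whole list at once is what keeps one DM's CCBs contiguous on NAND even though arrivals interleave.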
In some embodiments, the method further comprises:
recording the logical address carried in the write request and the available physical address into an L2P table.
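Recording the mapping can be sketched as a simple table update (illustrative only; the `(lun, page)` tuple format for a PBA is an assumption, not the patent's layout):

```python
# Minimal L2P sketch: logical block address -> physical block address
l2p = {}

def record(lba, pba):
    l2p[lba] = pba  # later reads look up the physical location here

record(lba=100, pba=(0, 5))  # hypothetical page written to LUN0, page 5
record(lba=101, pba=(1, 5))  # the next page lands on a different LUN
print(l2p[100], l2p[101])  # (0, 5) (1, 5)
```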
In the scheme provided by the invention, user data is pre-arranged in the DM by means of data pre-arrangement and per-DM flow control: multiple CCBs are pre-applied in the DM according to the number of LUNs in the stripe, and page1 and page2 are written only after page0 of all CCBs has been written. The WM handles the CCBs of the 8 DMs with per-DM flow control, creating an independent LIST for each DM to store that DM's CCBs; when the LIST of a DM reaches the write condition, the WM takes CCBs from that LIST and initiates the write operation, the number of CCBs taken being determined by the BS currently being written. In this way each DM's data is laid out as expected as far as possible, the CCBs of the multiple DMs do not interleave and become disordered, and sequential reads fetch from multiple different LUNs concurrently according to the BS size, improving sequential read performance.
Based on the same inventive concept, according to another aspect of the present invention, there is also provided a data storage system 400, as shown in FIG. 2, comprising:
an application module 401 configured to determine the number and size of the cache control blocks in each data manager according to the stripe size, wherein each cache control block comprises multiple layers of storage units;
a construction module 402 configured to construct a plurality of cache control block groups, wherein each cache control block group comprises the same number of cache control blocks and multiple layers of storage unit groups, each layer's storage unit group consisting of the corresponding layer's storage units of each cache control block;
a first write module 403 configured to, in response to receiving a data write request, write the data carried in the request into the storage unit groups of the cache control block group sequentially, in order of storage-unit-group layer;
a sending module 404 configured to, in response to a cache control block group containing a cache control block whose storage units are written at every layer, send that cache control block to the write manager;
a list module 405 configured to cause the write manager to place the received cache control block into a list corresponding to the data manager; and
a second write module 406 configured to, in response to the number of cache control blocks in the list reaching a preset number, write that preset number of cache control blocks in the list into the LUNs of the stripe.
In some embodiments, the first write module 403 is further configured to:
in response to data remaining unwritten after the data carried in the write request has been written into the last-layer storage unit group of the current cache control block group, write the remaining data into the storage unit groups of the next cache control block group sequentially, in order of storage-unit-group layer.
In some embodiments, the second write module 406 is further configured to:
cause the write manager to obtain an available physical address and write the preset number of cache control blocks in the list into the LUNs corresponding to the available physical address.
In some embodiments, the second write module 406 is further configured to:
record the logical address carried in the write request and the available physical address into an L2P table.
Based on the same inventive concept, according to another aspect of the present invention, as shown in fig. 3, an embodiment of the present invention further provides a computer apparatus 501, including:
at least one processor 520; and
a memory 510, the memory 510 storing a computer program 511 executable on the processor, wherein the processor 520 performs the steps of any one of the data storage methods described above when executing the program.
According to another aspect of the present invention, as shown in fig. 4, based on the same inventive concept, an embodiment of the present invention further provides a computer-readable storage medium 601, the computer-readable storage medium 601 storing computer program instructions 610, the computer program instructions 610 when executed by a processor performing the steps of any of the data storage methods as above.
Finally, it should be noted that, as will be appreciated by those skilled in the art, all or part of the procedures of the methods of the above embodiments may be implemented by a computer program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, may include the procedures of the above method embodiments.
Further, it should be appreciated that the computer-readable storage medium (e.g., memory) herein can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as software or hardware depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The foregoing is an exemplary embodiment of the present disclosure, but it should be noted that various changes and modifications could be made herein without departing from the scope of the disclosure as defined by the appended claims. The functions, steps and/or actions of the method claims in accordance with the disclosed embodiments described herein need not be performed in any particular order. Furthermore, although elements of the disclosed embodiments may be described or claimed in the singular, the plural is contemplated unless limitation to the singular is explicitly stated.
It should be understood that as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly supports the exception. It should also be understood that "and/or" as used herein is meant to include any and all possible combinations of one or more of the associated listed items.
The serial numbers of the foregoing embodiments of the present invention are used for description only and do not indicate the relative merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, and the program may be stored in a computer readable storage medium, where the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
Those of ordinary skill in the art will appreciate that the above discussion of any embodiment is merely exemplary and is not intended to imply that the scope of this disclosure, including the claims, is limited to these examples. Within the spirit of the embodiments of the present invention, features of the above embodiments, or of different embodiments, may also be combined, and many other variations of the different aspects of the embodiments exist that are not provided in detail for the sake of brevity. Therefore, any omission, modification, equivalent replacement, or improvement made within the spirit and principles of the embodiments should be included in their protection scope.

Claims (10)

1. A method of data storage, comprising the steps of:
determining the number and size of the cache control blocks in each data manager according to the size of a stripe, wherein each cache control block comprises multiple layers of storage units;
constructing a plurality of cache control block groups, wherein each cache control block group comprises the same number of cache control blocks and a plurality of layers of storage unit groups, and each layer of storage unit group consists of the storage units at the corresponding layer of each cache control block;
in response to receiving a data write request, writing the data carried in the write request into the storage unit groups of a cache control block group sequentially, in order of the hierarchy of the storage unit groups;
in response to the cache control block group containing a cache control block whose storage units at every layer have been written with data, sending, by the data manager, that cache control block to a write manager;
putting, by the write manager, the received cache control block into a list corresponding to the data manager; and
in response to the number of cache control blocks in the list reaching a preset number, writing the preset number of cache control blocks in the list into the LUNs of the stripe.
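Outside the claim language, the write path of claim 1 can be sketched in Python. This is a minimal illustration only; the class names, method names, and the list-based stand-ins for storage units and the LUN are all hypothetical and not part of the disclosure:

```python
# Illustrative sketch of the claimed write path (all names hypothetical).

class CacheControlBlockGroup:
    """A group of cache control blocks; the storage units at layer k of
    every block in the group together form the layer-k storage unit group."""

    def __init__(self, num_blocks, num_layers):
        # grid[layer][block] is one storage unit (None = not yet written)
        self.grid = [[None] * num_blocks for _ in range(num_layers)]

    def write(self, chunks):
        """Fill the storage unit groups in layer order; return the indices
        of blocks whose units at every layer are written, plus leftover data."""
        remaining = list(chunks)
        for layer in self.grid:                  # hierarchy order of unit groups
            for i in range(len(layer)):
                if layer[i] is None and remaining:
                    layer[i] = remaining.pop(0)
        full = [i for i in range(len(self.grid[0]))
                if all(layer[i] is not None for layer in self.grid)]
        return full, remaining


class WriteManager:
    """Collects full cache control blocks in a per-data-manager list and
    flushes them to the stripe's LUN once a preset count is reached."""

    def __init__(self, preset_count):
        self.preset_count = preset_count
        self.lists = {}    # data manager id -> list of full blocks
        self.lun = []      # stand-in for the stripe's LUN

    def submit(self, dm_id, block):
        pending = self.lists.setdefault(dm_id, [])
        pending.append(block)
        if len(pending) >= self.preset_count:
            self.lun.extend(pending[:self.preset_count])
            del pending[:self.preset_count]
```

For example, with a group of 2 blocks and 2 layers, four data chunks fill both blocks (layer 0 of both blocks first, then layer 1), after which both full blocks can be submitted to the write manager.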
2. The method of claim 1, wherein writing the data carried in the write request into the storage unit groups sequentially, in order of the hierarchy of the storage unit groups, in response to receiving a data write request further comprises:
in response to data remaining unwritten after the data carried in the write request has been written into the last-layer storage unit group of the current cache control block group, writing the remaining data into the storage unit groups of the next cache control block group sequentially, in order of the hierarchy of the storage unit groups.
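The spill-over behavior of claim 2 can be illustrated as follows, modeling each cache control block group as a layers-by-blocks grid. This is a hedged, self-contained sketch; the function name and grid representation are assumptions for illustration:

```python
# Hypothetical sketch of claim 2: data that does not fit in the current
# cache control block group continues into the next group, still written
# in layer (storage unit group) order.

def fill_groups(num_groups, num_blocks, num_layers, chunks):
    """Each group is a num_layers x num_blocks grid of storage units;
    one group's unit groups are filled layer by layer before any data
    spills into the next group."""
    groups = [[[None] * num_blocks for _ in range(num_layers)]
              for _ in range(num_groups)]
    remaining = list(chunks)
    for grid in groups:
        for layer in grid:
            for i in range(num_blocks):
                if not remaining:
                    return groups, remaining
                layer[i] = remaining.pop(0)
    return groups, remaining
```

With two groups of 2 blocks and 2 layers each, five chunks fill the first group completely, and the fifth chunk lands in layer 0 of the next group's first block.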
3. The method of claim 1, wherein writing the preset number of cache control blocks in the list into the LUNs of the stripe in response to the number of cache control blocks in the list reaching the preset number further comprises:
obtaining, by the write manager, an available physical address, and writing the preset number of cache control blocks in the list into the LUN corresponding to the available physical address.
4. The method of claim 3, further comprising:
recording the logical address carried in the write request and the available physical address in an L2P table.
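Claims 3 and 4 together describe flushing to an available physical address and recording the logical-to-physical mapping. A minimal sketch, assuming a trivial bump allocator as a stand-in for "obtaining the available physical address" (the class and attribute names are hypothetical):

```python
# Hypothetical sketch of claims 3 and 4: writing full cache control
# blocks to an available physical address on the LUN and recording the
# mapping in an L2P (logical-to-physical) table.

class LunWriter:
    def __init__(self):
        self.lun = {}          # physical address -> block data
        self.l2p = {}          # logical address -> physical address
        self.next_free = 0     # trivial stand-in for a free-space allocator

    def flush(self, logical_addrs, blocks):
        """Write one batch of cache control blocks to the LUN, recording
        each logical address against the physical address it received."""
        for la, block in zip(logical_addrs, blocks):
            pa = self.next_free        # "obtain the available physical address"
            self.next_free += 1
            self.lun[pa] = block
            self.l2p[la] = pa          # claim 4: record mapping in the L2P table
        return [self.l2p[la] for la in logical_addrs]
```

A later read of a logical address would consult `l2p` to find the physical address, which is the usual role of an L2P table in flash translation layers.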
5. A data storage system, comprising:
an application module configured to determine the number and size of the cache control blocks in each data manager according to the size of a stripe, wherein each cache control block comprises multiple layers of storage units;
a construction module configured to construct a plurality of cache control block groups, wherein each cache control block group comprises the same number of cache control blocks and a plurality of layers of storage unit groups, and each layer of storage unit group consists of the storage units at the corresponding layer of each cache control block;
a first write module configured to, in response to receiving a data write request, write the data carried in the write request into the storage unit groups of a cache control block group sequentially, in order of the hierarchy of the storage unit groups;
a transmission module configured to, in response to the cache control block group containing a cache control block whose storage units at every layer have been written with data, send that cache control block to a write manager;
a list module configured to cause the write manager to put the received cache control block into a list corresponding to the data manager; and
a second write module configured to, in response to the number of cache control blocks in the list reaching a preset number, write the preset number of cache control blocks in the list into the LUNs of the stripe.
6. The system of claim 5, wherein the first write module is further configured to:
in response to data remaining unwritten after the data carried in the write request has been written into the last-layer storage unit group of the current cache control block group, write the remaining data into the storage unit groups of the next cache control block group sequentially, in order of the hierarchy of the storage unit groups.
7. The system of claim 5, wherein the second write module is further configured to:
cause the write manager to obtain an available physical address and write the preset number of cache control blocks in the list into the LUN corresponding to the available physical address.
8. The system of claim 7, wherein the second write module is further configured to:
record the logical address carried in the write request and the available physical address in an L2P table.
9. A computer device, comprising:
at least one processor; and
a memory storing a computer program executable on the processor, wherein the processor performs the steps of the method of any one of claims 1-4 when executing the program.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, performs the steps of the method of any one of claims 1-4.
CN202210076086.0A 2022-01-23 2022-01-23 Data storage method, system, equipment and medium Active CN114546263B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210076086.0A CN114546263B (en) 2022-01-23 2022-01-23 Data storage method, system, equipment and medium

Publications (2)

Publication Number Publication Date
CN114546263A CN114546263A (en) 2022-05-27
CN114546263B true CN114546263B (en) 2023-08-18

Family

ID=81670790

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210076086.0A Active CN114546263B (en) 2022-01-23 2022-01-23 Data storage method, system, equipment and medium

Country Status (1)

Country Link
CN (1) CN114546263B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1303053A * 2000-01-04 2001-07-11 International Business Machines Corporation Queue supervisor of buffer
WO2015077955A1 * 2013-11-28 2015-06-04 Huawei Technologies Co., Ltd. Method, apparatus and system of data writing
CN108920192A * 2018-07-03 2018-11-30 National University of Defense Technology, Chinese People's Liberation Army Cache data consistency implementation method and device based on distributed limited directory
CN113608695A (en) * 2021-07-29 2021-11-05 济南浪潮数据技术有限公司 Data processing method, system, device and medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9542328B2 (en) * 2015-01-26 2017-01-10 International Business Machines Corporation Dynamically controlling a file system write cache
WO2019200142A1 (en) * 2018-04-12 2019-10-17 Micron Technology, Inc. Replay protected memory block command queue

Similar Documents

Publication Publication Date Title
US11481121B2 (en) Physical media aware spacially coupled journaling and replay
EP3726364B1 (en) Data write-in method and solid-state drive array
CN102623042B (en) Accumulator system and operational approach thereof
CN107728937B (en) Key value pair persistent storage method and system using nonvolatile memory medium
CN102156738A (en) Method for processing data blocks, and data block storage equipment and system
CN114860163B (en) Storage system, memory management method and management node
US10489289B1 (en) Physical media aware spacially coupled journaling and trim
US11061788B2 (en) Storage management method, electronic device, and computer program product
KR20110093035A (en) Apparatus for flash address translation apparatus and method thereof
US10789170B2 (en) Storage management method, electronic device and computer readable medium
US20160266805A1 (en) Sliding-window multi-class striping
CN115309348B (en) Metadata management method and device, computer equipment and storage medium
CN110413454A (en) Data re-establishing method, device and storage medium based on storage array
CN105393228A (en) Method, device and user equipment for reading/writing data in nand flash
CN109558456A (en) A kind of file migration method, apparatus, equipment and readable storage medium storing program for executing
CN112394874A (en) Key value KV storage method and device and storage equipment
CN114546263B (en) Data storage method, system, equipment and medium
CN106980471B (en) Method and device for improving hard disk writing performance of intelligent equipment
CN113608695A (en) Data processing method, system, device and medium
US11379326B2 (en) Data access method, apparatus and computer program product
CN115904255B (en) Data request method, device, equipment and storage medium
CN112463055A (en) Method, system, equipment and medium for optimizing and using L2P table of solid state disk
CN103279562B (en) A kind of method, device and database storage system for database L2 cache
US11513951B2 (en) System and method for improving write performance for log structured storage systems
CN112433957B (en) Data access method, data access system and readable storage device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant