CN115543871A - Data storage method and related equipment - Google Patents


Info

Publication number
CN115543871A
CN115543871A (application CN202211508013.0A)
Authority
CN
China
Prior art keywords
data
check
target
cache
stripe
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211508013.0A
Other languages
Chinese (zh)
Other versions
CN115543871B (en)
Inventor
李飞龙
许永良
孙明刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Inspur Intelligent Technology Co Ltd filed Critical Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202211508013.0A
Publication of CN115543871A
Application granted
Publication of CN115543871B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 12/00 Accessing, addressing or allocating within memory systems or architectures
    • G06F 12/02 Addressing or allocation; Relocation
    • G06F 12/08 Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F 12/0802 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
    • G06F 12/0866 Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches for peripheral storage systems, e.g. disk cache
    • G06F 12/0871 Allocation or management of cache space
    • G06F 12/0868 Data transfer between cache memory and other subsystems, e.g. storage devices or host systems
    • G06F 2212/00 Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F 2212/10 Providing a specific technical effect
    • G06F 2212/1016 Performance improvement
    • G06F 2212/1024 Latency reduction
    • G06F 2212/26 Using a specific storage system architecture
    • G06F 2212/261 Storage comprising a plurality of storage devices
    • G06F 2212/262 Storage comprising a plurality of storage devices configured as RAID

Abstract

The application discloses a data storage method, which comprises: determining target data according to a data storage request; acquiring a target cache node, and determining the designated target cache region in the target cache node; performing stripe division on the target data to obtain data stripes; writing each data stripe into the target cache region in response to a stripe division completion prompt; and feeding back a storage completion signal to the initiator of the data storage request. By applying the technical scheme provided by the application, the response delay facing the host end during data storage can be effectively reduced, thereby realizing fast response. The application also discloses a data storage apparatus, a device and a computer-readable storage medium having the same beneficial effects.

Description

Data storage method and related equipment
Technical Field
The present application relates to the field of data storage technologies, and in particular, to a data storage method and a related device.
Background
In today's fast-developing society, all industries need to store data safely and efficiently. RAID (Redundant Array of Independent Disks) is an important storage technology that mainly uses striping, mirroring and parity checks to ensure data security and improve the I/O performance of the array. The cache, as an important component of a RAID array, can greatly improve I/O performance and speed up read/write responses: the storage system only needs to write data into the cache, immediately sends a data-write-success signal to the host, and then continues processing other I/O requests initiated by the host. Since writing data into the cache is far faster than writing it to a physical disk, and the I/O data temporarily stored in the cache can be written to the disks of the RAID array by multiple threads in a thread pool, the response delay of I/O is greatly reduced.
At present, in the storage field, hard RAID storage technology (i.e., the RAID card) has been proposed on the basis of soft RAID storage technology; RAID card technology can effectively improve both read/write I/O performance and data security. Among the cache write strategies of a RAID card there are the WB (Write Back) strategy and the WT (Write Through) strategy. The WB strategy has a lower response delay than the WT strategy, because WB sends a data-write-success signal to the host immediately after writing the write data and check data into the cache, whereas WT sends the signal only after all the write data and check data have been written to the physical disks of the RAID card; hence the WB strategy offers higher I/O performance than the WT strategy.
In current industry implementations of the WB write strategy, the write data is divided into stripes after being received, the system then waits for the check block of each stripe to be obtained through an XOR operation over that stripe's data blocks, writes the check block of each stripe into the cache once it is available, and only then responds to the host.
Therefore, how to effectively reduce the response delay towards the host side in the data storage process, so as to realize fast response is a problem to be urgently solved by those skilled in the art.
Disclosure of Invention
One object of the present application is to provide a data storage method that can effectively reduce the response delay facing the host end during data storage, thereby realizing fast response; another object of the present application is to provide a data storage apparatus, a device and a computer-readable storage medium, all having the above advantages.
In a first aspect, the present application provides a data storage method, including:
determining target data according to the data storage request;
acquiring a target cache node, and determining a designated target cache region in the target cache node;
performing stripe division on the target data to obtain each data stripe;
responding to a stripe division completion prompt, and writing each data stripe into the target cache region;
and feeding back a storage completion signal to an initiating end of the data storage request.
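The five claimed steps can be sketched in code. This is only an illustrative model, not the patented implementation; the `CachePool` and `CacheNode` names, the list-based cache region, and the stripe size are all assumptions:

```python
class CacheNode:
    """Models a cache node that designates one cache region (a plain list here)."""
    def __init__(self):
        self.cache_region = []

class CachePool:
    """Models the global free cache node linked list as a simple FIFO."""
    def __init__(self, node_count):
        self.free_nodes = [CacheNode() for _ in range(node_count)]

    def acquire_free_node(self):
        return self.free_nodes.pop(0)

def store_data(request, pool, stripe_size=4):
    target_data = request["data"]                 # S101: determine target data
    node = pool.acquire_free_node()               # S102: acquire a target cache node
    stripes = [target_data[i:i + stripe_size]     # S103: stripe division
               for i in range(0, len(target_data), stripe_size)]
    node.cache_region.extend(stripes)             # S104: write stripes into the cache region
    return "storage_complete"                     # S105: ack the initiator immediately

pool = CachePool(2)
result = store_data({"data": list(range(10))}, pool)
```

Note that the acknowledgement is returned before any check block is computed, which is the source of the claimed latency reduction.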
Optionally, the obtaining the target cache node includes:
acquiring an idle cache node from a global idle cache node linked list;
taking the idle cache node as the target cache node;
wherein the number of the data storage requests is the same as the number of the target cache nodes.
Optionally, the determining a target cache region specified in the target cache node includes:
performing field analysis on the target cache node to obtain a cache region field;
and determining the target cache region according to the record information of the cache region field.
Optionally, the target cache node includes a flush status field, a cache region field, a front pointer field, and a back pointer field;
the flush status field is used for recording the cache flush status of the cache region, and the cache flush status comprises cache flushing completed and cache flushing not completed;
the cache region field is used for recording the position information of the target cache region in the cache pool;
the front pointer field is used for recording node information of a previous cache node of the target cache node in the global idle cache node linked list;
and the back pointer field is used for recording the node information of the next cache node of the target cache node in the global idle cache node linked list.
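The four fields can be modelled as a small structure. This is a sketch only; the `(offset, length)` encoding of the region location is an assumption not stated in the text:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CacheNode:
    # flush status field: whether the cache region's data has been flushed to disk
    flush_done: bool = False
    # cache region field: position of the target cache region in the cache pool,
    # modelled here as an assumed (offset, length) pair
    region_location: tuple = (0, 0)
    # front/back pointer fields: neighbours in the global free cache node linked list
    prev: Optional["CacheNode"] = None
    next: Optional["CacheNode"] = None

node = CacheNode(region_location=(4096, 1024))
```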
Optionally, after writing each of the data stripes into the target cache region, the method further includes:
for each data stripe, calculating according to each data block in the data stripe to obtain a check block;
and writing the check blocks into corresponding data stripes in the target cache region.
Optionally, after writing the check block into the corresponding data stripe in the target cache region, the method further includes:
acquiring an idle check element from the global check table as a target check element;
searching the check status bit corresponding to the data stripe in the stripe check field of the target check element;
and updating the check status bit to stripe check completed.
Optionally, the target check element includes a global check status field, a stripe check status field, and a pointer field;
the global check state field is used for recording a global check state of the target data, and the global check state comprises a completed global check and an incomplete global check;
the stripe check state field is used for recording stripe check states of the data stripes, and the stripe check states comprise a stripe check completed state and a stripe check incomplete state;
the pointer field is used for recording element information of a next check element of the target check element in the global check table.
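Analogously, a check element with its three fields might look like this (the fixed number of stripe bits is an assumption made for illustration):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class CheckElement:
    # global check status field: True once every data stripe has been checked
    global_check_done: bool = False
    # stripe check status field: one bit per data stripe (1 = stripe check completed)
    stripe_check_bits: list = field(default_factory=lambda: [0] * 5)
    # pointer field: next check element in the global check table
    next: Optional["CheckElement"] = None

elem = CheckElement()
```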
Optionally, the method further comprises:
and when all the check status bits in the stripe check field are in the stripe check completed state, updating the global check status field to global check completed.
Optionally, after the updating the global check status field to that the global check is completed, the method further includes:
for each data stripe in the cache region, flushing each data block and check block in the data stripe down to a physical disk.
Optionally, the flush status field is associated with bitmap metadata for recording the stripe flush status of each of the data stripes, the stripe flush status comprising stripe flushing completed and stripe flushing not completed;
correspondingly, after the flushing of each data block and check block in the data stripe to a physical disk, the method further includes:
searching the flush status bit corresponding to the data stripe in the bitmap metadata;
and updating the flush status bit to stripe flushing completed.
Optionally, the method further comprises:
when all the flush status bits in the bitmap metadata are in the stripe flushing completed state, updating the flush status field to cache flushing completed.
Optionally, the method further comprises:
when the global check status field is updated to global check completed, releasing the target check element;
and when the flush status field is updated to cache flushing completed, releasing the target cache node.
In a second aspect, the present application also discloses a data storage device comprising:
the determining module is used for determining target data according to the data storage request;
the system comprises an acquisition module, a cache module and a cache module, wherein the acquisition module is used for acquiring a target cache node and determining a designated target cache region in the target cache node;
the dividing module is used for carrying out stripe division on the target data to obtain each data stripe;
a write-in module, configured to write each data stripe into the target cache region in response to a stripe division completion prompt;
and the feedback module is used for feeding back a storage completion signal to the initiating end of the data storage request.
In a third aspect, the present application further discloses a data storage device, including:
a memory for storing a computer program;
a processor for implementing the steps of any of the data storage methods described above when executing the computer program.
In a fourth aspect, the present application also discloses a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of any of the data storage methods described above.
By applying the technical scheme provided by the application, after the target data to be written into the storage system is determined according to the data storage request, the target cache region corresponding to the target data can be determined in the target cache node, and stripe division can be performed on the target data. During this process, the stripe division progress can be monitored in real time: once a stripe division completion prompt is detected, each resulting data stripe is immediately written into the target cache region, and a storage completion signal is fed back directly to the initiator. In other words, the technical scheme caches the data and replies to the requester without waiting for the check blocks to be generated after stripe division is completed, which effectively reduces the response delay facing the requester during data storage, realizes fast response, and also helps improve the system performance of the storage end.
Drawings
In order to more clearly illustrate the technical solutions in the prior art and the embodiments of the present application, the drawings that are needed to be used in the description of the prior art and the embodiments of the present application will be briefly described below. Of course, the following description of the drawings related to the embodiments of the present application is only a part of the embodiments of the present application, and it will be apparent to those skilled in the art that other drawings may be obtained from the provided drawings without any creative effort, and the obtained other drawings also belong to the protection scope of the present application.
Fig. 1 is a schematic flow chart of a data storage method provided in the present application;
FIG. 2 is a schematic diagram of a stripe division provided in the present application;
fig. 3 is a schematic structural diagram of a cache module provided in the present application;
FIG. 4 is a schematic structural diagram of a global check table provided in the present application;
fig. 5 is a schematic structural diagram of a cache node provided in the present application;
fig. 6 is a schematic structural diagram of a global idle cache node linked list according to the present application;
FIG. 7 is a schematic flow chart of another data storage method provided herein;
FIG. 8 is a schematic structural diagram of a data storage device provided in the present application;
fig. 9 is a schematic structural diagram of a data storage device provided in the present application.
Detailed Description
The core of the application is to provide a data storage method, which can effectively reduce the response delay facing a host end in the data storage process, thereby realizing quick response; another core of the present application is to provide a data storage device, an apparatus and a computer readable storage medium, all having the above advantages.
In order to more clearly and completely describe the technical solutions in the embodiments of the present application, the technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
The embodiment of the application provides a data storage method.
Referring to fig. 1, fig. 1 is a schematic flow chart of a data storage method provided in the present application, and the data storage method may include the following steps S101 to S105.
S101: target data is determined according to the data storage request.
This step is intended to determine the target data, i.e., the data that needs to be stored. Specifically, when some data needs to be written into the storage system, the host (the initiator of the data storage request) may attach the data to a data storage request as target data and send it to the storage system; the storage system then obtains the target data by parsing the data storage request.
S102: and acquiring a target cache node, and determining a designated target cache region in the target cache node.
This step is intended to acquire the target cache node. Specifically, a cache pool may be created in advance in the storage system and divided into a plurality of cache regions, each cache region being used for storing one piece of target data. A cache node records the location information of each cache region, so that each cache node can be used to process one data storage request.
On this basis, after the target data is determined according to the data storage request, a cache node may be obtained first as a target cache node for processing the data storage request, and since the target cache node records location information of a cache region, the cache region may be used as a target cache region of the target data, so as to write the target data into the target cache region.
It should be noted that the target cache node should be a cache node in an idle state, in other words, the target cache region specified by the target cache node should be a cache region into which no target data is written.
S103: and carrying out stripe division on the target data to obtain each data stripe.
This step is intended to perform stripe division on the target data. It should be understood that physical disks store data in the form of strips: a disk is divided into a number of equal-size, address-adjacent blocks, which are referred to as strips and are usually regarded as the elements of a stripe, a stripe being the collection of location-related strips on different disks. Therefore, to facilitate the subsequent flushing of the target data from the cache region to the physical disks, the target data may be stripe-divided to obtain data stripes, so that the target data is stored in the target cache region in the form of data stripes.
Referring to fig. 2, fig. 2 is a schematic diagram of stripe division provided in the present application, in which 101 denotes the divided data stripes stripe0 to stripe2, strip1 to strip20 represent 20 data blocks, and parity1 to parity5 represent 5 check blocks.
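A stripe division like the one in Fig. 2 can be sketched as follows; the geometry (4 data blocks plus one empty check slot per stripe, giving 5 check blocks for 20 data blocks) is an assumption made to match the block counts mentioned above:

```python
def divide_into_stripes(blocks, data_per_stripe=4):
    """Group data blocks into stripes; each stripe carries an empty check slot
    that is filled only after the host has been acknowledged."""
    stripes = []
    for i in range(0, len(blocks), data_per_stripe):
        stripes.append({"data": blocks[i:i + data_per_stripe], "parity": None})
    return stripes

blocks = [f"strip{k}" for k in range(1, 21)]   # strip1 .. strip20
stripes = divide_into_stripes(blocks)
```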
S104: and responding to the strip division completion prompt, and writing each data strip into the target cache region.
This step is intended to write each data stripe into the target cache region. As described above, in existing implementations the write data is stripe-divided after being received, the system waits for the check blocks to be obtained through the XOR operation over the data blocks of each stripe, writes each stripe's check block into the cache once it is available, and only then responds to the host; obviously, this waiting increases the response delay of the WB write strategy. In this embodiment of the present application, by contrast, the stripe division process is monitored in real time, and once the stripe division completion prompt is detected, each data stripe is immediately written into the target cache region. In other words, the data stripes are written into the target cache region after the target data is divided but before the check block of each stripe is calculated (no check value has yet been written), so the check block of each data stripe is written into the target cache region as an empty block; afterwards, the check value is calculated within the target cache region and the check block is filled in.
S105: and feeding back a storage completion signal to an initiating end of the data storage request.
This step is intended to implement feedback of a storage completion signal, and specifically, immediately feeds back the storage completion signal to an originating end (i.e., the host end) of the data storage request after each data stripe is written into the target cache region. On the basis of S104, the feedback of the storage completion signal to the initiating terminal of the data storage request is executed before the check value is calculated and the check blocks are filled, so that the response can be replied to the initiating terminal without waiting for the check value processing, and the response delay is effectively reduced.
It can be seen that, according to the data storage method provided in the embodiment of the present application, after the target data to be written into the storage system is determined according to the data storage request, the target cache region corresponding to the target data can be determined in the target cache node, stripe division can be performed on the target data, the resulting data stripes can be written into the target cache region as soon as division is complete, and the storage completion signal can be fed back to the initiator without waiting for check block generation, effectively reducing the response delay facing the host end.
In an embodiment of the application, the obtaining the target cache node may include: acquiring an idle cache node from a global idle cache node linked list; taking the idle cache node as a target cache node; wherein the number of the data storage requests is the same as the number of the target cache nodes.
The embodiment of the application provides a method for acquiring a target cache node. Specifically, a global idle cache node linked list may be created in advance for managing idle cache nodes, and therefore, idle cache nodes may be directly called from the global idle cache node linked list as target cache nodes.
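A minimal model of acquiring a node from the global idle cache node linked list (here simplified to a FIFO list rather than a true doubly linked list; the class and method names are assumptions):

```python
class FreeNodeList:
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def acquire(self):
        """Unlink the head node and hand it out as the target cache node."""
        if not self.nodes:
            raise RuntimeError("no idle cache node available")
        return self.nodes.pop(0)

    def release(self, node):
        """A released node rejoins the tail of the free list."""
        self.nodes.append(node)

free_list = FreeNodeList(["node0", "node1"])
target = free_list.acquire()
```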
In an embodiment of the application, the determining a target cache region specified in the target cache node may include: performing field analysis on the target cache node to obtain a cache region field; and determining a target cache region according to the record information of the cache region field.
The embodiment of the application provides an implementation method for determining a target cache region. As described above, each cache region records its location information by one cache node, so after obtaining a cache node, it may perform field parsing to obtain a cache region field for storing the location information of the target cache region, and then determine the target cache region according to the record information in the cache region field, where of course, the record information is the location information of the target cache region.
In one embodiment of the present application, the target cache node may include a flush status field, a cache region field, a front pointer field, and a back pointer field; the brushing-down state field is used for recording the caching brushing-down state of the caching area, and the caching brushing-down state comprises the completed caching brushing-down and the incomplete caching brushing-down; the cache region field is used for recording the position information of the target cache region in the cache region; the front pointer field is used for recording node information of a previous cache node of the target cache node in the global idle cache node linked list; and the back pointer field is used for recording the node information of the next cache node of the target cache node in the global idle cache node linked list.
The embodiment of the present application provides a specific form of cache node, which may include four fields: a flush status field, a cache region field, a front pointer field, and a back pointer field. The flush status field records the cache flush status of the cache region, i.e., whether the cached data in the cache region has been flushed to the physical disk; the cache flush status therefore comprises cache flushing completed, indicating that the cached data in the cache region has been flushed to the physical disk, and cache flushing not completed, indicating that it has not yet been flushed. The cache region field records the position information of the target cache region in the cache pool. The front pointer field and the back pointer field identify the preceding and following cache nodes of the current cache node, guaranteeing a consistent ordering of cache nodes and hence a consistent order of processing data storage requests.
In an embodiment of the present application, after writing each data stripe into the target cache area, the method may further include: for each data stripe, calculating according to each data block in the data stripe to obtain a check block; and writing the check blocks into corresponding data stripes in the target cache region.
This step is intended to enable the computation of the check value to check the filling of the blocks. It should be noted that this step may be performed after the storage completion signal is fed back to the request end, or may be performed while the storage completion signal is fed back to the request end, but not before the storage completion signal is fed back to the request end. In the implementation process, for each divided data stripe, check block calculation may be performed according to each data block therein, and here, a method of performing an exclusive or operation on each data block may be specifically adopted to obtain a check block, and then the check block is written into a corresponding data stripe in the target cache area.
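The check block computation described here is an XOR over the stripe's data blocks; a byte-wise sketch:

```python
from functools import reduce

def compute_check_block(data_blocks):
    """XOR the data blocks of a stripe byte by byte to obtain the check block."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*data_blocks))

d0 = bytes([0b1010, 0b1100])
d1 = bytes([0b0110, 0b0011])
check = compute_check_block([d0, d1])   # -> bytes([0b1100, 0b1111])
```

The same XOR rebuilds a lost block: `compute_check_block([check, d1])` returns `d0`, which is what makes the check block useful for recovery.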
In an embodiment of the application, after writing the check blocks into the corresponding data stripes in the target cache region, the method may further include: acquiring an idle check element from the global check table as a target check element; searching the check status bit corresponding to the data stripe in the stripe check field of the target check element; and updating the check status bit to stripe check completed.
The embodiment of the application provides an implementation method for recording the check state of each data stripe, where the stripe check state indicates whether the corresponding data stripe has completed its check, and comprises stripe check completed and stripe check not completed. In the implementation process, a global check table may be created in advance, containing a plurality of check elements in the idle state; each check element is used to record the check state of every data stripe in one piece of target data, specifically through its stripe check field. On this basis, a free check element can be obtained from the global check table as the target check element corresponding to the target data, and its stripe check field located; the stripe check field includes multiple check status bits, each recording the check status of one data stripe. Thus, after the check block is written into the corresponding data stripe in the target cache region, the check status bit corresponding to that data stripe can be found in the stripe check field and updated to stripe check completed; for example, a check status bit of 0 indicates that the check of the corresponding data stripe is not completed, while 1 indicates that it is completed.
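Updating a single check status bit in the stripe check field can be done with a bit mask (an illustrative sketch; the field width is arbitrary here):

```python
def mark_stripe_checked(stripe_check_field, stripe_index):
    """Set the check status bit of one data stripe to 1 (stripe check completed)."""
    return stripe_check_field | (1 << stripe_index)

field = 0b000                        # three stripes, none checked yet
field = mark_stripe_checked(field, 1)   # stripe 1 finishes its check
```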
In one embodiment of the present application, the target check element may include a global check status field, a stripe check status field, and a pointer field; the global check state field is used for recording the global check state of the target data, and the global check state comprises a completed global check and an incomplete global check; the stripe check state field is used for recording the stripe check state of each data stripe, and the stripe check state comprises a stripe check completed state and a stripe check incomplete state; the pointer field is used for recording element information of a next check element of the target check element in the global check table.
The embodiment of the application provides a specific form of check element, which may include three fields: a global check status field, a stripe check status field, and a pointer field. The global check status field records the global check state of the target data, which indicates whether all data stripes in the target data have completed their checks; the stripe check status field records the check state of each individual data stripe; and the pointer field points to the next check element in the global check table.
In one embodiment of the present application, the method may further comprise: and when all the check status bits in the strip check field are strip check completion status, updating the global check status field to be global check completion.
As described above, the stripe check field in a check element records the check state of each data stripe in the target data, while the global check status field records the check state of all data stripes taken together; therefore, when all check status bits in the stripe check field are in the stripe check completed state, the global check status field may be updated to global check completed. For example, if 0 indicates check not completed and 1 indicates check completed, then when all check status bits in the stripe check field are 1 the global check status field is set to 1, and while any check status bit in the stripe check field still has the value 0 the global check status field remains 0.
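The roll-up into the global check status field then reduces to testing that every stripe bit is set (a sketch under the same bit-field assumption as above):

```python
def global_check_done(stripe_check_field, stripe_count):
    """Global check is completed only when every stripe's check bit is 1."""
    return stripe_check_field == (1 << stripe_count) - 1

partially_checked = global_check_done(0b101, 3)   # one stripe still pending
fully_checked = global_check_done(0b111, 3)       # all three stripes checked
```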
In an embodiment of the application, after updating the global check state field to global check completed, the method may further include: for each data stripe in the cache region, flushing each data block and check block in the data stripe to a physical disk.
This embodiment of the application implements the flushing of the cached data in the cache region to the physical disk. After the global check state field is updated to global check completed, that is, after the target data is confirmed to have completed the global check, each data block and check block in each data stripe can be flushed to the physical disk. In one possible implementation, the flushing operation may be executed together after all target data in the cache region has completed the global check.
In one embodiment of the present application, the flush status field is associated with bitmap metadata used for recording the flush status of each data stripe, the flush status including stripe flush completed and stripe flush not completed;
correspondingly, after flushing each data block and check block in the data stripe to the physical disk, the method may further include: searching the bitmap metadata for the flush status bit corresponding to the data stripe; and updating the flush status bit to stripe flush completed.
The embodiment of the application provides an implementation method for recording the flush status of each data stripe, where the flush status indicates whether the corresponding data stripe has been completely flushed, and includes stripe flush completed and stripe flush not completed. In the implementation process, bitmap metadata can be created in advance, comprising a plurality of flush status bits, each of which records the flush status of one data stripe. Therefore, after each data block and check block in a data stripe are flushed to the physical disk, the flush status bit corresponding to that data stripe can be found in the bitmap metadata and updated to stripe flush completed. For example, when the flush status bit is 0, the flushing of the corresponding data stripe is not completed, and when the flush status bit is 1, the flushing of the corresponding data stripe is completed.
The bitmap metadata is associated with the flush status field in the cache node and is used to drive the updating of that field.
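The per-stripe flush bookkeeping described above can be sketched as a pair of bitmap helpers. This is a minimal sketch under the assumption that the bitmap metadata is an array of 32-bit words with one bit per data stripe; function names are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

/* After a data stripe's blocks are flushed to the physical disk, set
 * its flush status bit in the bitmap metadata to 1 (stripe flush
 * completed); a bit of 0 means the stripe has not been flushed yet. */
void mark_stripe_flushed(uint32_t *bitmap, int stripe_idx) {
    bitmap[stripe_idx / 32] |= 1u << (stripe_idx % 32);
}

/* Query the flush status bit of one data stripe. */
bool stripe_flushed(const uint32_t *bitmap, int stripe_idx) {
    return (bitmap[stripe_idx / 32] >> (stripe_idx % 32)) & 1u;
}
```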
In one embodiment of the present application, the method may further comprise: when all the flush status bits in the bitmap metadata are stripe flush completed, updating the flush status field to cache flush completed.
As described above, the bitmap metadata records the flush status of each data stripe in the target data, the flush status field in the cache node records the flush status of all data stripes in the target data taken together, and the bitmap metadata is associated with the flush status field. Therefore, when all the flush status bits in the bitmap metadata are stripe flush completed, the flush status field can be updated to cache flush completed. For example, if 0 indicates that the flush is not completed and 1 indicates that the flush is completed, then the flush status field is set to 1 when all the flush status bits in the bitmap metadata are 1, and is set to 0 when any flush status bit in the bitmap metadata has the value 0.
In one embodiment of the present application, the method may further comprise: when the global check state field is updated to global check completed, releasing the target check element; and when the flush status field is updated to cache flush completed, releasing the target cache node.
This embodiment releases the target check element and the target cache node. When the global check state field is updated to global check completed, that is, when the target data is confirmed to have completed the global check, the target check element is released so that it can serve a subsequent new data storage request; when the flush status field is updated to cache flush completed, that is, when the target data is confirmed to have been fully flushed, the target cache node is released so that it can likewise serve a subsequent new data storage request.
Based on the foregoing embodiments, another data storage method is provided in the embodiments of the present application.
First, please refer to fig. 3, fig. 3 is a schematic structural diagram of a cache module provided in the present application, where the cache module is a cache module in a RAID card, and includes an upper layer cache 210, a lower layer cache 211, and a cache area 212. Wherein:
1. the upper-level cache 210:
the upper-level cache 210 is used for maintaining a global check table, and the global check table may be managed in a metadata organization manner combining a singly linked list and a bitmap.
Referring to fig. 4, fig. 4 is a schematic structural diagram of a global check table provided in the present application, where for each check element in the global check table:
(1) The parity_ok field (global check state field) indicated by 301 is of bool type; when check blocks have been calculated for all stripes obtained by splitting the target data, the parity_ok field is assigned true, otherwise it is assigned false.
(2) The stripe[32] field (stripe check state field) indicated by 302 is an array of int type. Each int contains 32 bits, and one bit corresponds to one stripe: if the check block of that stripe has been calculated, the corresponding bit is set to 1, otherwise it is 0 (indicated by 304). Therefore, when the bits corresponding to all stripes are 1, the parity_ok field is assigned true, otherwise it is assigned false.
(3) The nextStripe field (pointer field) indicated by 303 points to the next check element in the global check table.
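The three fields of fig. 4 can be sketched as a C structure. This is an illustrative sketch: the field names (parity_ok, stripe, nextStripe) come from the document, while the exact types are assumptions consistent with the description.

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the check element in fig. 4. */
struct check_element {
    bool parity_ok;                   /* 301: global check state field  */
    uint32_t stripe[32];              /* 302: stripe check state bits,
                                       *      1 bit per stripe          */
    struct check_element *nextStripe; /* 303: next element in the table */
};

/* Reset an element to its initial state: no stripe checked yet. */
void check_element_init(struct check_element *e) {
    e->parity_ok = false;
    for (int i = 0; i < 32; i++)
        e->stripe[i] = 0;
    e->nextStripe = 0;
}
```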
2. The lower layer cache 211:
the lower layer cache 211 manages cache nodes in a doubly-linked-list metadata organization manner. One cache node manages and controls one host I/O request (data storage request), and the doubly linked list metadata connects the cache nodes together using the front pointer field and back pointer field in each cache node. If the target data has all been flushed to the physical disk, the cache node (target cache node) that manages and controls this host I/O request is released back to a global idle cache node linked list, so the lower layer cache 211 also maintains a global idle cache node linked list. When a new host I/O request arrives, the RAID card controller takes an idle cache node out of the global idle cache node linked list, adds it as the target cache node into the cache node linked list managed in the doubly-linked-list metadata organization manner, and releases the target cache node back into the global idle cache node linked list after all target data corresponding to the host I/O request has been flushed to the physical disk.
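The take/release cycle on the global idle cache node linked list can be sketched as follows. This is a minimal sketch with only the two pointer fields; the structure and function names are illustrative, not the patent's implementation.

```c
#include <stddef.h>

/* Minimal cache node carrying only the two linkage pointers. */
struct cache_node {
    struct cache_node *pre_pointer;  /* previous node */
    struct cache_node *next_pointer; /* next node */
};

/* Take one idle node from the head of the free list (NULL if empty),
 * as done when a new host I/O request arrives. */
struct cache_node *take_idle_node(struct cache_node **free_head) {
    struct cache_node *n = *free_head;
    if (n) {
        *free_head = n->next_pointer;
        if (*free_head)
            (*free_head)->pre_pointer = NULL;
        n->next_pointer = n->pre_pointer = NULL;
    }
    return n;
}

/* Release a node back to the head of the free list after the target
 * data has been fully flushed to the physical disk. */
void release_node(struct cache_node **free_head, struct cache_node *n) {
    n->pre_pointer = NULL;
    n->next_pointer = *free_head;
    if (*free_head)
        (*free_head)->pre_pointer = n;
    *free_head = n;
}
```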
Referring to fig. 5, fig. 5 is a schematic structural diagram of a cache node provided in the present application, wherein:
(1) The stall_control field (the flush status field) indicated by 401 is of bool type and is associated with bitmap metadata (405). A bit in the bitmap metadata corresponds to a stripe: a bit of 1 indicates that the corresponding stripe has been flushed to the physical disk, and a bit of 0 indicates that it has not. When all the stripes have been flushed to the physical disk, that is, when the corresponding bits are all 1, the stall_control field is set to true; otherwise it is set to false.
(2) The cache_ptr field (cache region field) indicated by 402 points to the specific region of the cache area (such as the cache area indicated by 212 in fig. 3) where the target data is to be temporarily stored;
(3) The pre_pointer field (front pointer field) indicated at 403 is a front pointer to the previous cache node;
(4) The next_pointer field (back pointer field) indicated at 404 is a back pointer to the next cache node.
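The cache node of fig. 5 can be sketched as a C structure together with the rule that derives stall_control from the bitmap metadata. Field names follow the document; the types and the bitmap representation (32-bit words, one bit per stripe) are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of the cache node in fig. 5. */
struct raid_cache_node {
    bool stall_control;   /* 401: flush status field, true = all flushed */
    uint32_t *bitmap;     /* 405: associated bitmap metadata, 1 bit/stripe */
    void *cache_ptr;      /* 402: target region inside the cache area */
    struct raid_cache_node *pre_pointer;  /* 403: previous cache node */
    struct raid_cache_node *next_pointer; /* 404: next cache node */
};

/* stall_control may be set to true only when every bitmap bit is 1,
 * i.e. every stripe has been flushed to the physical disk. */
bool all_stripes_flushed(const uint32_t *bitmap, int nr_stripes) {
    for (int i = 0; i < nr_stripes; i++)
        if (!((bitmap[i / 32] >> (i % 32)) & 1u))
            return false;
    return true;
}
```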
Further, referring to fig. 6, fig. 6 is a schematic structural diagram of a global idle cache node linked list provided in the present application, and an implementation process for implementing logic control based on a cache node includes:
indicated at 500 is a global idle cache node linked list maintained by the lower-layer cache. When three host I/O write requests arrive, the RAID card controller fetches three cache nodes from the head of the global idle cache node linked list, one cache node per host I/O write request; the global idle cache node linked list after the three cache nodes are fetched is therefore shown as 510. Since the data blocks and check blocks of the stripes, after being written into the cache, have not yet been flushed to the physical disk by the worker threads in the thread pool, the bits in the bitmap metadata associated with the stall_control field in each cache node are 0 (521 shown in fig. 6).
Finally, referring to fig. 7, fig. 7 is a schematic flow chart of another data storage method provided in the present application, and the implementation flow thereof may include:
the first step is as follows: the host sends a host I/O request to the RAID card, and the driver of the RAID card receives and parses the command.
The second step: according to the parsed command parameters, apply for the check element corresponding to the host I/O request from the global check table maintained by the upper-layer cache; similarly, take a cache node out of the global idle cache node linked list maintained by the lower-layer cache and place it into the cache node linked list managed in the doubly-linked-list metadata organization manner (520 shown in fig. 6).
The third step: the main control thread of the RAID card controller divides the target data into stripes. Since no check block has yet been calculated for any of the divided stripes, the bits corresponding to all stripes in the stripe[32] field of the check element are set to 0. Since the parity_ok field is assigned true only after check blocks have been calculated for all of the divided stripes, the parity_ok field in the check element is assigned false in this step.
The fourth step: write the data of each stripe, block by block, into the specific region (target cache region) of the cache area pointed to by the cache_ptr field in the cache node.
The fifth step: the RAID card immediately sends a data write complete signal to the host (i.e., immediately responds to the host).
The sixth step: the worker threads in the thread pool XOR the data blocks of each stripe temporarily stored in the cache to obtain the stripe's check block, and then write the check block into the cache. Once the check blocks of all stripes have been calculated, the bits corresponding to all stripes in the stripe[32] field of the check element are set to 1, and the parity_ok field in the check element is assigned true. Since the data blocks and check blocks of the stripes have not yet been flushed to the physical disk, each bit in the bitmap metadata associated with the stall_control field in the cache node remains 0 and the stall_control field is assigned false.
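The XOR step can be sketched as follows: the check block is the byte-wise XOR of all data blocks in the stripe, as in RAID-5-style parity. Function and parameter names are illustrative.

```c
#include <stddef.h>
#include <stdint.h>

/* Sixth-step sketch: derive the check block of one stripe by XOR-ing
 * all of its data blocks, byte by byte. */
void compute_check_block(const uint8_t *const *data_blocks, int nr_blocks,
                         size_t block_len, uint8_t *check_block) {
    for (size_t i = 0; i < block_len; i++) {
        uint8_t p = 0;
        for (int b = 0; b < nr_blocks; b++)
            p ^= data_blocks[b][i]; /* XOR across the stripe's data blocks */
        check_block[i] = p;
    }
}
```

A useful property of this choice is that any single lost block of the stripe can be rebuilt by XOR-ing the surviving blocks with the check block.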
The seventh step: the worker threads in the thread pool flush the data blocks and check blocks temporarily stored in the cache to the physical disk. Because both the data blocks and the check blocks have now been flushed to the physical disk, in this step each bit in the bitmap metadata associated with the stall_control field in the cache node is set to 1, and the stall_control field is assigned true.
The eighth step: jointly judge whether the parity_ok field in the check element is true and whether the stall_control field in the cache node is true; if either is not true, return to the sixth step.
The ninth step: if the judgment passes, release the check element resource back into the global check table, and similarly release the cache node managed in the doubly-linked-list metadata organization manner back into the global idle cache node linked list.
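The joint condition of the eighth and ninth steps can be stated compactly: resources are released only when both the global check and the cache flush are complete. A trivial sketch, with illustrative names:

```c
#include <stdbool.h>

/* Eighth/ninth-step sketch: the check element and the cache node may
 * be returned to their free lists only when parity_ok (global check
 * completed) and stall_control (cache flush completed) are both true. */
bool may_release(bool parity_ok, bool stall_control) {
    return parity_ok && stall_control;
}
```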
It can be seen that, according to the data storage method provided in the embodiment of the present application, after target data to be written into a storage system is determined according to a data storage request, a target cache region corresponding to the target data may be determined in a target cache node, and stripe division may be performed on the target data; the divided data stripes are written into the target cache region, and a storage completion signal is fed back to the initiating end without waiting for check calculation and flushing to complete.
The embodiment of the application provides a data storage device.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a data storage device provided in the present application, where the data storage device may include:
the determining module 1 is used for determining target data according to the data storage request;
the acquisition module 2 is used for acquiring a target cache node and determining a designated target cache area in the target cache node;
the dividing module 3 is used for carrying out stripe division on the target data to obtain each data stripe;
the writing module 4 is used for writing each data stripe into the target cache region in response to the stripe division completion prompt;
and the feedback module 5 is used for feeding back a storage completion signal to the initiating end of the data storage request.
It can be seen that, after determining target data to be written into a storage system according to a data storage request, the data storage device provided in the embodiment of the present application may first determine a target cache region corresponding to the target data in a target cache node, and perform stripe division on the target data, in this process, a stripe division condition may be monitored in real time, and once a stripe division completion prompt is monitored, each data stripe obtained after division is immediately written into the target cache region, and a storage completion signal is directly fed back to an initiator.
In an embodiment of the present application, the obtaining module 2 may be specifically configured to obtain an idle cache node from a global idle cache node linked list; taking the idle cache node as a target cache node; wherein the number of the data storage requests is the same as the number of the target cache nodes.
In an embodiment of the present application, the obtaining module 2 may be specifically configured to perform field parsing on a target cache node to obtain a cache region field; and determining a target cache region according to the record information of the cache region field.
In one embodiment of the present application, the target cache node may include a flush status field, a cache region field, a front pointer field, and a back pointer field; the flush status field is used for recording the flush status of the cache region, the cache flush status including cache flush completed and cache flush not completed; the cache region field is used for recording the position information of the target cache region within the cache area; the front pointer field is used for recording node information of the previous cache node of the target cache node in the global idle cache node linked list; the back pointer field is used for recording node information of the next cache node of the target cache node in the global idle cache node linked list.
In an embodiment of the present application, the apparatus may further include a check module, configured to, after the data stripes are written into the target cache region, calculate, for each data stripe, a check block from the data blocks in the data stripe; and write the check block into the corresponding data stripe in the target cache region.
In an embodiment of the present application, the apparatus may further include a stripe check state updating module, configured to, after the check block is written into the corresponding data stripe in the target cache region, obtain an idle check element in the global check table as the target check element; search the stripe check field of the target check element for the check status bit corresponding to the data stripe; and update the check status bit to stripe check completed.
In one embodiment of the present application, the target check element may include a global check status field, a stripe check status field, and a pointer field; the global check state field is used for recording the global check state of the target data, and the global check state comprises a completed global check and an incomplete global check; the stripe check state field is used for recording the stripe check state of each data stripe, and the stripe check state comprises a stripe check completed state and a stripe check incomplete state; the pointer field is used for recording element information of a next check element of the target check element in the global check table.
In an embodiment of the present application, the apparatus may further include a global check state updating module, configured to update the global check state field to that the global check is completed when all check state bits in the stripe check field are stripe check completed states.
In an embodiment of the present application, the apparatus may further include a flushing module, configured to flush, to the physical disk, each data chunk and parity chunk in the data stripe for each data stripe in the cache region after the global parity status field is updated to be complete.
In one embodiment of the present application, the flush status field is associated with bitmap metadata used for recording the flush status of each data stripe, the flush status including stripe flush completed and stripe flush not completed;
correspondingly, the apparatus may further include a stripe flush status updating module, configured to search the bitmap metadata for the flush status bit corresponding to the data stripe after the data blocks and check blocks in the data stripe are flushed to the physical disk; and update the flush status bit to stripe flush completed.
In an embodiment of the present application, the apparatus may further include a cache flush status update module, configured to update the flush status field to be that the cache flush is completed when all the flush status bits in the bitmap metadata are that the stripe flush is completed.
In an embodiment of the present application, the apparatus may further include a release module, configured to release the target check element when the global check status field is updated to global check completed; and release the target cache node when the flush status field is updated to cache flush completed.
For the introduction of the apparatus provided in the embodiment of the present application, please refer to the method embodiment described above, which is not described herein again.
The embodiment of the application provides a data storage device.
Referring to fig. 9, fig. 9 is a schematic structural diagram of a data storage device provided in the present application, where the data storage device may include:
a memory for storing a computer program;
a processor, when executing the computer program, may implement the steps of any of the data storage methods described above.
As shown in fig. 9, which is a schematic diagram of a structure of a data storage device, the data storage device may include: a processor 10, a memory 11, a communication interface 12 and a communication bus 13. The processor 10, the memory 11 and the communication interface 12 all communicate with each other through a communication bus 13.
In the embodiment of the present application, the processor 10 may be a Central Processing Unit (CPU), an application specific integrated circuit, a digital signal processor, a field programmable gate array or other programmable logic device, etc.
The processor 10 may call a program stored in the memory 11, and in particular, the processor 10 may perform operations in an embodiment of the data storage method.
The memory 11 is used for storing one or more programs; a program may include program code, and the program code includes computer operation instructions. In this embodiment, the memory 11 stores at least a program implementing the following functions:
determining target data according to the data storage request;
acquiring a target cache node and determining a designated target cache region in the target cache node;
carrying out stripe division on target data to obtain each data stripe;
writing each data stripe into a target cache region in response to a stripe division completion prompt;
and feeding back a storage completion signal to an initiating end of the data storage request.
In one possible implementation, the memory 11 may include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created during use.
Further, the memory 11 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device or other non-volatile solid state storage device.
The communication interface 12 may be an interface of a communication module for connecting with other devices or systems.
Of course, it should be noted that the structure shown in fig. 9 does not constitute a limitation of the data storage device in the embodiment of the present application, and in practical applications, the data storage device may include more or less components than those shown in fig. 9, or some components may be combined.
The embodiment of the application provides a computer readable storage medium.
The computer-readable storage medium provided in the embodiments of the present application stores a computer program, and when the computer program is executed by a processor, the computer program can implement the steps of any of the data storage methods described above.
The computer-readable storage medium may include: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
For introduction of the computer-readable storage medium provided in the embodiment of the present application, please refer to the above method embodiment, which is not described herein again.
The embodiments are described in a progressive manner in the specification, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other. The device disclosed by the embodiment corresponds to the method disclosed by the embodiment, so that the description is simple, and the relevant points can be referred to the method part for description.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the components and steps of the various examples have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The technical solutions provided by the present application are described in detail above. The principles and embodiments of the present application are explained herein using specific examples, which are provided only to help understand the method and the core idea of the present application. It should be noted that, for those skilled in the art, without departing from the principle of the present application, several improvements and modifications can be made to the present application, and these improvements and modifications also fall into the protection scope of the present application.

Claims (15)

1. A method of storing data, the method comprising:
determining target data according to the data storage request;
acquiring a target cache node, and determining a designated target cache region in the target cache node;
performing stripe division on the target data to obtain each data stripe;
responding to a stripe division completion prompt, and writing each data stripe into the target cache region;
and feeding back a storage completion signal to an initiating end of the data storage request.
2. The method of claim 1, wherein obtaining the target cache node comprises:
acquiring an idle cache node from a global idle cache node linked list;
taking the idle cache node as the target cache node;
wherein the number of the data storage requests is the same as the number of the target cache nodes.
3. The method of claim 2, wherein determining the specified target cache region in the target cache node comprises:
performing field analysis on the target cache node to obtain a cache region field;
and determining the target cache region according to the record information of the cache region field.
4. The method of claim 3, wherein the target cache node comprises a flush status field, a cache region field, a front pointer field, a back pointer field;
the flushing status field is used for recording the cache flushing status of the cache region, and the cache flushing status comprises cache flushing completed and cache flushing not completed;
the cache region field is used for recording the position information of the target cache region in the cache region;
the front pointer field is used for recording node information of a previous cache node of the target cache node in the global idle cache node linked list;
and the back pointer field is used for recording the node information of the next cache node of the target cache node in the global idle cache node linked list.
5. The method of claim 4, wherein after writing each of the data stripes to the target cache region, further comprising:
for each data stripe, calculating according to each data block in the data stripe to obtain a check block;
and writing the check blocks into corresponding data stripes in the target cache region.
6. The method of claim 5, wherein after writing the parity chunks into the corresponding data stripes in the target cache region, further comprising:
acquiring an idle check element as a target check element in the global check table;
searching a check status bit corresponding to the data stripe in a stripe check field of the target check element;
and updating the check status bit to be that the stripe check is completed.
7. The method of claim 6, wherein the target check element comprises a global check state field, a stripe check state field, a pointer field;
the global check state field is used for recording a global check state of the target data, and the global check state comprises a completed global check and an incomplete global check;
the stripe check state field is used for recording the stripe check state of each data stripe, and the stripe check state comprises a stripe check completed state and a stripe check incomplete state;
the pointer field is used for recording element information of a next check element of the target check element in the global check table.
8. The method of claim 7, further comprising:
and when all the check status bits in the stripe check field are in the stripe check completed state, updating the global check state field to be that the global check is completed.
9. The method of claim 8, wherein after the updating the global check status field to the global check is completed, further comprising:
for each data stripe in the cache region, flushing each data block and check block in the data stripe to a physical disk.
10. The method of claim 9, wherein the flush status field is associated with bitmap metadata for recording a stripe flush status for each of the data stripes, the stripe flush status comprising stripe flush completed and stripe flush not completed;
correspondingly, after the flushing of each data partition and the parity partition in the data stripe to a physical disk, the method further includes:
searching a flush status bit corresponding to the data stripe in the bitmap metadata;
updating the flush status bit to be that the stripe flushing is completed.
11. The method of claim 10, further comprising:
when all of the flush status bits in the bitmap metadata are that the stripe flushing is completed, updating the flush status field to that the cache flushing is completed.
12. The method of claim 11, further comprising:
when the global check state field is updated to be that the global check is completed, releasing the target check element;
and when the flush status field is updated to be that the cache flushing is completed, releasing the target cache node.
13. A data storage management apparatus, the apparatus comprising:
the determining module is used for determining target data according to the data storage request;
the system comprises an acquisition module, a cache module and a cache module, wherein the acquisition module is used for acquiring a target cache node and determining a designated target cache region in the target cache node;
the dividing module is used for performing stripe division on the target data to obtain each data stripe;
a write-in module, configured to write each data stripe into the target cache region in response to a stripe division completion prompt;
and the feedback module is used for feeding back a storage completion signal to the initiating end of the data storage request.
14. A data storage device, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the data storage method of any one of claims 1 to 12 when executing the computer program.
15. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, which computer program, when being executed by a processor, carries out the steps of the data storage method according to any one of claims 1 to 12.
CN202211508013.0A 2022-11-29 2022-11-29 Data storage method and related equipment Active CN115543871B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211508013.0A CN115543871B (en) 2022-11-29 2022-11-29 Data storage method and related equipment

Publications (2)

Publication Number Publication Date
CN115543871A true CN115543871A (en) 2022-12-30
CN115543871B CN115543871B (en) 2023-03-10

Family

ID=84722437

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211508013.0A Active CN115543871B (en) 2022-11-29 2022-11-29 Data storage method and related equipment

Country Status (1)

Country Link
CN (1) CN115543871B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722340A (en) * 2012-04-27 2012-10-10 华为技术有限公司 Data processing method, apparatus and system
CN109213420A (en) * 2017-06-29 2019-01-15 杭州海康威视数字技术股份有限公司 Date storage method, apparatus and system
CN109376100A (en) * 2018-11-05 2019-02-22 浪潮电子信息产业股份有限公司 A kind of caching wiring method, device, equipment and readable storage medium storing program for executing
CN110297601A (en) * 2019-06-06 2019-10-01 清华大学 Solid state hard disk array construction method, electronic equipment and storage medium
CN110737475A (en) * 2019-09-29 2020-01-31 上海高性能集成电路设计中心 instruction buffer filling filter
CN111399764A (en) * 2019-12-25 2020-07-10 杭州海康威视系统技术有限公司 Data storage method, data reading device, data storage equipment and data storage medium
CN111857552A (en) * 2019-04-30 2020-10-30 伊姆西Ip控股有限责任公司 Storage management method, electronic device and computer program product
CN111930307A (en) * 2020-07-30 2020-11-13 北京浪潮数据技术有限公司 Data reading method, device and equipment and computer readable storage medium
US20210209231A1 (en) * 2019-09-25 2021-07-08 Shift5, Inc. Passive monitoring and prevention of unauthorized firmware or software upgrades between computing devices
CN113986604A (en) * 2021-11-16 2022-01-28 杭州海康威视系统技术有限公司 Data storage method and data storage device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Song Zhenlong et al.: "Reconfigurable Strategy for Flash Memory Arrays", Journal of National University of Defense Technology *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116450053A (en) * 2023-06-13 2023-07-18 苏州浪潮智能科技有限公司 Data storage method, device, system, electronic equipment and storage medium
CN116450053B (en) * 2023-06-13 2023-09-05 苏州浪潮智能科技有限公司 Data storage method, device, system, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
US20160026403A1 (en) Merging an out of synchronization indicator and a change recording indicator in response to a failure in consistency group formation
CN107066498B (en) Key value KV storage method and device
CN107643880A (en) The method and device of file data migration based on distributed file system
CN109582213B (en) Data reconstruction method and device and data storage system
CN107122130B (en) Data deduplication method and device
CN109614045B (en) Metadata dropping method and device and related equipment
CN115543871B (en) Data storage method and related equipment
US9946721B1 (en) Systems and methods for managing a network by generating files in a virtual file system
CN109542719A (en) Thread state monitoring method and device, computer equipment and storage medium
CN108829342B (en) Log storage method, system and storage device
CN112148218A (en) Method, device and equipment for storing check data of disk array and storage medium
CN116107516B (en) Data writing method and device, solid state disk, electronic equipment and storage medium
US20190347165A1 (en) Apparatus and method for recovering distributed file system
CN111291062B (en) Data synchronous writing method and device, computer equipment and storage medium
CN115981572A (en) Data consistency verification method and device, electronic equipment and readable storage medium
US20190317686A1 (en) Method, apparatus, device and storage medium for processing data location of storage device
CN117591009A (en) Data management method, storage device and server
CN110865901B (en) Method and device for building EC (embedded control) strip
CN109254870B (en) Data backup method and device
CN107329702B (en) Self-simplification metadata management method and device
CN112231290A (en) Method, device and equipment for processing local log and storage medium
CN115599589B (en) Data recovery method and related device
CN110402436B (en) Method and device for processing pre-written log
CN117056363B (en) Data caching method, system, equipment and storage medium
CN117170942B (en) Database backup method based on file system snapshot and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant