CN107704336B - Data storage method and device - Google Patents

Data storage method and device

Info

Publication number
CN107704336B
Authority
CN
China
Prior art keywords
data
fragments
group
data fragments
data center
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710900323.XA
Other languages
Chinese (zh)
Other versions
CN107704336A (en)
Inventor
姚唐仁
王晨
冯玮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201710900323.XA
Publication of CN107704336A
Application granted
Publication of CN107704336B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/08Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F11/10Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F11/1008Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices
    • G06F11/1044Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices with specific ECC/EDC distribution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/08Error detection or correction by redundancy in data representation, e.g. by using checking codes
    • G06F11/10Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F11/1008Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices
    • G06F11/1048Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's in individual solid state devices using arrangements adapted for a specific error detection or correction feature
    • G06F11/106Correcting systematically all correctable errors, i.e. scrubbing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a data storage method and device, relates to the field of storage technologies, and can reduce network consumption and improve data storage performance. The method includes: acquiring data fragments and local check fragments of first data, where the data fragments of the first data include N data fragments, N is greater than or equal to 2, and N is an integer; storing J data fragments of the N data fragments and the local check fragments corresponding to the J data fragments in different storage devices in a first data center (DC), temporarily storing the remaining N-J data fragments in the first DC, and sending each group of data fragments of the N-J data fragments, together with the local check fragment corresponding to that group, to the second DC corresponding to that group, where each group of data fragments corresponds to one second DC.

Description

Data storage method and device
Technical Field
The present invention relates to the field of storage technologies, and in particular, to a data storage method and apparatus.
Background
In the field of storage technologies, an erasure coding (EC) redundancy method is generally used to store data in order to improve data security and reduce storage cost. In EC redundancy, the data to be stored is fragmented to obtain a plurality of data fragments, and check computation is performed on these data fragments to obtain a plurality of corresponding check fragments. The resulting data fragments and check fragments are then stored in data centers (DCs). When some of the stored data fragments are corrupted, the corrupted data fragments can be recovered from the check fragments and the uncorrupted data fragments.
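The recovery idea can be illustrated with a minimal sketch. The snippet below is not the patent's algorithm: it uses a single byte-wise XOR parity slice as a simplified stand-in for a real erasure code (production systems typically use codes such as Reed-Solomon with several check fragments), and the function names are illustrative.

```python
from functools import reduce

def make_slices(data: bytes, n: int) -> list:
    """Split data into n equal-length data slices (zero-padded if necessary)."""
    size = -(-len(data) // n)                 # ceiling division
    data = data.ljust(size * n, b"\0")
    return [data[i * size:(i + 1) * size] for i in range(n)]

def xor_parity(slices) -> bytes:
    """One check slice: the byte-wise XOR of all data slices."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), slices)

def recover(slices_with_gap, parity: bytes) -> bytes:
    """Rebuild a single missing slice from the parity and the surviving slices."""
    survivors = [s for s in slices_with_gap if s is not None]
    return xor_parity(survivors + [parity])

data_slices = make_slices(b"data to be stored with EC redundancy", 4)
check_slice = xor_parity(data_slices)
lost = data_slices[2]
data_slices[2] = None                          # simulate a corrupted slice
assert recover(data_slices, check_slice) == lost
```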
When storing fragments (including data fragments and check fragments), two methods are commonly used at present. One is EC redundant storage in which data slices are stored across DCs, i.e., the data slices of one piece of data are stored separately in multiple DCs. For example, as shown in FIG. 1, take 3 DCs as an example. After receiving data 1, DC1 performs EC redundancy on data 1 to obtain data slices 1-6 and check slices. The check slices include local check slices 1-3 and global check slices 1-3. Data slices 1-3 and the corresponding local check slice 1 are stored in DC1; when any one of data slices 1-3 is corrupted, local check slice 1 and the uncorrupted data slices are used to reconstruct the corrupted data slice. Data slices 4-6 and the corresponding local check slice 2 are stored in DC2; when any one of data slices 4-6 is corrupted, local check slice 2 and the uncorrupted data slices are used to reconstruct the corrupted data slice. Global check slices 1-3 and the corresponding local check slice 3 are stored in DC3; when any one of global check slices 1-3 is corrupted, local check slice 3 and the uncorrupted global check slices are used to reconstruct the corrupted global check slice. Global check slices 1-3 are used to reconstruct corrupted data slices from the uncorrupted data slices when any 3 or fewer of data slices 1-6 are corrupted. In this storage method, since the data slices of data 1 are written to DC1 and DC2 through DC1, they are also read through DC1. That is, each time data 1 is read, DC1 must read data slices 4-6 from DC2, so the network consumption caused by cross-DC reading is relatively large.
The other is EC redundant storage based on exclusive OR (XOR). For example, as shown in FIG. 2, take 3 data centers as an example. Data slices 7-9 of data 2 and the corresponding local check slice 4 are stored in DC1, and data slices A, B, and C of data 3 and the corresponding local check slice 5 are stored in DC2, where data 2 and data 3 are independent of each other. The XOR slices of data 2 and data 3, together with the corresponding local check slice 6, are stored in DC3. The XOR slices stored in DC3 include the XOR slice of data slice 7 and data slice A (used to recover data slice 7 or data slice A when DC1 or DC2 fails), the XOR slice of data slice 8 and data slice B (used to recover data slice 8 or data slice B when DC1 or DC2 fails), and the XOR slice of data slice 9 and data slice C (used to recover data slice 9 or data slice C when DC1 or DC2 fails). In this storage scheme, since all the data slices of one piece of data are stored in one DC, no data needs to be transmitted between DCs when the data is read, so the cross-DC network consumption on reads is zero. However, when data is deleted, for example data 3, data 2 and the new data to be XORed with data 2 need to be sent to DC3 to regenerate the corresponding XOR slices. Therefore, deleting data causes large network consumption due to cross-DC transmission.
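For concreteness, the following sketch shows how such cross-data XOR slices behave; the slice contents and helper names are made up for illustration and are not taken from the patent.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Byte-wise XOR of two equal-length slices."""
    return bytes(x ^ y for x, y in zip(a, b))

data2_slices = [b"slice-7", b"slice-8", b"slice-9"]   # stored in DC1
data3_slices = [b"slice-A", b"slice-B", b"slice-C"]   # stored in DC2
xor_slices = [xor_bytes(a, b) for a, b in zip(data2_slices, data3_slices)]  # DC3

# If DC1 fails, data slice 7 is recovered from its XOR slice and data slice A:
assert xor_bytes(xor_slices[0], data3_slices[0]) == data2_slices[0]

# Deleting data 3, however, requires regenerating every XOR slice in DC3 from
# data 2 and the new data that replaces data 3, which is a cross-DC transfer.
```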
It can be seen that, with the currently common data storage manners, either the network consumption is large when reading data or the network consumption is large when deleting data.
Disclosure of Invention
The application provides a data storage method and device, which can reduce network consumption and improve data storage performance.
In a first aspect, the present application provides a data storage method, including: acquiring data fragments and local check fragments of first data, where the data fragments of the first data include N data fragments, N is greater than or equal to 2, and N is an integer; storing J data fragments of the N data fragments and the local check fragments corresponding to the J data fragments in different storage devices in a first data center DC, temporarily storing the remaining N-J data fragments in the first DC, and sending each group of data fragments of the N-J data fragments, together with the local check fragment corresponding to that group, to the second DC corresponding to that group; the N-J data fragments are divided into at least one group, each group of data fragments includes at least one data fragment, each group of data fragments corresponds to one second DC, J is greater than or equal to 1, and J is an integer.
In the data storage method provided by the application, since all the data fragments of the first data are present in the first DC, they can be read directly from the first DC when the first data is read. When the first data is deleted, the data fragments of the first data stored in the first DC and in each second DC can be deleted directly and the corresponding storage space released, without transmitting data fragments of the first data between the DCs. With the data storage method provided by the application, no data fragments need to be transmitted between the first DC and the second DCs, regardless of whether data is read or deleted, so the network consumption caused by cross-DC reading or deletion is avoided and the data storage performance is improved.
Optionally, the method further includes: and deleting the N-J data fragments temporarily stored in the first DC when the total occupation ratio is larger than or equal to a preset first threshold value, wherein the total occupation ratio is the proportion of the total capacity occupied by the data fragments temporarily stored in the first DC to the capacity of the first DC.
Optionally, before deleting the N-J data slices temporarily stored in the first DC, the method further includes: judging whether the access frequency of the first data is less than or equal to a preset first access threshold or not; the deleting the N-J data slices temporarily stored in the first DC includes: and if the access frequency is less than or equal to the first access threshold, deleting the N-J data slices temporarily stored in the first DC.
Based on the two alternative modes, the consumption of the extra data storage space can be reduced.
Optionally, if the access frequency is less than or equal to the first access threshold, after deleting the N-J data slices temporarily stored in the first DC, the method further includes:
when the access frequency is greater than or equal to a preset second access threshold and the total occupancy is less than the first threshold, copying the N-J data fragments from the second DCs in which the N-J data fragments are stored; the copied N-J data fragments are temporarily stored in the first DC.
Based on this optional mode, when the N-J data fragments become hot data, they can again be temporarily stored in the first DC, so that the problem of excessive network consumption caused by frequent cross-DC reading is avoided.
Optionally, after acquiring the data fragment and the local verification fragment of the first data, the method further includes: judging whether the total ratio is greater than or equal to a preset second threshold value; storing J data fragments of the N data fragments and local check fragments corresponding to the J data fragments in different storage devices in a first data center DC, temporarily storing the remaining N-J data fragments in the first DC, and sending each group of data fragments of the N-J data fragments and the local check fragments corresponding to the group of data fragments to a second DC corresponding to the group of data fragments, including: when the total proportion is determined to be smaller than the second threshold value, J data fragments of the N data fragments and local check fragments corresponding to the J data fragments are stored in different storage devices in a first data center DC, the remaining N-J data fragments are temporarily stored in the first DC, and each group of data fragments of the N-J data fragments and the local check fragment corresponding to the group of data fragments are sent to a second DC corresponding to the group of data fragments.
Based on this optional mode, the data storage device can determine which storage mode to apply based on the total occupancy of the first DC, thereby improving the storage performance of the first DC.
Optionally, after storing J data fragments of the N data fragments and local check fragments corresponding to the J data fragments in different storage devices in the first DC, and temporarily storing the remaining N-J data fragments in the first DC, the method further includes: the client server reads the stored J data fragments and the temporarily stored N-J data fragments from the first DC; the client server combines the J data fragments and the N-J data fragments into first data.
In a second aspect, the present application provides a data storage device, including: an acquiring unit, configured to acquire data fragments and local check fragments of first data, where the data fragments of the first data include N data fragments, N is greater than or equal to 2, and N is an integer; and a storage unit, configured to store J data fragments of the N data fragments and the local check fragments corresponding to the J data fragments in different storage devices in a first data center DC, temporarily store the remaining N-J data fragments in the first DC, and send each group of data fragments of the N-J data fragments and the local check fragment corresponding to that group to the second DC corresponding to that group; the N-J data fragments are divided into at least one group, each group of data fragments includes at least one data fragment, each group of data fragments corresponds to one second DC, J is greater than or equal to 1, and J is an integer.
Optionally, the data storage device further includes a deleting unit. The deleting unit is configured to delete the N-J data fragments temporarily stored in the first DC when a total occupancy is greater than or equal to a preset first threshold, after the storage unit has stored J data fragments of the N data fragments and the local check fragments corresponding to the J data fragments in different storage devices in the first data center DC, temporarily stored the remaining N-J data fragments in the first DC, and sent each group of data fragments of the N-J data fragments and the local check fragment corresponding to that group to the corresponding second DC; the total occupancy is the ratio of the total capacity occupied by the data fragments temporarily stored in the first DC to the capacity of the first DC.
Optionally, the deleting unit is further configured to determine whether an access frequency of the first data is less than or equal to a preset first access threshold before deleting the N-J data slices temporarily stored in the first DC; the deleting unit deletes N-J data slices temporarily stored in the first DC, specifically including: and if the access frequency is less than or equal to the first access threshold, deleting the N-J data slices temporarily stored in the first DC.
Optionally, the storage unit is further configured to: after the deleting unit deletes the N-J data slices temporarily stored in the first DC, when the access frequency is greater than or equal to a preset second access threshold and the total occupancy is smaller than the first threshold, copy the N-J data slices from the second DCs in which the N-J data slices are stored, and temporarily store the copied N-J data slices in the first DC.
Optionally, the storage unit is further configured to determine, after the obtaining unit obtains the data segments of the first data, whether a total occupancy is greater than or equal to a preset second threshold, where the total occupancy is a ratio of a total capacity occupied by the data segments temporarily stored in the first DC to a capacity of the first DC; the storage unit stores J data fragments of the N data fragments and local check fragments corresponding to the J data fragments in different storage devices in a first data center DC, temporarily stores the remaining N-J data fragments in the first DC, and sends each group of data fragments of the N-J data fragments and the local check fragment corresponding to the group of data fragments to a second DC corresponding to the group of data fragments, and specifically includes: when the total proportion is determined to be smaller than the second threshold value, J data fragments of the N data fragments and local check fragments corresponding to the J data fragments are stored in different storage devices in a first data center DC, the remaining N-J data fragments are temporarily stored in the first DC, and each group of data fragments of the N-J data fragments and the local check fragment corresponding to the group of data fragments are sent to a second DC corresponding to the group of data fragments.
For technical effects of the data storage device provided by the present application, reference may be made to the technical effects of the first aspect or each implementation manner of the first aspect, and details are not described here.
In a third aspect, the present application provides a client server, including: a reading unit, configured to read, from the first DC, the stored J data fragments and the temporarily stored N-J data fragments after a data storage device stores J data fragments of the N data fragments and the local check fragments corresponding to the J data fragments in different storage devices in a first DC and temporarily stores the remaining N-J data fragments in the first DC; and a combining unit, configured to combine the J data fragments and the N-J data fragments read by the reading unit into the first data.
Optionally, in the first aspect and the second aspect, the first threshold value is calculated as follows: X = (n/K/(n + m)) × h%; where X represents the first threshold value, K represents the number of the second DCs, K is greater than or equal to 1, K is an integer, m represents the total number of check fragments allowed to be stored in the K second DCs, n represents the total number of data fragments allowed to be stored in the K second DCs, and h% represents the proportion of hot data.
In a fourth aspect, the present application provides a data storage device comprising a processor, a memory, and a communication interface; the memory for storing computer-executable instructions; the communication interface sends and receives N data fragments and local check fragments; the processor, connected to the memory and the communication interface through the bus, executes the computer execution instructions stored in the memory when the data storage device is running, so as to implement the data storage method described in the various implementation manners executed by the data storage device in the first aspect.
For technical effects of the data storage device provided by the present application, reference may be made to the technical effects of the first aspect or each implementation manner of the first aspect, and details are not described here.
In a fifth aspect, the present application provides a client server, comprising a processor, a memory, and a communication interface; the memory for storing computer-executable instructions; the communication interface sends and receives N data fragments; the processor, coupled to the memory and the communication interface via the bus, executes computer-executable instructions stored in the memory when the client server is running to implement the method of data storage involved in the client server of the first aspect.
In a sixth aspect, the present application further provides a computer storage medium having stored therein instructions that, when run on a computer, cause the computer to perform the method of the first aspect or any of the alternatives of the first aspect.
In a seventh aspect, the present application further provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of the first aspect or any alternative form of the first aspect.
Drawings
Fig. 1 is a schematic diagram of prior-art EC redundant storage in which data fragments are stored across DCs;
FIG. 2 is a schematic diagram of EC redundant storage by XOR method according to the prior art;
FIG. 3 is a schematic diagram of a distributed storage system provided herein;
FIG. 4 is a first schematic structural diagram of a data storage device provided in the present application;
FIG. 5 is a first flowchart of an embodiment of a data storage method provided herein;
FIG. 6 is a first schematic diagram illustrating a data storage method provided in the present application;
FIG. 7 is a second schematic diagram of a data storage method provided in the present application;
FIG. 8 is a flowchart of a second embodiment of a data storage method provided herein;
FIG. 9 is a third schematic diagram of a data storage method provided in the present application;
FIG. 10 is a fourth schematic diagram of a data storage method provided in the present application;
fig. 11 is a third flowchart of an embodiment of a data storage method provided in the present application;
FIG. 12 is a fourth flowchart of an embodiment of a data storage method provided herein;
FIG. 13 is a flow chart of one embodiment of a data reading method provided herein;
FIG. 14A is a second schematic structural diagram of a data storage device provided in the present application;
FIG. 14B is a third schematic structural diagram of a data storage device provided in the present application;
fig. 14C is a fourth schematic structural diagram of a data storage device provided in the present application.
Detailed Description
First, the terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter associated objects are in an "or" relationship. When the present application refers to the ordinal numbers "first", "second", "third", "fourth", etc., it should be understood that this is done for differentiation only, unless the context actually expresses an order.
The data storage method provided by the application can be applied to a distributed storage system. As shown in fig. 3, a distributed storage system provided for the present application includes a plurality of DCs. Each DC is provided with a corresponding data storage device for performing the data storage method provided herein. The DC is composed of a plurality of servers, the data storage device can be integrated on one of the servers, and a server can be newly added to the DC to serve as the data storage device.
As shown in fig. 4, a data storage device provided for the present application includes: a processor, a memory, a communication interface, and a bus. The bus connects the processor, the memory, and the communication interface, and data is transmitted among the processor, the memory, and the communication interface over the bus. For example, the processor receives a command from the communication interface through the bus, decrypts the received command, and performs calculation or data processing according to the decrypted command. The memory may include program modules and data modules for storing computer instructions and caching data, such as a kernel, middleware, application programming interfaces (APIs), applications, and the like. The program modules may be composed of software, firmware, hardware, or at least two of them. The communication interface may be connected to other DCs to exchange information with them and thereby control the other DCs.
As shown in fig. 5, a flowchart of an embodiment of a data storage method provided in the present application is provided, where the method includes the following steps:
step 501, a data storage device obtains data fragments and local check fragments of first data, where the data fragments of the first data include N data fragments, N is greater than or equal to 2, and N is an integer.
The first data is data that needs EC redundant storage. After receiving the first data, the data storage device may perform fragmentation processing on the first data to obtain the N data fragments. For example, assuming that the length of the first data is M, the data storage device may divide the first data evenly into N data fragments, each of length M/N. After the N data fragments are obtained, EC redundancy calculation may be performed on the N data fragments to obtain a plurality of local check fragments corresponding to the N data fragments. At the same time, the global check fragments corresponding to the N data fragments may also be obtained.
Step 502, the data storage device stores J data fragments of the N data fragments and local check fragments corresponding to the J data fragments in different storage devices in the first data center DC, temporarily stores the remaining N-J data fragments in the first DC, and sends each group of data fragments of the N-J data fragments and the local check fragment corresponding to the group of data fragments to a second DC corresponding to the group of data fragments.
The N-J data fragments are divided into at least one group, each group of data fragments comprises at least one data fragment, each group of data fragments corresponds to one second DC, J is not less than 1, and J is an integer.
For example, when storing the data fragments and local check fragments of the first data, the data storage device may evenly divide the N data fragments into N/J groups, each group including J data fragments. That is, one group of the N/J groups and the local check fragment corresponding to that group are stored in the first DC, and the remaining N/J-1 groups of data fragments (N-J data fragments in total) are temporarily stored in the first DC. Each group of the N/J-1 groups of data fragments, together with the local check fragment corresponding to that group, is also stored in the corresponding second DC.
Alternatively, the N data fragments may be grouped unevenly: one group includes J data fragments, and the remaining N-J data fragments may be divided into K groups, each group including (N-J)/K data fragments, where K represents the number of second DCs.
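A minimal sketch of this grouping step follows; it assumes that N-J divides evenly by K, and the function and variable names are illustrative rather than taken from the patent.

```python
def group_slices(data_slices, j, k):
    """Split N slices into the J slices kept in the first DC and K remote groups."""
    n = len(data_slices)
    assert 1 <= j < n and (n - j) % k == 0, "sketch assumes an even split"
    first_dc_group = data_slices[:j]
    rest = data_slices[j:]
    size = (n - j) // k
    second_dc_groups = [rest[i * size:(i + 1) * size] for i in range(k)]
    return first_dc_group, second_dc_groups

first_group, remote_groups = group_slices(["a", "b", "c", "d", "e", "f"], j=2, k=2)
# first_group -> ["a", "b"] (stored in DC1); remote_groups -> [["c", "d"], ["e", "f"]]
# (all six slices are additionally kept in DC1, the remote ones only temporarily)
```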
It should be noted that, in the distributed storage system, the DC corresponding to the data storage device that acquires the data slice and allocates each data slice storage location may be referred to as a first DC, and the other DCs except the first DC may be referred to as a second DC.
Illustratively, assume that the distributed storage system includes 3 DCs, namely DC1, DC2, and DC3, that the N (N = 6) data slices of the first data are data slices a, b, c, d, e, and f, and that the local check slices of the first data include local check slice a, local check slice b, and local check slice c.
If the data storage device 1 corresponding to DC1 receives the first data and obtains the data slices and local check slices of the first data, the data storage device 1 may store data slices a, b, c, d, e, and f and local check slices a, b, and c in the manner shown in FIG. 6. Data slice a and data slice b form the first group of data slices and correspond to local check slice a; data slice c and data slice d form the second group and correspond to local check slice b; data slice e and data slice f form the third group and correspond to local check slice c. The first group of data slices and local check slice a are stored in DC1, where data slice a, data slice b, and local check slice a are stored in different storage devices (e.g., different storage hard disks) in DC1; when any data slice in the first group is corrupted, the corrupted data slice can be reconstructed from local check slice a and the uncorrupted data slice. The remaining data slices c, d, e, and f (i.e., the second and third groups of data slices) are temporarily stored in DC1; they may be temporarily stored in different storage devices in DC1 or in the same storage device.
The data storage device 1 sends the second group of data slices and local check slice b to DC2, where they are stored. When any data slice in the second group is corrupted, the corrupted data slice can be reconstructed from local check slice b and the uncorrupted data slices. Data slice c, data slice d, and local check slice b may be stored on different storage devices in DC2.
The data storage device 1 sends the third group of data slices and local check slice c to DC3, where they are stored. When any data slice in the third group is corrupted, the corrupted data slice can be reconstructed from local check slice c and the uncorrupted data slices. Data slice e, data slice f, and local check slice c may be stored on different storage devices in DC3.
It should be noted that if the data storage device 1 further obtains global check slices corresponding to data slices a, b, c, d, e, and f (namely global check slice a, global check slice b, and global check slice c), the data storage device 1 may store each of these 3 global check slices in a different DC. That is, as shown in FIG. 6, the data storage device 1 stores global check slice a in DC1, global check slice b in DC2, and global check slice c in DC3. When any 3 or fewer of data slices a, b, c, d, e, and f are corrupted, the corrupted data slices can be recovered from global check slices a, b, and c and the uncorrupted data slices.
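Written out as a plain data structure, the FIG. 6 placement described above looks roughly like the following (the dictionary keys and labels are illustrative only):

```python
placement_fig6 = {
    "DC1": {
        "stored":    ["data a", "data b", "local check a", "global check a"],
        "temporary": ["data c", "data d", "data e", "data f"],
    },
    "DC2": {
        "stored":    ["data c", "data d", "local check b", "global check b"],
        "temporary": [],
    },
    "DC3": {
        "stored":    ["data e", "data f", "local check c", "global check c"],
        "temporary": [],
    },
}
```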
Alternatively, the data storage device 1 may store data slices a, b, c, d, e, and f, local check slices a, b, and c, and global check slices a, b, and c in the manner shown in FIG. 7. Data slices a, b, and c form the first group of data slices and correspond to local check slice a; data slices d, e, and f form the second group and correspond to local check slice b; and global check slices a, b, and c correspond to local check slice c.
Data slices a, b, c and corresponding local parity slice a are stored in different storage devices in DC 1. When any one data fragment in the first group of data fragments is damaged, the damaged data fragment can be reconstructed through the local verification fragment a and the undamaged data fragments. And temporarily storing the second group of data slices in DC1, wherein data slices d, e, f in the second group of data slices may be temporarily stored in different storage devices in DC1 or in the same storage device.
The data storage device 1 sends the second set of data slices d, e, f and the corresponding local parity slice b to DC2, storing the data slices d, e, f and the corresponding local parity slice b in a different storage device in DC 2. When any one data fragment in the second group of data fragments is damaged, the damaged data fragment can be reconstructed through the local check fragment b and the undamaged data fragments.
The data storage device 1 sends the global parity shard a, the global parity shard b, the global parity shard c and the corresponding local parity shard c to the DC3, and stores the global parity shard a, the global parity shard b, the global parity shard c and the corresponding local parity shard c in different storage devices in the DC 3. When any one of the global check fragment a, the global check fragment b and the global check fragment c is damaged, the damaged global check fragment is reconstructed by using the undamaged global check fragment and the local check fragment c.
In one possible design, the data storage device may also determine which storage mode to apply based on the total occupancy of the first DC, where the total occupancy is the proportion of the total capacity occupied by the temporarily stored data slices in the first DC to the capacity of the first DC. Illustratively, based on FIG. 5 and as shown in FIG. 8, after step 501, the method further includes:
in step 503, the data storage device determines whether the total duty of the first DC is greater than or equal to a preset second threshold.
The data storage device may count the total occupancy of the first DC after each completion of the storage. The total occupancy of the current first DC may also be counted when new data needs to be stored. Before storing the new data, the data storage device determines which storage process is to be executed according to the latest total occupancy ratio.
In this application, when the total occupancy is smaller than the second threshold, the storage procedure provided in this application and described in step 502 is executed. When the total occupancy is greater than or equal to the second threshold, the storage procedure described in step 504 is performed.
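A sketch of this decision is shown below; the threshold value, byte counts, and function name are illustrative assumptions, not values from the patent.

```python
def choose_storage_mode(temp_bytes_used: int, dc_capacity: int,
                        second_threshold: float) -> str:
    """Pick the storage flow for new data based on the first DC's total occupancy."""
    total_occupancy = temp_bytes_used / dc_capacity
    if total_occupancy < second_threshold:
        return "full-copy storage (step 502)"
    return "conventional EC storage (step 504)"

# e.g. choose_storage_mode(temp_bytes_used=30 * 2**30, dc_capacity=1 * 2**40,
#                          second_threshold=0.05)  -> "full-copy storage (step 502)"
```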
Step 504, the data storage device stores the data fragment and the local check fragment of the first data in a conventional EC redundancy storage manner.
The conventional EC redundant storage manner may be, for example, the storage flow of EC redundant storage in which data fragments are stored across DCs. The data storage device may also use another arrangement corresponding to a conventional EC redundant storage manner.
Illustratively, take the data slices and local check slices of the first data and the distributed storage system shown in FIG. 6 or FIG. 7 as an example. Before storing the 6 data slices of the first data, the data storage device 1 determines that the total occupancy of DC1 is greater than or equal to the second threshold, i.e., that DC1 is currently not suitable for full-copy storage (that is, storing all data slices of the first data in DC1 as described in step 502). Then, as shown in FIG. 9, the data storage device 1 may store the 6 data slices and the corresponding check slices evenly across the 3 DCs. That is, DC1 stores data slice a, data slice b, local check slice a, and global check slice a; DC2 stores data slice c, data slice d, local check slice b, and global check slice b; and DC3 stores data slice e, data slice f, local check slice c, and global check slice c.
Alternatively, as shown in fig. 10, the data storage device 1 may collectively store the 6 data fragments in DC1 and DC 2. That is, DC1 stores data slices a, b, and c and corresponding local parity slice a, and DC2 stores data slices d, e, and f and corresponding local parity slice b. Global parity slice a, global parity slice b, global parity slice c, and corresponding local parity slice c are stored in DC 3.
Optionally, in this application, a first threshold may further be set, and whether a data deleting operation needs to be performed, that is, whether the data slices temporarily stored in the first DC are to be deleted, is determined by judging whether the total occupancy of the first DC is greater than or equal to the first threshold.
For example, the data storage device may detect the current total occupancy of the first DC periodically or after a new data storage operation is completed. When the total occupancy of the first DC is greater than or equal to the preset first threshold, a deletion operation is triggered to delete data slices temporarily stored in the first DC.
The data storage device may randomly delete the data slices temporarily stored in the first DC, or delete the data slices according to the storage time of the data slices. For example, among all the data slices currently and temporarily stored in the first DC, N-J data slices of the first data are the data slices with the longest storage time, and then the data storage device may select to delete the N-J data slices.
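One way to realize the "oldest temporary slices first" policy mentioned above is sketched below; the record fields and function name are assumptions made for illustration.

```python
def pick_slices_to_delete(temp_slices, bytes_to_free):
    """temp_slices: records with 'stored_at' (timestamp) and 'size' (bytes)."""
    freed, victims = 0, []
    for rec in sorted(temp_slices, key=lambda r: r["stored_at"]):  # oldest first
        if freed >= bytes_to_free:
            break
        victims.append(rec)
        freed += rec["size"]
    return victims

# e.g. pick_slices_to_delete(
#     [{"id": "c", "stored_at": 100, "size": 4}, {"id": "g", "stored_at": 250, "size": 4}],
#     bytes_to_free=4)  -> the record for slice "c" (stored earliest)
```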
Optionally, after data is stored in the storage system, its access frequency may change over time. For example, during one period the data is accessed frequently; when the access frequency is higher than a preset second access threshold, the data is called hot data. During another period the data is accessed less frequently; when the access frequency is below a preset first access threshold, the data may be called cold data. The second access threshold is greater than the first access threshold.
Then, based on the access frequency of the data, the data storage device may select to delete the data slice that is cold data according to the access frequency of the data slice temporarily stored in the first DC when performing the data deleting operation, so as to reduce the additional data storage space consumption.
Exemplarily, taking the first data as an example, as shown in fig. 11, a flowchart of an embodiment of another data storage method provided by the present application describes how to delete data fragments of cold data during data storage after the first data is stored in the above-mentioned manner in step 502. Specifically, the method comprises the following steps:
in step 1101, when the total occupancy is greater than or equal to the preset first threshold, the data storage device determines whether the access frequency of the first data is less than or equal to the preset first access threshold.
When the total occupancy is greater than or equal to the preset first threshold, the data storage device determines that a data deleting operation needs to be performed. The data storage device can then obtain the access frequency of each piece of data stored in the first DC to determine which data have become cold data.
It should be noted that, after the data storage device completes storing each data received by it, it may periodically count and record the access frequency of each data.
Taking the first data as an example, after the first data is stored, the data storage device periodically counts and records its access frequency. When it determines that a data deleting operation needs to be performed, the data storage device may use the latest record of the first data to make the judgment: if the latest access frequency of the first data is less than or equal to the first access threshold, the first data is cold data; if the latest access frequency of the first data is greater than or equal to the second access threshold, the first data is hot data.
Then, when data deletion operation needs to be performed, the data storage device can determine which of the stored data is cold data and which is hot data by judging the access frequency of each stored data.
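The classification just described can be captured in a few lines; the threshold names follow the text above, while the "warm" label for data between the two thresholds is an illustrative assumption.

```python
def classify(access_freq: float, first_access_threshold: float,
             second_access_threshold: float) -> str:
    """Label data as cold, hot, or in between, based on its latest access frequency."""
    if access_freq <= first_access_threshold:
        return "cold"    # eligible for deletion of its temporary slices
    if access_freq >= second_access_threshold:
        return "hot"     # candidate for keeping (or restoring) a full copy
    return "warm"
```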
In step 1102, if the access frequency is less than or equal to the first access threshold, the data storage device deletes N-J data slices temporarily stored in the first DC.
It is to be understood that the data storage device may delete the N-J data slices temporarily stored in the first DC when the data storage device determines that the access frequency of the first data is less than or equal to the first access threshold, i.e., determines that the first data is cold data.
Take the example shown in FIG. 6 or FIG. 7. After the first data has been stored in the distributed storage system in the manner shown in FIG. 6, when the data storage device 1 determines that a data deleting operation needs to be performed and determines that the first data is cold data, the data storage device 1 may delete the data slices c, d, e, and f of the first data temporarily stored in DC1. After deletion, the first data is stored as shown in FIG. 9.
Alternatively, after the first data is stored in the distributed storage system in the manner shown in fig. 7, when the data storage apparatus 1 determines that the data deleting operation needs to be performed and determines that the first data is cold data, the data storage apparatus 1 may delete the data slices d, e, and f of the first data temporarily stored in the DC 1. After deletion, the first data is stored as shown in fig. 10.
Optionally, the first threshold value for determining whether to perform the deleting operation may be calculated by the following formula:
X=(n/K/(n+m))*h%;
where X represents the first threshold value, K represents the number of second DCs, K is greater than or equal to 1, K is an integer, m represents the total number of check fragments allowed to be stored in the K second DCs, n represents the total number of data fragments allowed to be stored in the K second DCs, and h% represents the proportion of hot data. That is, the first threshold value is obtained by multiplying the proportion of the data fragments stored on each second DC among all fragments (including data fragments and check fragments) allowed to be stored on the K second DCs by the hot data proportion.
For example, assume that K is 2, the hot data ratio h% is 10%, n is 18, and m is 12, where the 12 check fragments include 3 local check fragments and 9 global check fragments. Then the first threshold value X = (18/2/(18+12)) × 10% = 3%. That is, for the first DC, 3% of its capacity is used to store temporary data slices (i.e., data slices temporarily stored in the first DC). In other words, by occupying 3% of the capacity, the frequency of cross-DC reads and the latency of reading the first data are reduced.
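The same calculation, written as a small helper (the function name is illustrative):

```python
def first_threshold(n: int, k: int, m: int, hot_ratio: float) -> float:
    """X = (n / K / (n + m)) * h%, with hot_ratio given as a fraction (0.10 for 10%)."""
    return (n / k / (n + m)) * hot_ratio

# Worked example from the text: K = 2, n = 18, m = 12, h% = 10%  ->  X = 3%
assert abs(first_threshold(18, 2, 12, 0.10) - 0.03) < 1e-9
```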
In one example, after the N-J data slices of the first data temporarily stored in the first DC have been deleted, or after the first data has been stored in a conventional EC redundant storage manner, the access frequency of the first data may increase so that it becomes hot data. In that case, in order to avoid excessive network consumption caused by frequently reading the first data across DCs, the data storage device may restore the N-J data slices to the first DC so that reads of the first data can be completed within the first DC without cross-DC reads.
Illustratively, as shown in fig. 12, there is provided a flow chart of an embodiment of another data storage method provided by the present application. The method comprises the following steps:
Step 1201: when the access frequency is greater than or equal to a preset second access threshold and the total occupancy is less than the first threshold, the data storage device copies the N-J data slices from the second DCs in which they are stored.
It can be understood that the data storage device may evaluate the access frequency of the first data while periodically counting and recording it. If the access frequency of the first data is found to be greater than or equal to the second access threshold, i.e., the first data is determined to be hot data, the data storage device may determine whether the total occupancy of the first DC at that time is less than the first threshold. If it is, the data storage device may copy the N-J data slices from the second DCs where they are stored, according to the recorded storage locations of the N-J data slices.
At step 1202, the data storage device temporarily stores the copied N-J data slices in the first DC.
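Steps 1201-1202 can be sketched as follows; the callback names (fetch, store_temp) and the shape of remote_locations are assumptions for illustration, not interfaces defined by the patent.

```python
def maybe_copy_back(access_freq, second_access_threshold,
                    total_occupancy, first_threshold,
                    remote_locations, fetch, store_temp):
    """Copy the N-J slices back into the first DC once the data is hot again.

    remote_locations: iterable of (second_dc, slice_id) from the recorded layout.
    fetch(dc, slice_id) -> bytes; store_temp(slice_id, data) temporarily stores
    the slice in the first DC.
    """
    if access_freq >= second_access_threshold and total_occupancy < first_threshold:
        for dc, slice_id in remote_locations:
            store_temp(slice_id, fetch(dc, slice_id))
        return True   # slices restored; future reads stay within the first DC
    return False
```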
Illustratively, take the example shown in FIG. 9 or FIG. 10. In the case where not all data slices of the first data are stored in DC1, if, while counting and recording the access frequency of the first data, the data storage device 1 determines that the access frequency of the first data is greater than or equal to the second access threshold and that the total occupancy of DC1 is less than the first threshold, the data storage device 1 may determine the storage locations of the data slices not stored in DC1 based on the recorded storage locations of the slices of the first data.
As shown in FIG. 9, if the data storage device 1 determines from the recorded storage locations of the slices of the first data that data slices c and d are stored in DC2 and data slices e and f are stored in DC3, the data storage device 1 may copy data slices c, d, e, and f from DC2 and DC3 and then store them in DC1, so that the first data is stored in the manner shown in FIG. 6. Thus, when the first data is read, all of its data slices can be read from DC1, and there is no need to frequently read the slices across DCs (i.e., read data slices a and b from DC1, data slices c and d from DC2, and data slices e and f from DC3), thereby avoiding the problem of excessive network consumption due to frequent cross-DC reads.
As shown in FIG. 10, if the data storage device 1 determines from the recorded storage locations of the slices of the first data that data slices d, e, and f are stored in DC2, the data storage device 1 may copy data slices d, e, and f from DC2 and then store them in DC1, so that the first data is stored in the manner shown in FIG. 7. Thus, when the first data is read, all of its data slices can be read from DC1, and there is no need to frequently read the slices across DCs (i.e., read data slices a, b, and c from DC1 and data slices d, e, and f from DC2), thereby avoiding the problem of excessive network consumption due to frequent cross-DC reads.
Based on the data storage method provided by the present application, in a case where the data storage device stores J data slices of the first data in the first DC and temporarily stores the remaining N-J data slices, as shown in fig. 13, the present application provides a further data reading method, including:
in step 1301, the client server reads the stored J data fragments and the temporarily stored N-J data fragments from the first DC.
In step 1302, the client server combines the J data slices and the N-J data slices into a first data.
Illustratively, in connection with the example shown in FIG. 6, since the first data is written to DC1, DC2, and DC3 through the data storage device 1 corresponding to DC1, the data slices of the first data also need to be read through DC1 when the first data is accessed. With the data storage method provided in this application, all the data slices of the first data are present in DC1: data slices a and b are stored there, and the remaining data slices c, d, e, and f are temporarily stored there. The client server can therefore read data slices a, b, c, d, e, and f directly from DC1 and combine them into the first data, without reading slices of the first data across DCs, which avoids the network consumption caused by transmitting data slices of the first data between the DCs.
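A minimal sketch of this read path follows; read_slice and the slice ordering are assumptions for illustration.

```python
def read_first_data(read_slice, slice_ids):
    """read_slice(slice_id) -> bytes, served entirely by the first DC (step 1301);
    the slices are concatenated in order to rebuild the first data (step 1302)."""
    return b"".join(read_slice(sid) for sid in slice_ids)

# e.g. read_first_data(dc1.read_slice, ["a", "b", "c", "d", "e", "f"])
```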
Similarly, when the first data in the storage system needs to be deleted, control information can be sent directly to the first DC and each second DC to instruct each DC to delete the data slices of the first data and the corresponding check slices and to release the corresponding storage space, and no data slices of the first data need to be transmitted between the DCs, thereby avoiding network consumption.
As can be seen from the above embodiments, based on the data storage method provided by the present application, no matter data is read or deleted, no data fragment is transmitted between the first DC and the K second DCs, so that network consumption caused by reading or deleting across DCs is avoided, and data storage performance is improved.
The above-mentioned scheme provided by the present application is mainly introduced from the perspective of interaction between network elements. It will be appreciated that the data storage device, in order to carry out the above-described functions, may comprise corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art would readily appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as hardware or combinations of hardware and computer software. Whether a function is performed as hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The present application may divide the data storage device into functional modules according to the above method examples, for example, each functional module may be divided according to each function, or two or more functions may be integrated into one processing module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. It should be noted that, the division of the modules in the present application is schematic, and is only a logical function division, and there may be another division manner in actual implementation.
In the case of dividing the functional modules by corresponding functions, FIG. 14A shows a schematic diagram of a possible structure of the data storage device according to the above embodiments. The data storage device includes: an acquiring unit 1401, a storage unit 1402, and a deleting unit 1403. The acquiring unit 1401 is used to support the data storage device in executing step 501 in FIG. 5 and FIG. 8; the storage unit 1402 is used to support the data storage device in executing step 502 in FIG. 5, steps 502-504 in FIG. 8, and steps 1201-1202 in FIG. 12; the deleting unit 1403 is used to support the data storage device in executing steps 1101-1102 in FIG. 11. All relevant contents of the steps involved in the above method embodiments may be referred to the functional descriptions of the corresponding functional modules, and are not described herein again.
In the case of an integrated unit, fig. 14B shows a schematic diagram of a possible structure of the data storage device according to the above-described embodiment. The data storage device includes: a processing module 1411 and a communication module 1412. The processing module 1411 is used for controlling and managing the actions of the data storage device. For example, the processing module 1411 is configured to support the data storage device to perform steps 501-502 in fig. 5, steps 501-504 in fig. 8, steps 1101-1102 in fig. 11, and steps 1201-1202 in fig. 12, and/or other processes for the techniques described herein. The communication module 1412 is used to support communication of the data storage device with other network entities, such as other DCs shown in fig. 3. The data storage device may also include a storage module 1413 for storing program codes and data for the data storage device.
The processing module 1411 may be a processor or a controller, such as a central processing unit (CPU), a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or execute the various illustrative logical blocks, modules, and circuits described in connection with this disclosure. The processor may also be a combination that implements a computing function, for example, a combination of one or more microprocessors, or a combination of a DSP and a microprocessor. The communication module 1412 may be a transceiver, a transceiver circuit, a communication interface, or the like. The storage module 1413 may be a memory.
When the processing module 1411 is a processor, the communication module 1412 is a communication interface, and the storage module 1413 is a memory, the data storage device according to the present application may be the data storage device shown in fig. 14C.
Referring to FIG. 14C, the data storage device includes: a processor 1421, a communication interface 1422, a memory 1423, and a bus 1424. The communication interface 1422, the processor 1421, and the memory 1423 are connected to each other via the bus 1424. The bus 1424 may be a peripheral component interconnect (PCI) bus, an extended industry standard architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, the bus is represented by only one thick line in FIG. 14C, but this does not mean that there is only one bus or only one type of bus.
The steps of a method or algorithm described in connection with the disclosure herein may be embodied in hardware or in software instructions executed by a processor. The software instructions may consist of corresponding software modules that may be stored in random access memory (RAM), flash memory, read-only memory (ROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), registers, a hard disk, a removable disk, a compact disc read-only memory (CD-ROM), or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. Of course, the storage medium may also be integral to the processor. The processor and the storage medium may reside in an ASIC. Additionally, the ASIC may reside in a core network interface device. Of course, the processor and the storage medium may reside as discrete components in a core network interface device.
In specific implementation, the present application further provides a computer storage medium, where the computer storage medium may store a program, and the program may include some or all of the steps in the embodiments of the data storage method provided in the present application when executed. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM) or a Random Access Memory (RAM).
The present application also provides a computer program product containing instructions which, when run on a computer, cause the computer to perform some or all of the steps in the embodiments of the data storage method provided herein.
Those skilled in the art will readily appreciate that the techniques of this application may be implemented in software plus any necessary general purpose hardware platform. Based on such understanding, the technical solutions in the present application may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a VPN gateway, etc.) to execute the method according to the embodiments or some parts of the embodiments of the present invention.
Identical or similar parts of the embodiments in this specification may be referred to for one another. In particular, the apparatus embodiment is described relatively briefly because it is substantially similar to the method embodiment; for relevant details, refer to the description of the method embodiment.
The above-described embodiments of the present invention should not be construed as limiting the scope of the present invention.

Claims (13)

1. A method of storing data, comprising:
acquiring data fragments and local check fragments of first data, wherein the data fragments of the first data comprise N data fragments, N is not less than 2, and N is an integer;
storing J data fragments of the N data fragments and local check fragments corresponding to the J data fragments in different storage devices in a first data center DC, temporarily storing the remaining N-J data fragments in the first data center DC, and sending each group of data fragments of the N-J data fragments and the local check fragments corresponding to the group of data fragments to a second data center DC corresponding to the group of data fragments;
the N-J data fragments are divided into at least one group, each group of data fragments comprises at least one data fragment, each group of data fragments corresponds to one second data center DC, J is larger than or equal to 1, and J is an integer.
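For illustration only (this sketch is not part of the claims), the following Python outline shows one way the placement of claim 1 could be organized; DataCenter, checks_for and the round-robin grouping are assumed, hypothetical helpers rather than the patented implementation.

class DataCenter:
    # Minimal in-memory stand-in for a data center DC (illustrative only).
    def __init__(self, name):
        self.name, self.devices, self.temp = name, {}, []

    def store(self, device_id, fragment, checks):
        # Persist one fragment plus its local check fragments on one device.
        self.devices[device_id] = (fragment, checks)

    def temp_store(self, fragments):
        # Keep fragments in temporary (cache-like) storage.
        self.temp.extend(fragments)

    def send(self, fragments, checks):
        # Stand-in for a network transfer to this (second) data center.
        self.devices[len(self.devices)] = (fragments, checks)

def place_first_data(fragments, checks_for, first_dc, second_dcs, j):
    # fragments: the N data fragments of the first data (N >= 2, 1 <= J <= N).
    # checks_for(group): assumed helper returning local check fragments for a group.
    n = len(fragments)
    assert n >= 2 and 1 <= j <= n and second_dcs

    # Store J fragments and their local check fragments on different devices
    # inside the first data center DC.
    for device_id, fragment in enumerate(fragments[:j]):
        first_dc.store(device_id, fragment, checks_for([fragment]))

    # Temporarily store the remaining N-J fragments in the first DC.
    remaining = fragments[j:]
    first_dc.temp_store(remaining)

    # Divide the N-J fragments into at least one group (round-robin here) and
    # send each group, with its local check fragments, to its second DC.
    groups = [remaining[i::len(second_dcs)] for i in range(len(second_dcs))]
    for second_dc, group in zip(second_dcs, groups):
        if group:
            second_dc.send(group, checks_for(group))

For example, calling place_first_data(['f0', 'f1', 'f2', 'f3'], lambda g: ['check(' + '+'.join(g) + ')'], DataCenter('DC1'), [DataCenter('DC2'), DataCenter('DC3')], j=2) keeps f0 and f1 with their check fragments in DC1, caches f2 and f3 there temporarily, and ships f2 to DC2 and f3 to DC3 together with their group check fragments.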
2. The method of claim 1, further comprising:
when a total occupancy is greater than or equal to a preset first threshold, deleting the N-J data fragments temporarily stored in the first data center DC, wherein the total occupancy is the ratio of the total capacity occupied by the data fragments temporarily stored in the first data center DC to the capacity of the first data center DC.
3. The method according to claim 2, wherein before the deleting of the N-J data fragments temporarily stored in the first data center DC, the method further comprises:
determining whether an access frequency of the first data is less than or equal to a preset first access threshold;
the deleting of the N-J data fragments temporarily stored in the first data center DC includes:
if the access frequency is less than or equal to the first access threshold, deleting the N-J data fragments temporarily stored in the first data center DC.
4. The method according to claim 3, wherein after the N-J data fragments temporarily stored in the first data center DC are deleted when the access frequency is less than or equal to the first access threshold, the method further comprises:
when the access frequency is greater than or equal to a preset second access threshold and the total occupancy is smaller than the first threshold, copying the N-J data fragments from the second data center DCs in which the N-J data fragments are stored; and
temporarily storing the copied N-J data fragments in the first data center DC.
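Purely as a non-authoritative illustration of claims 2 to 4, the eviction and re-copy decisions could be sketched as below; the attribute names (temp_bytes_used, capacity_bytes, access_frequency, remote_groups) and the helper methods are assumptions, not taken from the patent.

def maybe_evict(first_dc, first_data, first_threshold, first_access_threshold):
    # Claim 2: total occupancy = capacity used by temporarily stored fragments
    # divided by the total capacity of the first data center DC.
    total_occupancy = first_dc.temp_bytes_used / first_dc.capacity_bytes
    if total_occupancy >= first_threshold:
        # Claim 3: delete the temporarily stored N-J fragments only if the
        # first data is accessed rarely enough.
        if first_data.access_frequency <= first_access_threshold:
            first_dc.delete_temp(first_data.remaining_fragment_ids)

def maybe_recopy(first_dc, first_data, first_threshold, second_access_threshold):
    # Claim 4: once the data becomes hot again and the cache has room, copy the
    # N-J fragments back from the second DCs that hold them and re-cache them.
    total_occupancy = first_dc.temp_bytes_used / first_dc.capacity_bytes
    if (first_data.access_frequency >= second_access_threshold
            and total_occupancy < first_threshold):
        copied = []
        for second_dc, group_ids in first_data.remote_groups:
            copied.extend(second_dc.fetch(group_ids))
        first_dc.temp_store(copied)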
5. The method according to any of claims 2-4, wherein the first threshold value is calculated as follows:
X=(n/K/(n+m))*h%;
wherein X represents the first threshold value, K represents the number of the second data center DCs, K is greater than or equal to 1, K is an integer, m represents the total number of check fragments allowed to be stored in the K second data center DCs, n represents the total number of data fragments allowed to be stored in the K second data center DCs, and h% represents a hot data ratio.
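As a worked numerical example (the values of n, m, K and h% are arbitrary and only for illustration): with n = 8 data fragments and m = 2 check fragments allowed across K = 2 second data center DCs, and a hot-data ratio of h% = 20%, the first threshold is X = (8 / 2 / (8 + 2)) * 20% = 0.08, i.e. about 8% of the capacity of the first data center DC. A one-line Python check:

def first_threshold(n, m, k, hot_ratio_percent):
    # X = (n / K / (n + m)) * h%
    return n / k / (n + m) * (hot_ratio_percent / 100.0)

print(round(first_threshold(n=8, m=2, k=2, hot_ratio_percent=20), 4))  # 0.08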
6. The method of claim 5, wherein after obtaining the data fragments and the local check fragments of the first data, the method further comprises:
determining whether a total occupancy is greater than or equal to a preset second threshold, wherein the total occupancy is the ratio of the total capacity occupied by the data fragments temporarily stored in the first data center DC to the capacity of the first data center DC;
the storing J data fragments of the N data fragments and local check fragments corresponding to the J data fragments in different storage devices in a first data center DC, temporarily storing the remaining N-J data fragments in the first data center DC, and sending each group of data fragments of the N-J data fragments and local check fragments corresponding to the group of data fragments to a second data center DC corresponding to the group of data fragments includes:
when it is determined that the total occupancy is smaller than the second threshold, storing J data fragments of the N data fragments and local check fragments corresponding to the J data fragments in different storage devices in a first data center DC, temporarily storing the remaining N-J data fragments in the first data center DC, and sending each group of data fragments of the N-J data fragments and the local check fragments corresponding to the group of data fragments to a second data center DC corresponding to the group of data fragments.
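Continuing the same illustrative sketch (second_threshold, temp_bytes_used and capacity_bytes are assumed names, and place_first_data refers to the hypothetical claim-1 sketch above), the check added by claim 6 is a simple gate in front of the placement step:

def store_if_room(fragments, checks_for, first_dc, second_dcs, j, second_threshold):
    # Claim 6: perform the claim-1 placement only while the total occupancy of
    # temporarily stored fragments stays below the preset second threshold.
    total_occupancy = first_dc.temp_bytes_used / first_dc.capacity_bytes
    if total_occupancy < second_threshold:
        place_first_data(fragments, checks_for, first_dc, second_dcs, j)
        return True
    return False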
7. The method of claim 6, wherein after storing J data fragments of the N data fragments and the local check fragments corresponding to the J data fragments in different storage devices in a first data center DC, and temporarily storing the remaining N-J data fragments in the first data center DC, the method further comprises:
reading, by a client server, the stored J data fragments and the temporarily stored N-J data fragments from the first data center DC; and
combining, by the client server, the J data fragments and the N-J data fragments into the first data.
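A read path along the lines of claim 7 might be sketched as follows; read_stored and read_temp are assumed method names, and reassembly is shown as plain byte concatenation of the fragments:

def read_first_data(first_dc, j, n):
    # Claim 7: the client server reads the J stored fragments and the N-J
    # temporarily stored fragments from the first DC only, without contacting
    # any second DC, and then recombines them into the first data.
    stored = first_dc.read_stored(count=j)        # the J stored fragments
    temporary = first_dc.read_temp(count=n - j)   # the N-J cached fragments
    return b"".join(stored + temporary)           # reassemble the first data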
8. A data storage device, comprising:
an obtaining unit, configured to obtain data fragments and local check fragments of first data, wherein the data fragments of the first data comprise N data fragments, N is not less than 2, and N is an integer;
a storage unit, configured to store J data fragments of the N data fragments and local check fragments corresponding to the J data fragments in different storage devices in a first data center DC, temporarily store the remaining N-J data fragments in the first data center DC, and send each group of data fragments of the N-J data fragments and the local check fragment corresponding to the group of data fragments to a second data center DC corresponding to the group of data fragments;
the N-J data fragments are divided into at least one group, each group of data fragments comprises at least one data fragment, each group of data fragments corresponds to one second data center DC, J is larger than or equal to 1, and J is an integer.
9. The data storage device of claim 8, further comprising a deleting unit, wherein:
the deleting unit is configured to delete the N-J data fragments temporarily stored in the first data center DC when a total occupancy is greater than or equal to a preset first threshold, wherein the total occupancy is the ratio of the total capacity occupied by the data fragments temporarily stored in the first data center DC to the capacity of the first data center DC.
10. The data storage device of claim 9,
the deleting unit is further configured to determine, before deleting the N-J data fragments temporarily stored in the first data center DC, whether an access frequency of the first data is less than or equal to a preset first access threshold;
the deleting unit deletes the N-J data fragments temporarily stored in the first data center DC, which specifically includes:
if the access frequency is less than or equal to the first access threshold, deleting the N-J data fragments temporarily stored in the first data center DC.
11. The data storage device of claim 9 or 10,
the storage unit is further configured to: after the deleting unit deletes the N-J data fragments temporarily stored in the first data center DC, when the access frequency is greater than or equal to a preset second access threshold and the total occupancy is smaller than the first threshold, copy the N-J data fragments from the second data center DCs in which the N-J data fragments are stored, and temporarily store the copied N-J data fragments in the first data center DC.
12. The data storage device of claim 11, wherein the first threshold value is calculated as follows:
X=(n/K/(n+m))*h%;
wherein X represents the first threshold value, K represents the number of the second data center DCs, K is greater than or equal to 1, K is an integer, m represents the total number of check fragments allowed to be stored in the K second data center DCs, n represents the total number of data fragments allowed to be stored in the K second data center DCs, and h% represents a hot data ratio.
13. The data storage device of claim 12,
the storage unit is further configured to determine, after the obtaining unit obtains the data fragments of the first data, whether a total occupancy is greater than or equal to a preset second threshold, wherein the total occupancy is the ratio of the total capacity occupied by the data fragments temporarily stored in the first data center DC to the capacity of the first data center DC;
the storage unit stores J data fragments of the N data fragments and local check fragments corresponding to the J data fragments in different storage devices in a first data center DC, temporarily stores the remaining N-J data fragments in the first data center DC, and sends each group of data fragments of the N-J data fragments and the local check fragment corresponding to the group of data fragments to a second data center DC corresponding to the group of data fragments, which specifically includes:
when it is determined that the total occupancy is smaller than the second threshold, storing J data fragments of the N data fragments and local check fragments corresponding to the J data fragments in different storage devices in a first data center DC, temporarily storing the remaining N-J data fragments in the first data center DC, and sending each group of data fragments of the N-J data fragments and the local check fragments corresponding to the group of data fragments to a second data center DC corresponding to the group of data fragments.
CN201710900323.XA 2017-09-28 2017-09-28 Data storage method and device Active CN107704336B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710900323.XA CN107704336B (en) 2017-09-28 2017-09-28 Data storage method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710900323.XA CN107704336B (en) 2017-09-28 2017-09-28 Data storage method and device

Publications (2)

Publication Number Publication Date
CN107704336A CN107704336A (en) 2018-02-16
CN107704336B true CN107704336B (en) 2021-08-13

Family

ID=61175122

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710900323.XA Active CN107704336B (en) 2017-09-28 2017-09-28 Data storage method and device

Country Status (1)

Country Link
CN (1) CN107704336B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109003216A (en) * 2018-08-20 2018-12-14 西安创艺教育培训中心有限公司 Cloud education and study platform based on two-dimensional code scanning
CN111435323B (en) * 2019-01-15 2023-06-20 阿里巴巴集团控股有限公司 Information transmission method, device, terminal, server and storage medium
CN113194117A (en) * 2021-03-22 2021-07-30 海南视联通信技术有限公司 Data processing method and device based on video network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7562203B2 (en) * 2006-09-27 2009-07-14 Network Appliance, Inc. Storage defragmentation based on modified physical address and unmodified logical address
CN101510223A (en) * 2009-04-03 2009-08-19 成都市华为赛门铁克科技有限公司 Data processing method and system
CN105573680A (en) * 2015-12-25 2016-05-11 北京奇虎科技有限公司 Storage method and device for replicated data
CN106201338A (en) * 2016-06-28 2016-12-07 华为技术有限公司 Date storage method and device

Also Published As

Publication number Publication date
CN107704336A (en) 2018-02-16

Similar Documents

Publication Publication Date Title
US9740403B2 (en) Methods for managing storage in a data storage cluster with distributed zones based on parity values and devices thereof
US10127166B2 (en) Data storage controller with multiple pipelines
US8266475B2 (en) Storage management device, storage management method, and storage system
US8214620B2 (en) Computer-readable recording medium storing data storage program, computer, and method thereof
CN107704336B (en) Data storage method and device
WO2012140695A1 (en) Storage control apparatus and error correction method
US11347653B2 (en) Persistent storage device management
US10826526B2 (en) Memory system and information processing system
CN107391297B (en) System and method for improving data refresh in flash memory
US10996894B2 (en) Application storage segmentation reallocation
CN105677236B (en) A kind of storage device and its method for storing data
CN110941514B (en) Data backup method, data recovery method, computer equipment and storage medium
US10084484B2 (en) Storage control apparatus and non-transitory computer-readable storage medium storing computer program
CN110795272A (en) Method and system for atomicity and latency guarantees facilitated on variable-size I/O
US9292213B2 (en) Maintaining at least one journal and/or at least one data structure by circuitry
JP2015052844A (en) Copy controller, copy control method, and copy control program
KR20170095570A (en) Network traffic recording device and method thereof
JP6052288B2 (en) Disk array control device, disk array control method, and disk array control program
JP5492103B2 (en) Backup apparatus, backup method, data compression method, backup program, and data compression program
CN103064762B (en) Heavily delete restoration methods and the device of Backup Data
CN111666043A (en) Data storage method and equipment
JP6733213B2 (en) Control device, storage device, storage system, control method, and program
CN113424262A (en) Storage verification method and device
CN113821179B (en) Data storage method and device, computing equipment and storage medium
US11907068B2 (en) Read request response for reconstructed data in a degraded drive

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant