CN107153512B - Data migration method and device - Google Patents

Data migration method and device

Info

Publication number
CN107153512B
Authority
CN
China
Prior art keywords
key
value
storage node
data
migration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710214393.XA
Other languages
Chinese (zh)
Other versions
CN107153512A (en)
Inventor
张秦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201710214393.XA
Publication of CN107153512A
Application granted
Publication of CN107153512B

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06 — Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601 — Interfaces specially adapted for storage systems
    • G06F 3/0668 — Interfaces adopting a particular infrastructure
    • G06F 3/067 — Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • G06F 3/0628 — Interfaces making use of a particular technique
    • G06F 3/0638 — Organizing or formatting or addressing of data
    • G06F 3/0643 — Management of files
    • G06F 3/0655 — Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices

Abstract

A data migration method and device are disclosed. The data migration method comprises: searching metadata for a data unit that meets a migration condition, and obtaining the keys of the fragments of the data unit, the identifier of the source storage node where each key-value is located, and a destination storage pool identifier, wherein each key corresponds to a value; selecting, from the destination storage pool indicated by the destination storage pool identifier, a destination storage node for storing the key-value; and instructing the source storage node to obtain the key-value locally by using the key and to send the obtained key-value to the destination storage node. The scheme reduces the resources consumed by data migration.

Description

Data migration method and device
Technical Field
The present invention relates to the field of computing storage, and in particular, to a data migration method and apparatus.
Background
The distributed storage system may provide a hierarchical storage function: the storage space is divided into different storage levels, the files are stored in the different storage levels according to the priority of the files, and the various requirements of a user on the file processing speed, the transmission bandwidth and the storage capacity are met, so that the purposes of reasonably utilizing the storage space, improving the access performance of a storage system and reducing the overall deployment cost are achieved.
In the prior art, a typical way to write a file to a storage system is: first write the file into a high-performance storage medium; then scan for eligible files (for example, files created more than a certain time ago, or files accessed fewer than a certain number of times), and re-read the eligible files to migrate them to a low-cost storage medium.
The process of reading an eligible file and rewriting it to a new storage medium consumes resources of the storage system. Consider, for example, a storage system that stores data by scattering it into fragments and computing erasure-coding (EC) redundancy. Hierarchical storage then proceeds as follows: first all fragments of a file are read from the high-performance storage into a host, then EC verification is performed; after the verification succeeds, the host splits the file into fragments again and writes the resulting file fragments into the low-performance storage. Referring to fig. 1, the file fragments located in a first storage pool are read first, the host verifies the read fragments and reassembles the file, and then breaks the file up into fragments again and writes them into a second storage pool.
In this migration scheme, the file fragments are first gathered together, then EC verification is performed, and then the file is broken up into fragments again; these operations consume considerable storage system resources.
Disclosure of Invention
A first possible implementation manner of the present invention provides a data migration method, including: searching metadata for a data unit that meets a migration condition, and obtaining the keys of the fragments of the data unit, the identifier of the source storage node where each key-value is located, and a destination storage pool identifier, wherein each key corresponds to a value; selecting, from the destination storage pool indicated by the destination storage pool identifier, a destination storage node for storing the key-value; and instructing the source storage node to obtain the key-value locally by using the key and to send the obtained key-value to the destination storage node.
With this data migration method, the fragments of a file can be migrated directly between storage nodes on the basis of the KV protocol. It avoids the need to first gather the file fragments, perform EC verification, and then break the file up into fragments again, operations that consume considerable storage system resources.
In a first possible implementation manner, optionally, the migration condition includes at least one of the following conditions: the creation time of the data unit, the size of the data unit, the name of the data unit, and the storage pool in which the data unit is located.
In a first possible implementation manner, optionally, wherein: the storage pool in which the source storage node is located and the destination storage pool provide different storage performances.
In a first possible implementation manner, optionally: when the source storage node needs to read data adjacent to the key-value, the source storage node performs the operation of sending the key-value to the destination storage node.
In a first possible implementation manner, optionally, the data unit is one of the following: a file, an object, a block, or a portion of a file.
In a second possible implementation manner of the present invention, a migration apparatus is provided, which includes: a lookup module, configured to search metadata for data units that meet the migration condition, and obtain the keys of the fragments of the data units, the identifiers of the source storage nodes where the key-values are located, and a destination storage pool identifier, wherein each key corresponds to a value; a destination storage node determining module, configured to select a destination storage node for storing the key-value from the destination storage pool indicated by the destination storage pool identifier; and a migration module, configured to instruct the source storage node to obtain the key-value locally by using the key and to send the obtained key-value to the destination storage node.
In a second possible implementation manner, optionally, the migration condition includes at least one of the following conditions: the creation time of the data unit, the size of the data unit, the name of the data unit, and the storage pool in which the data unit is located.
In a second possible implementation manner, optionally: the storage pool in which the source storage node is located and the destination storage pool provide different storage performances.
In a second possible implementation manner, optionally, the source storage node is configured to: send the key-value to the destination storage node when data adjacent to the key-value needs to be read.
In a second possible implementation manner, optionally, the data unit is one of the following: a file, an object, a block, or a portion of a file.
Drawings
To illustrate the solutions of the embodiments of the invention more clearly, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the invention, and further drawings may be derived from them.
FIG. 1 is a schematic diagram of a prior art data migration scheme.
FIG. 2 is a schematic diagram of a data migration scheme according to an embodiment of the present invention.
FIG. 3 is a flow chart of a data migration method according to an embodiment of the present invention.
FIG. 4 is a block diagram of a data migration apparatus according to an embodiment of the present invention.
Detailed Description
The technical solutions of the present invention will be described below with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some of the embodiments of the present invention, not all of them. All other embodiments obtained based on the embodiments of the present invention fall within the scope of the present invention.
The distributed storage system in the embodiment of the invention can be used to store a file, an object, a block, a part of a file, or a part of an object. For convenience of description, a file is used as the example below; unless otherwise stated, "file" in what follows may be replaced with "object", "block", "part of a file", or "part of an object".
A distributed storage system based on key-value (KV) storage is one existing form of distributed storage system. A distributed storage system may include storage pools (pools); a storage pool is a logical unit that provides storage space and contains storage nodes, and different storage pools may have different performance levels. Storing a file in a storage pool specifically means: splitting the file into file fragments (data fragments + check fragments) and storing different file fragments on different storage nodes, or storing different file fragments in different storages (in the latter case the storage nodes to which the storages belong are not distinguished; since one storage node may have several storages, some file fragments may end up on the same storage node).
For example: the specific placement of file fragments is determined using a distributed hash table (DHT) algorithm or a range partition algorithm, and the fragments are then stored in different storages. There is also a multi-copy storage method, in which the same file or file fragment is stored in multiple storages. In the former storage mode, if a file fragment is lost, it can be recovered from the remaining fragments through a check algorithm. In the latter storage mode, because the contents of the copies are identical, if any copy is lost the other copies can still be read.
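The data-fragments-plus-check-fragments layout described above can be sketched as follows. This is a minimal illustration only: a single XOR parity fragment stands in for a real erasure code (the text does not specify the EC algorithm), and the function names are assumptions, not part of the patent.

```python
from functools import reduce

def split_with_parity(data: bytes, k: int):
    # Split data into k equal-size data fragments (zero-padded) plus one
    # XOR parity fragment; a real system would use Reed-Solomon or a
    # similar erasure code with multiple check fragments.
    frag_len = -(-len(data) // k)                 # ceiling division
    padded = data.ljust(k * frag_len, b"\0")
    frags = [padded[i * frag_len:(i + 1) * frag_len] for i in range(k)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*frags))
    return frags, parity

def recover(frags, parity, lost_index):
    # Recover one lost data fragment by XORing the surviving fragments
    # with the parity fragment (the "check algorithm" of the text).
    survivors = [f for i, f in enumerate(frags) if i != lost_index]
    return bytes(reduce(lambda a, b: a ^ b, col)
                 for col in zip(parity, *survivors))
```

The point of the patent is that, once fragments like these exist, migration can move them individually without ever re-running this split-and-parity computation.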
According to the scheme provided by the embodiment of the invention, the files in one storage pool can be migrated to another storage pool. Because different storage pools have different performance, the embodiment of the invention can migrate a file from high-performance storage to low-performance storage, or vice versa, from low-performance storage to high-performance storage. The performance of a storage pool can be described by its read/write speed per second, the stability of the storage medium, the redundancy protection level of the storage pool, and so on. For example, the performance of a storage pool consisting of solid state disks (SSD) is typically greater than that of a storage pool consisting of magnetic disks, and the performance of a storage pool consisting of high-speed disks is greater than that of one consisting of low-speed disks. Generally, high-performance storage pools cost more than low-performance storage pools.
Referring to fig. 2, in the file migration scheme according to the embodiment of the present invention, a file fragment located in a first storage pool is directly migrated to a storage node located in a second storage pool without being aggregated at a host or other storage nodes. The scheme avoids the reunion and the scattering of the file fragments, reduces EC calculation, and reduces the influence of data migration on foreground services.
Referring to the flowchart of fig. 3 and the following steps, a data migration method provided by the embodiment of the invention is described. It should be noted that the following embodiments take data migration between storage pools as an example; in fact, file fragments may be migrated directly between storage nodes, without the concept of a storage pool.
Step 11: pre-store a data migration policy in the storage system.
The storage system is comprised of a plurality of storage pools, each storage pool including at least one storage node. The storage node is, for example, a server (server), a host (host), or a combination of controller + memory. The storage nodes are used for storing data and managing the stored data.
The migration policy can be stored in a certain storage node, for example, in a management storage node; it may also be stored in a plurality of storage nodes, for example, each storage node storing a portion of the storage policy, or each storage node storing the complete storage policy.
The migration policy describes the criteria a file must meet to qualify for migration. The migration condition may include the ID of the storage pool where the file is located, and may further include at least one of: the creation time of the file, the time since the file was last accessed, the size of the file, a file name prefix, and a file name suffix. Each migration condition includes specific parameter values, such as: the size of the file is larger than 10 Mbytes, the file was created more than 10 days ago, the file name prefix is aaa, and the storage pool where the file is located is storage pool A. Thus, the migration policy includes: the migration conditions and the destination storage pool ID. Alternatively, the ID of the storage pool where the file is located may be a separate parameter of the migration policy rather than one of the migration conditions; in that case the migration policy includes: the migration conditions, the ID of the storage pool where the file is located, and the destination storage pool ID. The two descriptions are not contradictory; the former is used as the example below.
Illustratively, one migration policy is: for files whose name begins with the prefix aaa, whose creation time is earlier than 2017/1/1, whose last modification time is earlier than 2017/2/1, and whose last access time is earlier than 2017/2/1, the file fragments located in high-performance storage pool 1 (the source storage pool) need to be migrated to low-cost storage pool 2 (the destination storage pool).
This step is a preset step and does not need to be performed every time a migration is carried out. It is therefore optional in the normal migration flow, as long as the migration policy already exists in the storage system.
Step 12: search for a file that meets the migration condition, and obtain the keys of the fragments of the file, the identifier of the source storage node where each key-value (KV) is located, and the destination storage pool identifier. This step may be performed by the management storage node, or by another storage node in the storage system.
The query operation may be performed periodically or may be forcibly triggered by an administrator.
The metadata describes the actual parameters of the file, such as the creation time of the file, the time since the last time the file was accessed, the size of the file, the name of the file, the prefix of the name of the file, and the suffix of the name of the file, as described above. Therefore, the files meeting the migration conditions can be found by comparing the migration conditions with the metadata.
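Comparing migration conditions against file metadata, as just described, might look like the following sketch. All field and parameter names here are assumptions for illustration; the patent does not define a concrete schema.

```python
def matches_policy(meta: dict, policy: dict) -> bool:
    # meta: per-file metadata (pool, size, name, creation time);
    # policy: migration conditions with concrete threshold values.
    if meta["pool_id"] != policy["source_pool_id"]:
        return False
    if "min_size" in policy and meta["size"] < policy["min_size"]:
        return False                      # e.g. "larger than 10 Mbytes"
    if "created_before" in policy and meta["ctime"] >= policy["created_before"]:
        return False                      # e.g. "created before 2017/1/1"
    if "name_prefix" in policy and not meta["name"].startswith(policy["name_prefix"]):
        return False                      # e.g. "prefix is aaa"
    return True
```

Scanning the metadata of every file with a function like this yields the set of files to migrate in step 12.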
After a file meeting the migration condition is found, the key of each file fragment (fragment for short) of the file can be obtained from the metadata of that file.
For example, one key naming rule is: file name prefix + natural number, with the size of each fragment fixed. Suppose that, for a file with the prefix aaa, it is known from the metadata that the size of the file is 10 Mbytes and the size of each fragment is 2 Mbytes. The keys of the data fragments of the file are then: aaa1, aaa2, aaa3, aaa4, and aaa5. Assuming there are 2 redundant check fragments, the keys of the check fragments are: aaa6 and aaa7. In this way the keys of all file fragments (check fragments and data fragments) of this file can be obtained. The values corresponding one-to-one to these keys are the data and check fragments that need to be migrated out of the source storage pool in the subsequent steps.
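The "prefix + natural number" naming rule above can be expressed directly; this sketch reproduces the aaa1 through aaa7 example (the function name and signature are illustrative, not from the patent).

```python
def fragment_keys(prefix: str, file_size: int, frag_size: int, n_check: int):
    # Keys are "<prefix><index>", with indices continuing from the data
    # fragments into the check fragments, as in the aaa1..aaa7 example.
    n_data = -(-file_size // frag_size)   # ceiling division
    return [f"{prefix}{i}" for i in range(1, n_data + n_check + 1)]
```

Given the example figures (10-Mbyte file, 2-Mbyte fragments, 2 check fragments), this yields aaa1 through aaa7: five data-fragment keys and two check-fragment keys.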
It should be noted that the above rule of generating keys from file names is common practice because it is simple and convenient, but other arrangements are possible. There are other ways to generate keys from file names, and keys may also be generated by algorithms that do not use the file name at all: for example, a pseudo-random number produced by a pseudo-random number algorithm may serve as a key, or a combination of the file size and letters may be used. It is sufficient that keys and values are in one-to-one correspondence and that (usually, though not always necessarily) keys corresponding to different values do not repeat.
The storage node where the key-value is located may be recorded in the metadata, in which case reading the metadata reveals the source storage node where the key-value is located.
Another scheme for obtaining the source storage node where the key-value is located is described as follows:
It is easy to see that before a key-value can be migrated, it must first have been stored in a storage node. When the key-value was stored into the source storage pool, a distributed algorithm operated on the key to select the specific storage node for storing it. Therefore, in this step, after the key of the fragment is obtained, the storage node where the key-value is located can be determined by the same distributed algorithm. One possible distributed algorithm is DHT: perform a hash operation on the key and take the resulting hash value modulo the number of storage nodes in the storage pool; the result identifies the storage node storing the key-value. For example: the source storage pool contains storage node 1, storage node 2, storage node 3, and storage node 4. If the hash value of the key, taken modulo the total number of storage nodes (4), is 2, then the key-value is located on the second-ranked node, storage node 2.
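The hash-then-modulo placement rule can be sketched in a few lines. CRC32 stands in here for whatever hash function the system actually uses (the patent does not name one), and the zero-based indexing is an assumption.

```python
import zlib

def pick_node(key: str, nodes: list) -> str:
    # DHT-style placement: hash the key, take it modulo the node count,
    # and use the result as an index into the ranked node list. The same
    # computation locates the key-value later, at migration time.
    h = zlib.crc32(key.encode())
    return nodes[h % len(nodes)]
```

Because the function is deterministic, the migration logic never needs to ask the source pool where a key-value lives; re-running the computation on the key is enough.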
The destination storage pool identifier may be preset. The destination storage pool may form a mapping relationship with the source storage pool. For example: when storage pool A is the source storage pool, its corresponding destination storage pool is storage pool B; when storage pool B is the source storage pool, its corresponding destination storage pool is storage pool C; when storage pool D is the source storage pool, its corresponding destination storage pool is storage pool E. Of course, the destination may instead be determined by a specific algorithm, or even be specified by an administrator; this is not limited herein.
Each key-value migrates in the same manner, and these migrations may be performed in parallel to improve migration efficiency. Therefore, for simplicity of description, steps 13 to 15 describe the operation flow for a single key-value unless otherwise specified.
Step 13: select, from the destination storage pool indicated by the destination storage pool identifier, a destination storage node for storing the key-value.
This step may be performed by the managing storage node, or by other storage nodes in the storage system, such as one of the storage nodes in the destination storage pool.
Illustratively, a DHT scheme similar to that of step 12 may be used to select one of the storage nodes in the destination storage pool as the destination storage node for storing the key-value. Alternatively, storage nodes may be selected in turn, or a storage node may be selected at random. Various selection schemes are thus possible; it is only necessary that one storage node be selected from the destination storage pool.
Step 14: instruct the source storage node to obtain the key-value locally by using the key and to send the obtained key-value to the destination storage node for storage. Following the instruction, the source storage node obtains the key-value locally by using the key and sends it to the destination storage node for storage.
After receiving the key-value, the destination storage node stores it in a local memory or storage, completing the migration of that key-value. The remaining key-values can be migrated in the same way; once all the file fragments of a file have been migrated, the migration of the whole file is complete.
In this step, if the storage node issuing the instruction and the source storage node are the same node, the instruction is optional (or the source storage node may be considered to issue the instruction to itself): the source storage node obtains the key-value directly from local storage and sends it to the destination storage node for storage.
If the storage node issuing the instruction and the source storage node are not the same node, the former sends the instruction to the source storage node, so that the source storage node obtains the key-value from its local storage and sends it to the destination storage node for storage.
If the source storage node has multiple storages, the disk on which the key-value is located needs to be determined. This can be derived from the way the key-value was originally stored: for example, if the target storage was chosen by hashing the key and taking a modulus, the same computation locates it now. Alternatively, the source storage node may maintain a mapping table recording the relationship between each key and the storage location of its key-value, so the storage holding the key-value can be found by looking the key up in the mapping table.
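Both ways of locating the disk that holds a key-value (recomputing the hash rule, or consulting a mapping table) can be sketched together. The class and its members are illustrative names, not from the patent; lists of dicts stand in for the node's physical disks.

```python
import zlib

class SourceNode:
    # A storage node with several local storages. put() places a
    # key-value on a disk chosen by hash-and-modulo, and also records
    # the placement in a mapping table, so get() can use either method.
    def __init__(self, n_disks: int):
        self.disks = [dict() for _ in range(n_disks)]
        self.mapping = {}                           # key -> disk index

    def put(self, key: str, value: bytes):
        idx = zlib.crc32(key.encode()) % len(self.disks)
        self.disks[idx][key] = value
        self.mapping[key] = idx                     # mapping-table record

    def get(self, key: str) -> bytes:
        # Mapping-table lookup; recomputing crc32(key) % n_disks
        # would find the same disk.
        return self.disks[self.mapping[key]][key]
```

At migration time the node calls `get()` with the key received in the instruction and forwards the returned value to the destination node.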
Optionally, the source storage node does not read the key-value immediately after receiving the instruction, but performs the operation of sending the key-value to the destination storage node when it needs to read data adjacent to the key-value. The advantage is that two adjacent pieces of data can be read in succession, which improves reading efficiency on media that read data by rotation (in particular mechanical hard disks, floppy disks, optical disks, etc.).
Alternatively, when the hard disk is idle (for example, when no foreground service is reading from or writing to the source storage node), the data to be migrated is read in sequence, which reduces the impact of the migration operation on normal services.
Step 15: after the key-value is written into the destination storage node, the destination storage node notifies the management storage node that the writing of the key-value is complete.
The foregoing describes the migration of one specific key-value (which may be called the first key-value to distinguish it from the other key-values of the file). The other key-values (data fragments or check fragments) can use the same migration scheme; after all key-values of the file have been written into their corresponding destination storages according to the above method, the migration of the entire file is complete. The management storage node then modifies the storage location information in the metadata, updating it to the ID of the destination storage pool.
Optionally, after each key-value is written into its corresponding destination storage according to the above method, the management storage node may notify the source storage node to delete the local key-value.
With this data migration method, point-to-point data migration is realized between distributed storage nodes that have KV interfaces. The occupation and consumption of system resources caused by the data convergence, scattering, and EC computation involved in prior-art migration are eliminated, and the reliability of the system is greatly improved.
In addition, what exactly is migrated depends on how the key-value is stored in the source storage pool: migrating the key-value may mean migrating the key and the value together from the source storage pool to the destination storage pool, or it may mean migrating only the value.
The present invention also provides a storage node, comprising a processor and, optionally, a memory. The memory stores a computer program, and by executing the program the processor may perform the operations performed by the management storage node in steps 11 to 15 above.
In addition to the migration method embodiments above, the present invention also provides an embodiment of a data migration apparatus, which corresponds to the foregoing method and can perform it. Referring to fig. 4, the data migration apparatus includes a lookup module 21, a destination storage node determining module 22, and a migration module 23.
The searching module 21 is configured to search a data unit meeting a migration condition from metadata, and obtain a key of a fragment of the data unit, a source storage node identifier where the key-value is located, and a destination storage pool identifier, where each key corresponds to a value;
a destination storage node determining module 22, configured to select a destination storage node for storing the key-value from the destination storage pool represented by the destination storage pool identifier;
The migration module 23 is configured to instruct the source storage node to obtain the key-value by using the key and to send the obtained key-value to the destination storage node for storage. If the data migration apparatus is integrated with the source storage node, the migration module may itself perform the operation: obtaining the key-value from the source storage node by using the key, and sending the obtained key-value to the destination storage node, where it is stored by the destination storage node.
Wherein the migration conditions include at least one of the conditions: the creation time of the data unit, the size of the data unit, the name of the data unit, and the storage pool in which the data unit is located.
Wherein the storage pool in which the source storage node is located and the destination storage pool provide different storage performances.
Wherein the source storage node is to: and when data adjacent to the key-value needs to be read, sending the key-value to the destination storage node.
Wherein the data unit is one of the following: a file, an object, a block, or a portion of a file.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present invention may be implemented by software plus the necessary general-purpose hardware, and certainly also by hardware alone, though in many cases the former is the better embodiment. Based on this understanding, the technical solutions of the present invention, or the part of them that contributes over the prior art, may be embodied in the form of a software product. The software product is stored in a readable storage medium, such as a floppy disk, hard disk, or optical disk, and includes several instructions enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods of the embodiments of the present invention.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A method of data migration, the method comprising:
searching metadata for a data unit that meets a migration condition, and obtaining a key of a fragment of the data unit, an identifier of the source storage node where the key-value is located, and a destination storage pool identifier, wherein each key corresponds to a value;
selecting a destination storage node for storing the key-value from the destination storage pool represented by the destination storage pool identifier;
when the source storage node needs to read data adjacent to the key-value, instructing the source storage node to obtain the key-value by using the key and to send the obtained key-value to the destination storage node.
2. The data migration method of claim 1, wherein the migration condition comprises at least one of the following conditions:
the creation time of the data unit, the size of the data unit, the name of the data unit, and the storage pool in which the data unit is located.
3. The data migration method of claim 1, wherein:
the storage pool in which the source storage node is located and the destination storage pool provide different storage performances.
4. The data migration method of claim 1, wherein:
when the source storage node needs to read data adjacent to the key-value, the source storage node performs the operation of sending the key-value to the destination storage node.
5. The data migration method of claim 1, wherein the data unit is one of:
a file, an object, a block, or a portion of a file.
6. A data migration apparatus, the apparatus comprising:
a data unit migration module, configured to search metadata for a data unit that meets a migration condition, and to obtain a key of a fragment of the data unit, an identifier of the source storage node where the key-value is located, and a destination storage pool identifier, wherein each key corresponds to a value;
a destination storage node determining module, configured to select a destination storage node for storing the key-value from a destination storage pool represented by the destination storage pool identifier; and
a migration module, configured to: when the source storage node needs to read data adjacent to the key-value, instruct the source storage node to obtain the key-value by using the key and to send the obtained key-value to the destination storage node.
7. The data migration apparatus of claim 6, wherein the migration condition comprises at least one of the following conditions:
the creation time of the data unit, the size of the data unit, the name of the data unit, and the storage pool in which the data unit is located.
8. The data migration apparatus of claim 6, wherein:
the storage pool in which the source storage node is located and the destination storage pool provide different storage performances.
9. The data migration apparatus of claim 6, wherein:
the source storage node is configured to send the key-value to the destination storage node when data adjacent to the key-value needs to be read.
10. The data migration apparatus of claim 6, wherein the data unit is one of:
a file, an object, a block, or a portion of a file.
CN201710214393.XA 2017-04-01 2017-04-01 Data migration method and device Active CN107153512B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710214393.XA CN107153512B (en) 2017-04-01 2017-04-01 Data migration method and device


Publications (2)

Publication Number Publication Date
CN107153512A CN107153512A (en) 2017-09-12
CN107153512B true CN107153512B (en) 2020-05-08

Family

ID=59793515

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710214393.XA Active CN107153512B (en) 2017-04-01 2017-04-01 Data migration method and device

Country Status (1)

Country Link
CN (1) CN107153512B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108647270A (en) * 2018-04-28 2018-10-12 尚谷科技(天津)有限公司 A method of the Data Migration based on fault-tolerant time daily record
CN111381770B (en) * 2018-12-30 2021-07-06 浙江宇视科技有限公司 Data storage switching method, device, equipment and storage medium
CN110855737B (en) * 2019-09-24 2020-11-06 中国科学院软件研究所 Consistency level controllable self-adaptive data synchronization method and system
CN114415977B (en) * 2022-03-29 2022-10-04 阿里云计算有限公司 Method for accessing storage pool and distributed storage system

Citations (3)

Publication number Priority date Publication date Assignee Title
CN101997911A (en) * 2010-10-21 2011-03-30 中兴通讯股份有限公司 Data migration method and system
CN103718533A (en) * 2013-06-29 2014-04-09 华为技术有限公司 Zoning balance subtask issuing method, apparatus and system
CN104348862A (en) * 2013-07-31 2015-02-11 华为技术有限公司 Data migration processing method, apparatus, and system

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
JP6439475B2 (en) * 2015-02-09 2018-12-19 富士通株式会社 Information processing apparatus, information processing system, and control program




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant