CN109344143A - A distributed cluster data migration optimization method based on Ceph - Google Patents
- Publication number: CN109344143A
- Application number: CN201811253132.XA
- Authority
- CN
- China
- Prior art keywords
- ceph
- flag bit
- osd
- equipment
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The invention discloses a distributed cluster data migration optimization method based on Ceph, belonging to the field of Ceph distributed clusters. The method comprises: step 1, setting the flag bits of the PGs; step 2, stopping the OSD process of the faulty node and zeroing the weight of that OSD in the CRUSH map; step 3, removing the OSD of step 2 from the CRUSH map; step 4, adding a new OSD on the basis of step 3; and step 5, clearing the flag bits of the PGs. The invention solves the problems that, in existing Ceph distributed storage, too many PGs are migrated when a failure occurs and replacing a faulty node triggers data migration, both of which cause high system consumption; it relieves the excessive data migration load on Ceph storage, avoids the data loss caused by node failure, and effectively reduces the resource consumption of Ceph distributed storage.
Description
Technical field
The invention belongs to the field of Ceph distributed clusters, and in particular relates to a distributed cluster data migration optimization method based on Ceph.
Background art
In the Internet era, with the continuous development of cloud computing, the global data volume has grown explosively and big-data storage requirements have changed greatly. On the storage side, Ceph is widely recognized as one of the outstanding open-source solutions. Its design philosophy is software-defined storage (SDS): by organizing the resources of many machines, Ceph externally provides a unified, high-capacity, high-performance, and highly reliable file service that meets the needs of large-scale applications, and its architecture can easily scale to the PB level. The logical storage unit of Ceph is the Placement Group, abbreviated PG.
Ceph developed the CRUSH (Controlled Replication Under Scalable Hashing) algorithm, which can effectively distribute object replicas across a hierarchically structured storage cluster. CRUSH implements a pseudorandom function: its input is an object ID or object-group ID, and it returns the set of storage devices (OSDs) that hold the object's replicas. An OSD (Object Storage Device) stores data, replicates data, balances data, recovers data, and performs heartbeat checks with other OSDs. The CRUSH algorithm requires a cluster map describing the hierarchical structure of the storage cluster and a rule describing the replica distribution strategy.
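The weighted pseudorandom mapping described above can be sketched in a few lines. The sketch below is a toy stand-in that uses rendezvous (highest-random-weight) hashing rather than Ceph's actual straw buckets and hierarchy descent, and the OSD ids and weights are illustrative, not taken from the patent:

```python
# Toy CRUSH-style placement: a deterministic hash-based score per (PG, OSD)
# pair, scaled by the OSD weight, with the top-scoring OSDs forming the
# acting set. This is rendezvous hashing, not Ceph's real straw algorithm.
import hashlib

def _draw(pg_id, osd, weight):
    # Deterministic pseudo-random score for the (pg, osd) pair, scaled by weight.
    h = hashlib.sha256(f"{pg_id}:{osd}".encode()).digest()
    return weight * (int.from_bytes(h[:8], "big") / 2**64)

def place_pg(pg_id, osd_weights, replicas=3):
    # Pick the `replicas` OSDs with the highest scores; a zero-weight OSD
    # scores 0 and is never chosen, mirroring "ceph osd crush reweight osd.N 0".
    ranked = sorted(osd_weights, key=lambda o: _draw(pg_id, o, osd_weights[o]), reverse=True)
    return ranked[:replicas]

weights = {i: 1.0 for i in range(9)}   # 9 equally weighted OSDs, illustrative only
print(place_pg(42, weights))           # same inputs always yield the same acting set
```

Because the mapping is a pure function of the PG id and the weight table, no central lookup table has to be stored or migrated; this is the property the patent's weight-zeroing steps rely on.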
While Ceph distributed storage offers high performance and high scalability, it also faces the problem that adding or removing devices triggers unnecessary secondary migration and increases system consumption. In detail: throughout operation, each Ceph OSD daemon checks the heartbeats of the other OSDs and reports to the Ceph Monitor; if the OSD of some node fails, the Monitor sets the state of that OSD to Down. For example, suppose a PG is mapped to [0, 8, 3] and the data on OSD.0 is damaged, so OSD.0 must be migrated. Because Ceph keeps 3 redundant replicas, after OSD.0 enters the Down state another OSD must be selected and the data copied to it from one of the remaining replicas on host0 (OSD.3 or OSD.8). Since data of other PGs on OSD.3 and OSD.8 may migrate along with this, an additional 30%-70% of migration is incurred, causing excessive system energy consumption.
On the other hand, when a Ceph cluster replaces a faulty node in the prior art, the steps used are: a) stop the OSD process; b) mark the node state as Out; c) remove the node from the CRUSH map; d) delete the node; e) delete the node's authentication. These steps trigger migration twice: once after the CRUSH remove and once more after the node's OSD is deleted, and the two extra migrations increase the cluster's consumption.
In summary, a data migration optimization method is needed to overcome the prior-art problems that too many PGs are migrated when a failure occurs and that replacing a faulty node triggers data migration, causing high system consumption.
Summary of the invention
The object of the present invention is to provide a distributed cluster data migration optimization method based on Ceph, which solves the problems that, in existing Ceph distributed storage, too many PGs are migrated upon failure and replacing a faulty node triggers data migration, causing high system consumption.
The technical solution adopted by the invention is as follows:
A distributed cluster data migration optimization method based on Ceph comprises the following steps:
Step 1: set the flag bits of the PGs;
Step 2: stop the OSD process of the faulty node, and zero the weight of that OSD in the CRUSH map;
Step 3: remove the OSD of step 2 from the CRUSH map;
Step 4: add a new OSD on the basis of step 3;
Step 5: clear the flag bits of the PGs.
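The effect of bracketing the node replacement with flag bits can be illustrated with a toy model (my own construction, not code from the patent): if rebalancing fires after every topology change unless suppressed, the conventional remove-then-add sequence rebalances twice, while setting the flags first and clearing them last yields a single rebalance.

```python
# Toy model of flag-suppressed rebalancing. Cluster topology changes trigger
# a rebalance unless one of the norebalance/nobackfill/norecover-style flags
# is set; clearing the last flag triggers one deferred rebalance.
class Cluster:
    def __init__(self, osds):
        self.osds, self.flags, self.rebalances = set(osds), set(), 0

    def _maybe_rebalance(self):
        # Migration is suppressed while any no-migration flag is set.
        if not self.flags & {"norebalance", "nobackfill", "norecover"}:
            self.rebalances += 1

    def set_flag(self, f):
        self.flags.add(f)

    def unset_flag(self, f):
        self.flags.discard(f)
        self._maybe_rebalance()

    def remove_osd(self, o):
        self.osds.discard(o)
        self._maybe_rebalance()

    def add_osd(self, o):
        self.osds.add(o)
        self._maybe_rebalance()

# Conventional replacement: migration fires on the remove and again on the add.
c = Cluster({0, 1, 2})
c.remove_osd(2); c.add_osd(3)
print("conventional rebalances:", c.rebalances)   # -> 2

# The patent's ordering: flags first, then remove + add, then clear the flags.
c = Cluster({0, 1, 2})
c.set_flag("norebalance")
c.remove_osd(2); c.add_osd(3)
c.unset_flag("norebalance")
print("optimized rebalances:", c.rebalances)      # -> 1
```

The model deliberately reduces "rebalance" to a counter; in a real cluster each avoided rebalance corresponds to an avoided wave of PG data movement.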
Preferably, the flag bits in step 1 include the flag bits norebalance, nobackfill, and norecover, wherein:
the flag bit norebalance marks the Ceph cluster so that it performs no cluster rebalancing;
the flag bit nobackfill marks the Ceph cluster so that it performs no data backfill;
the flag bit norecover marks the Ceph cluster so that it performs no cluster recovery.
Preferably, step 1 comprises the following steps:
Step 1.1: define the flag bits;
Step 1.2: set the flag bits of the PGs.
Preferably, step 2 comprises the following steps:
Step 2.1: stop the OSD process of the faulty node;
Step 2.2: zero the weight of the OSD in the CRUSH map.
Preferably, step 3 comprises the following steps:
Step 3.1: delete the OSD whose weight has been zeroed;
Step 3.2: delete the faulty node corresponding to the OSD of step 3.1.
In conclusion, by adopting the above technical solution, the beneficial effects of the present invention are:
1. By setting flag bits on the cluster devices during data migration, the present invention avoids migrating redundant data and avoids the data migration caused by replacing a faulty node. It solves the problems that, in existing Ceph distributed storage, too many PGs are migrated when a failure occurs and replacing a faulty node triggers data migration, causing high system consumption; it relieves the excessive data migration load on Ceph storage, avoids the data loss caused by node failure, and effectively reduces the resource consumption of Ceph distributed storage.
2. Through proper configuration, the present invention sets flag bits and avoids the data migration caused by replacing a faulty node, reducing migration volume by 30%-40%; it avoids the data loss caused by node failure and improves the availability of the Ceph object storage cluster.
3. By setting flag bits on the cluster devices during data migration, the present invention optimizes data migration, improves resource utilization while migration occurs, effectively reduces the resource consumption of Ceph distributed storage, and prevents invalid, excessive data migration.
Detailed description of the invention
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only certain embodiments of the present invention and therefore should not be construed as limiting its scope; for those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.
Fig. 1 is a flow chart of the method of the present invention;
Fig. 2 is comparison table 1 of test 1 of the present invention;
Fig. 3 is the comparison chart of test 1 of the present invention;
Fig. 4 is comparison table 2 of test 2 of the present invention;
Fig. 5 is the comparison chart of test 2 of the present invention.
Specific embodiment
In order to make the objectives, technical solutions, and advantages of the present invention clearer, the present invention is further elaborated below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein are only used to explain the present invention, not to limit it; that is, the described embodiments are only a part of the embodiments of the present invention, not all of them. The components of the embodiments of the present invention, as generally described and illustrated in the drawings herein, can be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present invention provided in the drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of it. Based on the embodiments of the present invention, all other embodiments obtained by those skilled in the art without creative work shall fall within the protection scope of the present invention.
It should be noted that relational terms such as "first" and "second" are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further restrictions, an element qualified by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes it.
Technical problem: to solve the problems that, in existing Ceph distributed storage, too many PGs are migrated upon failure and replacing a faulty node triggers data migration, causing high system consumption.
Technical means: as shown in Figs. 1-5, a distributed cluster data migration optimization method based on Ceph comprises the following steps:
Step 1: set the flag bits of the PGs;
Step 2: stop the OSD process of the faulty node, and zero the weight of that OSD in the CRUSH map;
Step 3: remove the OSD of step 2 from the CRUSH map;
Step 4: add a new OSD on the basis of step 3;
Step 5: clear the flag bits of the PGs.
Preferably, the flag bits in step 1 include the flag bits norebalance, nobackfill, and norecover, wherein:
the flag bit norebalance marks the Ceph cluster so that it performs no cluster rebalancing;
the flag bit nobackfill marks the Ceph cluster so that it performs no data backfill;
the flag bit norecover marks the Ceph cluster so that it performs no cluster recovery.
Step 1 comprises the following steps:
Step 1.1: define the flag bits;
Step 1.2: set the flag bits of the PGs.
Step 2 comprises the following steps:
Step 2.1: stop the OSD process of the faulty node;
Step 2.2: zero the weight of the OSD in the CRUSH map.
Step 3 comprises the following steps:
Step 3.1: delete the OSD whose weight has been zeroed;
Step 3.2: delete the faulty node corresponding to the OSD of step 3.1.
Technical effect: by setting flag bits on the cluster devices during data migration, the present invention avoids migrating redundant data and avoids the data migration caused by replacing a faulty node. It solves the problems that, in existing Ceph distributed storage, too many PGs are migrated when a failure occurs and replacing a faulty node triggers data migration, causing high system consumption; it relieves the excessive data migration load on Ceph storage, avoids the data loss caused by node failure, and effectively reduces the resource consumption of Ceph distributed storage.
The features and performance of the present invention are described in further detail below with reference to the embodiments.
Embodiment 1
As shown in Figs. 1-5, the basic environment consists of 3 nodes, each with 3 OSDs (50 GB each); the number of replicas is set to 3 and the number of PGs to 664. When one OSD is deleted, the migration volume generated by the conventional method is 420, while the migration volume generated by this method is 263; the test results are shown in Fig. 2.
The process is as follows (note: the text after # is the command to execute):
Multiple flag bits are set on the OSDs in advance: the flag bit norebalance makes the Ceph cluster perform no cluster rebalancing; the flag bit nobackfill makes the Ceph cluster perform no data backfill; the flag bit norecover makes the Ceph cluster perform no cluster recovery.
# ceph osd set norebalance
# ceph osd set nobackfill
# ceph osd set norecover
Record the current PG distribution, saving the data to be migrated to file pg1.txt:
# ceph pg dump pgs | awk '{print $1,$15}' | grep -v pg > pg1.txt
Designate OSD.4 as the test device, stop it, and set its CRUSH storage weight to 0:
# /etc/init.d/ceph stop osd.4
# ceph osd crush reweight osd.4 0
After the test device OSD.4 is stopped and its CRUSH weight zeroed, the PGs are notified that this OSD no longer maps data and no longer provides service. The PG map would normally migrate its data, but because the flag bits are set on the OSDs, only the state change appears and no real migration occurs; and since the weight is zeroed, the overall distribution is unaffected, which also avoids data migration.
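Why zeroing one device's weight leaves the rest of the distribution untouched can be illustrated with a toy weighted-placement model. As before this uses rendezvous hashing as a stand-in for the real CRUSH; the 9 OSDs and 664 PGs mirror the test setup, but the mapping itself is synthetic:

```python
# Illustrative sketch (not Ceph's real CRUSH): under weighted rendezvous
# placement, zeroing one device's weight remaps only the PGs that device
# held -- the relative ranking of all other devices is unchanged.
import hashlib

def draw(pg, osd, w):
    h = hashlib.sha256(f"{pg}:{osd}".encode()).digest()
    return w * (int.from_bytes(h[:8], "big") / 2**64)

def place(pg, weights, n=3):
    return sorted(weights, key=lambda o: draw(pg, o, weights[o]), reverse=True)[:n]

weights = {i: 1.0 for i in range(9)}
before = {pg: place(pg, weights) for pg in range(664)}   # 664 PGs, as in test 1
weights[4] = 0.0                                          # "ceph osd crush reweight osd.4 0"
after = {pg: place(pg, weights) for pg in range(664)}

changed = [pg for pg in before if before[pg] != after[pg]]
# Every changed PG previously included osd.4; all other PGs keep their mapping.
assert all(4 in before[pg] for pg in changed)
print(len(changed), "of", len(before), "PG mappings changed")
```

In other words, the only data movement the zeroed weight can force is for the PGs that actually lived on osd.4; with the flag bits set, even that movement is deferred until the flags are cleared.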
Record the current PG distribution, saving the data to be migrated to file pg2.txt:
# ceph pg dump pgs | awk '{print $1,$15}' | grep -v pg > pg2.txt
Compare files pg1.txt and pg2.txt to see how the PGs changed, listing only the changed part:
# diff -y -W 100 pg1.txt pg2.txt --suppress-common-lines | wc -l
The changed part is 98 lines.
Delete the test device OSD.4:
# remove osd
# ceph osd rm osd.4
Check file pg2.txt; if no PG has changed, add a new OSD:
# add osd
# ceph-deploy osd prepare --zap-disk Ceph1:/dev/vdd
# ceph-deploy osd activate-all
Remove all the flag bits:
# ceph osd unset norebalance
# ceph osd unset nobackfill
# ceph osd unset norecover
Record the data changes from the migration to file pg3.txt:
# ceph pg dump pgs | awk '{print $1,$15}' | grep -v pg > pg3.txt
According to files pg1.txt and pg3.txt, the total change in migration volume is 263:
# diff -y -W 100 pg1.txt pg3.txt --suppress-common-lines | wc -l
263
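The `diff ... | wc -l` count above can equally be computed per PG. The sketch below does the same comparison in a few lines; the sample lines are made up for illustration (the real `ceph pg dump` output has many more columns, of which the awk filter keeps only the PG id and acting set):

```python
# Count remapped PGs between two recorded dumps, equivalent to
# `diff -y --suppress-common-lines pg1.txt pg3.txt | wc -l` when each
# line is "<pgid> <acting-set>" and a changed line means a remapped PG.
def parse(dump):
    # Map pgid -> acting-set string, one "pgid acting" pair per line.
    return dict(line.split(None, 1) for line in dump.strip().splitlines())

def migrated(before, after):
    a, b = parse(before), parse(after)
    return sum(1 for pg in a if b.get(pg) != a[pg])

pg1 = """1.0 [0,8,3]
1.1 [4,2,7]
1.2 [5,1,6]"""
pg3 = """1.0 [0,8,3]
1.1 [2,7,5]
1.2 [5,1,6]"""
print(migrated(pg1, pg3))  # -> 1: only PG 1.1 was remapped
```

Counting per PG id rather than per diff line also stays correct if the dump order differs between the two files.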
Test 1 yields data migration comparison table 1 (Fig. 2) and the comparison chart (Fig. 3); from the data, the present invention saves 37% of the migration volume. In Fig. 3, the bars of the original method are pg1.txt, pg2.txt, pg3.txt and the total number of PG moves, and the bars of the improved method are pg3.txt and the total number of PG moves; as the grayscale processing of the figure cannot be removed for comparison, this is explained here. Through proper configuration, the present invention sets flag bits, avoids the data migration caused by replacing a faulty node, reduces migration volume by 30%-40%, avoids the data loss caused by node failure, and improves the availability of the Ceph object storage cluster; it effectively reduces the resource consumption of Ceph distributed storage and prevents invalid, excessive data migration. The present invention achieves excellent results when applied to storage for cloud platforms, for example in the project "Common point service cloud platform R&D based on BeiDou" (project No. 15ZC1189) and the project "Cloud computing experiment platform research based on the novel distributed storage Ceph" (project No. 2015GZ0107).
Embodiment 2
As shown in Figs. 1-5, the basic environment used in test 2 consists of 2 nodes, each with 4 OSDs (50 GB each); the number of replicas is set to 2 and the number of PGs to 664. The test results are shown in Fig. 4 and Fig. 5.
a) Set the cluster's flag bits to prevent migration:
the flag bit norebalance makes the Ceph cluster perform no cluster rebalancing;
the flag bit nobackfill makes the Ceph cluster perform no data backfill;
the flag bit norecover makes the Ceph cluster perform no cluster recovery.
b) Stop the OSD process of the faulty node and reset the weight of the faulty node's OSD to 0 (CRUSH reweight):
Stopping the OSD process of the faulty node notifies the cluster that this OSD no longer maps data and no longer provides service; since the weight is zeroed, the overall distribution is unaffected, which also avoids data migration.
c) Remove the faulty node's OSD with the CRUSH algorithm (CRUSH remove):
Delete the OSD of the faulty node, i.e., delete it from the CRUSH map; because its weight is 0, this does not affect the host's weight and avoids data migration.
Delete the faulty node (# ceph osd rm osd.0), which deletes the record of this node from the cluster.
d) Add a new OSD;
e) Clear the cluster's flag bits.
Test 2 yields data migration comparison table 2 (Fig. 4) and the comparison chart (Fig. 5); from the data, the present invention saves 43% of the migration volume. In Fig. 5, the bars of the original method are pg1.txt, pg2.txt, pg3.txt and the total number of PG moves, and the bars of the improved method are pg3.txt and the total number of PG moves; as the grayscale processing of the figure cannot be removed for comparison, this is explained here. By setting flag bits on the cluster devices during data migration, the present invention avoids migrating redundant data and avoids the data migration caused by replacing a faulty node; it solves the problems that, in existing Ceph distributed storage, too many PGs are migrated upon failure and replacing a faulty node triggers data migration, causing high system consumption. Through proper configuration it sets flag bits, avoids the data migration caused by replacing a faulty node, and reduces migration volume by 30%-40%; it relieves the excessive data migration load on Ceph storage, avoids the data loss caused by node failure, effectively reduces the resource consumption of Ceph distributed storage, and prevents invalid, excessive data migration.
The above are merely preferred embodiments of the present invention and are not intended to limit it; any modifications, equivalent replacements, and improvements made within the spirit and principles of the present invention shall be included within its protection scope.
Claims (5)
1. A distributed cluster data migration optimization method based on Ceph, characterized by comprising the following steps:
Step 1: set the flag bits of the PGs;
Step 2: stop the OSD process of the faulty node, and zero the weight of that OSD in the CRUSH map;
Step 3: remove the OSD of step 2 from the CRUSH map;
Step 4: add a new OSD on the basis of step 3;
Step 5: clear the flag bits of the PGs.
2. The distributed cluster data migration optimization method based on Ceph according to claim 1, characterized in that the flag bits in step 1 include the flag bits norebalance, nobackfill, and norecover, wherein:
the flag bit norebalance marks the Ceph cluster so that it performs no cluster rebalancing;
the flag bit nobackfill marks the Ceph cluster so that it performs no data backfill;
the flag bit norecover marks the Ceph cluster so that it performs no cluster recovery.
3. The distributed cluster data migration optimization method based on Ceph according to claim 1 or 2, characterized in that step 1 comprises the following steps:
Step 1.1: define the flag bits;
Step 1.2: set the flag bits of the PGs.
4. The distributed cluster data migration optimization method based on Ceph according to claim 3, characterized in that step 2 comprises the following steps:
Step 2.1: stop the OSD process of the faulty node;
Step 2.2: zero the weight of the OSD in the CRUSH map.
5. The distributed cluster data migration optimization method based on Ceph according to claim 4, characterized in that step 3 comprises the following steps:
Step 3.1: delete the OSD whose weight has been zeroed;
Step 3.2: delete the faulty node corresponding to the OSD of step 3.1.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811253132.XA CN109344143A (en) | 2018-10-25 | 2018-10-25 | A kind of distributed type assemblies Data Migration optimization method based on Ceph |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109344143A (en) | 2019-02-15 |
Family
ID=65312367
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811253132.XA Pending CN109344143A (en) | 2018-10-25 | 2018-10-25 | A kind of distributed type assemblies Data Migration optimization method based on Ceph |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109344143A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101534550A (en) * | 2008-03-14 | 2009-09-16 | 中兴通讯股份有限公司 | Method for realizing synchronizing scanning group data of cluster terminal |
CN102821411A (en) * | 2011-06-08 | 2012-12-12 | 中兴通讯股份有限公司 | Method, base station and system for achieving fail soft in broadband clustering system |
Non-Patent Citations (1)
Title |
---|
HIUBUNTU: "Openstack之Ceph集群操作" ("Ceph cluster operations on OpenStack"), https://blog.51cto.com/qujunorz/1878411 * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110059068A (en) * | 2019-04-11 | 2019-07-26 | 厦门网宿有限公司 | Data verification method and data verification system in a kind of distributed memory system |
CN110059068B (en) * | 2019-04-11 | 2021-04-02 | 厦门网宿有限公司 | Data verification method and data verification system in distributed storage system |
CN111752483A (en) * | 2020-05-28 | 2020-10-09 | 苏州浪潮智能科技有限公司 | Method and system for reducing reconstruction data by changing storage medium in storage cluster |
CN111880747A (en) * | 2020-08-01 | 2020-11-03 | 广西大学 | Automatic balanced storage method of Ceph storage system based on hierarchical mapping |
WO2022028033A1 (en) * | 2020-08-01 | 2022-02-10 | 广西大学 | Hierarchical mapping-based automatic balancing storage method for ceph storage system |
CN111880747B (en) * | 2020-08-01 | 2022-11-08 | 广西大学 | Automatic balanced storage method of Ceph storage system based on hierarchical mapping |
CN111966291A (en) * | 2020-08-14 | 2020-11-20 | 苏州浪潮智能科技有限公司 | Data storage method, system and related device in storage cluster |
CN113282241A (en) * | 2021-05-26 | 2021-08-20 | 上海仪电(集团)有限公司中央研究院 | Ceph distributed storage-based hard disk weight optimization method and device |
CN113282241B (en) * | 2021-05-26 | 2024-04-09 | 上海仪电(集团)有限公司中央研究院 | Hard disk weight optimization method and device based on Ceph distributed storage |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20190215 |