CN107704212B - Data processing method and device - Google Patents

Data processing method and device

Info

Publication number
CN107704212B
Authority
CN
China
Prior art keywords
bucket
pool
utilization rate
pool group
group
Prior art date
Legal status
Active
Application number
CN201711047852.6A
Other languages
Chinese (zh)
Other versions
CN107704212A (en)
Inventor
杨潇
顾雷雷
Current Assignee
New H3C Information Technologies Co Ltd
Original Assignee
New H3C Information Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by New H3C Information Technologies Co Ltd
Priority to CN201711047852.6A
Publication of CN107704212A
Application granted
Publication of CN107704212B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0602: Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F 3/0614: Improving the reliability of storage systems
    • G06F 3/0617: Improving the reliability of storage systems in relation to availability
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/07: Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F 11/14: Error detection or correction of the data by redundancy in operation
    • G06F 11/1402: Saving, restoring, recovering or retrying
    • G06F 11/1446: Point-in-time backing up or restoration of persistent data
    • G06F 11/1458: Management of the backup or restore process
    • G06F 11/1464: Management of the backup or restore process for networked environments
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0628: Interfaces specially adapted for storage systems making use of a particular technique
    • G06F 3/0655: Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/06: Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F 3/0601: Interfaces specially adapted for storage systems
    • G06F 3/0668: Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F 3/067: Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004: Server selection for load balancing
    • H04L 67/1008: Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/01: Protocols
    • H04L 67/10: Protocols in which an application is distributed across nodes in the network
    • H04L 67/1097: Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Computer Hardware Design (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The present invention provides a data processing method and device. The method includes: for any storage pool (pool) of a Ceph cluster, when the number of replicas of the pool equals the number of bucket nodes designated as failure domains, adding the pool to the first-type pool group corresponding to those bucket nodes; for any first-type pool group, when the first theoretical storage utilization of the group is below a preset utilization threshold, triggering a migration of storage units between the bucket nodes corresponding to the group, such that the second theoretical storage utilization of the group after migration is greater than the first theoretical storage utilization, and updating the CRUSH (Controlled Replication Under Scalable Hashing) map corresponding to the group after the migration completes. The present invention improves the theoretical storage utilization of first-type pool groups and, in turn, the storage utilization of the Ceph cluster.

Description

Data processing method and device
Technical field
The present invention relates to the field of network communication technology, and in particular to a data processing method and device.
Background art
Ceph is a distributed storage system with excellent performance, high reliability, and high scalability, and is widely used in storage environments of all sizes. A Ceph system completes its addressing process through three mappings: file to object, object to PG (Placement Group), and PG to OSD (Object Storage Device). The mapping from object to PG is implemented by a hash algorithm, and the mapping from PG to OSD is completed by the CRUSH (Controlled Replication Under Scalable Hashing) algorithm.
The failure domain is another key concept in a Ceph cluster. By combining failure domains with the redundancy strategy, the cluster preferentially stores the replicas or fragments of a piece of data in different failure domains, so that when a single failure domain fails, the cluster can still provide storage service normally.
However, practice shows that when the number of replicas specified by the data redundancy strategy equals the number of bucket nodes designated as failure domains, PGs are mapped uniformly onto each bucket node designated as a failure domain; when the storage capacities of the bucket nodes differ greatly, the bucket node with the smaller capacity quickly becomes a capacity bottleneck, causing large differences in storage utilization among the bucket nodes of the Ceph cluster. Once the storage utilization of some OSD reaches its upper limit (e.g., 95%), the Ceph cluster can no longer provide service externally, even though the larger-capacity nodes still have considerable spare capacity, so the space utilization of the whole Ceph cluster is low.
Summary of the invention
The present invention provides a data processing method and device to solve the problem that, in an existing Ceph cluster, when the number of replicas specified by the data redundancy strategy equals the number of bucket nodes designated as failure domains and the storage capacities of the bucket nodes differ greatly, the bucket node with the smaller storage capacity quickly becomes a storage capacity bottleneck.
According to a first aspect of the embodiments of the present invention, a data processing method is provided, applied to a monitor of a Ceph distributed storage cluster. The method includes:
for any storage pool (pool) of the Ceph cluster, when the number of replicas of the pool equals the number of bucket nodes designated as failure domains, adding the pool to the first-type pool group corresponding to those bucket nodes;
for any first-type pool group, when the first theoretical storage utilization of the group is below a preset utilization threshold, triggering a migration of storage units between the bucket nodes corresponding to the group, such that the second theoretical storage utilization of the group after migration is greater than the first theoretical storage utilization, and updating the CRUSH (Controlled Replication Under Scalable Hashing) map corresponding to the group after the migration completes.
According to a second aspect of the embodiments of the present invention, a data processing device is provided, applied to a monitor of a Ceph distributed storage cluster. The device includes:
a pool group management unit, configured to, for any storage pool (pool) of the Ceph cluster, add the pool to the first-type pool group corresponding to the bucket nodes designated as failure domains when the number of replicas of the pool equals the number of those bucket nodes;
a migration unit, configured to, for any first-type pool group, trigger a migration of storage units between the bucket nodes corresponding to the group when the first theoretical storage utilization of the group is below a preset utilization threshold, such that the second theoretical storage utilization of the group after migration is greater than the first theoretical storage utilization;
a maintenance unit, configured to update the CRUSH map corresponding to the first-type pool group after the migration completes.
With the embodiments of the present invention, for any pool of the Ceph cluster, when the number of replicas of the pool equals the number of bucket nodes designated as failure domains, the pool is added to the first-type pool group corresponding to those bucket nodes; for any first-type pool group, when the first theoretical storage utilization of the group is below a preset utilization threshold, a migration of storage units between the bucket nodes corresponding to the group is triggered, such that the second theoretical storage utilization after migration is greater than the first, and the CRUSH map corresponding to the group is updated after the migration completes. This improves the theoretical storage utilization of the first-type pool group and, in turn, the storage utilization of the Ceph cluster.
Brief description of the drawings
Fig. 1 is a flow diagram of a data processing method provided by an embodiment of the present invention;
Figs. 2A-2B are diagrams of bucket node weights in a concrete application scenario provided by an embodiment of the present invention;
Fig. 3 is a structural diagram of a data processing device provided by an embodiment of the present invention;
Fig. 4 is a structural diagram of another data processing device provided by an embodiment of the present invention;
Fig. 5 is a hardware structure diagram of a data processing device provided by an embodiment of the present invention.
Detailed description
To enable those skilled in the art to better understand the technical solutions in the embodiments of the present invention, some of the terms and concepts involved in the present invention are briefly described first.
1. pool (storage pool): a pool is a set of PGs. When a pool is created, its data redundancy strategy must be configured and the bucket nodes serving as its failure domains must be designated; the data redundancy strategy specifies the number of replicas of the pool. After a pool is created, if the number of replicas specified by its data redundancy strategy equals the number of bucket nodes designated as its failure domains, the PGs added to the pool are uniformly mapped onto each bucket node designated as a failure domain.
It should be noted that the number of replicas specified by the pool's data redundancy strategy may also differ from the number of bucket nodes designated as failure domains; in that case, according to the CRUSH algorithm, PGs are still mapped onto the bucket nodes designated as failure domains in a relatively uniform manner.
2. First-type pool group: also called an "equal bucket group" herein. A pool whose replica count, as specified by its data redundancy strategy, equals the number of bucket nodes designated as failure domains can be added to a first-type pool group. A first-type pool group is created based on bucket nodes: bucket nodes of the same type correspond to the same first-type pool group, and bucket nodes of different types correspond to different first-type pool groups. When the failure domains of multiple pools are bucket nodes of the same type, and the replica count specified by the data redundancy strategy of each of those pools equals the number of bucket nodes of that type, the pools are all added to the same first-type pool group.
The type of a bucket node may include, but is not limited to, server, rack, machine room, or data center.
For the multiple bucket nodes corresponding to a first-type pool group, the storage units (e.g., OSDs) in each bucket node are allowed to migrate between bucket nodes logically, without changing the actual physical deployment.
For example, if OSD1 on bucket A is migrated to bucket B, OSD1 logically belongs to bucket B but still actually (i.e., physically) belongs to bucket A.
3. Second-type pool group: also called the "non-equal group" herein. The whole Ceph cluster maintains one and only one non-equal group, and each pool in the Ceph cluster belongs to the second-type pool group by default when it is first created.
4. Weight: the storage capacity of a bucket node is also called the weight of the bucket node. For example, if the weight of a bucket node with 1 TB of storage capacity is defined as 1, then the weight of a bucket node with 100 TB is 100, and the weight of a bucket node with 500 GB is 0.5.
5. Theoretical storage utilization of a first-type pool group: among the multiple bucket nodes corresponding to the first-type pool group, the ratio of the weight of the bucket node with the smallest weight to the average weight of those bucket nodes.
For example, suppose the bucket nodes corresponding to a first-type pool group are bucket A, bucket B, and bucket C, with weights 30, 30, and 18 respectively. The theoretical storage utilization of the group is then 70% (18 / [(30 + 30 + 18) / 3] = 70%).
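To make the definition concrete, here is a minimal Python sketch (the function name is ours, not from the patent) that computes the theoretical storage utilization from a list of bucket weights:

    def theoretical_utilization(weights):
        """Theoretical storage utilization of a first-type pool group:
        the smallest bucket weight divided by the average bucket weight."""
        if not weights:
            raise ValueError("pool group has no bucket nodes")
        return min(weights) / (sum(weights) / len(weights))

    # Example from the text: bucket A/B/C with weights 30, 30, 18.
    print(theoretical_utilization([30, 30, 18]))  # 0.6923..., which the text rounds to 70%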
6. Real OSD: an OSD whose logical attribution and physical attribution are the same bucket node.
7. Virtual OSD: an OSD whose logical attribution and physical attribution are different bucket nodes.
For example, if OSD1 in bucket A is migrated to bucket B, then OSD1 logically belongs to bucket B while physically belonging to bucket A, i.e., OSD1 is a virtual OSD. OSD2 on bucket A, which has not been migrated, logically and physically belongs to bucket A, i.e., OSD2 is a real OSD.
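A tiny data-structure sketch makes the real/virtual distinction concrete; the class and field names are ours, not Ceph's:

    from dataclasses import dataclass

    @dataclass
    class OsdRecord:
        osd_id: int
        physical_bucket: str  # where the OSD's disks actually live
        logical_bucket: str   # where it counts after a logical migration

        @property
        def is_virtual(self) -> bool:
            # A virtual OSD logically belongs to a different bucket node
            # than the one it physically resides in.
            return self.physical_bucket != self.logical_bucket

    osd1 = OsdRecord(1, physical_bucket="bucket A", logical_bucket="bucket B")
    osd2 = OsdRecord(2, physical_bucket="bucket A", logical_bucket="bucket A")
    print(osd1.is_virtual, osd2.is_virtual)  # True False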
To make the above objects, features, and advantages of the embodiments of the present invention clearer and easier to understand, the technical solutions in the embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Referring to Fig. 1, which is a flow diagram of a data processing method provided by an embodiment of the present invention: the method can be applied to the monitor of a Ceph cluster. As shown in Fig. 1, the data processing method may include:
Step 101: for any pool of the Ceph cluster, when the number of replicas of the pool equals the number of bucket nodes designated as failure domains, add the pool to the first-type pool group corresponding to those bucket nodes.
In the embodiment of the present invention, for any pool in the Ceph cluster, the monitor may determine whether the number of replicas of the pool (i.e., the replica count specified by the pool's data redundancy strategy) equals the number of bucket nodes designated as failure domains.
For example, when a pool is newly created in the Ceph cluster, the monitor may determine whether the number of replicas of the pool equals the number of bucket nodes designated as failure domains.
When the monitor determines that the number of replicas of the pool equals the number of bucket nodes designated as failure domains, the monitor may add the pool to the first-type pool group corresponding to those bucket nodes.
In one embodiment of the present invention, adding the pool to the first-type pool group corresponding to the bucket nodes includes:
determining whether the first-type pool group corresponding to the bucket nodes exists;
if it exists, adding the pool to that first-type pool group;
otherwise, creating the first-type pool group corresponding to the bucket nodes, adding the pool to it, and creating the CRUSH map corresponding to the first-type pool group.
In this embodiment, when the monitor determines that the number of replicas of the pool equals the number of bucket nodes designated as failure domains, the monitor determines that the pool needs to be added to the first-type pool group corresponding to those bucket nodes (i.e., the bucket nodes designated as the failure domains of the pool). The monitor may first determine whether that first-type pool group exists; if it exists, the monitor may directly add the pool to it; otherwise, the monitor may create the first-type pool group corresponding to the bucket nodes, add the pool to it, and create the CRUSH map corresponding to the group. For the specific implementation, refer to the creation of a CRUSH map in an existing Ceph cluster, which is not repeated here.
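As a rough illustration, the following Python sketch outlines this get-or-create flow on the monitor side; all names (first_type_groups, create_first_type_group, and so on) are hypothetical stand-ins, not Ceph's actual API:

    def add_pool_to_first_type_group(monitor, pool):
        """Step 101 sketch: place a pool whose replica count equals the
        number of failure-domain bucket nodes into the matching group."""
        buckets = pool.failure_domain_buckets
        if pool.replica_count != len(buckets):
            return  # pool stays in the single second-type ("non-equal") group
        key = tuple(sorted(b.node_id for b in buckets))
        group = monitor.first_type_groups.get(key)
        if group is None:
            # Create the group and its dedicated CRUSH map on first use.
            group = monitor.create_first_type_group(buckets)
            monitor.first_type_groups[key] = group
        group.pools.append(pool)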
Step 102: for any first-type pool group, when the first theoretical storage utilization of the group is below a preset utilization threshold, trigger a migration of storage units between the bucket nodes corresponding to the group, such that the second theoretical storage utilization of the group after migration is greater than the first theoretical storage utilization, and update the CRUSH map corresponding to the group after the migration completes.
In the embodiment of the present invention, consider the case where the number of replicas equals the number of bucket nodes designated as failure domains: when the storage capacities of those bucket nodes differ greatly, the bucket node with the smaller capacity becomes a storage capacity bottleneck. Therefore, storage units can be migrated between the bucket nodes designated as failure domains to balance the storage capacities of the bucket nodes and improve storage utilization.
Accordingly, in the embodiment of the present invention, for any first-type pool group, the monitor may calculate the theoretical storage utilization of the group (referred to herein as the first theoretical storage utilization) and determine whether it is below the preset utilization threshold. If it is, the storage utilization of each bucket node corresponding to the group will be low, wasting storage resources, so the monitor triggers a migration of storage units between the bucket nodes corresponding to the group.
The preset utilization threshold may be set according to the storage utilization acceptable in the actual scenario, e.g., 80% or 90%.
When the monitor migrates storage units between the bucket nodes corresponding to a first-type pool group, it must ensure that the theoretical storage utilization of the group after migration (referred to herein as the second theoretical storage utilization) is greater than the first theoretical storage utilization, i.e., the migration of storage units must improve the theoretical storage utilization of the group.
For example, storage units of a certain capacity can be migrated out of the bucket node with the larger weight into the bucket node with the smaller weight; alternatively, a storage unit of capacity M1 can be migrated from the larger-weight bucket node to the smaller-weight bucket node while a storage unit of capacity M2 is migrated from the smaller-weight bucket node to the larger-weight bucket node, where M2 is less than M1.
After storage units are migrated, the storage capacity (i.e., weight) of each bucket node = the initial storage capacity of the bucket node + the capacity of the storage units migrated in - the capacity of the storage units migrated out.
For example, suppose the bucket nodes corresponding to a first-type pool group are bucket A, bucket B, and bucket C with weights 30, 30, and 18 respectively, and the monitor has bucket A and bucket B each migrate a storage unit of 2 TB to bucket C. After the migration, the weights of bucket A and bucket B become 28 each, the weight of bucket C becomes 22, and the theoretical storage utilization of the group becomes 85% (22 / [(28 + 28 + 22) / 3] = 85%).
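The weight bookkeeping can be illustrated with a toy helper, reusing theoretical_utilization from the earlier sketch (the move-tuple convention is our own):

    def apply_migrations(weights, moves):
        """weights: {bucket: weight}; moves: (src, dst, capacity) tuples,
        with capacity in the same units as the weights (1 per TB here)."""
        new_weights = dict(weights)
        for src, dst, capacity in moves:
            new_weights[src] -= capacity  # capacity migrated out
            new_weights[dst] += capacity  # capacity migrated in
        return new_weights

    before = {"A": 30, "B": 30, "C": 18}
    after = apply_migrations(before, [("A", "C", 2), ("B", "C", 2)])
    print(after)  # {'A': 28, 'B': 28, 'C': 22}
    print(theoretical_utilization(list(after.values())))  # ~0.85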
In the embodiment of the present invention, after the migration of storage units between bucket nodes is completed, the monitor may update the CRUSH map corresponding to the first-type pool group. For the specific implementation, refer to the update of the CRUSH map when storage units are added or deleted in an existing Ceph cluster, which is not repeated here.
Further, in the embodiment of the present invention, the monitor needs to maintain the member list of each first-type pool group, i.e., record the correspondence between each first-type pool group and the pools belonging to it, and deliver the member list of the first-type pool group (including the correspondence between pools and first-type pool groups) to the Ceph cluster nodes (i.e., the server nodes in the Ceph cluster) and the clients, so that a client can perform data read/write processing according to the correspondence between pools and first-type pool groups.
Accordingly, when a client receives a data read/write request for a target pool, it can look up the recorded correspondence between pools and first-type pool groups to determine whether there is a target first-type pool group corresponding to the target pool (i.e., whether the target pool belongs to a first-type pool group). If the target first-type pool group corresponding to the target pool is found, the client can determine the target OSD hit by the read/write request according to the CRUSH map corresponding to the target first-type pool group, and perform data read/write processing on the target OSD.
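A schematic of that client-side lookup is sketched below; crush_locate stands in for the CRUSH computation, and every name is illustrative rather than Ceph's real client API:

    def route_request(client, pool_name, request):
        """Pick the CRUSH map for a read/write request: the group-specific
        map if the pool belongs to a first-type pool group, otherwise the
        ordinary cluster map."""
        group = client.pool_to_first_type_group.get(pool_name)
        crush_map = group.crush_map if group else client.cluster_crush_map
        target_osd = crush_locate(crush_map, pool_name, request.object_name)
        return target_osd.handle(request)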
It should be noted that, in the embodiment of the present invention, the CRUSH map corresponding to a first-type pool group can be obtained by extending the existing CRUSH map, i.e., adding a type field to the existing CRUSH map to identify the type of the first-type pool group it belongs to. The type field is consistent with the type of the bucket nodes corresponding to the first-type pool group; for example, when the bucket nodes are machine rooms, the type field in the CRUSH map may be type1; when the bucket nodes are racks, the type field in the CRUSH map may be type2.
As can be seen, in the method flow shown in Fig. 1, by setting up first-type pool groups, pools in the Ceph cluster that meet the condition are added to the corresponding first-type pool group, and when the theoretical storage utilization of a first-type pool group is below the preset utilization threshold, storage units are migrated between the bucket nodes corresponding to the group, improving the theoretical storage utilization of the group and, in turn, the storage utilization of the Ceph cluster.
In one embodiment of the present invention, triggering the migration of storage units between the bucket nodes corresponding to the first-type pool group includes:
migrating storage units between the bucket nodes corresponding to the first-type pool group on the principle that the second theoretical storage utilization is greater than or equal to the preset utilization threshold;
when no migration scheme can make the second theoretical storage utilization greater than or equal to the preset utilization threshold, migrating storage units between the bucket nodes corresponding to the first-type pool group on the principle that the absolute value of the difference between the second theoretical storage utilization and the preset utilization threshold is the smallest;
when there are multiple migration schemes that make the second theoretical storage utilization greater than or equal to the preset utilization threshold, or multiple migration schemes that minimize the absolute value of the difference between the second theoretical storage utilization and the preset utilization threshold, determining the migration scheme actually used by one or more of the following principles:
a migration scheme that migrates out fewer storage units is preferred; a migration scheme in which more bucket nodes migrate out storage units is preferred; a migration scheme in which the storage units migrated out of the same bucket node are distributed in a more concentrated way is preferred.
In this embodiment, in order to optimize the effect of the storage unit migration, it should be ensured as far as possible that the second theoretical storage utilization is greater than or equal to the preset utilization threshold.
However, considering that in actual scenarios migration is performed in units of storage units, and the storage capacity of a storage unit is not arbitrarily adjustable, in some scenarios no migration of storage units can make the second theoretical storage utilization greater than or equal to the preset utilization threshold.
For example, suppose the bucket nodes designated as failure domains corresponding to a first-type pool group are bucket A and bucket B, with weights 3 and 1.8 respectively (i.e., storage capacities of 3 TB and 1.8 TB). Bucket A contains three storage units of 1 TB each, and bucket B contains one storage unit of 1 TB and one of 0.8 TB. The highest theoretical storage utilization the group can reach through storage unit migration is then 83% (2.0 / [(2.8 + 2.0) / 2] = 83%), achieved by migrating one 1 TB storage unit from bucket A to bucket B and one 0.8 TB storage unit from bucket B to bucket A. In this scenario, if the preset utilization threshold is higher than 83%, e.g., 85% or 90%, no migration of storage units can make the second theoretical storage utilization reach the preset utilization threshold.
In this embodiment, when no migration scheme can make the second theoretical storage utilization greater than or equal to the preset utilization threshold, storage units can be migrated so that the second theoretical storage utilization is as close as possible to the preset utilization threshold.
Further, in this embodiment, when there are multiple migration schemes that make the second theoretical storage utilization greater than or equal to the preset utilization threshold, or multiple migration schemes that minimize the absolute value of the difference between the second theoretical storage utilization and the preset utilization threshold, the monitor may determine the migration scheme actually used according to the principles that a scheme migrating out fewer storage units is preferred, that a scheme in which more bucket nodes migrate out storage units is preferred, and/or that a scheme in which the storage units migrated out of the same bucket node are more concentrated is preferred.
For example, suppose there are multiple migration schemes that make the second theoretical storage utilization greater than or equal to the preset utilization threshold. The monitor may compare the number of storage units each scheme needs to migrate out and select the scheme that migrates out the fewest storage units as the scheme actually used. If the schemes migrate out the same number of storage units, the monitor may compare the number of bucket nodes that migrate out storage units in each scheme and select the scheme with the most such bucket nodes. If the number of bucket nodes migrating out storage units is also the same, the monitor may select the scheme in which the storage units migrated out of the same bucket node are distributed in the most concentrated way.
For example, if migration scheme 1 requires migrating storage units of bucket A to both bucket B and bucket C, while migration scheme 2 only requires migrating storage units of bucket A to bucket C, the storage units migrated out of the same bucket node are more concentrated in scheme 2.
In this embodiment, if the scheme actually used still cannot be determined according to the above principles, one of the remaining migration schemes can be selected as the scheme actually used, or the choice can be further refined according to other strategies; the specific implementation is not repeated here.
It should be appreciated that the above principles for determining the migration scheme are only specific examples in the embodiment of the present invention and do not limit the scope of the present invention; other principles can also be used to determine the scheme actually used, e.g., preferring the scheme with fewer bucket nodes migrating out storage units, or random selection; the specific implementation is not repeated here.
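One natural reading of these tie-breaking rules is a lexicographic sort over the candidate plans. The sketch below assumes each plan is a list of (source bucket, destination bucket, storage unit) moves, that the candidate plans have been enumerated elsewhere, and that "concentrated" means "fewest distinct destinations per source bucket"; that last interpretation is ours:

    def pick_migration_plan(candidates):
        """Lexicographic tie-break among equally good migration plans:
        1) fewest storage units migrated out;
        2) most distinct bucket nodes migrating units out;
        3) most concentrated out-migration (fewest distinct destinations
           per source bucket node)."""
        def key(plan):
            sources = {src for src, _, _ in plan}
            spread = sum(len({dst for s, dst, _ in plan if s == src})
                         for src in sources)
            return (len(plan), -len(sources), spread)
        return min(candidates, key=key)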
Further, in one embodiment of the present invention, after triggering the migration of storage units between the bucket nodes corresponding to the first-type pool group, the method may further include:
when an add/delete operation on the storage units of the bucket nodes corresponding to the first-type pool group is detected, determining the theoretical storage utilization that the group would have after the add/delete operation if no storage unit migration had been performed (referred to herein as the third theoretical storage utilization);
when the third theoretical storage utilization is greater than the second theoretical storage utilization, deleting the first-type pool group, adding each pool in the first-type pool group to the second-type pool group, and redetermining the mapping relations between the PGs and OSDs of each pool in the former first-type pool group that has been added to the second-type pool group.
In this embodiment, consider that an administrator may improve storage utilization by adding storage units to the bucket node with the smaller weight, or by deleting storage units from the bucket node with the larger weight; in that case, the above storage unit migration may instead reduce storage utilization. Therefore, after the monitor has migrated the storage units of a first-type pool group, if the monitor detects an add/delete operation on the storage units of the bucket nodes corresponding to the group, the monitor can determine the theoretical storage utilization after the add/delete operation in the case where no storage unit migration had been performed (referred to herein as the third theoretical storage utilization).
For example, suppose the bucket nodes corresponding to a first-type pool group are bucket A, bucket B, and bucket C with weights 30, 30, and 18 respectively, and the monitor has migrated storage units between the bucket nodes as described above. At some moment, the administrator adds a storage unit of 9 TB to bucket C (i.e., the actual weight of bucket C, ignoring the storage unit migration, becomes 27). The monitor determines that, without the storage unit migration, the third theoretical storage utilization after the add operation is 93% (27 / [(30 + 30 + 27) / 3] = 93%).
In this embodiment, when the third theoretical storage utilization is greater than the second theoretical storage utilization, the monitor may delete the first-type pool group, add each pool in the group to the second-type pool group, and redetermine the mapping relations between the PGs and OSDs of each pool in the former first-type pool group, i.e., adjust the PG-to-OSD mapping relations through the CRUSH algorithm according to the CRUSH map corresponding to the second-type pool group.
As an example, suppose the former first-type pool group contains pool1 and pool2. After the first-type pool group is deleted and each pool in it (i.e., pool1 and pool2) is added to the second-type pool group, the mapping relations between the PGs and OSDs of each pool (i.e., pool1 and pool2) in the former first-type pool group need to be redetermined.
Further, in the embodiment of the present invention, when the number of bucket nodes corresponding to a first-type pool group increases, each member pool in the group no longer satisfies the condition that its replica count equals the number of bucket nodes designated as failure domains. In this case, the monitor also needs to delete the first-type pool group, add each pool in it to the second-type pool group, and redetermine the mapping relations between the placement groups (PGs) and object storage devices (OSDs) of each pool in the former first-type pool group; see the sketch after this paragraph. The specific implementation may refer to the corresponding processing for storage unit add/delete operations and is not repeated here.
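Both dissolution triggers described above (a storage-unit add/delete that beats the migrated utilization, and a change in the bucket-node count) can be sketched together; the monitor-side helpers are hypothetical, and theoretical_utilization is the function from the earlier sketch:

    def maybe_dissolve_group(monitor, group, physical_weights):
        """Dissolve a first-type pool group and fall back to the single
        second-type group when keeping the logical migration no longer
        pays off, remapping each pool's PGs to OSDs via CRUSH."""
        # Third theoretical utilization: physical weights only, no migration.
        third = theoretical_utilization(list(physical_weights.values()))
        bucket_count_changed = len(physical_weights) != group.replica_count
        if third > group.second_utilization or bucket_count_changed:
            monitor.delete_group(group)
            for pool in group.pools:
                monitor.second_type_group.add(pool)
                monitor.remap_pgs_to_osds(pool)  # per the cluster CRUSH map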
Further, in the embodiment of the present invention, consider that after storage units are migrated between the bucket nodes corresponding to a first-type pool group, the multiple OSDs corresponding to a PG in a member pool of the group may all physically belong to the same bucket node. For example, under a 3-replica scenario, a PG may be mapped to one real OSD on bucket A, one virtual OSD on bucket B, and one virtual OSD on bucket C, where the virtual OSDs mapped on bucket B and bucket C were both migrated out of bucket A, i.e., both physically belong to bucket A. In this case, all three OSDs the PG maps to physically belong to bucket A, which carries a single-point-of-failure risk: if bucket A fails, the data of the PG cannot be recovered.
Accordingly, in one embodiment of the present invention, after triggering the migration of storage units between the bucket nodes corresponding to the first-type pool group, the method further includes:
for any PG in the first-type pool group, when all the OSDs corresponding to the PG physically belong to the same bucket node, selecting one virtual OSD from the OSDs corresponding to the PG, deleting the mapping relation between the PG and that virtual OSD, and recalculating an OSD within the bucket node to which the virtual OSD logically belongs, such that the recalculated OSD and the other OSDs corresponding to the PG physically belong to different bucket nodes.
In this embodiment, for any PG in the first-type pool group, after the OSDs mapped by the PG are calculated through the CRUSH algorithm, the number of virtual OSDs among them can be counted. When the number of virtual OSDs equals (the number of bucket nodes - 1), the virtual OSDs' information needs to be read further to determine whether the bucket nodes to which the virtual OSDs physically belong are the same as the bucket node to which the only real OSD physically belongs. When they are all the same, one virtual OSD can be selected from the virtual OSDs mapped by the PG; for example, the virtual OSD whose logically-owning bucket node has the smallest ID can be selected. The mapping relation between the PG and that virtual OSD is determined to be invalid and deleted, and the parameter r is reselected for recalculation, i.e., an OSD is recalculated within the bucket node to which the virtual OSD logically belongs. If the recalculated OSD physically belongs to the same bucket node as the other OSDs mapped by the PG, the parameter r must be reselected and the calculation repeated, until an OSD physically belonging to a different bucket node from the other OSDs corresponding to the PG is calculated, or the number of recalculations reaches a preset upper limit.
In the CRUSH algorithm, the distribution of data objects is calculated from the weights of the storage devices; the final storage location of a data object is determined by the cluster map, the data distribution strategy, and a random number. The parameter r above refers to this random number.
When the number of recalculations reaches the preset upper limit and no OSD physically belonging to a different bucket node from the other OSDs corresponding to the PG has been calculated, one of the repeatedly calculated OSDs can be selected as the OSD finally used; for example, the OSD calculated in the last round can be selected.
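The retry loop can be sketched as follows, building on the OsdRecord sketch above; crush_choose is an explicit stub standing in for one CRUSH draw with replica-selection parameter r, and the retry cap is an assumed value:

    def crush_choose(bucket, pg_id, r):
        """Stub: one CRUSH draw inside `bucket` for replica parameter r.
        The real computation hashes (pg_id, r) over the bucket's OSD
        weights; it is elided here."""
        ...

    def fix_single_point_pg(pg, bucket_count, max_retries=50):
        """If every OSD a PG maps to physically lives on one bucket node,
        drop one virtual-OSD mapping and redraw inside the bucket node
        that virtual OSD logically belongs to, varying r until the new
        OSD's physical home differs (or the retry cap is reached)."""
        physical_homes = {osd.physical_bucket for osd in pg.osds}
        virtual = [osd for osd in pg.osds if osd.is_virtual]
        if len(physical_homes) > 1 or len(virtual) != bucket_count - 1:
            return  # no single-point risk of the kind described
        # Smallest-ID rule from the text, with bucket name as a stand-in ID.
        victim = min(virtual, key=lambda osd: osd.logical_bucket)
        pg.osds.remove(victim)
        chosen = None
        for r in range(max_retries):
            chosen = crush_choose(victim.logical_bucket, pg.pg_id, r)
            if chosen.physical_bucket not in physical_homes:
                break  # found an OSD on a different physical bucket node
        pg.osds.append(chosen)  # at the cap, the last draw is used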
To help those skilled in the art better understand the technical solution provided by the embodiments of the present invention, the technical solution is described below with reference to a concrete application scenario.
Referring to Fig. 2A, which is a diagram of the bucket nodes designated as failure domains in an application scenario provided by an embodiment of the present invention: as shown in Fig. 2A, in this application scenario, the number of bucket nodes designated as failure domains is 3 (assumed to be bucket A, bucket B, and bucket C), with weights 30, 30, and 18 respectively. Assume the preset utilization threshold (V1) is 85%.
Based on this application scenario, the data processing provided by the embodiment of the present invention is implemented as follows:
1. A pool (assumed to be pool1) is newly created in the Ceph cluster, and the replica count specified by its data redundancy strategy is 3, i.e., the replica count of the pool equals the number of bucket nodes designated as failure domains. Therefore, the monitor can add the pool to the equal bucket group (assumed to be equal bucket group 1) corresponding to the above bucket nodes.
Assuming the equal bucket group corresponding to the above bucket nodes has not yet been created, the monitor can create the equal bucket group, add the pool to it, and create the CRUSH map corresponding to the equal bucket group, in which the type field corresponding to the above bucket nodes is set. After adding the pool to the equal bucket group, the monitor needs to maintain the pool (i.e., member pool) list of the equal bucket group and deliver the list to the Ceph cluster nodes and clients; the delivery works the same way as the delivery of the current cluster map and is not described in detail.
2. The monitor calculates the theoretical storage utilization V2 of equal bucket group 1: V2 = 18 / [(30 + 30 + 18) / 3] = 70%. Since V2 < V1, the monitor triggers a migration of storage units among bucket A, bucket B, and bucket C.
Since the weights of bucket A and bucket B are both greater than that of bucket C, and the weights of bucket A and bucket B are equal, storage units of a certain capacity can be migrated from bucket A and from bucket B to bucket C, as shown schematically in Fig. 2B, where the dashed cylinders in bucket C are the storage units migrated from bucket A or bucket B to bucket C; the OSDs corresponding to these storage units are virtual OSDs.
In this embodiment, assume 4 TB of storage capacity is migrated from each of bucket A and bucket B to bucket C. The weights of bucket A, bucket B, and bucket C after migration are then 26, 26, and 26, and the theoretical storage utilization after migration is V3 = 26 / [(26 + 26 + 26) / 3] = 100%.
3. After the storage unit migration is completed, the monitor can update the CRUSH map corresponding to equal bucket group 1.
4. When a client receives a data read/write request for pool1, the client determines the equal bucket group corresponding to pool1, i.e., equal bucket group 1, according to the recorded member pool lists of the equal bucket groups (as delivered by the monitor), determines the OSD hit by the read/write request according to the CRUSH map corresponding to equal bucket group 1, and performs data read/write processing.
From the above description, it can be seen that in the technical solution provided by the embodiments of the present invention, for any pool of the Ceph cluster, when the number of replicas of the pool equals the number of bucket nodes designated as failure domains, the pool is added to the first-type pool group corresponding to those bucket nodes; for any first-type pool group, when the first theoretical storage utilization of the group is below the preset utilization threshold, a migration of storage units between the bucket nodes corresponding to the group is triggered, such that the second theoretical storage utilization of the group after migration is greater than the first theoretical storage utilization, and the CRUSH map corresponding to the group is updated after the migration completes. This improves the theoretical storage utilization of the first-type pool group and, in turn, the storage utilization of the Ceph cluster.
Referring to Fig. 3, which is a structural diagram of a data processing device provided by an embodiment of the present invention: the device can be applied to the monitor in the above method embodiments. As shown in Fig. 3, the data processing device may include:
a pool group management unit 310, configured to, for any storage pool (pool) of the Ceph cluster, add the pool to the first-type pool group corresponding to the bucket nodes designated as failure domains when the number of replicas of the pool equals the number of those bucket nodes;
a migration unit 320, configured to, for any first-type pool group, trigger a migration of storage units between the bucket nodes corresponding to the first-type pool group when the first theoretical storage utilization of the group is below a preset utilization threshold, such that the second theoretical storage utilization of the group after migration is greater than the first theoretical storage utilization;
a maintenance unit 330, configured to update the CRUSH (Controlled Replication Under Scalable Hashing) map corresponding to the first-type pool group after the migration completes.
In an alternative embodiment, the migration unit 320 is specifically configured to migrate storage units between the bucket nodes corresponding to the first-type pool group on the principle that the second theoretical storage utilization is greater than or equal to the preset utilization threshold;
and, when no migration scheme can make the second theoretical storage utilization greater than or equal to the preset utilization threshold, to migrate storage units between the bucket nodes corresponding to the first-type pool group on the principle that the absolute value of the difference between the second theoretical storage utilization and the preset utilization threshold is the smallest.
In an alternative embodiment, the migration unit 320 is further configured to, when there are multiple migration schemes that make the second theoretical storage utilization greater than or equal to the preset utilization threshold, or multiple migration schemes that minimize the absolute value of the difference between the second theoretical storage utilization and the preset utilization threshold, determine the migration scheme actually used by one or more of the following principles:
a migration scheme that migrates out fewer storage units is preferred; a migration scheme in which more bucket nodes migrate out storage units is preferred; a migration scheme in which the storage units migrated out of the same bucket node are distributed in a more concentrated way is preferred.
In an alternative embodiment, the pool group management unit 310 is further configured to, when an add/delete operation on the storage units of the bucket nodes corresponding to the first-type pool group is detected, determine the third theoretical storage utilization after the add/delete operation in the case where the first-type pool group had not performed any storage unit migration;
the pool group management unit 310 is further configured to, when the third theoretical storage utilization is greater than the second theoretical storage utilization, delete the first-type pool group and add each pool in the first-type pool group to the second-type pool group, where each pool in the Ceph cluster belongs to the second-type pool group by default in its initial state;
the maintenance unit 330 is further configured to redetermine the mapping relations between the placement groups (PGs) and object storage devices (OSDs) of each pool in the former first-type pool group that has been added to the second-type pool group.
In an alternative embodiment, the maintenance unit 330 is further configured to, for any PG in the first-type pool group, when the multiple OSDs corresponding to the PG physically belong to the same bucket node, select one virtual OSD from the OSDs corresponding to the PG, delete the mapping relation between the PG and that virtual OSD, and recalculate an OSD within the bucket node to which the virtual OSD logically belongs, such that the recalculated OSD and the other OSDs corresponding to the PG physically belong to different bucket nodes.
In an alternative embodiment, the maintenance unit 330 is further configured to maintain the correspondence between pools and first-type pool groups.
Please also refer to Fig. 4, which is a structural diagram of another data processing device provided by an embodiment of the present invention. As shown in Fig. 4, on the basis of the data processing device shown in Fig. 3, the data processing device further includes:
a delivery unit 340, configured to deliver the correspondence between pools and first-type pool groups to the Ceph cluster nodes and clients, so that a client can perform data read/write processing according to the correspondence between pools and first-type pool groups.
Fig. 5 is a hardware structure diagram of a data processing device provided by an example of this disclosure. The data processing device may include a processor 501 and a machine-readable storage medium 502 storing machine-executable instructions. The processor 501 and the machine-readable storage medium 502 may communicate via a system bus 503. By reading and executing the machine-executable instructions corresponding to the data processing logic in the machine-readable storage medium 502, the processor 501 can perform the data processing method described above.
The machine-readable storage medium 502 referred to herein can be any electronic, magnetic, optical, or other physical storage device, and may contain or store information such as executable instructions and data. For example, the machine-readable storage medium may be RAM (Random Access Memory), volatile memory, non-volatile memory, flash memory, a storage drive (e.g., a hard disk drive), a solid-state disk, any type of storage disc (e.g., a CD or DVD), a similar storage medium, or a combination thereof.
The implementation process of the functions and effects of each unit in the above device is detailed in the implementation process of the corresponding steps in the above method and is not repeated here.
For the device embodiments, since they substantially correspond to the method embodiments, refer to the description of the method embodiments for the relevant parts. The device embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units, i.e., they may be located in one place or distributed over multiple network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the solution of the present invention. Those of ordinary skill in the art can understand and implement this without creative work.
As seen from the above embodiments, for any pool of the Ceph cluster, when the number of replicas of the pool equals the number of bucket nodes designated as failure domains, the pool is added to the first-type pool group corresponding to those bucket nodes; for any first-type pool group, when the first theoretical storage utilization of the group is below a preset utilization threshold, a migration of storage units between the bucket nodes corresponding to the group is triggered, such that the second theoretical storage utilization of the group after migration is greater than the first theoretical storage utilization, and the CRUSH map corresponding to the group is updated after the migration completes. This improves the theoretical storage utilization of the first-type pool group and, in turn, the storage utilization of the Ceph cluster.
Those skilled in the art will readily conceive of other embodiments of the present invention after considering the specification and practicing the invention disclosed here. This application is intended to cover any variations, uses, or adaptations of the present invention that follow its general principles and include common knowledge or conventional techniques in the art not disclosed by the present invention. The specification and examples are to be regarded as illustrative only; the true scope and spirit of the present invention are pointed out by the following claims.
It should be understood that the present invention is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present invention is limited only by the appended claims.

Claims (12)

1. A data processing method, applied to a monitor of a distributed storage system (Ceph) cluster, characterized in that the method comprises:
for any storage pool (pool) of the Ceph cluster, when the number of replicas of the pool is equal to the number of bucket nodes designated as the failure domain, adding the pool to a first-type pool group corresponding to those bucket nodes;
for any first-type pool group, when a first theoretical storage utilization rate of the first-type pool group is less than a preset utilization rate threshold, triggering storage unit migration among the bucket nodes corresponding to the first-type pool group, so that a second theoretical storage utilization rate of the first-type pool group after the migration is greater than the first theoretical storage utilization rate, and updating the Controlled Replication Under Scalable Hashing (CRUSH) map corresponding to the first-type pool group after the migration is completed;
wherein the theoretical storage utilization rate of the first-type pool group is the ratio of the weight of the bucket node with the smallest weight, among the multiple bucket nodes corresponding to the first-type pool group, to the average weight of those bucket nodes.
2. the method according to claim 1, wherein the corresponding bucket of the triggering first kind pool group Storage unit migration between node, comprising:
It is more than or equal to the principle of the default utilization rate threshold value to first kind pool with the described second theoretical storage utilization rate Storage unit between the corresponding bucket node of group is migrated;
When there is no making the described second theoretical storage utilization rate be more than or equal to the migration scheme of the default utilization rate threshold value, with The smallest principle pair of absolute value of described second theoretical storage utilization rate and default utilization rate threshold value difference between the two Storage unit between the corresponding bucket node of first kind pool group is migrated.
3. The method according to claim 2, characterized in that triggering the storage unit migration among the bucket nodes corresponding to the first-type pool group further comprises:
when there are multiple migration schemes that make the second theoretical storage utilization rate greater than or equal to the preset utilization rate threshold, or multiple migration schemes that minimize the absolute value of the difference between the second theoretical storage utilization rate and the preset utilization rate threshold, determining the migration scheme actually used according to one or more of the following principles:
a migration scheme that moves out fewer storage units is preferred; a migration scheme in which more bucket nodes move out storage units is preferred; a migration scheme in which the storage units moved out of the same bucket node are distributed in a concentrated manner is preferred.
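Claims 2 and 3 together describe how a migration scheme is chosen: meet the threshold if possible, otherwise minimize the gap to it, then break ties. A hedged Python sketch of that selection (the Scheme fields and their encoding are our assumptions, not the patent's; it assumes at least one candidate scheme):

```python
from typing import List, NamedTuple

class Scheme(NamedTuple):
    utilization: float  # second theoretical utilization after this migration
    units_moved: int    # total storage units moved out
    source_nodes: int   # bucket nodes that move at least one unit out
    spread: int         # lower = each node's moved-out units more concentrated

def choose_scheme(schemes: List[Scheme], threshold: float) -> Scheme:
    # Claim 2: prefer schemes reaching the preset threshold; failing that,
    # keep only the schemes whose utilization is closest to the threshold.
    candidates = [s for s in schemes if s.utilization >= threshold]
    if not candidates:
        gap = min(abs(s.utilization - threshold) for s in schemes)
        candidates = [s for s in schemes
                      if abs(s.utilization - threshold) == gap]
    # Claim 3 tie-breakers, in order: fewer units moved out, more source
    # bucket nodes, more concentrated per-node selection.
    return min(candidates,
               key=lambda s: (s.units_moved, -s.source_nodes, s.spread))
```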
4. The method according to claim 1, characterized in that, after triggering the storage unit migration among the bucket nodes corresponding to the first-type pool group, the method further comprises:
when a storage unit addition or deletion operation on the bucket nodes corresponding to the first-type pool group is detected, determining a third theoretical storage utilization rate after the addition or deletion operation, as it would be if the first-type pool group had not undergone storage unit migration;
when the third theoretical storage utilization rate is greater than the second theoretical storage utilization rate, deleting the first-type pool group, adding each pool in the first-type pool group to a second-type pool group, and re-determining the mapping relationship between placement groups (PGs) and object storage devices (OSDs) in each pool of the former first-type pool group that has been added to the second-type pool group; wherein, in the Ceph cluster, each pool initially belongs to the second-type pool group by default.
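Claim 4 amounts to a re-check after capacity changes: if the layout would now do better without the earlier migration, the group is dissolved back into the default second-type group. A sketch under assumed data structures (the manager class and its fields are ours), reusing theoretical_utilization from the earlier sketch:

```python
class PoolGroupManager:
    """Hypothetical monitor-side bookkeeping for first-type pool groups."""

    def __init__(self):
        self.second_type_group = set()  # pools default to the second type
        self.first_type_groups = {}     # group name -> list of its pools

    def on_storage_unit_change(self, name, weights_without_migration,
                               second_utilization):
        # Third theoretical utilization: what the group would score after
        # the add/delete had no storage unit migration been performed.
        third = theoretical_utilization(weights_without_migration)
        if third > second_utilization:
            # The un-migrated layout now wins: delete the first-type group,
            # move its pools to the second-type group, and redo their
            # PG -> OSD mappings.
            for pool in self.first_type_groups.pop(name):
                self.second_type_group.add(pool)
                self.recompute_pg_osd_mappings(pool)

    def recompute_pg_osd_mappings(self, pool):
        pass  # placeholder: recompute PG -> OSD mappings for this pool
```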
5. The method according to claim 1, characterized in that, after triggering the storage unit migration among the bucket nodes corresponding to the first-type pool group, the method further comprises:
for any PG in the first-type pool group, when all OSDs corresponding to the PG physically belong to the same bucket node, selecting one virtual OSD from the OSDs corresponding to the PG, deleting the mapping relationship between the PG and the virtual OSD, and recalculating an OSD within the bucket node to which the virtual OSD logically belongs, the recalculated OSD and the other OSDs corresponding to the PG physically belonging to different bucket nodes;
wherein a virtual OSD is an OSD whose logical home and physical home are on different bucket nodes.
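Claim 5 guards against a migrated layout placing every replica of a PG on one physical machine: the "virtual" replica (logically on one bucket node, physically on another) is remapped to an OSD that is physically elsewhere. A sketch with assumed types (the OSD fields and the node_osds index are ours; it assumes at least one replica is virtual when all replicas share a physical node):

```python
from typing import Dict, List, NamedTuple, Optional

class OSD(NamedTuple):
    osd_id: int
    logical_node: str   # bucket node per the updated CRUSH map
    physical_node: str  # bucket node actually hosting the OSD

def reselect_for_pg(pg_osds: List[OSD],
                    node_osds: Dict[str, List[OSD]]) -> Optional[OSD]:
    """Return a replacement OSD if all of a PG's replicas share one node."""
    if len({o.physical_node for o in pg_osds}) > 1:
        return None  # replicas already physically separated
    # Pick a virtual OSD: its logical home differs from its physical home.
    virtual = next(o for o in pg_osds if o.logical_node != o.physical_node)
    taken = {o.physical_node for o in pg_osds if o != virtual}
    # Recalculate an OSD in the virtual OSD's logical bucket node whose
    # physical home differs from the remaining replicas'.
    for cand in node_osds.get(virtual.logical_node, []):
        if cand.physical_node not in taken:
            return cand
    return None
```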
6. The method according to claim 1, characterized in that, after adding the pool to the first-type pool group corresponding to the bucket nodes, the method further comprises:
maintaining the correspondence between the pool and the first-type pool group, and delivering the correspondence to the Ceph cluster nodes and clients, so that the clients perform data read and write processing according to the correspondence between the pool and the first-type pool group.
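Claim 6's delivery step can be pictured as the monitor pushing its pool-to-group table to every cluster node and client, which then route reads and writes by it. A minimal sketch (the peer objects and their push_mapping call are assumptions):

```python
class Monitor:
    """Sketch of claim 6: maintain and deliver the pool-to-group mapping."""

    def __init__(self, cluster_nodes, clients):
        self.pool_to_group = {}
        self.peers = list(cluster_nodes) + list(clients)

    def add_pool_to_group(self, pool, group):
        self.pool_to_group[pool] = group
        for peer in self.peers:
            # Clients consult this mapping for subsequent reads and writes.
            peer.push_mapping(pool, group)  # hypothetical delivery call
```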
7. A data processing device, applied to a monitor of a distributed storage system (Ceph) cluster, characterized in that the device comprises:
a pool group management unit, configured to, for any storage pool (pool) of the Ceph cluster, add the pool to a first-type pool group corresponding to the bucket nodes when the number of replicas of the pool is equal to the number of bucket nodes designated as the failure domain;
a migration unit, configured to, for any first-type pool group, trigger storage unit migration among the bucket nodes corresponding to the first-type pool group when a first theoretical storage utilization rate of the first-type pool group is less than a preset utilization rate threshold, so that a second theoretical storage utilization rate of the first-type pool group after the migration is greater than the first theoretical storage utilization rate; wherein the theoretical storage utilization rate of the first-type pool group is the ratio of the weight of the bucket node with the smallest weight, among the multiple bucket nodes corresponding to the first-type pool group, to the average weight of those bucket nodes;
a maintenance unit, configured to update the Controlled Replication Under Scalable Hashing (CRUSH) map corresponding to the first-type pool group after the migration is completed.
8. The device according to claim 7, characterized in that:
the migration unit is specifically configured to migrate storage units among the bucket nodes corresponding to the first-type pool group on the principle that the second theoretical storage utilization rate is greater than or equal to the preset utilization rate threshold;
and, when no migration scheme makes the second theoretical storage utilization rate greater than or equal to the preset utilization rate threshold, to migrate storage units among the bucket nodes corresponding to the first-type pool group on the principle that the absolute value of the difference between the second theoretical storage utilization rate and the preset utilization rate threshold is minimized.
9. The device according to claim 8, characterized in that:
the migration unit is further configured to, when there are multiple migration schemes that make the second theoretical storage utilization rate greater than or equal to the preset utilization rate threshold, or multiple migration schemes that minimize the absolute value of the difference between the second theoretical storage utilization rate and the preset utilization rate threshold, determine the migration scheme actually used according to one or more of the following principles:
a migration scheme that moves out fewer storage units is preferred; a migration scheme in which more bucket nodes move out storage units is preferred; a migration scheme in which the storage units moved out of the same bucket node are distributed in a concentrated manner is preferred.
10. The device according to claim 7, characterized in that:
the pool group management unit is further configured to, when a storage unit addition or deletion operation on the bucket nodes corresponding to the first-type pool group is detected, determine a third theoretical storage utilization rate after the addition or deletion operation, as it would be if the first-type pool group had not undergone storage unit migration;
the pool group management unit is further configured to, when the third theoretical storage utilization rate is greater than the second theoretical storage utilization rate, delete the first-type pool group and add each pool in the first-type pool group to a second-type pool group; wherein, in the Ceph cluster, each pool initially belongs to the second-type pool group by default;
the maintenance unit is further configured to re-determine the mapping relationship between placement groups (PGs) and object storage devices (OSDs) in each pool of the former first-type pool group that has been added to the second-type pool group.
11. The device according to claim 7, characterized in that:
the maintenance unit is further configured to, for any PG in the first-type pool group, when the multiple OSDs corresponding to the PG physically belong to the same bucket node, select one virtual OSD from the OSDs corresponding to the PG, delete the mapping relationship between the PG and the virtual OSD, and recalculate an OSD within the bucket node to which the virtual OSD logically belongs, the recalculated OSD and the other OSDs corresponding to the PG physically belonging to different bucket nodes;
wherein a virtual OSD is an OSD whose logical home and physical home are on different bucket nodes.
12. The device according to claim 7, characterized in that:
the maintenance unit is further configured to maintain the correspondence between the pool and the first-type pool group;
the device further comprises:
a delivery unit, configured to deliver the correspondence between the pool and the first-type pool group to the Ceph cluster nodes and clients, so that the clients perform data read and write processing according to the correspondence between the pool and the first-type pool group.
CN201711047852.6A 2017-10-31 2017-10-31 A kind of data processing method and device Active CN107704212B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711047852.6A CN107704212B (en) 2017-10-31 2017-10-31 A kind of data processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711047852.6A CN107704212B (en) 2017-10-31 2017-10-31 A kind of data processing method and device

Publications (2)

Publication Number Publication Date
CN107704212A CN107704212A (en) 2018-02-16
CN107704212B (en) 2019-09-06

Family

ID=61178073

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711047852.6A Active CN107704212B (en) 2017-10-31 2017-10-31 A kind of data processing method and device

Country Status (1)

Country Link
CN (1) CN107704212B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108846009B (en) * 2018-04-28 2021-02-05 北京奇艺世纪科技有限公司 Copy data storage method and device in ceph
CN108829738B (en) * 2018-05-23 2020-12-25 北京奇艺世纪科技有限公司 Data storage method and device in ceph
CN108804568B (en) * 2018-05-23 2021-07-09 北京奇艺世纪科技有限公司 Method and device for storing copy data in Openstack in ceph
CN111381770B (en) * 2018-12-30 2021-07-06 浙江宇视科技有限公司 Data storage switching method, device, equipment and storage medium
CN109960470B (en) * 2019-03-28 2022-07-29 新华三技术有限公司 Data processing method and device and leader node
CN112181309A (en) * 2020-10-14 2021-01-05 上海德拓信息技术股份有限公司 Online capacity expansion method for mass object storage

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104750624A (en) * 2013-12-27 2015-07-01 英特尔公司 Data Coherency Model and Protocol at Cluster Level
CN107133228A (en) * 2016-02-26 2017-09-05 华为技术有限公司 A kind of method and device of fast resampling

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9519577B2 (en) * 2013-09-03 2016-12-13 Sandisk Technologies Llc Method and system for migrating data between flash memory devices
US9311377B2 (en) * 2013-11-13 2016-04-12 Palo Alto Research Center Incorporated Method and apparatus for performing server handoff in a name-based content distribution system
WO2016023230A1 (en) * 2014-08-15 2016-02-18 Huawei Technologies Co., Ltd. Data migration method, controller and data migration device
CN107211003B * 2015-12-31 2020-07-14 Huawei Technologies Co., Ltd. Distributed storage system and method for managing metadata
CN106599308B * 2016-12-29 2020-01-31 Guo Xiaofeng Distributed metadata management method and system

Also Published As

Publication number Publication date
CN107704212A (en) 2018-02-16

Similar Documents

Publication Publication Date Title
CN107704212B (en) A kind of data processing method and device
US9361034B2 (en) Transferring storage resources between snapshot storage pools and volume storage pools in a distributed network
CN103354923B (en) A kind of data re-establishing method, device and system
JP4890048B2 (en) Storage control device and data migration method using storage control device
US10652330B2 (en) Object storage in cloud with reference counting using versions
WO2020204880A1 (en) Snapshot-enabled storage system implementing algorithm for efficient reclamation of snapshot storage space
CN108509153A (en) OSD selection methods, data write-in and read method, monitor and server cluster
CN107302561B (en) A kind of hot spot data Replica placement method in cloud storage system
CN107247619B (en) Live migration of virtual machine method, apparatus, system, storage medium and equipment
US10356150B1 (en) Automated repartitioning of streaming data
US20160378846A1 (en) Object based storage cluster with multiple selectable data handling policies
US10572175B2 (en) Method and apparatus of shared storage between multiple cloud environments
US20150200833A1 (en) Adaptive Data Migration Using Available System Bandwidth
US20210216245A1 (en) Method of distributed data redundancy storage using consistent hashing
CN105027069A (en) Deduplication of volume regions
CN104580439B (en) Method for uniformly distributing data in cloud storage system
JP5592493B2 (en) Storage network system and control method thereof
US9733835B2 (en) Data storage method and storage server
CN105740165A (en) Method and apparatus for managing file system of unified storage system
JP6211631B2 (en) Identifying workloads and sizing buffers for volume replication purposes
US11023159B2 (en) Method for fast recovering of data on a failed storage device
WO2014043448A1 (en) Block level management with service level agreement
CN111290699A (en) Data migration method, device and system
US20090006501A1 (en) Zone Control Weights
US20220391411A1 (en) Dynamic adaptive partition splitting

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: 310052 11th Floor, 466 Changhe Road, Binjiang District, Hangzhou City, Zhejiang Province
Applicant after: New H3C Information Technologies Co., Ltd.
Address before: 310052 11th Floor, 466 Changhe Road, Binjiang District, Hangzhou City, Zhejiang Province
Applicant before: Huashan Information Technology Co., Ltd.
GR01 Patent grant