CN109960469B - Data processing method and device - Google Patents

Data processing method and device

Info

Publication number
CN109960469B
CN109960469B (application number CN201910229406.XA)
Authority
CN
China
Prior art keywords
cluster
ceph cluster
ceph
target
node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910229406.XA
Other languages
Chinese (zh)
Other versions
CN109960469A (en)
Inventor
顾雷雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
New H3C Information Technologies Co Ltd
Original Assignee
New H3C Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by New H3C Technologies Co Ltd filed Critical New H3C Technologies Co Ltd
Priority to CN201910229406.XA priority Critical patent/CN109960469B/en
Publication of CN109960469A publication Critical patent/CN109960469A/en
Application granted granted Critical
Publication of CN109960469B publication Critical patent/CN109960469B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06 Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0628 Interfaces specially adapted for storage systems making use of a particular technique
    • G06F3/0629 Configuration or reconfiguration of storage systems
    • G06F3/0646 Horizontal data movement in storage systems, i.e. moving data in between storage devices or systems
    • G06F3/0647 Migration mechanisms
    • G06F3/0668 Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067 Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The application provides a data processing method and device. Some of the monitoring nodes and storage nodes in a first Ceph cluster are reconfigured to migrate from the first Ceph cluster to a second Ceph cluster. The monitoring nodes of the second Ceph cluster negotiate to elect a Leader of the second Ceph cluster, the Leader generates a second Cluster Map of the second Ceph cluster, and the business data stored in a designated storage pool of the first Ceph cluster is stored in the second Ceph cluster according to the second Cluster Map and the recorded first Cluster Map of the first Ceph cluster. In this way, after the second Ceph cluster is split from the first Ceph cluster, the business data stored in the designated storage pool of the first Ceph cluster is stored in the second Ceph cluster, and separate deployment of Ceph clusters is realized.

Description

Data processing method and device
Technical Field
The present application relates to network communication technologies, and in particular, to a data processing method and apparatus.
Background
A Ceph cluster is a distributed storage system composed of monitoring nodes (Monitors) and storage nodes. In a Ceph cluster, the monitoring nodes elect one of themselves as the Leader node (Leader); the Leader generates a Cluster Map of the Ceph cluster and notifies the other monitoring nodes, so that all monitoring nodes eventually maintain the same Cluster Map. The Cluster Map indicates the logical state and storage policy of the Ceph cluster itself and may specifically include: a Monitor Map, an Object Storage Device (OSD) Map, a Placement Group (PG) Map, and a CRUSH Map.
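To make the composition of a Cluster Map more concrete, the following is a minimal illustrative sketch in Python; the field layout is an assumption for explanation only and does not reflect Ceph's actual on-wire data structures.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ClusterMap:
    """Illustrative model of a Cluster Map; field names are assumptions."""
    monitor_map: Dict[str, str] = field(default_factory=dict)   # monitor name -> address
    osd_map: Dict[int, str] = field(default_factory=dict)       # OSD id -> state ("up"/"down")
    pg_map: Dict[str, List[int]] = field(default_factory=dict)  # PG id -> acting OSD set
    crush_map: Dict[str, object] = field(default_factory=dict)  # placement rules and topology
    epoch: int = 0                                               # bumped whenever the map changes
```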
In a Ceph cluster, each storage node has at least one OSD. The OSD is mainly used for storing data and handling data replication, recovery, backfilling, rebalancing, and so on.
The Ceph cluster also introduces the concepts of storage pool (Pool) and PG:
Pool: a custom namespace used to isolate PGs. In addition to isolating PGs, different optimization strategies may be set for different pools, such as the number of copies, the number of data flushes, and the size of data blocks and objects.
PG: a logical concept, analogous to an index in a database, used when addressing data. In a Ceph cluster, for each object to be stored, a corresponding PG is determined from all PGs in the storage pool into which the object is to be stored, and each object uniquely corresponds to one PG. Corresponding OSDs are then determined for that PG in the storage pool (the number of OSDs also depends on the preset number of copies), and the objects corresponding to the PG are stored on the OSDs corresponding to the PG (if there are multiple OSDs, one of them is the master, or Primary, and the rest are slaves). A simplified sketch of this addressing is given below.
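As a rough illustration of the object-to-PG-to-OSD addressing just described, the sketch below uses a plain hash as a stand-in for Ceph's actual hashing and CRUSH selection; the function names and the modulo-based placement are assumptions for illustration, not Ceph's real algorithms.

```python
import hashlib
from typing import List


def object_to_pg(object_name: str, pool_id: int, pg_num: int) -> str:
    """Each object maps deterministically to exactly one PG of its pool."""
    h = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    return f"{pool_id}.{h % pg_num:x}"


def pg_to_osds(pg_id: str, osd_ids: List[int], copy_count: int) -> List[int]:
    """Pick copy_count OSDs for a PG; the first selected OSD acts as the Primary."""
    h = int(hashlib.md5(pg_id.encode()).hexdigest(), 16)
    start = h % len(osd_ids)
    return [osd_ids[(start + i) % len(osd_ids)] for i in range(copy_count)]


# Example: address an object in storage pool 11 with 128 PGs and 3 copies.
pg = object_to_pg("camera-0001.frame", pool_id=11, pg_num=128)
osds = pg_to_osds(pg, osd_ids=[0, 1, 2, 3, 4, 5], copy_count=3)  # osds[0] is the Primary
```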
Currently, Ceph clusters only allow for vertical scaling and horizontal scaling.
Disclosure of Invention
The application provides a data processing method and device, which are used for storing service data from an original Ceph cluster into a new Ceph cluster that is split off from the original Ceph cluster after the split.
The technical scheme provided by the application comprises the following steps:
in a first aspect, the present application provides a data processing method, which is applied to a monitoring node Monitor, and includes:
after the node is configured to migrate from the first Ceph cluster and join the second Ceph cluster, negotiating with a target monitoring node to elect a Leader node Leader of the second Ceph cluster; the target monitoring node is other monitoring nodes which are configured to be migrated from the first Ceph cluster and added into the second Ceph cluster;
and when the node is elected as the Leader, generating a second Cluster mapping Cluster Map of a second Ceph Cluster through communication with the target monitoring node and the target storage node, storing the service data stored in the designated storage pool in the first Ceph Cluster to the second Ceph Cluster according to the second Cluster Map and the recorded first Cluster Map of the first Ceph Cluster, wherein the target storage node is a storage node configured to be migrated from the first Ceph Cluster and added into the second Ceph Cluster.
With reference to the first aspect, in a first embodiment, the storing the service data stored in the designated storage pool of the first Ceph Cluster to the second Ceph Cluster according to the second Cluster Map and the recorded first Cluster Map of the first Ceph Cluster includes:
when the node is not configured with the data reservation identifier corresponding to the specified storage pool, and the data reservation identifier is used to indicate that the service data stored in the specified storage pool is public data, then:
creating a target storage pool in a second Ceph cluster, wherein the target storage pool is the same as the designated storage pool, and the target storage pool and the designated storage pool have the same placement group PG;
and for each target placement group PG in the target storage pool, selecting a target OSD corresponding to the target PG from the second Ceph Cluster according to the second Cluster Map, finding a reference PG which is the same as the target PG in the designated storage pool and a main Primary OSD corresponding to the reference PG in the first Ceph Cluster according to the first Cluster Map, and transferring service data which is stored by the Primary OSD and corresponds to the reference PG to the target OSD for storage, wherein the Primary OSD is different from the target OSD.
With reference to the first aspect, in a second implementation, the storing the service data stored in the designated storage pool of the first Ceph Cluster to the second Ceph Cluster according to the second Cluster Map and the recorded first Cluster Map of the first Ceph Cluster includes:
when the node is configured with a data reservation identifier corresponding to the designated storage pool, where the data reservation identifier is used for indicating that the service data stored in the designated storage pool is public data, a clone pool which is the same as the designated storage pool is cloned in a second Ceph cluster, and the clone pool and the designated storage pool have the same placement group PG;
and for each target placement group PG in the clone pool, selecting a target OSD corresponding to the target PG from the OSDs on each target storage node in the second Ceph Cluster according to the second Cluster Map, finding a reference PG the same as the target PG in the designated storage pool according to the first Cluster Map, finding a main Primary OSD corresponding to the reference PG in the first Ceph Cluster, and copying the service data corresponding to the reference PG that is stored on the Primary OSD to the target OSD for storage.
With reference to the first aspect, in a third implementation, after storing the service data stored in the specified storage pool of the first Ceph cluster to the second Ceph cluster, the method further includes:
deleting the recorded first Cluster Map.
In a second aspect, the present application provides a data processing method, which is applied to a monitoring node Monitor, and includes:
when the node is elected as a Leader node Leader of the third Ceph Cluster, a third Cluster mapping Cluster Map of the third Ceph Cluster is generated by communicating with a monitoring node and a storage node in the third Ceph Cluster; the third Ceph cluster is the first Ceph cluster in which host nodes have changed, the host nodes including monitoring nodes and storage nodes; the host nodes change because monitoring nodes and storage nodes in the first Ceph cluster are configured to be migrated from the first Ceph cluster and added into a second Ceph cluster;
and performing data rebalancing operation on each storage pool influenced by the change of the host node in the third Ceph Cluster according to the third Cluster Map and the recorded first Cluster Map of the first Ceph Cluster.
With reference to the second aspect, in a first embodiment, the performing data rebalancing operations on each storage pool of the third Ceph Cluster that is affected by changes in the host node according to the third Cluster Map and the recorded first Cluster Map of the first Ceph Cluster includes:
and for each PG in each storage pool affected by changes of host nodes in the third Ceph Cluster, selecting a target OSD corresponding to the PG from the third Ceph Cluster according to the third Cluster Map, finding a main Primary OSD corresponding to the PG according to the first Cluster Map, and transferring service data corresponding to the PG stored on the Primary OSD to the target OSD for storage, wherein the Primary OSD is different from the target OSD.
In a third aspect, the present application provides a data processing apparatus, which is applied to a monitoring node Monitor, and includes:
the election unit is used for negotiating with a target monitoring node to elect a Leader node Leader of a second Ceph cluster after the node is configured to migrate from the first Ceph cluster and join the second Ceph cluster; the target monitoring node is other monitoring nodes which are configured to be migrated from the first Ceph cluster and added into the second Ceph cluster;
and the processing unit is used for generating a second Cluster mapping Cluster Map of a second Ceph Cluster through communication with the target monitoring node and the target storage node when the node is elected as the Leader, storing the business data stored in the designated storage pool in the first Ceph Cluster to the second Ceph Cluster according to the second Cluster Map and the recorded first Cluster Map of the first Ceph Cluster, wherein the target storage node is a storage node which is configured to be migrated from the first Ceph Cluster and added into the second Ceph Cluster.
With reference to the third aspect, in a first embodiment, the processing unit storing the service data stored in the designated storage pool of the first Ceph Cluster to the second Ceph Cluster according to the second Cluster Map and the recorded first Cluster Map of the first Ceph Cluster includes:
when the node is not configured with the data reservation identifier corresponding to the specified storage pool, and the data reservation identifier is used to indicate that the service data stored in the specified storage pool is public data, then:
creating a target storage pool in a second Ceph cluster, wherein the target storage pool is the same as the designated storage pool, and the target storage pool and the designated storage pool have the same placement group PG;
and for each target placement group PG in the target storage pool, selecting a target OSD corresponding to the target PG from the second Ceph Cluster according to the second Cluster Map, finding a reference PG which is the same as the target PG in the designated storage pool and a main Primary OSD corresponding to the reference PG in the first Ceph Cluster according to the first Cluster Map, and transferring service data which is stored by the Primary OSD and corresponds to the reference PG to the target OSD for storage, wherein the Primary OSD is different from the target OSD.
With reference to the third aspect, in a second embodiment, the processing unit storing the service data stored in the designated storage pool of the first Ceph Cluster to the second Ceph Cluster according to the second Cluster Map and the recorded first Cluster Map of the first Ceph Cluster includes:
when the node is configured with a data reservation identifier corresponding to the designated storage pool, where the data reservation identifier is used for indicating that the service data stored in the designated storage pool is public data, a clone pool which is the same as the designated storage pool is cloned in a second Ceph cluster, and the clone pool and the designated storage pool have the same placement group PG;
and for each target placement group PG in the clone pool, selecting a target OSD corresponding to the target PG from the OSDs on each target storage node in the second Ceph Cluster according to the second Cluster Map, finding a reference PG the same as the target PG in the designated storage pool according to the first Cluster Map, finding a main Primary OSD corresponding to the reference PG in the first Ceph Cluster, and copying the service data corresponding to the reference PG that is stored on the Primary OSD to the target OSD for storage.
With reference to the third aspect, in a third implementation, after storing the service data stored in the designated storage pool of the first Ceph cluster to the second Ceph cluster, the processing unit further includes:
deleting the recorded first Cluster Map.
In a fourth aspect, the present application provides a data processing apparatus, which is applied to a monitoring node Monitor, and includes:
the mapping unit is used for generating a third Cluster mapping Cluster Map of the third Ceph Cluster by communicating with a monitoring node and a storage node in the third Ceph Cluster when the node is elected as a Leader node Leader of the third Ceph Cluster; the third Ceph cluster is the first Ceph cluster in which host nodes have changed, the host nodes including monitoring nodes and storage nodes; the host nodes change because monitoring nodes and storage nodes in the first Ceph cluster are configured to be migrated from the first Ceph cluster and added into a second Ceph cluster;
and the rebalancing unit is used for performing data rebalancing operation on each storage pool in the third Ceph Cluster, which is influenced by the change of the host node, according to the third Cluster Map and the recorded first Cluster Map of the first Ceph Cluster.
With reference to the fourth aspect, in a first embodiment, the performing, by the rebalancing unit, a data rebalancing operation on each storage pool of the third Ceph Cluster that is affected by the change of the host node according to the third Cluster Map and the recorded first Cluster Map of the first Ceph Cluster includes:
and for each PG in each storage pool affected by changes of host nodes in the third Ceph Cluster, selecting a target OSD corresponding to the PG from the third Ceph Cluster according to the third Cluster Map, finding a main Primary OSD corresponding to the PG according to the first Cluster Map, and transferring service data corresponding to the PG stored on the Primary OSD to the target OSD for storage, wherein the Primary OSD is different from the target OSD.
According to the above technical scheme, some of the monitoring nodes and storage nodes in the first Ceph Cluster are reconfigured to migrate from the first Ceph Cluster and join the second Ceph Cluster, the monitoring nodes of the second Ceph Cluster negotiate to elect the Leader of the second Ceph Cluster, the Leader generates the second Cluster Map of the second Ceph Cluster, and the business data stored in the designated storage pool of the first Ceph Cluster is stored in the second Ceph Cluster according to the second Cluster Map and the recorded first Cluster Map of the first Ceph Cluster.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flowchart of a data processing method provided in this embodiment;
Fig. 2 is a flowchart of an implementation of step 103 provided in this embodiment;
Fig. 3 is a flowchart of another implementation of step 103 provided in this embodiment;
Fig. 4 is a flowchart of another method provided in this embodiment;
Fig. 5 is a flowchart of an implementation of step 402 provided in this embodiment;
Fig. 6 is a structural diagram of an apparatus provided in this embodiment;
Fig. 7 is a schematic structural diagram of another apparatus provided in this embodiment;
Fig. 8 is a hardware structure diagram of the apparatus provided in this embodiment.
Detailed Description
In addition to vertical upgrade and horizontal expansion, a successfully deployed Ceph cluster may also need to be split according to service requirements. For example, enterprise Q runs both a monitoring service and a network service. The Ceph cluster deployed for enterprise Q (denoted as Ceph cluster 101) has a storage pool 11 corresponding to the monitoring service and a storage pool 12 corresponding to the network service, where storage pool 11 stores monitoring service data and storage pool 12 stores network service data. If enterprise Q sells the monitoring service to enterprise B due to business development, Ceph cluster 101 needs to be split.
However, at present, only vertical upgrade and horizontal expansion are considered for a Ceph cluster, and how to split the Ceph cluster and store service data in an original Ceph cluster into a new Ceph cluster newly split from the original Ceph cluster after splitting is not considered yet.
Therefore, this embodiment provides a data processing method to realize how to store service data in an original Ceph cluster into a new Ceph cluster newly split from the original Ceph cluster after splitting the Ceph cluster, which is described below:
referring to fig. 1, fig. 1 is a flowchart of a data processing method provided in this embodiment. The flow is applied to a monitoring node (denoted as monitoring node 100).
As shown in fig. 1, the process may include the following steps:
Step 101: after the monitoring node 100 is configured to migrate from the first Ceph cluster and join the second Ceph cluster, it negotiates with the target monitoring nodes to elect a Leader of the second Ceph cluster.
In an embodiment, when the first Ceph cluster needs to be split according to business requirements, a user can first create a second Ceph cluster on a designated cluster configuration page. At this time the second Ceph cluster contains no nodes. The user then selects monitoring nodes and storage nodes from the first Ceph cluster and configures the selected nodes to migrate from the first Ceph cluster and join the newly created second Ceph cluster. At this point, the architecture of the second Ceph cluster has been established. Here, the first Ceph cluster and the second Ceph cluster are named for convenience of description and are not intended to be limiting.
In one example, after the architecture of the second Ceph cluster is established, the second Ceph cluster cannot operate normally until it is activated. In one example, the second Ceph cluster may be activated via an activation instruction configured by the user.
After the second Ceph cluster is activated, if the monitoring node 100 finds that the node is reconfigured to join the second Ceph cluster, it negotiates with other monitoring nodes (denoted as target monitoring nodes) in the first Ceph cluster, which are configured to migrate from the first Ceph cluster and join the second Ceph cluster, to elect a Leader of the second Ceph cluster. In one example, the Leader election mode is similar to the Leader election mode in the existing Ceph cluster, and is not described again.
Step 102: when the monitoring node 100 is elected as the Leader, it generates a second Cluster Map of the second Ceph cluster by communicating with the target monitoring nodes and the target storage nodes. Step 103 is then performed.
In step 102, the target storage node refers to a storage node configured to migrate from a first Ceph cluster and join a second Ceph cluster.
In an example, the manner in which the monitoring node 100 generates the second Cluster Map is similar to the manner in which an existing Cluster Map is generated, and is not described again. After the monitoring node 100 generates the second Cluster Map, it sends the second Cluster Map to the target monitoring nodes, so that the target monitoring nodes, like the monitoring node 100, maintain the first Cluster Map and the second Cluster Map at the same time.
It should be noted that although the monitoring node 100 generates the second Cluster Map at this time, it continues to maintain the Cluster Map of the first Ceph cluster (denoted as the first Cluster Map) that it recorded while it belonged to the first Ceph cluster. At this point, the monitoring node 100 maintains the first Cluster Map and the second Cluster Map simultaneously.
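The following sketch illustrates, under assumed names, a monitor that holds both maps during the split; MonitorState and its methods are illustrative only and are not Ceph data structures.

```python
from dataclasses import dataclass
from typing import Dict, Optional


@dataclass
class MonitorState:
    """Illustrative state of monitoring node 100 during the split."""
    first_cluster_map: Optional[Dict] = None   # map of the original first Ceph cluster
    second_cluster_map: Optional[Dict] = None  # map generated for the second Ceph cluster

    def on_second_map_generated(self, second_map: Dict) -> None:
        # The first Cluster Map is kept: step 103 still needs it to locate the
        # Primary OSDs holding the data of the designated storage pool.
        self.second_cluster_map = second_map

    def on_activation_success(self) -> None:
        # Once the designated pool's data has been stored in the second cluster,
        # the recorded first Cluster Map can be deleted.
        self.first_cluster_map = None
```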
The above is the operation performed when the monitoring node 100 is elected as a Leader. If the monitoring node 100 is not elected as a Leader, it will cooperate with the elected Leader in the second Ceph Cluster to make the Leader generate a second Cluster Map.
Step 101 and step 102 describe, from a physical perspective, how the second Ceph cluster is split from the first Ceph cluster. However, when the second Ceph cluster is split from the first Ceph cluster, the corresponding data also needs to be handled; for example, business data stored in a storage pool originally in the first Ceph cluster (referred to as the designated storage pool) needs to be stored in the second Ceph cluster, which is described in detail in step 103 below:
Step 103: the monitoring node 100 stores the service data stored in the designated storage pool of the first Ceph Cluster to the second Ceph Cluster according to the second Cluster Map and the recorded first Cluster Map of the first Ceph Cluster.
Here, the identifier of the designated storage pool may be preconfigured on each monitoring node (including the monitoring node 100) in the second Ceph cluster.
In one embodiment, the service data stored in the designated storage pool is common data. Still taking the example in which enterprise Q sells the monitoring service to enterprise B due to business development: if the network service of enterprise Q still relies on the monitoring service, both enterprise Q and enterprise B need the monitoring service data. In this case, step 103 can be implemented by the process shown in fig. 2, which is described later.
In another embodiment, the service data stored in the designated storage pool is not common data. Still taking the same example: after enterprise Q sells the monitoring service to enterprise B, enterprise Q no longer needs the monitoring service data. In this case, step 103 can be implemented by the process shown in fig. 3, which is described later.
Thus, the description of the flow shown in fig. 1 is completed.
As can be seen from the process shown in fig. 1, some of the monitoring nodes and storage nodes in the first Ceph cluster are reconfigured to migrate from the first Ceph cluster and join the second Ceph cluster; the monitoring nodes of the second Ceph cluster negotiate to elect a Leader; the Leader generates the second Cluster Map of the second Ceph cluster; and the service data stored in the designated storage pool of the first Ceph cluster is stored in the second Ceph cluster according to the second Cluster Map and the recorded first Cluster Map. In this way, after the second Ceph cluster is split from the first Ceph cluster, the service data stored in the designated storage pool of the first Ceph cluster is stored in the second Ceph cluster, and separate deployment of Ceph clusters is realized.
Referring to fig. 2, fig. 2 is a flowchart of an implementation of step 103 provided in this embodiment. The process shown in fig. 2 is performed on the premise that the service data stored in the designated storage pool is common data. On this premise, as an embodiment, in addition to configuring the identifier of the designated storage pool at each monitoring node (including the monitoring node 100) in the second Ceph cluster, a data reservation identifier may be configured for the designated storage pool, where the data reservation identifier is used to indicate that the service data stored in the designated storage pool is common data.
Based on this, as shown in fig. 2, the process may include the following steps:
Step 201: a clone pool that is the same as the designated storage pool is cloned in the second Ceph cluster, where the clone pool has the same PGs as the designated storage pool.
That is, a clone pool having the same structure as the designated storage pool is formed in the second Ceph cluster, via step 201.
Step 202: for each target PG in the clone pool, select a target OSD corresponding to the target PG from the OSDs on the target storage nodes in the second Ceph cluster according to the second Cluster Map, find the reference PG that is the same as the target PG in the designated storage pool according to the first Cluster Map, and copy the service data on the Primary OSD corresponding to the reference PG to the target OSD for storage.
In step 202, the service data on the Primary OSD refers to the service data (belonging to the service to which the designated storage pool belongs) stored on the Primary OSD and corresponding to the reference PG.
In an embodiment, how to select the corresponding target OSD for the target PG may be determined according to a mapping method of the existing PG and OSD, which is not described herein again.
Through step 202, the target OSD corresponding to each target PG in the clone pool stores the target service data.
The flow shown in fig. 2 is thus completed.
The process shown in fig. 2 shows how, when the service data stored in the designated storage pool is common data, the service data stored in the designated storage pool of the first Ceph cluster is stored to the second Ceph cluster by cloning the designated storage pool (this is equivalent to clone-pool service data separation).
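A minimal sketch of this clone-pool separation is given below, assuming a simple dictionary layout for the cluster maps; select_osd and copy_objects are illustrative placeholders rather than Ceph APIs, and the real data movement happens at the OSD/RADOS layer.

```python
import hashlib
from typing import Dict, List


def select_osd(cluster_map: Dict, pg_id: str) -> int:
    """Stand-in for CRUSH: pick one OSD for a PG from the cluster's OSD list."""
    osds: List[int] = cluster_map["osds"]
    return osds[int(hashlib.md5(pg_id.encode()).hexdigest(), 16) % len(osds)]


def copy_objects(src_osd: int, dst_osd: int, pg_id: str) -> None:
    """Placeholder for the actual object copy between OSDs."""
    print(f"copy PG {pg_id}: OSD {src_osd} -> OSD {dst_osd}")


def clone_pool_separation(first_map: Dict, second_map: Dict, designated_pool: str) -> None:
    """Copy (not move) each PG's data, because the designated pool holds common data."""
    for reference_pg in first_map["pools"][designated_pool]["pgs"]:
        target_osd = select_osd(second_map, reference_pg)     # chosen via the second Cluster Map
        primary_osd = first_map["pg_primary"][reference_pg]   # located via the first Cluster Map
        copy_objects(primary_osd, target_osd, reference_pg)   # data remains in both clusters
```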
Referring to fig. 3, fig. 3 is a flowchart of another implementation of step 103 provided in this embodiment. The process shown in fig. 3 is performed on the premise that the service data stored in the designated storage pool is not common data. On this premise, as an embodiment, the identifier of the designated storage pool may be configured only at each monitoring node (including the monitoring node 100) in the second Ceph cluster, and no data reservation identifier is configured for the designated storage pool.
Based on this, as shown in fig. 3, the process may include the following steps:
step 301 creates a target storage pool in the second Ceph cluster that is the same as the designated storage pool, the target storage pool having the same PG as the designated storage pool.
That is, a target storage pool having the same structure as the designated storage pool is formed in the second Ceph cluster, via step 301.
Step 302: for each target PG in the target storage pool, select a target OSD corresponding to the target PG from the second Ceph cluster according to the second Cluster Map, find the reference PG that is the same as the target PG in the designated storage pool according to the first Cluster Map, and migrate the data stored on the Primary OSD corresponding to the reference PG to the target OSD for storage, where the Primary OSD is different from the target OSD.
In step 302, the service data on Primary OSD refers to the service data (belonging to the service belonging to the designated storage pool) stored on the Primary OSD and corresponding to the reference PG.
In an embodiment, how to select the corresponding target OSD for the target PG may be determined according to a mapping method of the existing PG and OSD, which is not described herein again.
In step 302, after the data stored on the Primary OSD corresponding to the reference PG is migrated to the target OSD for storage, the corresponding relationship between the reference PG and the Primary OSD may be cancelled.
In addition, in step 302, if the target OSD selected at this time is exactly the same as the Primary OSD corresponding to the reference PG, no operation may be performed.
In step 302, the target OSD corresponding to each target PG in the target storage pool stores the target service data.
The flow shown in fig. 3 is completed.
Fig. 3 illustrates a process for separating and storing service data stored in a specified storage pool of a first Ceph cluster from a first Ceph cluster to a second Ceph cluster when the service data stored in the specified storage pool is not common data (this is equivalent to storage pool service data separation).
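Similarly, a minimal sketch of this storage-pool separation is given below, under the same assumed dictionary layout for the cluster maps; the selection hash and the printed transfer are placeholders standing in for CRUSH and the actual data migration, not Ceph APIs.

```python
import hashlib
from typing import Dict


def _select_osd(cluster_map: Dict, pg_id: str) -> int:
    """Stand-in for CRUSH: pick one OSD for a PG from the cluster's OSD list."""
    osds = cluster_map["osds"]
    return osds[int(hashlib.md5(pg_id.encode()).hexdigest(), 16) % len(osds)]


def storage_pool_separation(first_map: Dict, second_map: Dict, designated_pool: str) -> None:
    """Move each PG's data, because the designated pool's data is not common data."""
    for reference_pg in list(first_map["pools"][designated_pool]["pgs"]):
        target_osd = _select_osd(second_map, reference_pg)    # chosen via the second Cluster Map
        primary_osd = first_map["pg_primary"][reference_pg]   # located via the first Cluster Map
        if target_osd == primary_osd:
            continue  # step 302: the same OSD was selected, so no operation is performed
        print(f"move PG {reference_pg}: OSD {primary_osd} -> OSD {target_osd}")  # placeholder transfer
        # After migration, cancel the correspondence between the reference PG and its Primary OSD.
        del first_map["pg_primary"][reference_pg]
```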
It should be noted that the flows shown in fig. 1 to fig. 3 are executed on the premise that the first Ceph cluster satisfies the splitting condition.
In one example, the splitting condition refers to:
Condition 1: the first Ceph cluster includes at least 6 monitoring nodes. For the sake of cluster integrity, a Ceph cluster should include at least 3 monitoring nodes; since the first Ceph cluster is to be split into two clusters, it needs to include at least 6 monitoring nodes.
Condition 2: the first Ceph cluster includes at least N storage nodes. Considering the number of copies required by a Ceph cluster, if the number of copies is M (e.g., 3), a Ceph cluster should include at least M storage nodes. Because the first Ceph cluster is to be split, it needs to include at least N = 2M storage nodes. A check of these two conditions is sketched below.
It should be noted that whether the clone-pool service data separation in fig. 2 or the storage-pool service data separation in fig. 3 is performed, the separation lasts for a period of time. During this period, each monitoring node in the second Ceph cluster maintains the first Cluster Map and the second Cluster Map simultaneously: on the one hand, to ensure that the service data stored in the designated storage pool of the first Ceph cluster is successfully transferred from the first Ceph cluster to the second Ceph cluster; on the other hand, to ensure data consistency during this period and to keep normal reading and writing of the data.
When the flow shown in fig. 2 or fig. 3 finishes and the service data has been successfully separated from the first Ceph cluster to the second Ceph cluster, the service data stored in the designated storage pool of the first Ceph cluster has been transferred to the second Ceph cluster. At this time, the state of the second Ceph cluster may be denoted as the activation-success state, and the previous state as the activation-in-progress state. When the second Ceph cluster is in the activation-success state, in one example, each monitoring node in the second Ceph cluster may delete the recorded first Cluster Map.
It should also be noted that whether the clone-pool service data separation in fig. 2 or the storage-pool service data separation in fig. 3 is performed, the OSDs in the first Ceph cluster may change. The OSD change is caused by the target storage nodes and the target monitoring nodes in the first Ceph cluster being reconfigured to join the second Ceph cluster. For this case, the present application also provides the flow shown in fig. 4:
referring to fig. 4, fig. 4 is a flowchart of another method provided in this embodiment. To distinguish the monitoring node 100 in fig. 2 from the monitoring node to which the process is applied, the monitoring node to which the process shown in fig. 4 is applied may be referred to as a monitoring node 400.
As shown in fig. 4, the process may include the following steps:
step 401, when the node is elected as the Leader node Leader of the third Ceph Cluster, the monitoring node 400 generates a third Cluster Map of the third Ceph Cluster by communicating with the monitoring node and the storage node in the third Ceph Cluster.
As described in step 101, some monitoring nodes and storage nodes in the first Ceph cluster are configured to migrate from the first Ceph cluster and join the second Ceph cluster. Once they are so configured, the host nodes (such as monitoring nodes and storage nodes) of the first Ceph cluster change, and the first Ceph cluster whose host nodes have changed may be recorded as the third Ceph cluster. That is, in step 401 the third Ceph cluster is the first Ceph cluster in which the host nodes have changed; as described above, the change is caused by the storage nodes and monitoring nodes in the first Ceph cluster being configured to migrate from the first Ceph cluster and join the second Ceph cluster.
In a special case, the monitoring node originally elected as the Leader in the first Ceph cluster is configured to migrate from the first Ceph cluster and join the second Ceph cluster; the third Ceph cluster then needs to re-elect a Leader. Otherwise, if the monitoring node elected as the Leader in the first Ceph cluster is not configured to migrate to the second Ceph cluster, the Leader of the third Ceph cluster may still be the Leader previously elected in the first Ceph cluster.
Step 402, data rebalancing operation is performed on each storage pool in the third Ceph Cluster that is affected by the change of the host node according to the third Cluster Map and the recorded first Cluster Map of the first Ceph Cluster.
In one example, performing a data rebalancing operation on PGs in each storage pool of the third Ceph cluster that are affected by changes in the master node may include the process illustrated in fig. 5, which is not described herein again.
Through the process shown in fig. 4, data rebalancing can be finally achieved for each storage pool in the third Ceph cluster that is affected by the change of the host node.
Referring to fig. 5, fig. 5 is a flowchart for implementing step 402 provided in this embodiment. As shown in fig. 5, the process may include:
Step 501: for each PG in each storage pool affected by the host-node change in the third Ceph cluster, select a target OSD corresponding to the PG from the third Ceph cluster according to the third Cluster Map.
In an embodiment, how to select the corresponding target OSD for the PG may be determined according to a mapping method of the existing PG and OSD, which is not described herein again.
Step 502: find the Primary OSD corresponding to the PG according to the first Cluster Map and migrate the service data corresponding to the PG that is stored on the Primary OSD to the target OSD for storage, where the Primary OSD is different from the target OSD.
In one example, when the Primary OSD is the same as the target OSD, no data migration may be performed.
The flow shown in fig. 5 is completed.
Through the process shown in fig. 5, it is finally achieved that a data rebalancing operation is performed on PGs in each storage pool of the third Ceph cluster that are affected by changes of the host node.
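A minimal sketch of this rebalancing loop is shown below, reusing the same assumed dictionary layout for the cluster maps; the OSD selection hash and the printed transfer are stand-ins for CRUSH and the actual data migration, not Ceph APIs.

```python
import hashlib
from typing import Dict, List


def rebalance_affected_pools(first_map: Dict, third_map: Dict, affected_pools: List[str]) -> None:
    """Rebalance every PG of every storage pool affected by the host-node change."""
    for pool in affected_pools:
        for pg in first_map["pools"][pool]["pgs"]:
            osds = third_map["osds"]                         # OSDs remaining in the third Ceph cluster
            target_osd = osds[int(hashlib.md5(pg.encode()).hexdigest(), 16) % len(osds)]
            primary_osd = first_map["pg_primary"][pg]        # located via the first Cluster Map
            if target_osd == primary_osd:
                continue  # the Primary OSD was re-selected, so no data migration is needed
            print(f"rebalance PG {pg}: OSD {primary_osd} -> OSD {target_osd}")  # placeholder transfer
```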
The method provided by the present embodiment is described above.
Some special cases in this example are explained below:
(1) When the service data stored in the designated storage pool of the first Ceph cluster is stored in the second Ceph cluster, the second Ceph cluster may have insufficient storage space. In such a case, the operations administrator needs to evaluate the storage pool space of the second Ceph cluster to determine whether the second Ceph cluster split from the first Ceph cluster can accommodate all the data of the designated storage pool. If not, the first Ceph cluster needs to be expanded first, so that the finally split second Ceph cluster can, as far as possible, accommodate all the data of the designated storage pool (a rough capacity check is sketched after this list).
(2) In the process of storing the service data of the designated storage pool of the first Ceph cluster into the second Ceph cluster, if a target OSD in the second Ceph cluster fails, the failure can be detected by Ceph's failure detection mechanism; this simply means that data rebalancing (storing the service data of the designated storage pool into the second Ceph cluster) and failure detection proceed in the second Ceph cluster at the same time.
(3) If the current storage usage of the first Ceph cluster is high, for example about 90% of its capacity is already occupied (a dangerous level), then, in one example, to prevent a sudden OSD fault from crashing the first Ceph cluster, the first Ceph cluster may be expanded first and the second Ceph cluster split from it afterwards.
(4) After storing the service data stored in the designated storage pool of the first Ceph cluster in the second Ceph cluster, if the sensitive data still exists in the first Ceph cluster and the second Ceph cluster, in an example, the relevant service personnel may delete the corresponding sensitive data from the first Ceph cluster and the second Ceph cluster, respectively.
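For point (1), a rough capacity check might look like the sketch below; the byte counts and the multiplication by the copy count are assumptions to be supplied by the administrator, not values specified by this application.

```python
def second_cluster_can_hold(pool_data_bytes: int, second_cluster_free_bytes: int,
                            copy_count: int) -> bool:
    """Rough check: the split-off cluster must hold every copy of the pool's data."""
    return second_cluster_free_bytes >= pool_data_bytes * copy_count


# Example: 2 TiB of pool data with 3 copies -> the second cluster needs >= 6 TiB free.
assert second_cluster_can_hold(2 * 2**40, 8 * 2**40, copy_count=3)
```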
The methods provided herein are described above. The following describes the apparatus provided in the present application:
referring to fig. 6, fig. 6 is a structural diagram of an apparatus according to the present embodiment. The device is applied to the monitoring node Monitor and comprises the following steps:
the election unit is used for negotiating with a target monitoring node to elect a Leader node Leader of a second Ceph cluster after the node is configured to migrate from the first Ceph cluster and join the second Ceph cluster; the target monitoring node is other monitoring nodes which are configured to be migrated from the first Ceph cluster and added into the second Ceph cluster;
and the processing unit is used for generating a second Cluster mapping Cluster Map of a second Ceph Cluster through communication with the target monitoring node and the target storage node when the node is elected as the Leader, storing the business data stored in the designated storage pool in the first Ceph Cluster to the second Ceph Cluster according to the second Cluster Map and the recorded first Cluster Map of the first Ceph Cluster, wherein the target storage node is a storage node which is configured to be migrated from the first Ceph Cluster and added into the second Ceph Cluster.
In a first embodiment, the processing unit storing the service data stored in the designated storage pool of the first Ceph Cluster to the second Ceph Cluster according to the second Cluster Map and the recorded first Cluster Map of the first Ceph Cluster includes:
when the node is not configured with the data reservation identifier corresponding to the designated storage pool, and the data reservation identifier is used to indicate that the service data stored in the designated storage pool is public data, then:
creating a target storage pool at a second Ceph cluster that is the same as the designated storage pool, the target storage pool having the same placement group PG as the designated storage pool;
and for each target placement group PG in the target storage pool, selecting a target OSD corresponding to the target PG from the second Ceph Cluster according to the second Cluster Map, finding a reference PG which is the same as the target PG in the designated storage pool and a main Primary OSD corresponding to the reference PG in the first Ceph Cluster according to the first Cluster Map, and transferring service data which is stored by the Primary OSD and corresponds to the reference PG to the target OSD for storage, wherein the Primary OSD is different from the target OSD.
In a second embodiment, the processing unit storing the service data stored in the designated storage pool of the first Ceph Cluster to the second Ceph Cluster according to the second Cluster Map and the recorded first Cluster Map of the first Ceph Cluster includes:
when the node is configured with a data reservation identifier corresponding to the designated storage pool, where the data reservation identifier is used for indicating that the service data stored in the designated storage pool is public data, a clone pool which is the same as the designated storage pool is cloned in a second Ceph cluster, and the clone pool and the designated storage pool have the same placement group PG;
and for each target placement group PG in the clone pool, selecting a target OSD corresponding to the target PG from the OSDs on each target storage node in the second Ceph Cluster according to the second Cluster Map, finding a reference PG the same as the target PG in the designated storage pool according to the first Cluster Map, finding a main Primary OSD corresponding to the reference PG in the first Ceph Cluster, and copying the service data corresponding to the reference PG that is stored on the Primary OSD to the target OSD for storage.
As an embodiment, after storing the service data stored in the designated storage pool of the first Ceph cluster to the second Ceph cluster, the processing unit further includes: deleting the recorded first Cluster Map.
Thus, the description of the structure of the device shown in fig. 6 is completed.
The present application also provides another data processing apparatus as shown in fig. 7. Referring to fig. 7, fig. 7 is a schematic structural diagram of another apparatus provided in this embodiment, where the apparatus is applied to a monitoring node Monitor, and the apparatus includes:
the mapping unit is used for generating a third Cluster mapping Cluster Map of the third Ceph Cluster by communicating with a monitoring node and a storage node in the third Ceph Cluster when the node is elected as a Leader node Leader of the third Ceph Cluster; the third Ceph cluster is the first Ceph cluster in which host nodes have changed, the host nodes including monitoring nodes and storage nodes; the host nodes change because monitoring nodes and storage nodes in the first Ceph cluster are configured to be migrated from the first Ceph cluster and added into a second Ceph cluster;
and the rebalancing unit is used for performing data rebalancing operation on each storage pool in the third Ceph Cluster, which is influenced by the change of the host node, according to the third Cluster Map and the recorded first Cluster Map of the first Ceph Cluster.
In a first embodiment, the rebalancing unit performing the data rebalancing operation on each storage pool of the third Ceph Cluster that is affected by the change of the host node according to the third Cluster Map and the recorded first Cluster Map of the first Ceph Cluster includes:
and for each PG in each storage pool affected by changes of host nodes in the third Ceph Cluster, selecting a target OSD corresponding to the PG from the third Ceph Cluster according to the third Cluster Map, finding a main Primary OSD corresponding to the PG according to the first Cluster Map, and transferring service data corresponding to the PG, which is stored on the Primary OSD, to the target OSD for storage, wherein the Primary OSD is different from the target OSD.
Thus, the description of the structure of the apparatus shown in fig. 7 is completed.
In addition, the application also provides a hardware structure diagram of the device. Referring to fig. 8, fig. 8 is a hardware configuration diagram of the apparatus shown in the present embodiment. As shown in fig. 8, the hardware structure includes:
a communication interface 801, a processor 802, a machine-readable storage medium 803, and a bus 804; wherein the communication interface 801, the processor 802 and the machine-readable storage medium 803 communicate with each other via a bus 804. The processor 802 reads and executes the machine-executable instructions in the machine-readable storage medium 803.
Where, corresponding to fig. 6, the machine executable instructions in the machine readable storage medium 803 may be instructions corresponding to the flow illustrated in fig. 1. As such, the processor 802 implements the data processing method shown in fig. 1 by reading and executing machine executable instructions in the machine readable storage medium 803.
Corresponding to FIG. 7, the machine executable instructions in the machine readable storage medium 803 may be instructions corresponding to the process illustrated in FIG. 4. As such, the processor 802 implements the data processing method shown in fig. 4 by reading and executing machine executable instructions in the machine readable storage medium 803.
The machine-readable storage medium 803 referred to herein may be any electronic, magnetic, optical, or other physical storage device that can contain or store information such as executable instructions, data, and the like. For example, the machine-readable storage medium may be a volatile memory, a non-volatile memory, or a similar storage medium. In particular, the machine-readable storage medium 803 may be a RAM (Random Access Memory), a flash memory, a storage drive (e.g., a hard disk drive), a solid state disk, any type of storage disk (e.g., a compact disk, a DVD, etc.), or a similar storage medium, or a combination thereof.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (10)

1. A data processing method is applied to a monitoring node Monitor and comprises the following steps:
after the node is configured to migrate from the first Ceph cluster and join in a second Ceph cluster, negotiating with a target monitoring node to select a Leader node of the second Ceph cluster; the target monitoring node is other monitoring nodes which are configured to be migrated from the first Ceph cluster and added into the second Ceph cluster;
and when the node is elected as the Leader, generating a second Cluster mapping Cluster Map of a second Ceph Cluster through communication with the target monitoring node and the target storage node, storing the service data stored in the designated storage pool in the first Ceph Cluster to the second Ceph Cluster according to the second Cluster Map and the recorded first Cluster Map of the first Ceph Cluster, wherein the target storage node is a storage node configured to be migrated from the first Ceph Cluster and added into the second Ceph Cluster.
2. The method of claim 1, wherein storing the business data stored in the designated storage pool of the first Ceph Cluster to the second Ceph Cluster according to the second Cluster Map and the recorded first Cluster Map of the first Ceph Cluster comprises:
when the node is not configured with the data reservation identifier corresponding to the specified storage pool, and the data reservation identifier is used to indicate that the service data stored in the specified storage pool is public data, then:
creating a target storage pool in a second Ceph cluster, wherein the target storage pool is the same as the designated storage pool, and the target storage pool and the designated storage pool have the same placement group PG;
and for each target placement group PG in the target storage pool, selecting a target OSD corresponding to the target PG from the second Ceph Cluster according to the second Cluster Map, finding a reference PG which is the same as the target PG in the designated storage pool and a main Primary OSD corresponding to the reference PG in the first Ceph Cluster according to the first Cluster Map, and transferring service data which is stored by the Primary OSD and corresponds to the reference PG to the target OSD for storage, wherein the Primary OSD is different from the target OSD.
3. The method of claim 1, wherein storing the business data stored in the designated storage pool of the first Ceph Cluster to the second Ceph Cluster according to the second Cluster Map and the recorded first Cluster Map of the first Ceph Cluster comprises:
when the node is configured with a data reservation identifier corresponding to the designated storage pool, where the data reservation identifier is used for indicating that the service data stored in the designated storage pool is public data, a clone pool which is the same as the designated storage pool is cloned in a second Ceph cluster, and the clone pool and the designated storage pool have the same placement group PG;
and for each target placement group PG in the clone pool, selecting a target OSD corresponding to the target PG from the OSDs on each target storage node in the second Ceph Cluster according to the second Cluster Map, finding a reference PG the same as the target PG in the designated storage pool according to the first Cluster Map, finding a main Primary OSD corresponding to the reference PG in the first Ceph Cluster, and copying the service data corresponding to the reference PG that is stored on the Primary OSD to the target OSD for storage.
4. The method of claim 1, wherein after storing the business data stored in the specified storage pool of the first Ceph cluster to the second Ceph cluster, the method further comprises:
deleting the recorded first Cluster Map.
5. A data processing method is applied to a monitoring node Monitor and comprises the following steps:
when the node is elected as a Leader node Leader of the third Ceph Cluster, a third Cluster mapping Cluster Map of the third Ceph Cluster is generated by communicating with a monitoring node and a storage node in the third Ceph Cluster; the third Ceph cluster is the first Ceph cluster in which host nodes have changed, the host nodes including monitoring nodes and storage nodes; the host nodes change because monitoring nodes and storage nodes in the first Ceph cluster are configured to be migrated from the first Ceph cluster and added into a second Ceph cluster;
and performing data rebalancing operation on each storage pool influenced by the change of the host node in the third Ceph Cluster according to the third Cluster Map and the recorded first Cluster Map of the first Ceph Cluster.
6. The method of claim 5, wherein performing a data rebalancing operation on each storage pool of the third Ceph Cluster that is affected by a change in host node according to the third Cluster Map and the recorded first Cluster Map of the first Ceph Cluster comprises:
and for each PG in each storage pool affected by changes of host nodes in the third Ceph Cluster, selecting a target OSD corresponding to the PG from the third Ceph Cluster according to the third Cluster Map, finding a main Primary OSD corresponding to the PG according to the first Cluster Map, and transferring service data corresponding to the PG stored on the Primary OSD to the target OSD for storage, wherein the Primary OSD is different from the target OSD.
7. A data processing device is applied to a monitoring node Monitor, and comprises:
the election unit is used for negotiating with a target monitoring node to elect a Leader node Leader of a second Ceph cluster after the node is configured to migrate from the first Ceph cluster and join the second Ceph cluster; the target monitoring node is other monitoring nodes which are configured to be migrated from the first Ceph cluster and added into the second Ceph cluster;
and the processing unit is used for generating a second Cluster mapping Cluster Map of a second Ceph Cluster through communication with the target monitoring node and the target storage node when the node is elected as the Leader, storing the business data stored in the designated storage pool in the first Ceph Cluster to the second Ceph Cluster according to the second Cluster Map and the recorded first Cluster Map of the first Ceph Cluster, wherein the target storage node is a storage node which is configured to be migrated from the first Ceph Cluster and added into the second Ceph Cluster.
8. The device of claim 7, wherein, when storing the service data stored in the designated storage pool of the first Ceph cluster to the second Ceph cluster according to the second Cluster Map and the recorded first Cluster Map of the first Ceph cluster, the processing unit is configured to:
when the node is not configured with a data reservation identifier corresponding to the designated storage pool, the data reservation identifier being used for indicating that the service data stored in the designated storage pool is public data:
create, in the second Ceph cluster, a target storage pool identical to the designated storage pool, wherein the target storage pool and the designated storage pool have the same placement groups (PGs);
and for each target placement group (PG) in the target storage pool, select a target OSD corresponding to the target PG from the second Ceph cluster according to the second Cluster Map, find, according to the first Cluster Map, a reference PG in the designated storage pool that is the same as the target PG and a primary OSD (Primary OSD) corresponding to the reference PG in the first Ceph cluster, and transfer the service data corresponding to the reference PG stored on the Primary OSD to the target OSD for storage, wherein the Primary OSD is different from the target OSD.
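For contrast with the copy path of claim 3, the following sketch illustrates the transfer path of claim 8 under the same toy assumptions (hash placement instead of CRUSH, dict-shaped Cluster Maps, and a single `storage` dict spanning the OSDs of both clusters, all invented for this example): a target pool with the same PGs is created, and each PG's data is moved rather than copied, so it no longer resides in the first Ceph cluster.

```python
import hashlib

def place_pg(cluster_map, pg_id):
    """Toy placement rule (hash stand-in for CRUSH) over the map's OSD list."""
    osds = cluster_map["osds"]
    return osds[int(hashlib.md5(pg_id.encode()).hexdigest(), 16) % len(osds)]

def transfer_pool(first_map, second_map, designated_pool, storage):
    """Create a target pool with the same PGs and move (not copy) each PG's data."""
    target_pool = {"name": designated_pool["name"], "pgs": list(designated_pool["pgs"])}
    for pg_id in target_pool["pgs"]:
        target_osd = place_pg(second_map, pg_id)   # selected from the second cluster
        primary_osd = place_pg(first_map, pg_id)   # Primary OSD in the first cluster
        if primary_osd == target_osd:
            continue                               # claim 8 only transfers when they differ
        key = (designated_pool["name"], pg_id)
        storage[target_osd][key] = storage[primary_osd].pop(key, [])
    return target_pool
```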
9. The device of claim 7, wherein, when storing the service data stored in the designated storage pool of the first Ceph cluster to the second Ceph cluster according to the second Cluster Map and the recorded first Cluster Map of the first Ceph cluster, the processing unit is configured to:
when the node is configured with a data reservation identifier corresponding to the designated storage pool, the data reservation identifier being used for indicating that the service data stored in the designated storage pool is public data, clone, in the second Ceph cluster, a clone pool identical to the designated storage pool, wherein the clone pool and the designated storage pool have the same placement groups (PGs);
and for each target placement group (PG) in the clone pool, select a target OSD corresponding to the target PG from the OSDs on the target storage nodes in the second Ceph cluster according to the second Cluster Map, find, according to the first Cluster Map, a reference PG in the designated storage pool that is the same as the target PG and a primary OSD (Primary OSD) corresponding to the reference PG in the first Ceph cluster, and copy the service data corresponding to the reference PG stored on the Primary OSD to the target OSD for storage.
10. A data processing device, applied to a monitoring node (Monitor), comprising:
a mapping unit, configured to generate a third Cluster Map of a third Ceph cluster by communicating with the monitoring nodes and storage nodes in the third Ceph cluster when the node is elected as a Leader node (Leader) of the third Ceph cluster, wherein the third Ceph cluster is a first Ceph cluster whose host nodes have changed, the host nodes comprise monitoring nodes and storage nodes, and the change in host nodes is caused by monitoring nodes and storage nodes in the first Ceph cluster being configured to migrate from the first Ceph cluster and join a second Ceph cluster;
and a rebalancing unit, configured to perform a data rebalancing operation on each storage pool in the third Ceph cluster that is affected by the change in host nodes, according to the third Cluster Map and a recorded first Cluster Map of the first Ceph cluster.
CN201910229406.XA 2019-03-25 2019-03-25 Data processing method and device Active CN109960469B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910229406.XA CN109960469B (en) 2019-03-25 2019-03-25 Data processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910229406.XA CN109960469B (en) 2019-03-25 2019-03-25 Data processing method and device

Publications (2)

Publication Number Publication Date
CN109960469A CN109960469A (en) 2019-07-02
CN109960469B true CN109960469B (en) 2022-05-31

Family

ID=67025031

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910229406.XA Active CN109960469B (en) 2019-03-25 2019-03-25 Data processing method and device

Country Status (1)

Country Link
CN (1) CN109960469B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10216770B1 (en) * 2014-10-31 2019-02-26 Amazon Technologies, Inc. Scaling stateful clusters while maintaining access
CN106844510A (en) * 2016-12-28 2017-06-13 北京五八信息技术有限公司 The data migration method and device of a kind of distributed experiment & measurement system
CN107817951A (en) * 2017-10-31 2018-03-20 新华三技术有限公司 A kind of method and device for realizing the fusion of Ceph clusters
CN109284073A (en) * 2018-09-30 2019-01-29 北京金山云网络技术有限公司 Date storage method, device, system, server, control node and medium
CN109327544A (en) * 2018-11-21 2019-02-12 新华三技术有限公司 A kind of determination method and apparatus of leader node

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ceph PG splitting process and feasibility analysis; gold叠; 博客园 (cnblogs), http://www.cnblogs.com/goldd/p/6610563.html; 2017-03-24; full text *

Also Published As

Publication number Publication date
CN109960469A (en) 2019-07-02

Similar Documents

Publication Publication Date Title
US10613780B1 (en) Multi-node removal
CN103354923B (en) A kind of data re-establishing method, device and system
US7996611B2 (en) Backup data management system and backup data management method
US7254684B2 (en) Data duplication control method
US8433947B2 (en) Computer program, method, and apparatus for controlling data allocation
US7114094B2 (en) Information processing system for judging if backup at secondary site is necessary upon failover
US7197632B2 (en) Storage system and cluster maintenance
JP4845724B2 (en) Storage system with backup function
JP2005242403A (en) Computer system
JP2010128644A (en) Failure restoration method, program and management server
CN101763321B (en) Disaster-tolerant method, device and system
CN107329859B (en) Data protection method and storage device
KR100922584B1 (en) Distributed object-sharing system and method thereof
JPH11327912A (en) Automatic software distribution system
CN104793981A (en) Online snapshot managing method and device for virtual machine cluster
CN104331344A (en) Data backup method and device
CN109960469B (en) Data processing method and device
CN109165117B (en) Data processing method and system
JP2016212548A (en) Storage control device, storage control method, and storage control program
JP5303935B2 (en) Data multiplexing system and data multiplexing method
JP2004334739A (en) Backup method of data, restoration method of backup data, network storage device, and network storage program
KR100994342B1 (en) Distributed file system and method for replica-based fault treatment
CN115328880B (en) Distributed file online recovery method, system, computer equipment and storage medium
JP6637258B2 (en) Storage system migration method and program
US20080222374A1 (en) Computer system, management computer, storage system and volume management method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230619

Address after: 310052 11th Floor, 466 Changhe Road, Binjiang District, Hangzhou City, Zhejiang Province

Patentee after: H3C INFORMATION TECHNOLOGY Co.,Ltd.

Address before: 310052 Changhe Road, Binjiang District, Hangzhou, Zhejiang Province, No. 466

Patentee before: NEW H3C TECHNOLOGIES Co.,Ltd.