WO2010114598A1 - Data redistribution in data replication systems - Google Patents

Data redistribution in data replication systems

Info

Publication number
WO2010114598A1
WO2010114598A1 (PCT/US2010/000947)
Authority
WO
WIPO (PCT)
Prior art keywords
data
originator
nodes
replica
redistribution
Prior art date
Application number
PCT/US2010/000947
Other languages
French (fr)
Inventor
Hua Zhong
Dheer Moghe
Sazzala Venkata Reddy
Original Assignee
Emc Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Emc Corporation filed Critical Emc Corporation
Priority to EP10759144.8A priority Critical patent/EP2414928B1/en
Priority to JP2012503418A priority patent/JP5693560B2/en
Priority to CN201080015183.4A priority patent/CN102439560B/en
Publication of WO2010114598A1 publication Critical patent/WO2010114598A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/27 Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1095 Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computing Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

A system includes one or more processors configured to redistribute one or more originator data subsets among a plurality of originator nodes and determine data redistribution information pertaining to redistribution of the one or more originator data subsets among the plurality of originator nodes. The system further includes a communication interface configured to send data redistribution information to a replica system. The data redistribution information is used by the replica system to redistribute one or more corresponding replica data subsets among a plurality of replica nodes.

Description

DATA REDISTRIBUTION IN DATA REPLICATION SYSTEMS
BACKGROUND OF THE INVENTION
[0001] In many existing data replication systems, data is synchronized between an originator and a replica. Any change on the originator is sent to the replica and mirrored. Frequent data updates consume substantial bandwidth and lead to inefficiency. The problem is particularly pronounced in environments where the originator and the replica are separated by a Wide Area Network (WAN) and where bandwidth is limited.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] Various embodiments of the invention are disclosed in the following detailed description and the accompanying drawings.
[0003] FIG. 1 is a block diagram illustrating an embodiment of a data replication environment.
[0004] FIG. 2 is a flowchart illustrating an embodiment of a process for data replication.
[0005] FIG. 3 is a flowchart illustrating another embodiment of a process for data replication.
[0006] FIG. 4 is a data structure diagram illustrating an embodiment of a container.
[0007] FIGS. 5A-5C are a series of diagrams illustrating an example scenario in which data is redistributed.
DETAILED DESCRIPTION
[0008] The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task. As used herein, the term 'processor' refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions.
[0009] A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims and the invention encompasses numerous alternatives, modifications and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.
[0010] FIG. 1 is a block diagram illustrating an embodiment of a data replication environment. In this example, data replication system 100 includes an originator system 102 (also referred to as the source system) and a replica system 104 (also referred to as the destination system). The systems are separated by one or more networks, such as a local area network or a wide area network.
[0011] The originator system includes an originator front end device 110 and a plurality of originator nodes 112a, 112b, and 112c (also referred to as originator back end devices). The replica system includes a replica front end device 120 and a plurality of replica nodes 122a, 122b, and 122c (also referred to as replica back end devices). Different numbers of nodes and different arrangements of front end devices and nodes are possible. For example, the functions of a front end device and a node can be integrated into a single physical device.
[0012] The nodes are used to store data. In various embodiments, the nodes are implemented using any appropriate types of devices, such as storage devices or file servers that include storage components. The front end devices can also be implemented using a variety of devices, such as a general purpose server that runs data replication management software. Each front end device communicates with its respective nodes, coordinating data storage on the nodes to achieve a virtualized file system. In other words, to external devices that access data through the front end device, the front end device appears to be a file system server managing a single file system. In some embodiments, the front end and the back end nodes co-exist on one physical device with separate storage partitions.
[0013] As will be described in greater detail below, the originator and replica systems communicate with each other. More specifically, the originator system can send backup information to the replica front end device, including information regarding new data and information regarding distribution of existing data. Communication may take place between the front end devices, or directly between the nodes.
[0014] In some embodiments, a stream of backup data is received and processed by the front end device, and distributed to the originator nodes to be stored. In the example shown in FIG. 1, data on the replica is kept as a mirror image of the data on the originator. When new data becomes available, it is stored on the originator and duplicated on the replica. In systems such as 100, where the originator and the replica have identical node configurations, new data on a specific originator node is duplicated on a corresponding replica node (sometimes referred to as the "buddy"). For example, new data stored on node 112b is duplicated on buddy node 122b. In some embodiments, knowledge about nodes and their buddies is maintained on the front end device. Individual nodes may directly communicate with each other, and the originator node directly sends data that is to be duplicated to its buddy. Alternatively, an originator node communicates with the originator front end device, which in turn communicates with the replica front end device to transfer duplicated data to an appropriate replica node.
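As an illustration only (names such as BUDDY_MAP, replicate_new_data, and send are hypothetical, not taken from the patent), the buddy relationship could be modeled roughly as follows:

```python
# Illustrative sketch of the buddy mapping in [0014]; all names are assumed.
# The front end keeps knowledge of which replica node is each originator
# node's "buddy", mirroring nodes 112a-112c and 122a-122c in FIG. 1.
BUDDY_MAP = {"112a": "122a", "112b": "122b", "112c": "122c"}

def replicate_new_data(originator_node, data, send):
    """Duplicate new data stored on an originator node onto its buddy node.

    `send` stands in for whatever transport is used: direct node-to-node
    communication, or a hop through the originator and replica front ends.
    """
    buddy = BUDDY_MAP[originator_node]
    send(buddy, data)

# New data written to node 112b is duplicated on buddy node 122b.
replicate_new_data("112b", b"new backup data",
                   lambda node, payload: print(f"duplicate {len(payload)} bytes on {node}"))
```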
[0015] In some situations, existing data on the originator can move from one originator node to another originator node. For example, if data distribution becomes uneven, in other words, too much data is stored on certain nodes while too little data is stored on other nodes, the system will rebalance data distribution among the nodes. Another situation that results in data redistribution is when a new node is added to the system: data is redistributed from existing nodes to the new node. When data redistribution occurs, information pertaining to the redistributed data is sent from the originator to the replica so that data can be redistributed in the same way on the replica. The data itself, however, is not resent. Since it is no longer necessary to copy the replicated data to a new replica node and then delete the same data stored on an old replica node, the overall system handles data redistribution efficiently.
[0016] FIG. 2 is a flowchart illustrating an embodiment of a process for data replication. In some embodiments, process 200 is carried out on an originator system such as 102. In some embodiments the process is implemented by front end device 110. At 202, one or more originator data subsets are redistributed among a plurality of originator nodes. In other words, the originator data subsets are moved from certain originator nodes to other originator nodes. Redistribution may occur when the system performs load balancing, when a new node is added to the network, when an existing node is deleted from the network, or for any other appropriate reason. In some embodiments the data subsets are data containers, which are described in greater detail below. At 204, data redistribution information pertaining to how the data subsets are redistributed is determined. In some embodiments, the data redistribution information includes information pertaining to the source originator nodes from which the data subsets have been moved, and the destination originator nodes to which the originator data subsets are moved. At 206, the data redistribution information is sent via a communication interface to a replica system, which uses the data redistribution information to redistribute corresponding replica data subsets among the replica nodes.
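A minimal Python sketch of the originator-side steps 202, 204, and 206, assuming container placement is tracked as a simple mapping of container ID to node ID; function and field names are illustrative rather than taken from the patent:

```python
# Hypothetical sketch of process 200 (originator side). Placement is a dict of
# container ID -> node ID; field names like "previous_node_id" are assumptions.
def redistribute_and_describe(placement, moves):
    """Apply `moves` (container ID -> destination node) to the originator's
    placement and return the data redistribution information: one compact
    record per moved container, naming its source and destination nodes."""
    redistribution_info = []
    for container_id, dest_node in moves.items():
        src_node = placement[container_id]
        placement[container_id] = dest_node           # 202: redistribute on the originator
        redistribution_info.append({                   # 204: determine redistribution info
            "container_id": container_id,
            "previous_node_id": src_node,
            "current_node_id": dest_node,
        })
    return redistribution_info                         # 206: sent to the replica system

placement = {"115": "112a", "117": "112b", "119": "112c"}
info = redistribute_and_describe(placement, {"115": "112d", "117": "112d", "119": "112d"})
print(info)  # only metadata is produced; no backup data needs to be resent
```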
[0017] FIG. 3 is a flowchart illustrating another embodiment of a process for data replication. In some embodiments, process 300 is carried out on a replica system such as 104. In some embodiments the process is implemented by front end device 120. At 302, data redistribution information is received from an originator. The data redistribution information may be sent from an originator implementing process 200. At 304, one or more corresponding replica data subsets are redistributed on the replica system according to the data redistribution information. As described previously, the data redistribution information includes information pertaining to the source nodes and the destination nodes associated with the redistributed data subsets. It is assumed that each originator node has a corresponding buddy replica node, that initially the same originator data subsets and replica data subsets are stored on the originator nodes and the corresponding replica nodes, respectively, and that the initial distribution of data subsets among the originator nodes is identical to the distribution among the replica nodes. Thus, given the data redistribution information, the replica system can redistribute its existing data subsets in the same way as the originator system, without incurring duplicative data transmission overhead.
[0018] In some embodiments, the data subsets used in the processes above are containers. In various embodiments, a container may be a few megabytes in size. For example, containers of 4.5 MB are used in some embodiments. A node may store a number of containers. FIG. 4 is a data structure diagram illustrating an embodiment of a container. In this example, container 400 includes a backup data portion 404 and a metadata portion 402. The backup data portion includes actual data that requires backup, and the metadata portion includes information pertaining to the backup data portion that is used to facilitate data backup. The backup data portion includes a number of data segments, which are data storage subunits and which may be different in size. While data is received on the originator, for example while a data stream is read by the front end device, the data is divided into data segments and appropriate segment identifiers (IDs) are generated. The front end device also performs functions such as checking the data segments to verify that no duplicated segments are received. A record of how the data segments are arranged in the data stream, so that the data stream may be reconstructed later, is maintained on the front end device or stored in one or more nodes.
[0019] The data segments are packed into appropriate containers, and their corresponding offsets and segment IDs are recorded in the metadata portion. The metadata portion includes a number of offset/segment identifier (ID) pairs. An offset indicates where the beginning of the corresponding data segment is located. The segment ID is used to identify a data segment. In some embodiments, a fingerprint or a modified fingerprint that uniquely identifies the data segment is used. Also included in the metadata portion are a container ID for identifying this container, a current node ID for identifying the node on which the container currently resides (i.e., the destination node to which the container is moved), and a previous node ID for identifying the node on which the container previously resided (i.e., the source node from which the container was moved). The container ID, current node ID, and previous node ID are used to facilitate the container redistribution process during replication in some embodiments.
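To make the layout concrete, here is a hedged sketch of the FIG. 4 container as Python dataclasses; the field names follow the description above, while the concrete types, sizes, and example values are assumptions:

```python
# Hedged sketch of the container in FIG. 4; types and example values are assumed.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ContainerMetadata:
    container_id: str                 # identifies this container
    current_node_id: str              # node the container currently resides on (destination)
    previous_node_id: str             # node the container previously resided on (source)
    # (offset, segment ID) pairs; a segment ID may be a fingerprint of the segment
    segments: List[Tuple[int, str]] = field(default_factory=list)

@dataclass
class Container:
    metadata: ContainerMetadata
    backup_data: bytes = b""          # packed data segments, e.g. ~4.5 MB in total

container = Container(
    metadata=ContainerMetadata("115", current_node_id="112d", previous_node_id="112a",
                               segments=[(0, "fp-a1"), (8192, "fp-b2")]),
    backup_data=b"...",
)
```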
[0020] FIGS. 5A-5C are a series of diagrams illustrating an example scenario in which data is redistributed as a result of new nodes being added to the system. In FIG. 5A, data replication system 100 is configured to include an originator system 102 and a replica system 104. On the originator system, data containers 115, 117, and 119 are distributed on originator nodes 112a, 112b, and 112c, respectively. Each node further includes additional containers that are not shown in the diagram. On the replica system, which mirrors the originator system, corresponding replicated data containers 125, 127, and 129 are distributed on replica nodes 122a, 122b, and 122c. These replica containers were copied from the originator previously. Although components such as the front end device, nodes, and data containers in the originator system are shown with different labels/IDs than their counterparts in the replica system in this example, in some embodiments an originator component and its corresponding counterpart on the replica share the same identifier. Various identification schemes can be used so long as the replica system is able to associate an originator component with its counterpart on the replica.
[0021] In FIG. 5B, a new node 112d is added to the originator system and a corresponding new node 122d is also added to the replica system. Thus, data stored on the originator and replica systems should be rebalanced. A process such as 200 takes place on originator system 102 in this example. Specifically, on the originator system, containers 115, 117, and 119 are redistributed. Rather than resending these containers to the replica, data redistribution information is determined. In this case, containers 115, 117, and 119 have been moved to new node 112d. Thus, data redistribution information is sent to the replica system. In this case, the data redistribution information includes a compact set of metadata pertaining to the containers that are redistributed, including IDs of the containers, IDs of the respective nodes on which the containers previously resided, and IDs of the current nodes to which the containers are redistributed and on which they currently reside. Actual backup data, such as the data segments in the containers, is not sent in this example, and bandwidth is conserved.
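For the FIG. 5B scenario, this compact metadata could be as small as three records, one per moved container (the field names below are carried over from the hypothetical sketches above, not dictated by the patent):

```python
# Hypothetical content of the redistribution information sent for FIG. 5B.
# Only this metadata crosses the network; the segment data in containers
# 115, 117, and 119 is not resent.
redistribution_info = [
    {"container_id": "115", "previous_node_id": "112a", "current_node_id": "112d"},
    {"container_id": "117", "previous_node_id": "112b", "current_node_id": "112d"},
    {"container_id": "119", "previous_node_id": "112c", "current_node_id": "112d"},
]
```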
[0022] In FIG. 5C, a process such as 300 takes place on replica system 104 in this example. Upon receiving the data redistribution information from the originator system, data containers on the replica system are redistributed according to the data redistribution information. In this example, front end device 120 receives and parses the redistribution information, and coordinates with the replica nodes to redistribute the data containers in the same way the corresponding containers are redistributed on the originator. Based on the data redistribution information given, data containers 125, 127, and 129 (which correspond to containers 115, 117, and 119, respectively) are moved to new node 122d.
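A hedged sketch of this replica-side step (process 300), assuming the replica keeps mappings from originator node and container IDs to its local counterparts; the mappings below simply follow the FIG. 5 labels, and all names are illustrative:

```python
# Hypothetical sketch of process 300 (replica side). Mappings follow the
# FIG. 5 labels; a real system might instead use shared identifiers.
NODE_BUDDY = {"112a": "122a", "112b": "122b", "112c": "122c", "112d": "122d"}
CONTAINER_COUNTERPART = {"115": "125", "117": "127", "119": "129"}

def apply_redistribution(replica_placement, redistribution_info):
    """Move each corresponding replica container to the buddy of the
    originator destination node. No container data crosses the network;
    only the compact redistribution information was transferred."""
    for record in redistribution_info:
        replica_container = CONTAINER_COUNTERPART[record["container_id"]]
        replica_placement[replica_container] = NODE_BUDDY[record["current_node_id"]]

replica_placement = {"125": "122a", "127": "122b", "129": "122c"}
apply_redistribution(replica_placement, [
    {"container_id": "115", "previous_node_id": "112a", "current_node_id": "112d"},
])
print(replica_placement)  # container 125 now resides on replica node 122d
```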
[0023] The above process may also be carried out in response to load balancing. In one example, nodes 112a-c and 122a-c are existing nodes, and nodes 112d and 122d are also existing nodes rather than newly added nodes. It is determined that too much data is stored on nodes 112a, 112b and 112c and not enough data is stored on nodes 112d and 122d. Thus, a process similar to what is described in FIGS. 5A-5C is carried out to redistribute data and balance the amount of data stored on various nodes. By using data redistribution information, data containers do not have to be sent across the network and load balancing can be achieved quickly and efficiently.
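The patent leaves the balancing policy open; purely as an illustration, a front end might detect imbalance by comparing per-node container counts and plan moves toward the least-loaded node, then reuse the same redistribution-information flow sketched above:

```python
# Illustrative only: one possible balancing policy; the patent does not
# prescribe how overloaded and underloaded nodes are identified.
from collections import defaultdict

def plan_rebalance(placement, threshold=1.2):
    """Return moves (container ID -> destination node) that shift containers
    from nodes holding more than `threshold` times the mean container count
    toward the least-loaded node."""
    per_node = defaultdict(list)
    for container_id, node in placement.items():
        per_node[node].append(container_id)
    mean = len(placement) / len(per_node)
    target = min(per_node, key=lambda n: len(per_node[n]))
    moves = {}
    for node, containers in per_node.items():
        while node != target and len(containers) > threshold * mean:
            moves[containers.pop()] = target
    return moves

placement = {"115": "112a", "116": "112a", "117": "112a",
             "118": "112b", "119": "112c", "120": "112d"}
print(plan_rebalance(placement))  # containers move off the overloaded node 112a
```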
[0024] Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided. There are many alternative ways of implementing the invention. The disclosed embodiments are illustrative and not restrictive.
[0025] WHAT IS CLAIMED IS:

Claims

1. A system comprising: one or more processors configured to: redistribute one or more originator data subsets among a plurality of originator nodes; and determine data redistribution information pertaining to redistribution of the one or more originator data subsets among the plurality of originator nodes; and a communication interface configured to send data redistribution information to a replica system; wherein the data redistribution information is used by the replica system to redistribute one or more corresponding replica data subsets among a plurality of replica nodes.
2. The system of Claim 1, wherein the data redistribution information includes identification information of one or more previous originator nodes from which the one or more data subsets have been moved, and identification information of one or more current originator nodes on which the originator data subsets currently reside.
3. The system of Claim 1, wherein the one or more originator data subsets include one or more data containers.
4. The system of Claim 1, wherein the redistribution information includes metadata associated with the one or more data containers.
5. The system of Claim 1, wherein the redistribution information includes metadata associated with the one or more data containers, and each of the one or more data containers includes one or more data segments.
6. The system of Claim 1, wherein the one or more originator data subsets and the one or more replica data subsets include identical backup data.
7. The system of Claim 1, wherein the plurality of originator nodes are included in a file system.
8. The system of Claim 1, wherein the data redistribution information does not include backup data.
9. The system of Claim 1, wherein the one or more originator data subsets are redistributed from one or more existing nodes to a newly added node.
10. The system of Claim 1, wherein the one or more originator data subsets are redistributed to rebalance load on the plurality of originator nodes.
11. A method for data replication, comprising: redistributing one or more originator data subsets among a plurality of originator nodes; and determining data redistribution information pertaining to redistribution of the one or more originator data subsets among the plurality of originator nodes; and sending data redistribution information to a replica system; wherein the data redistribution information is used by the replica system to redistribute one or more corresponding replica data subsets among a plurality of replica nodes.
12. The method of Claim 11, wherein the data redistribution information includes identification information of one or more previous originator nodes from which the one or more data subsets have been moved, and identification information of one or more current originator nodes on which the originator data subsets currently reside.
13. The method of Claim 11, wherein the one or more originator data subsets include one or more data containers.
14. The method of Claim 11, wherein the redistribution information includes metadata associated with the one or more data containers.
15. The method of Claim 11, wherein the redistribution information includes metadata associated with the one or more data containers, and each of the one or more data containers includes one or more data segments.
16. The method of Claim 11, wherein the one or more originator data subsets and the one or more replica data subsets include identical backup data.
17. A computer program product for data replication, the computer program product being embodied in a computer readable storage medium and comprising computer instructions for: redistributing one or more originator data subsets among a plurality of originator nodes; and determining data redistribution information pertaining to redistribution of the one or more originator data subsets among the plurality of originator nodes; and sending data redistribution information to a replica system; wherein the data redistribution information is used by the replica system to redistribute one or more corresponding replica data subsets among a plurality of replica nodes.
18. A system comprising: an interface configured to receive data redistribution information from an originator system, the data redistribution information pertaining to redistribution of the one or more originator data subsets among a plurality of originator nodes; and one or more processors configured to redistribute one or more corresponding replica data subsets among a plurality of replica nodes according to the data redistribution information.
19. A method for data replication, comprising: receiving data redistribution information from an originator system, the data redistribution information pertaining to redistribution of the one or more originator data subsets among a plurality of originator nodes; and redistributing one or more corresponding replica data subsets among a plurality of replica nodes according to the data redistribution information.
20. A computer program product for data replication, the computer program product being embodied in a computer readable storage medium and comprising computer instructions for: receiving data redistribution information from an originator system, the data redistribution information pertaining to redistribution of the one or more originator data subsets among a plurality of originator nodes; and redistributing one or more corresponding replica data subsets among a plurality of replica nodes according to the data redistribution information.
PCT/US2010/000947 2009-03-31 2010-03-29 Data redistribution in data replication systems WO2010114598A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP10759144.8A EP2414928B1 (en) 2009-03-31 2010-03-29 Data redistribution in data replication systems
JP2012503418A JP5693560B2 (en) 2009-03-31 2010-03-29 Data redistribution in data replication systems
CN201080015183.4A CN102439560B (en) 2009-03-31 2010-03-29 Data in data copy system are distributed again

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/384,210 US8325724B2 (en) 2009-03-31 2009-03-31 Data redistribution in data replication systems
US12/384,210 2009-03-31

Publications (1)

Publication Number Publication Date
WO2010114598A1 true WO2010114598A1 (en) 2010-10-07

Family

ID=42784174

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2010/000947 WO2010114598A1 (en) 2009-03-31 2010-03-29 Data redistribution in data replication systems

Country Status (5)

Country Link
US (3) US8325724B2 (en)
EP (1) EP2414928B1 (en)
JP (1) JP5693560B2 (en)
CN (1) CN102439560B (en)
WO (1) WO2010114598A1 (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9632722B2 (en) * 2010-05-19 2017-04-25 International Business Machines Corporation Balancing storage unit utilization within a dispersed storage network
US8768973B2 (en) * 2010-05-26 2014-07-01 Pivotal Software, Inc. Apparatus and method for expanding a shared-nothing system
US8538926B2 (en) * 2011-03-08 2013-09-17 Rackspace Us, Inc. Massively scalable object storage system for storing object replicas
US9996540B2 (en) 2011-03-31 2018-06-12 EMC IP Holding Company LLC System and method for maintaining consistent points in file systems using a prime dependency list
US8713282B1 (en) 2011-03-31 2014-04-29 Emc Corporation Large scale data storage system with fault tolerance
US9990253B1 (en) 2011-03-31 2018-06-05 EMC IP Holding Company LLC System and method for recovering file systems without a replica
US9026499B1 (en) 2011-03-31 2015-05-05 Emc Corporation System and method for recovering file systems by restoring partitions
US10210169B2 (en) * 2011-03-31 2019-02-19 EMC IP Holding Company LLC System and method for verifying consistent points in file systems
US8832394B2 (en) 2011-03-31 2014-09-09 Emc Corporation System and method for maintaining consistent points in file systems
US9916258B2 (en) 2011-03-31 2018-03-13 EMC IP Holding Company LLC Resource efficient scale-out file systems
US9619474B2 (en) 2011-03-31 2017-04-11 EMC IP Holding Company LLC Time-based data partitioning
US8706710B2 (en) 2011-05-24 2014-04-22 Red Lambda, Inc. Methods for storing data streams in a distributed environment
US8738572B2 (en) * 2011-05-24 2014-05-27 Red Lambda, Inc. System and method for storing data streams in a distributed environment
US8732140B2 (en) 2011-05-24 2014-05-20 Red Lambda, Inc. Methods for storing files in a distributed environment
US9390147B2 (en) 2011-09-23 2016-07-12 Red Lambda, Inc. System and method for storing stream data in distributed relational tables with data provenance
US9262511B2 (en) * 2012-07-30 2016-02-16 Red Lambda, Inc. System and method for indexing streams containing unstructured text data
JP2014231830A (en) 2013-05-02 2014-12-11 株式会社電子応用 Engine control device
US10802928B2 (en) 2015-09-10 2020-10-13 International Business Machines Corporation Backup and restoration of file system
US10545990B2 (en) * 2016-03-31 2020-01-28 Veritas Technologies Llc Replication between heterogeneous storage systems
CN110058792B (en) 2018-01-18 2022-08-30 伊姆西Ip控股有限责任公司 Method, apparatus and computer program product for expanding storage space
US11960504B2 (en) 2021-09-02 2024-04-16 Bank Of America Corporation Data replication over low-latency network

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6647393B1 (en) * 1996-11-22 2003-11-11 Mangosoft Corporation Dynamic directory service
US20040059805A1 (en) * 2002-09-23 2004-03-25 Darpan Dinker System and method for reforming a distributed data system cluster after temporary node failures or restarts
US20050027817A1 (en) * 2003-07-31 2005-02-03 Microsoft Corporation Replication protocol for data stores
US7222119B1 (en) * 2003-02-14 2007-05-22 Google Inc. Namespace locking scheme

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5634125A (en) * 1993-09-02 1997-05-27 International Business Machines Corporation Selecting buckets for redistributing data between nodes in a parallel database in the quiescent mode
US6415373B1 (en) * 1997-12-24 2002-07-02 Avid Technology, Inc. Computer system and process for transferring multiple high bandwidth streams of data between multiple storage units and multiple applications in a scalable and reliable manner
JP2002500393A (en) * 1997-12-24 2002-01-08 アヴィッド・テクノロジー・インコーポレーテッド Process for scalably and reliably transferring multiple high bandwidth data streams between a computer system and multiple storage devices and multiple applications
WO2002071220A1 (en) * 2001-03-05 2002-09-12 Sanpro Systems Inc. A system and a method for asynchronous replication for storage area networks
US7886298B2 (en) * 2002-03-26 2011-02-08 Hewlett-Packard Development Company, L.P. Data transfer protocol for data replication between multiple pairs of storage controllers on a san fabric
WO2004027650A1 (en) * 2002-09-18 2004-04-01 Netezza Corporation Disk mirror architecture for database appliance
US6928526B1 (en) * 2002-12-20 2005-08-09 Datadomain, Inc. Efficient data storage system
US20040193952A1 (en) 2003-03-27 2004-09-30 Charumathy Narayanan Consistency unit replication in application-defined systems
JP4257834B2 (en) * 2003-05-06 2009-04-22 インターナショナル・ビジネス・マシーンズ・コーポレーション Magnetic disk device, file management system and method thereof
US7305520B2 (en) * 2004-01-30 2007-12-04 Hewlett-Packard Development Company, L.P. Storage system with capability to allocate virtual storage segments among a plurality of controllers
US7590706B2 (en) * 2004-06-04 2009-09-15 International Business Machines Corporation Method for communicating in a computing system
JP2006113927A (en) * 2004-10-18 2006-04-27 Hitachi Ltd Storage device, storage system, snapshot maintenance method and command
JP2008186223A (en) * 2007-01-30 2008-08-14 Nec Corp Information processing system and replicating body changeover method
US20090063807A1 (en) * 2007-08-29 2009-03-05 International Business Machines Corporation Data redistribution in shared nothing architecture
US8751441B2 (en) * 2008-07-31 2014-06-10 Sybase, Inc. System, method, and computer program product for determining SQL replication process

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6647393B1 (en) * 1996-11-22 2003-11-11 Mangosoft Corporation Dynamic directory service
US20040059805A1 (en) * 2002-09-23 2004-03-25 Darpan Dinker System and method for reforming a distributed data system cluster after temporary node failures or restarts
US7222119B1 (en) * 2003-02-14 2007-05-22 Google Inc. Namespace locking scheme
US20050027817A1 (en) * 2003-07-31 2005-02-03 Microsoft Corporation Replication protocol for data stores

Also Published As

Publication number Publication date
CN102439560B (en) 2016-02-10
US20100246578A1 (en) 2010-09-30
US8837480B2 (en) 2014-09-16
US8325724B2 (en) 2012-12-04
JP2012522305A (en) 2012-09-20
US20130124476A1 (en) 2013-05-16
US9025602B2 (en) 2015-05-05
EP2414928A1 (en) 2012-02-08
JP5693560B2 (en) 2015-04-01
CN102439560A (en) 2012-05-02
US20140337293A1 (en) 2014-11-13
EP2414928A4 (en) 2016-06-08
EP2414928B1 (en) 2018-05-02

Similar Documents

Publication Publication Date Title
US9025602B2 (en) Data redistribution in data replication systems
US10860457B1 (en) Globally ordered event stream logging
US10185629B2 (en) Optimized remote cloning
US9848043B2 (en) Granular sync/semi-sync architecture
US10628378B2 (en) Replication of snapshots and clones
US11314690B2 (en) Regenerated container file storing
CN102708165B (en) Document handling method in distributed file system and device
CN111352577B (en) Object storage method and device
US11068537B1 (en) Partition segmenting in a distributed time-series database
CN108076090B (en) Data processing method and storage management system
US9864791B2 (en) Flow for multi-master replication in distributed storage
KR20140026517A (en) Asynchronous replication in a distributed storage environment
CN112765182A (en) Data synchronization method and device among cloud server clusters
CN105744001B (en) Distributed cache system expansion method, data access method and device and system
US9977786B2 (en) Distributed code repository with limited synchronization locking
CN105208060A (en) Service data synchronization method, service data synchronization device and service data synchronization system
US20130246568A1 (en) Data storage system
CN113311996A (en) OSD role adjusting method and device
US11606277B2 (en) Reducing the impact of network latency during a restore operation
KR101748913B1 (en) Cluster management method and data storage system for selecting gateway in distributed storage environment
US10521309B1 (en) Optimized filesystem walk for backup operations
JP6506156B2 (en) Node and gravitation suppression method

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 201080015183.4

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10759144

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 6568/CHENP/2011

Country of ref document: IN

Ref document number: 2010759144

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2012503418

Country of ref document: JP

NENP Non-entry into the national phase

Ref country code: DE