WO2014170952A1 - Computer system, computer system management method, and program - Google Patents
- Publication number
- WO2014170952A1 (PCT/JP2013/061257)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- computer
- sequence number
- control unit
- recovery
- Prior art date
Classifications
- G06F16/275—Synchronous replication
- G06F16/162—Delete operations
- G06F16/219—Managing data history or versioning
- G06F16/2343—Locking methods, e.g. distributed locking or locking implementation details
- G06F16/951—Indexing; Web crawling techniques
Description
- the present invention relates to a distributed database composed of a plurality of computers.
- RDBMS: Relational DataBase Management System
- NoSQL: Not only SQL
- KVS: Key Value Store
- Various configurations of the KVS are known: a configuration that stores data in a volatile storage medium that can be accessed at high speed, for example a memory (memory store); a configuration that stores data in a non-volatile recording medium with excellent data storage durability, for example an SSD (Solid State Drive) or an HDD (disk store); and a configuration that combines the two.
- a cluster is configured from a plurality of servers, and the KVS is configured on a memory of a server included in the cluster.
- Each server constituting the distributed KVS stores data of a predetermined management range (for example, key range). Further, in the distributed KVS, in order to ensure data reliability, each server stores duplicate data of data included in the management range managed by other servers.
- Each server executes processing as a master server for data included in the management range. That is, in response to a read request including a predetermined key, a server that manages a management range including data corresponding to the key reads data corresponding to the key. Each server operates as a slave server for replicated data in the management range managed by other servers.
- In the following description, data managed by a server as a master server is referred to as master data.
- Data managed by a server as a slave server is referred to as slave data.
- the distributed KVS also ensures fault tolerance.
- the number of slave servers, that is, the number of servers that store the replicated data, can be set arbitrarily by the computer system.
- the number of slave servers for one management range is also referred to as multiplicity.
- When a failure occurs in a server, the multiplicity of the distributed KVS is reduced by one. If a number of servers equal to or greater than the multiplicity stop, the business using the distributed KVS can no longer continue. It is therefore necessary to quickly recover the multiplicity of the distributed KVS. In the following description, restoring the multiplicity of the distributed KVS is referred to as "recovery".
- start-up processing of a new server is executed as a substitute for the failed server.
- replication processing is performed to write the data held by the failed server to the new server.
- a server that holds duplicate data of data held by a server in which a failure has occurred transmits the duplicate data to the new server.
- the replication source server and the replication destination server must hold the same data. Therefore, when the data held by the replication source server is updated, it is necessary to write the updated data to the replication destination server.
- As described above, the technique described in Patent Document 1 is known for the recovery processing of a distributed KVS.
- Patent Document 1 states: "(1) Take a snapshot of all data in the memory of the active copy source computer at a certain point in time, transfer it to the copy destination computer, and write it to the memory of the copy destination computer; (2) from the execution of (1) onward, continuously monitor data updates to the memory of the copy source computer, and repeatedly transfer the difference data for each detected update to the copy destination computer and write it to the memory of the copy destination computer; (3) when the size of the difference data becomes equal to or smaller than the size that can be stored in one transmission message, transfer the difference data one last time, write it to the memory of the copy destination computer, and resume the processing of the copy destination computer in synchronization with the copy source computer."
- the communication bandwidth usage (communication amount) accompanying the transmission of snapshots may increase, and the communication performance of the entire system may deteriorate.
- In addition, the duplication source computer transmits difference data to the duplication destination computer, which can cause data that does not need to be transmitted to be sent. For example, if some data is deleted by an update process after the snapshot has been transmitted, difference data that did not actually need to be transmitted has already been sent.
- An object of the present invention is to reduce the amount of memory used and the amount of communication bandwidth used, and to restore a system constituting a distributed KVS without stopping an application.
- A typical example of the invention disclosed in the present application is as follows: a computer system in which a plurality of computers are connected via a network, and in which a business is executed using a database configured from the storage areas of the plurality of computers. The data stored in the database includes identification information of the data, the value of the data, and a sequence number indicating the execution order of events in the database. Each of the plurality of computers distributes and arranges the data for each management range determined by applying a distributed arrangement algorithm to the identification information of the data.
- Each of the plurality of computers includes a data management unit that manages the arranged data and a data control unit that determines the sequence number of an operation on the arranged data, and a newly added computer is allocated a predetermined management range.
- the plurality of computers includes a first computer that transmits a recovery request and a second computer that receives the recovery request.
- the second computer receives the recovery request from the first computer, transitions its state to the recovery state, and executes a replication process that reads one or more pieces of data from the database based on the sequence number and transmits the read data to the first computer as first replicated data.
- When the second computer receives an update command in the recovery state, the second computer determines the sequence number of the update command, updates the predetermined data based on the update command, and executes an update process that transmits the updated data to the first computer as second replicated data.
- At least one of the first computer and the second computer controls the order in which the first replicated data and the second replicated data are written in the first computer, and the first computer executes a write process that writes the first replicated data and the second replicated data into the storage area constituting the database based on that write order.
- According to the present invention, it is possible to restore the computer system while suppressing memory usage and communication bandwidth usage.
- the system can be restored without stopping the business (application).
- FIG. 1 is a sequence diagram for explaining the outline of the present invention.
- the computer system shown in FIG. 1 includes three servers 100 and one client device 200.
- the three servers 100 constitute a cluster, and a distributed database is constructed on a storage area of the server 100.
- the distributed KVS is used as the distributed database.
- the distributed KVS of this embodiment stores a plurality of data in which keys, values, and sequence numbers are associated with each other.
- the cluster of the servers 100 configuring the distributed KVS is simply referred to as a cluster.
- sequence number is a value indicating the execution order of the distributed KVS event.
- a sequence number is assigned to each event in order from “1”.
- An event in the distributed KVS refers to an operation on data (an update process) or a configuration change of the computer system.
- Each server 100 stores data associated with a key, a value, and a sequence number as data management information 300 in a data store 160 (see FIG. 2).
- the key range represents the range of hash values calculated from each data key. Note that various methods such as the consistent hashing method, the range method, and the list method are used as the distributed arrangement algorithm.
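- As an illustration of the distributed arrangement described above, the following minimal sketch (hypothetical server names and hash bounds; the range method is assumed) maps a key's hash value to the server that manages the containing key range.

```python
import hashlib

# Illustrative only: three servers, each owning a contiguous hash-value
# range (the "range method" above). Names and bounds are assumptions.
KEY_RANGES = {
    "server-1": (0, 100),
    "server-2": (100, 200),
    "server-3": (200, 300),
}

def key_hash(key: str) -> int:
    """Hash a key into the value space 0..299."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % 300

def master_of(key: str) -> str:
    """Return the server managing the key range that contains this key."""
    h = key_hash(key)
    for server, (lo, hi) in KEY_RANGES.items():
        if lo <= h < hi:
            return server
    raise KeyError(key)

print(master_of("A"))  # one of server-1..3, depending on the hash
```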
- Each server 100 operates as a master server that manages data (master data) included in a predetermined key range.
- Each server 100 holds duplicate data (slave data) of data included in a key range managed by another server 100 and operates as a slave server.
- FIG. 1 shows the distributed KVS recovery process.
- the key range managed as a master by the failed server 100 is also referred to as a target key range.
- Server 100-1 is a master server of the current target key range.
- the server 100-3 is a server added as a new master server for the target key range.
- the server 100-1 stores master data of the target key range as shown in the data management information 300-1.
- the server 100-2 also stores the same slave data.
- the server 100-3 transmits a recovery request to the server 100-1 (step S101).
- the server 100-1 transitions to a recovery state.
- the server 100-1 stores the largest sequence number among the sequence numbers included in the master data as information for specifying the range of data to be transmitted. That is, the server 100-1 stores the latest sequence number. Thereafter, the server 100-1 starts data replication processing.
- the sequence number that the replication source server 100 stores at the start of the data replication process is also referred to as a replication sequence number.
- the server 100-1 transmits the duplicate data of the data whose key is “A” to the server 100-3 (step S102).
- the server 100-3 stores the received duplicate data in the data store 160 (see FIG. 2).
- the master data held by the server 100-3 is as shown in the data management information 300-2.
- When the server 100-1 receives an update command for updating the value of the key "C" to "DDD" from the client device 200 while in the recovery state (step S103), the server 100-1 and the server 100-2 determine the sequence number of the update command based on the distributed agreement algorithm (step S104). At this time, the data replication process is temporarily stopped.
- a plurality of servers 100 determine the execution order of operations for the distributed KVS based on the distributed agreement algorithm.
- the Paxos algorithm is used as the distributed agreement algorithm.
- Specifically, the server 100-1 transmits a copy of the update command to the server 100-2 and performs distributed agreement on the update command.
- the sequence number of the received update instruction is determined to be “6”.
- the server 100-2 also executes a similar data update process.
- the server 100-1 updates the master data in accordance with the update command (step S105). Specifically, the server 100-1 stores “DDD” in the value of the data corresponding to the key “C” and stores “6” in the sequence number. At this time, the master data is as shown in the data management information 300-3. The server 100-2 similarly updates the data based on the distributed state machine event information 500 (see FIG. 5) generated by executing the distributed agreement.
- the server 100-1 transmits the replicated data of the updated data to the server 100-3 (Step S106).
- the server 100-3 stores the received duplicate data in the data store 160 (see FIG. 2).
- the master data held by the server 100-3 is as shown in the data management information 300-4.
- the server 100-1 resumes the data replication process after the data update process is completed.
- the server 100-1 transmits the duplicate data of the data whose key is “B” to the server 100-3 (step S107).
- the server 100-3 stores the received duplicate data in the data store 160 (see FIG. 2).
- the master data held by the server 100-3 is as shown in the data management information 300-5.
- Note that the order of the data in the data management information 300-5 indicates the order in which the data was written in the server 100-3. As a result, the server 100-3 holds the same master data as in the data management information 300-3, although the write order differs.
- the server 100-1 ends the data replication process.
- As described above, the replication source server 100-1 can transmit all data to the replication destination server 100-3 without acquiring a snapshot. Since small pieces of data are transmitted one at a time and the latest data is always transmitted, the communication bandwidth used in the recovery process can be suppressed. Furthermore, it is not necessary to stop the system in order to maintain data consistency, and no synchronization between the server 100-1 and the server 100-3 is required.
- FIG. 2 is a block diagram showing the configuration of the computer system according to the first embodiment of the present invention.
- the computer system includes a plurality of servers 100 and client devices 200.
- the servers 100 or the server 100 and the client device 200 are connected to each other via a network 250.
- the network 250 may have various wired and wireless configurations such as LAN, WAN, and SAN.
- Any network may be used as long as the server 100 and the client device 200 can communicate with each other.
- the network 250 includes a plurality of network devices (not shown).
- the network device includes, for example, a switch and a gateway.
- the server 100 includes a processor 110, a main storage device 120, an auxiliary storage device 130, and a network interface 140, and constitutes a distributed KVS.
- the server 100 executes various processes according to the request transmitted from the client device 200. Assume that the configuration of each server 100 is the same.
- the server 100 may include an input device such as a keyboard, a mouse, and a touch panel, and an output device such as a display.
- the processor 110 executes a program stored in the main storage device 120.
- the functions of the server 100 can be realized by the processor 110 executing the program.
- When processing is described below with a program as the subject, it means that the program is being executed by the processor 110.
- the main storage device 120 stores a program executed by the processor 110 and information necessary for executing the program.
- the main storage device 120 may be a memory, for example.
- the main storage device 120 of this embodiment stores a program for realizing the data management unit 151, the distributed state machine control unit 152, and the recovery control unit 153. Further, configuration information 170 and distributed agreement history information 180 are stored on the main storage device 120 as necessary information.
- a data store 160 that is a database constituting the distributed KVS is stored.
- the data store 160 of this embodiment stores data that includes a key, a value, and a sequence number.
- the data store 160 of each server 100 stores master data and slave data.
- the auxiliary storage device 130 stores various information.
- the auxiliary storage device 130 may be an HDD or an SSD.
- a disk store (not shown) for constructing the distributed KVS may be constructed on the auxiliary storage device 130.
- the network interface 140 is an interface for connecting to other devices via the network 250.
- the data management unit 151 controls various processes for data managed by the server 100.
- the data management unit 151 receives a command transmitted from the client device 200, and controls data read processing, write processing, and the like based on the command.
- the data management unit 151 also executes processing such as data inquiry to other servers 100 and transmission of processing results to the client device 200.
- the distributed state machine control unit 152 controls the consistency of distributed KVS data in each server 100. Specifically, the distributed state machine control unit 152 determines a sequence number that is an execution order of events input to the distributed KVS by communicating with the distributed state machine control unit 152 of the other server 100.
- the state machine is a system in which the behavior of the target is expressed using “state” and “event”.
- the state machine holds the current state inside, and when an event is input from the outside, the state machine changes the state according to a predetermined rule.
- the distributed state machine is a mechanism for causing one or more state machines existing on a plurality of servers to execute the same behavior in a distributed system (see, for example, Patent Document 2).
- a distributed agreement algorithm is used to determine the order in which events are input.
- A KVS can treat an operation such as an update command for a key as an event, and the data update performed for that operation as a state transition; the KVS can thus be regarded as a set of state machines, one per key. Therefore, in the distributed KVS, a distributed state machine can be used as the mechanism by which every server included in the cluster holds the same data.
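- A minimal sketch of this idea follows: if every replica applies the same agreed events in the same sequence-number order, all replicas hold the same data. The event shape and method names are assumptions, not the patent's exact structure.

```python
# Each replica is a state machine: state = the key-value data, event = an
# agreed update command with a sequence number. Applying identical event
# sequences yields identical states on every server.
class KvsStateMachine:
    def __init__(self):
        self.store = {}       # key -> (value, sequence number)
        self.last_seq = 0     # last applied sequence number

    def apply(self, seq, op, key, value=None):
        assert seq == self.last_seq + 1, "events must apply in agreed order"
        if op == "put":
            self.store[key] = (value, seq)
        elif op == "delete":
            self.store.pop(key, None)
        self.last_seq = seq

replica_a, replica_b = KvsStateMachine(), KvsStateMachine()
for sm in (replica_a, replica_b):
    sm.apply(1, "put", "A", "AAA")
    sm.apply(2, "put", "B", "BBB")
    sm.apply(3, "delete", "A")
assert replica_a.store == replica_b.store  # replicas stay consistent
```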
- each server 100 includes one distributed state machine control unit 152.
- the recovery control unit 153 controls the recovery process.
- the recovery control unit 153 of the replication destination server 100 transmits a recovery request to the replication source server 100 and stores the data transmitted from the replication source in the data store 160.
- the recovery control unit 153 of the replication source server 100 transmits data to the replication destination server 100.
- the recovery control unit 153 holds recovery information 154 used for the recovery process. Details of the recovery information 154 will be described later with reference to FIG. 6.
- the configuration information 170 stores information indicating data storage destinations, that is, information indicating the master server and slave servers of each key range. Details of the configuration information 170 will be described later with reference to FIG. 4.
- the distributed agreement history information 180 stores information related to the agreed content of events. Details of the distributed agreement history information 180 will be described later with reference to FIG. 5.
- the client device 200 includes a processor 210, a main storage device 220, an auxiliary storage device 230, and a network interface 240, and transmits an update command for requesting the server 100 to execute various processes.
- the processor 210 executes a program stored in the main storage device 220.
- the functions of the client device 200 can be realized by the processor 210 executing the program.
- When processing is described below with a program as the subject, it means that the program is being executed by the processor 210.
- the main storage device 220 stores a program executed by the processor 210 and information necessary for executing the program.
- the main storage device 220 may be a memory, for example.
- The main storage device 220 of this embodiment stores programs for realizing the application 251 and the configuration information management unit 252. Further, configuration information 260 is stored in the main storage device 220 as necessary information.
- the auxiliary storage device 230 stores various information.
- the auxiliary storage device 230 may be an HDD or an SSD.
- the network interface 240 is an interface for connecting to other devices via the network 250.
- the application 251 transmits an update command to the server 100. Further, the application 251 receives the processing result for the access request transmitted from the server 100.
- the update command is a command for requesting an operation for data, that is, execution of update processing for data.
- the update process of this embodiment includes data writing, data overwriting, and data deletion.
- the configuration information management unit 252 manages the configuration information 260 for managing the data storage destination.
- the configuration information 260 stores information indicating a data storage destination.
- the configuration information 260 is the same as the configuration information 170.
- the functions of the server 100 and the client device 200 are implemented using software, but the same functions may be implemented using dedicated hardware.
- FIG. 3 is an explanatory diagram showing a format of data stored in the data store 160 according to the first embodiment of the present invention.
- the data store 160 stores data management information 300.
- the data management information 300 includes a plurality of data composed of keys, values, and sequence numbers.
- data including a key, a value, and a sequence number is also referred to as key-value type data.
- the data management information 300 includes Key 301, Value 302, and sequence number 303.
- Key 301 stores an identifier (key) for identifying data.
- the Value 302 stores actual data (value).
- the sequence number 303 stores a value indicating the execution order of the update process (event) for the Key 301.
- the user who operates the client device 200 can store data in the distributed KVS by designating the Key 301, and can acquire desired data from the distributed KVS by designating the Key 301.
- Each server 100 manages key-value data for each predetermined Key 301 range (key range). That is, key value type data is distributed and arranged in each server 100 for each key range.
- the server 100 executes processing as a master server for data in the designated management range 400. As a result, a large amount of data can be processed in parallel and at high speed.
- the format of data stored in the data store 160 is not limited to that shown in FIG. 3, and may be data in a format in which the hash value, value, and sequence number of a key are associated with each other.
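- As a concrete sketch of the key-value type data described above (field names are illustrative, values invented), the data management information 300 can be pictured as a list of records:

```python
from dataclasses import dataclass

@dataclass
class Record:
    key: str    # Key 301: identification information of the data
    value: str  # Value 302: actual data
    seq: int    # sequence number 303: execution order of the update event

# Example contents of data management information 300 (values invented).
data_management_info = [
    Record("A", "AAA", 1),
    Record("B", "BBB", 2),
    Record("C", "CCC", 3),
]
```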
- FIG. 4 is an explanatory diagram showing an example of the configuration information 170 according to the first embodiment of the present invention.
- the configuration information 170 stores information related to the key range of data arranged in each server 100. Specifically, the configuration information 170 includes a server ID 401 and a key range 402.
- the server ID 401 stores an identifier for uniquely identifying the server 100.
- the server ID 401 stores, for example, an identifier, an IP address, a MAC address, and the like of the server 100.
- the key range 402 stores a hash value range for specifying the key range.
- the key range 402 includes a master 403 and a slave 404.
- the master 403 stores a hash value that identifies the key range of the master data.
- the slave 404 stores a hash value that specifies the key range of the slave data of each server 100.
- In the example shown in FIG. 4, the multiplicity of the distributed KVS is 1.
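- A hedged sketch of the configuration information 170 for the multiplicity-1 example above (server IDs and hash bounds are invented):

```python
# server ID 401 -> master 403 / slave 404 hash-value ranges. With
# multiplicity 1, each key range has exactly one slave copy.
configuration_info = {
    "server-1": {"master": (0, 100),   "slave": (200, 300)},
    "server-2": {"master": (100, 200), "slave": (0, 100)},
    "server-3": {"master": (200, 300), "slave": (100, 200)},
}

def master_server(hash_value: int) -> str:
    """Look up the master server for a key's hash value."""
    for server_id, ranges in configuration_info.items():
        lo, hi = ranges["master"]
        if lo <= hash_value < hi:
            return server_id
    raise ValueError(hash_value)
```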
- FIG. 5 is an explanatory diagram illustrating an example of the distributed agreement history information 180 according to the first embodiment of this invention.
- the distributed agreement history information 180 includes a plurality of distributed state machine event information 500.
- the distributed state machine event information 500 stores information on events in the distributed KVS. Specifically, the distributed state machine event information 500 includes a sequence number 501 and proposal content 502.
- Sequence number 501 stores a value indicating the execution order of events.
- the proposal content 502 stores the specific content of the event.
- the proposal content 502 illustrated in FIG. 5 stores a Put instruction 503 including a Key 504 and a Value 505.
- FIG. 6 is an explanatory diagram illustrating an example of the recovery information 154 according to the first embodiment of this invention.
- the restoration information 154 includes a replication sequence number 601, a target key range 602, destination information 603, and a merge sequence number 604.
- the replication sequence number 601 stores a replication sequence number.
- the target key range 602 stores a hash value that identifies the target key range.
- the destination information 603 stores information for specifying the replication destination server 100.
- the destination information 603 stores, for example, the IP address and port number of the server 100.
- the merge sequence number 604 stores a sequence number indicating the execution order of events for adding a new server 100 to the cluster.
- an event for adding a new server 100 to the cluster is also referred to as a member joining event.
- FIG. 7 is a sequence diagram for explaining the outline of the present invention.
- FIG. 7 shows processing executed after step S107 of FIG.
- the server 100-1 transmits all data to be replicated to the server 100-3 (step S107), and then performs distributed agreement on the member joining event with the server 100-2 (step S108). As a result, the sequence number of the member joining event is determined. In the following description, the sequence number of the member joining event is also referred to as the merge sequence number.
- the server 100-1 stores the determined merge sequence number in the recovery information 154. Until the member joining event is executed, the server 100-1 continues processing as the master server of the target key range. That is, an update command assigned a sequence number smaller than the merge sequence number is processed by the server 100-1.
- After the server 100-1 and the server 100-2 have reached distributed agreement on the member joining event, they start executing the member joining event. However, since the sequence number has not yet reached "15" at this point, the process waits for a certain period. If the sequence number is still smaller than the merge sequence number after the certain period has elapsed, the server 100-1 and the server 100-2 perform distributed agreement on NOOP instructions to advance the value of the sequence number.
- the client device 200 transmits an update command for deleting the data of the key “A” to the server 100-1 (step S109). At this time, since the server 100-3 has not been added to the cluster, the update command is transmitted to the server 100-1.
- Upon receiving the update command, the server 100-1 performs distributed agreement with the server 100-2 (step S110). In the example illustrated in FIG. 7, the sequence number of the received update command is determined as "7".
- the server 100-1 updates the master data in accordance with the update command (step S111). Specifically, the server 100-1 deletes the data of the key “A”. At this time, the master data is as shown in the data management information 300-6. Note that the server 100-2 similarly updates the data based on the distributed state machine event information 500 generated by executing the distributed agreement.
- the server 100-1 transmits data instructing data deletion to the server 100-3 (step S112).
- the server 100-3 deletes the data.
- the master data held by the server 100-3 is as shown in the data management information 300-7.
- the server 100-1 executes the processing from step S110 to step S112 until the sequence number reaches “15”.
- the server 100-1 transmits recovery completion data including the merge sequence number to the server 100-3 (step S113), and then releases the recovery state.
- In step S114, the server 100-1 and the server 100-2 execute the member joining event.
- the configuration information 170 is updated so that the server 100-3 becomes a new master server.
- the server 100-1 adds an entry for the server 100-3 to the configuration information 170, and sets the server 100-3 as a master server.
- the server 100-2 and the server 100-3 execute the same processing. Further, the server 100-1 transmits the updated configuration information 170 to the server 100-3 and the client device 200.
- the server 100-3 is added to the cluster, and the server 100-3 processes an update command for data included in the target key range.
- the client device 200 transmits an update command for adding data having a key "D" and a value "EEE" to the server 100-3 (step S115).
- the server 100-3 performs distributed agreement with the server 100-1 and the server 100-2 (step S116), and the sequence number of the received update command is determined as "16".
- the server 100-3 updates the master data in accordance with the update command (step S117). Specifically, the server 100-3 stores data having a key “D”, a value “EEE”, and a sequence number “16”. Each of the server 100-1 and the server 100-2 similarly updates the data based on the distributed state machine event information 500 transmitted when the distributed agreement is executed. At this time, the master data is as shown in the data management information 300-8.
- FIG. 8 is a flowchart for explaining the recovery process executed by the replication source server 100 according to the first embodiment of the present invention.
- the server 100 receives a recovery request from the other server 100 (step S201). Specifically, the recovery control unit 153 receives the recovery request.
- the recovery request includes a hash value for specifying the target key range and destination information of the copy destination server 100.
- the server 100 generates recovery information 154 based on the received recovery request (step S202). Specifically, the following processing is executed.
- First, the recovery control unit 153 acquires the hash value of the target key range and the destination information included in the recovery request.
- the recovery control unit 153 outputs a replication sequence number acquisition request to the distributed state machine control unit 152.
- the acquisition request includes the hash value of the target key range.
- the distributed state machine control unit 152 searches the distributed state machine event information 500 corresponding to the target key range based on the hash value of the target key range included in the acquisition request. Furthermore, the distributed state machine control unit 152 refers to the sequence number 501 of the searched distributed state machine event information 500 and acquires the largest sequence number. That is, the latest sequence number is acquired.
- the distributed state machine control unit 152 outputs the acquired sequence number to the recovery control unit 153 as a duplicate sequence number.
- the recovery control unit 153 generates the recovery information 154 based on the hash value of the target key range, the destination information, and the replication sequence number. Thereafter, the recovery control unit 153 transitions to the recovery state.
- At this point, the recovery information 154 does not yet include the merge sequence number 604.
- The above is the description of the processing in step S202.
- In step S203, the server 100 executes the data replication process. Details of the data replication process will be described later with reference to FIG. 9.
- the server 100 determines the sequence number of the member joining event, that is, the merge sequence number, based on the distributed agreement algorithm (step S204). Specifically, the recovery control unit 153 instructs the distributed state machine control unit 152 to perform distributed agreement on the member joining event. The specific processing will be described later with reference to FIG. 9.
- the server 100 stores the determined merge sequence number in the recovery information 154 (step S205), and ends the process.
- In a system that is not frequently updated, when the difference between the merge sequence number and the replication sequence number is large, the replication destination server 100 cannot be added to the cluster until the event with the merge sequence number occurs. In this case, the distributed state machine control unit 152 waits for the member joining event to occur for a certain period after the process of step S205 is executed. If the sequence number is still smaller than the merge sequence number after the certain period has elapsed, the distributed state machine control unit 152 performs distributed agreement on NOOP instructions a predetermined number of times.
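- The NOOP padding described above can be sketched as follows (the agreement round is stubbed out; the exact mechanics are an assumption):

```python
def advance_to_merge_seq(current_seq, merge_seq, agree_noop):
    """Pad the event log with NOOP agreements until the next sequence
    number is the merge sequence number, so the member joining event
    can occur even in a rarely-updated system."""
    while current_seq < merge_seq - 1:
        agree_noop()        # one distributed agreement consumes one number
        current_seq += 1
    return current_seq

noops = []
advance_to_merge_seq(7, 15, lambda: noops.append("NOOP"))
print(len(noops))  # 7 NOOPs advance the sequence number from 7 to 14
```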
- FIG. 9 is a flowchart for explaining data replication processing executed by the replication source server 100 according to the first embodiment of the present invention.
- the data replication process is executed mainly by the recovery control unit 153.
- First, the recovery control unit 153 acquires an exclusive lock on the data included in the target key range (step S301). While the lock is held, the data update process for the target key range is not executed, so the data replication process and the data update process cannot occur simultaneously.
- If the exclusive lock is already held, the recovery control unit 153 waits until the exclusive lock is released.
- Next, the recovery control unit 153 searches the data included in the target key range for data to be replicated (step S302).
- Specifically, the recovery control unit 153 refers to the data management information 300 in the data store 160 and to the recovery information 154, and searches the data included in the target key range for unsent data whose sequence number is no later than the replication sequence number. In this embodiment, the recovery control unit 153 searches for data whose sequence number is greater than that of the data already transmitted and equal to or less than the replication sequence number.
- Next, the recovery control unit 153 determines whether there is data to be replicated based on the search result (step S303). That is, it determines whether all data to be replicated has been transmitted.
- If there is data to be replicated, the recovery control unit 153 reads the retrieved data and transmits the read data to the replication destination server 100 as replicated data (step S304). Specifically, the following processing is executed.
- First, the recovery control unit 153 selects the data to be transmitted from the retrieved data.
- As the selection method, a method of selecting data in ascending order of sequence number, or a method of selecting data in the order of registration in the key dictionary, can be considered.
- the number of data to be selected may be one or two or more. In this embodiment, it is assumed that the number of selected data is one.
- the information regarding the data selection method and the number of pieces of data to be selected may be preset in the recovery control unit 153 or included in the recovery request.
- Next, the recovery control unit 153 reads the selected data from the data store 160, and transmits the read data to the recovery control unit 153 of the replication destination server 100 as replicated data.
- The above is the description of the processing in step S304.
- Next, the recovery control unit 153 releases the exclusive lock on the target key range (step S305), and returns to step S301.
- If it is determined in step S303 that there is no data to be replicated, the recovery control unit 153 releases the exclusive lock on the target key range (step S306). Thereafter, the recovery control unit 153 instructs the distributed state machine control unit 152 to execute distributed agreement on the member joining event (step S307), and ends the process. At this time, the distributed state machine control unit 152 executes the following processing.
- Upon receiving the instruction, the distributed state machine control unit 152 communicates with the distributed state machine control units 152 of the other servers 100 in accordance with the distributed agreement algorithm, reaches agreement on the processing content of the member joining event, and determines the merge sequence number.
- the distributed state machine control unit 152 of each server 100 stores the distributed state machine event information 500 including the proposal content 502 of the member joining event in the distributed agreement history information 180.
- the proposal content 502 includes, as the processing content of the member joining event, information for updating the configuration information 170 and information for calculating the merge sequence number.
- Information for updating the configuration information 170 includes information corresponding to the target key range 602 and the destination information 603.
- the information for calculating the merge sequence number includes a conditional expression.
- For example, a conditional expression is conceivable in which a predetermined value is added to the sequence number 501 assigned to the distributed state machine event information 500 corresponding to the member joining event, and the resulting value is used as the merge sequence number. A conditional expression in which the sequence number 501 is multiplied by a predetermined value is also conceivable. The present invention is not limited to a particular conditional expression for calculating the merge sequence number.
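- The two conditional expressions mentioned above might look like this (the constants are arbitrary assumptions):

```python
def merge_seq_additive(event_seq, offset=10):
    # add a predetermined value to the member joining event's number
    return event_seq + offset

def merge_seq_multiplicative(event_seq, factor=2):
    # multiply the member joining event's number by a predetermined value
    return event_seq * factor
```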
- After distributed agreement on the member joining event has been reached, the distributed state machine control unit 152 of each server 100 calculates the merge sequence number based on the proposal content 502 of the member joining event. Further, the distributed state machine control unit 152 of each server 100 outputs the calculated merge sequence number to the recovery control unit 153.
- the recovery control unit 153 holds the received merge sequence number.
- Thereafter, the distributed state machine control unit 152 of each server 100 enters a waiting state for a certain period.
- When the merge sequence number is reached, the distributed state machine control unit 152 of each server 100 executes the member joining event that has been waiting.
- the distributed state machine control unit 152 instructs the data management unit 151 to update the configuration information 170.
- the instruction includes information corresponding to the target key range 602 and the destination information 603.
- the data management unit 151 refers to the configuration information 170 and searches for the entry of the replication source server 100.
- the data management unit 151 updates the hash value so that the target key range 602 is removed from the master 403 of the searched entry. Further, the data management unit 151 updates the hash value so that the target key range 602 is included in the slave 404 of the searched entry.
- the data management unit 151 adds a new entry to the configuration information 170 and stores the identifier of the replication destination server 100 in the server ID 401 based on the destination information 603. Further, the data management unit 151 stores the hash value of the target key range 602 in the master 403 of the entry. Further, the data management unit 151 stores a predetermined hash value in the slave 404 of the entry.
- There are various methods for determining the hash value of the slave 404. For example, a method is conceivable in which at least one server 100 holds the hash values of the master 403 and the slave 404 of the failed server as history information, and determines the hash value of the slave 404 based on that history information. A method of determining the hash value of the slave 404 so as to satisfy the multiplicity of the distributed KVS by referring to the slave 404 entries of the other servers 100 is also conceivable. Note that the present invention is not limited to the method for determining the hash value stored in the slave 404.
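- Condensing the FIG. 9 flow into one hedged sketch (reusing the Record shape from the FIG. 3 sketch; `send` stands in for transmission to the replication destination, and one record is sent per lock acquisition):

```python
import threading

def replicate(records, replication_seq, lock: threading.Lock, send):
    sent_seq = 0
    while True:
        with lock:                            # step S301: exclusive lock
            candidates = [r for r in records  # step S302: unsent data with
                          if sent_seq < r.seq <= replication_seq]
            if not candidates:                # step S303: all data sent
                break                         # -> agree member joining event
            record = min(candidates, key=lambda r: r.seq)  # ascending order
            send(record)                      # step S304: replicated data
            sent_seq = record.seq
        # lock released (step S305); a pending update may run before the
        # next iteration, and its replicated data is sent in update order
```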
- FIG. 10 is a flowchart illustrating data update processing executed by the replication source server 100 according to the first embodiment of this invention.
- the data update process is executed mainly by the data management unit 151.
- the data management unit 151 receives an update command from the client device 200 (step S401).
- the data management unit 151 determines the sequence number of the update command (step S402). Specifically, the following processing is executed.
- the data management unit 151 requests the distributed state machine control unit 152 to make a distribution agreement of the update command together with the processing content of the update command.
- the distributed state machine control unit 152 communicates with the distributed state machine control units 152 of the other servers 100 according to the distributed agreement algorithm, reaches agreement on a copy of the update command, and determines the sequence number of the update command.
- the distributed state machine control unit 152 outputs the determined sequence number to the data management unit 151.
- the data management unit 151 determines whether or not it is in a recovery state (step S403). Specifically, the following processing is executed.
- the data management unit 151 transmits an acquisition request for the recovery information 154 to the recovery control unit 153.
- When the recovery information 154 exists, the recovery control unit 153 outputs the recovery information 154 to the data management unit 151. When the recovery information 154 does not exist, the recovery control unit 153 outputs an error notification.
- When the data management unit 151 acquires the recovery information 154, it determines that the server is in the recovery state. On the other hand, when it acquires the error notification, it determines that the server is not in the recovery state.
- If it is determined that the server is not in the recovery state, the data management unit 151 proceeds to step S408.
- the data management unit 151 determines whether the data to be processed in the update command is included in the target key range (step S404).
- the data management unit 151 calculates the hash value of the key included in the update command.
- the data management unit 151 determines whether the calculated hash value of the key is included in the target key range, based on the calculated hash value and the target key range 602 included in the acquired recovery information 154. When the calculated hash value is included in the target key range, it is determined that the data to be operated on is included in the target key range.
- If it is determined that the data to be updated is not included in the target key range, the data management unit 151 proceeds to step S408.
- If the data to be updated is included in the target key range, the data management unit 151 executes the data update process in the recovery state (step S405), and then executes the determination process (step S406).
- the determination process is a process for determining whether or not the recovery process is completed.
- the data management unit 151 notifies the processing result to the client device 200 (step S407), and ends the processing.
- Note that the other servers 100 similarly perform the processing from step S403 to step S408, independently of the processing of the master server.
- In step S408, the data management unit 151 executes the normal data update process, and proceeds to step S407.
- the data management unit 151 acquires an exclusive lock, and stores data in which the key, value, and sequence number are associated with each other in the data management information 300.
- the normal data update process is a known technique, and thus detailed description thereof is omitted.
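- The branching in FIG. 10 (steps S403 and S404) can be sketched as below, assuming `recovery_info` is None outside the recovery state and otherwise carries the target key range as a half-open hash interval (names illustrative):

```python
def route_update(key, recovery_info, key_hash):
    if recovery_info is None:                    # step S403: not recovering
        return "normal data update (S408)"
    lo, hi = recovery_info["target_key_range"]   # step S404: range check
    if not (lo <= key_hash(key) < hi):
        return "normal data update (S408)"
    return "data update in recovery state (S405), then determination (S406)"
```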
- FIG. 11 is a flowchart for explaining data update processing in the recovery state according to the first embodiment of the present invention.
- the data update process in the recovery state is executed mainly by the data management unit 151.
- First, the data management unit 151 acquires an exclusive lock on the data included in the target key range (step S501). While the lock is held, the data replication process for the target key range is not executed, so the data replication process and the data update process cannot occur simultaneously.
- If the exclusive lock is already held, the data management unit 151 waits until the exclusive lock is released.
- the data management unit 151 updates the data based on the update command (step S502). For example, when the update command is a command corresponding to data overwrite processing, the data management unit 151 searches for data to be updated, and overwrites a predetermined value on the value and sequence number of the searched data. Since the data update method is a known technique, a detailed description thereof is omitted.
- the data management unit 151 instructs the recovery control unit 153 to transmit replicated data (step S503).
- the instruction includes updated data.
- Upon receiving the instruction, the recovery control unit 153 refers to the recovery information 154 and transmits the updated data as replicated data to the recovery control unit 153 of the replication destination server 100.
- the data management unit 151 may transmit the updated data as replication data to the recovery control unit 153 of the replication destination server 100. In this case, the data management unit 151 acquires destination information from the recovery control unit 153.
- the data management unit 151 releases the exclusive lock (step S504) and ends the process. At this time, the data management unit 151 instructs the recovery control unit 153 to execute the determination process.
- the data management unit 151 may omit the processes in steps S502 and S503.
- Exclusive locks are acquired in both the data replication process and the data update process in the recovery state. This is to control the transmission order of the replicated data produced by the two processes, which would otherwise execute in parallel.
- the replication source server 100 obtains an exclusive lock and controls the two processes to be executed in series. Thereby, data consistency in the distributed KVS can be maintained.
- Without this control, the replication destination server 100 might receive the replicated data in an order that causes data inconsistency, due to communication delays or the like.
- the replication destination server 100 may receive the replicated data in the order of data deletion and data overwriting due to communication delay. In this case, data inconsistency occurs.
- Therefore, the replication source server 100 executes the two processes in series using an exclusive lock, thereby controlling the transmission order of the replicated data and avoiding the data inconsistency described above.
- In this embodiment, the execution order of the two processes is controlled using an exclusive lock, but the present invention is not limited to this. Any method, such as queuing, may be used as long as the two processes are executed in series.
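- As one example of the queuing alternative mentioned above (structure assumed, not from the patent): both processes enqueue replicated data, and a single sender thread transmits items one at a time, which serializes the transmission order without an explicit lock.

```python
import queue
import threading

outbox = queue.Queue()          # shared by replication and update process

def sender(transmit):
    while True:
        item = outbox.get()     # items leave one at a time -> serial order
        if item is None:        # sentinel: stop the sender
            break
        transmit(item)

sent = []
t = threading.Thread(target=sender, args=(sent.append,))
t.start()
outbox.put(("copy",   ("B", "BBB", 2)))  # from the data replication process
outbox.put(("update", ("C", "DDD", 6)))  # from the data update process
outbox.put(None)
t.join()
print(sent)                     # transmitted strictly in enqueue order
```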
- FIG. 12 is a flowchart illustrating the determination process according to the first embodiment of the present invention.
- the determination process is executed mainly by the recovery control unit 153.
- the recovery control unit 153 starts the process upon receiving an instruction to execute the determination process from the data management unit 151. First, the recovery control unit 153 determines whether the merge sequence number is stored in the recovery information 154 (step S601).
- If the merge sequence number is not stored, the recovery control unit 153 ends the process.
- If the merge sequence number is stored, the recovery control unit 153 determines whether the member joining event will occur next (step S602). Specifically, the following processing is executed.
- First, the recovery control unit 153 acquires the merge sequence number 604 from the recovery information 154.
- Next, the recovery control unit 153 subtracts, from the merge sequence number 604, the sequence number 501 assigned to the update command in the data update process (step S405) executed before this determination process.
- the sequence number 501 is assigned to the distributed state machine event information 500 including the proposal content 502 corresponding to the update command executed in the data update process in the recovery state.
- The recovery control unit 153 then determines whether the calculated value is "1". When the calculated value is "1", the recovery control unit 153 determines that the member joining event will occur next.
- Otherwise, the recovery control unit 153 ends the process.
- the recovery control unit 153 transmits the recovery completion data to the recovery control unit 153 of the replication destination server 100 (step S603).
- the recovery completion data includes the merge sequence number.
- the recovery control unit 153 initializes the recovery information 154 (step S604) and ends the process. Specifically, the recovery control unit 153 deletes all information included in the recovery information 154. As a result, the recovery state is released.
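- A compact sketch of the FIG. 12 determination (field names assumed): recovery completes when the member joining event is the next event, i.e. the merge sequence number minus the last update's sequence number equals 1.

```python
def determination(recovery_info, last_update_seq, send_completion):
    merge_seq = recovery_info.get("merge_seq")
    if merge_seq is None:                      # S601: not yet agreed
        return False
    if merge_seq - last_update_seq != 1:       # S602: joining event not next
        return False
    send_completion({"merge_seq": merge_seq})  # S603: recovery completion
    recovery_info.clear()                      # S604: release recovery state
    return True
```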
- FIG. 13 is a flowchart for explaining a recovery process executed by the replication destination server 100 according to the first embodiment of the present invention.
- First, the replication destination server 100 sets information necessary for the recovery process (step S701), and transmits a recovery request to the replication source server 100 based on the set information (step S702). Specifically, the recovery control unit 153 transmits the recovery request based on information set by the user.
- information for specifying the replication source server 100 and the replication destination server is set.
- For example, the user sets the destination information of the replication destination server 100 and the target key range.
- a recovery request is transmitted to the replication source server 100 based on the set destination information.
- the replication destination server 100 acquires the configuration information 170 from the other servers 100, refers to the acquired configuration information 170, and searches for the master server of the target key range.
- the replication destination server 100 transmits a recovery request to the server 100 corresponding to the found master server.
- When the server 100 receives data from the replication source server 100 (step S703), the server 100 determines whether the received data is replicated data (step S704). Specifically, the recovery control unit 153 determines whether the received data is replicated data.
- If the received data is replicated data, the server 100 writes the received data to the data store 160 (step S705), and returns to step S703. Specifically, the recovery control unit 153 writes the received data to the data store 160. Note that the recovery control unit 153 may request the data management unit 151 to write the data to the data store 160.
- If it is determined in step S704 that the received data is not replicated data, that is, it is recovery completion data, the server 100 registers the merge sequence number (step S706) and ends the process. Specifically, the following processing is executed.
- the recovery control unit 153 acquires the merge sequence number included in the recovery completion data, and outputs a registration request including the acquired merge sequence number to the distributed state machine control unit 152.
- the distributed state machine control unit 152 temporarily holds the merge sequence number.
- the distributed state machine control unit 152 may delete the merge sequence number after the member joining event has occurred.
- the server 100 acquires the updated configuration information 170 from the replication source server 100.
- the present invention is not limited to the method for acquiring the updated configuration information 170.
- For example, the following acquisition methods can be considered.
- The replication source server 100 transmits the updated configuration information 170 to the replication destination server 100 as one piece of replicated data.
- Alternatively, the replication source server 100 transmits recovery completion data including the updated configuration information 170.
- the recovery request includes the hash value of the target key range, but it is not always necessary. For example, when all the servers 100 included in the cluster hold the same data, the user does not need to specify the target key range in step S701. In this case, all data stored in the server 100 is the target of the replication process.
- As described above, since the data update process and the data replication process are executed in parallel, recovery can be performed while maintaining the consistency of the data between the replication source server and the replication destination server.
- Further, since no snapshot is acquired, the memory usage of the replication source server can be reduced.
- In addition, network traffic can be reduced by transmitting the replicated data one piece (or a few pieces) at a time. Further, since updated data is transmitted preferentially, the data for the same key does not need to be transmitted multiple times. Therefore, the amount of network communication in the recovery process can be reduced.
- In step S701, the server 100 that executes the data replication process can also be selected. Specifically, the following processing is executed.
- Upon receiving the target key range from the user, the replication destination server 100 acquires the configuration information 170 from a server 100 included in the cluster.
- the replication destination server 100 refers to the configuration information 170 and displays the master server and slave server information of the target key range to the user. The user selects the server 100 that executes the data replication process based on the displayed information.
- When the master server is selected as the server 100 that executes the data replication process, the same processing as in the first embodiment is performed.
- When a slave server is selected as the server 100 that executes the data replication process, the recovery control unit 153 includes the slave server's identification information and a data replication process execution instruction in the recovery request. The recovery control unit 153 then transmits the recovery request to the slave server.
- the slave server executes the processing shown in FIGS.
- In the first embodiment, the replication source server 100 controls the order in which the replication destination server 100 receives replicated data by acquiring an exclusive lock at the start of the data replication process and the data update process. This avoids the data inconsistencies described above.
- In the second embodiment, the replication destination server 100 instead writes the replicated data to the data store 160 in consideration of the execution order of the two processes.
- In the second embodiment, the server 100 has a buffer for temporarily storing data. The other configurations are the same as in the first embodiment, so their description is omitted. The information stored in the data store 160 also differs in the second embodiment; the other information is the same as in the first embodiment.
- FIG. 14 is an explanatory diagram showing the format of data stored in the data store 160 according to the second embodiment of the present invention.
- The data management information 300 of the second embodiment newly includes a deletion flag 304.
- The deletion flag 304 stores information indicating whether the update process indicates data deletion. In the present embodiment, "True" is stored for an update process indicating data deletion, and "False" is stored for any other update process. A sketch of this record format follows.
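A sketch of a record carrying this flag, assuming a simple in-memory representation (the field names are illustrative, not the patent's format):

```python
from dataclasses import dataclass

@dataclass
class Record:
    key: str               # identification information of the data
    value: bytes           # value of the data
    seq_no: int            # sequence number (execution order of the event)
    deleted: bool = False  # deletion flag 304: True only for delete updates

# An update command instructing deletion would set the flag to "True":
r = Record(key="k1", value=b"v1", seq_no=42)
r.deleted = True
```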
- FIG. 15 is a flowchart for explaining the data replication process executed by the replication source server 100 according to the second embodiment of the present invention.
- In the second embodiment, the processes of step S301, step S305, and step S306 are omitted.
- The other processes are the same as those in the first embodiment.
- FIG. 16 is a flowchart for explaining the data update process in the recovery state according to the second embodiment of the present invention.
- In the second embodiment, step S501 and step S504 are omitted.
- In step S502 and step S503, the processing for an update command that instructs data addition or data overwriting is the same as in the first embodiment.
- In step S502, when the update command is a command to delete data, the data management unit 151 searches for the data to be deleted based on the update command.
- The data management unit 151 then changes the deletion flag 304 of the retrieved data to "True".
- In step S503, the data management unit 151 instructs the recovery control unit 153 to transmit the replicated data to the replication destination server 100.
- This transmission includes replicated data whose deletion flag 304 is "True". Thereafter, the data management unit 151 deletes the data whose deletion flag 304 is "True".
- Upon receiving the instruction, the recovery control unit 153 refers to the recovery information 154 and transmits the replicated data to the recovery control unit 153 of the replication destination server 100. The delete path is sketched below.
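Reusing the hypothetical Record sketch above, the delete path could look like this (the store layout and the send_replica callback are assumptions):

```python
def apply_delete(store: dict, key: str, seq_no: int, send_replica) -> None:
    record = store.get(key)  # search for the data to be deleted (step S502)
    if record is None:
        return
    record.deleted = True    # change deletion flag 304 to "True"
    record.seq_no = seq_no
    send_replica(record)     # the replica carries the flag to the destination (step S503)
    del store[key]           # the local copy is removed after the replica is sent
```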
- FIG. 17 is a flowchart illustrating the recovery process executed by the replication destination server 100 according to the second embodiment of the present invention.
- The processing from step S701 to step S704 is the same as in the first embodiment.
- If it is determined in step S704 that the received data is replicated data, the server 100 temporarily stores the received replicated data in the buffer (step S801). Specifically, the recovery control unit 153 stores the received replicated data in the buffer.
- Next, the server 100 determines whether to write the data stored in the buffer to the data store 160 (step S802).
- When the amount of data stored in the buffer is equal to or greater than a predetermined threshold, the recovery control unit 153 writes the data to the data store 160. The recovery control unit 153 also includes a timer and writes the data to the data store 160 when a certain time has elapsed. A sketch of such a flush policy follows.
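A sketch of such a flush policy, assuming size and time thresholds (the threshold values are arbitrary illustrations):

```python
import time

class ReplicaBuffer:
    """Hypothetical buffer with the two flush triggers described above."""
    def __init__(self, max_bytes: int = 1 << 20, max_age_sec: float = 5.0):
        self.items: list = []
        self.size = 0
        self.last_flush = time.monotonic()
        self.max_bytes, self.max_age_sec = max_bytes, max_age_sec

    def add(self, record, nbytes: int) -> None:
        self.items.append(record)
        self.size += nbytes

    def should_flush(self) -> bool:
        # Flush when the buffered volume crosses the threshold or the timer expires.
        return (self.size >= self.max_bytes or
                time.monotonic() - self.last_flush >= self.max_age_sec)

    def mark_flushed(self) -> None:
        self.items.clear()
        self.size = 0
        self.last_flush = time.monotonic()
```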
- If it is determined not to write the buffered data to the data store 160, the server 100 returns to step S703.
- If it is determined that the replicated data stored in the buffer is to be written to the data store 160, the server 100 writes the replicated data stored in the buffer to the data store 160 (step S803) and then returns to step S703. Specifically, the following processing is executed.
- The recovery control unit 153 refers to the sequence numbers included in the replicated data stored in the buffer and selects the replicated data that includes the smallest sequence number.
- The recovery control unit 153 refers to the key of the selected replicated data and searches the buffer and the data store 160 for replicated data that includes the same key.
- The recovery control unit 153 selects, from the retrieved replicated data, the replicated data that includes the largest sequence number and writes the selected replicated data to the data store 160. The recovery control unit 153 then deletes the retrieved replicated data from the buffer.
- At this point, replicated data whose deletion flag 304 is "True" is also temporarily stored in the data store 160.
- The recovery control unit 153 then determines whether any data remains in the buffer. If no data remains in the buffer, the recovery control unit 153 ends the process; otherwise, it repeats the same processing.
- In the above description, the processing is executed in order of the sequence numbers, but it may instead be executed in the registration order of the key dictionary. The write-ordering algorithm is sketched below.
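Under the same assumed Record shape, step S803 can be sketched as follows; this is an illustrative reading of the algorithm above, not the patent's implementation:

```python
def flush_buffer(buffer: list, store: dict) -> None:
    while buffer:
        # Select the buffered replica with the smallest sequence number.
        oldest = min(buffer, key=lambda r: r.seq_no)
        # Search the buffer and the data store for replicas with the same key.
        same_key = [r for r in buffer if r.key == oldest.key]
        if oldest.key in store:
            same_key.append(store[oldest.key])
        # Keep only the replica with the largest sequence number.
        newest = max(same_key, key=lambda r: r.seq_no)
        store[oldest.key] = newest  # records with deleted=True stay for now
        # Remove the processed replicas from the buffer and repeat until empty.
        buffer[:] = [r for r in buffer if r.key != oldest.key]
```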
- If it is determined in step S704 that the received data is recovery completion data, the server 100 writes the data temporarily stored in the buffer to the data store 160 and then deletes data from the data store 160 based on the deletion flag 304 (step S804). Specifically, the following processing is executed.
- The recovery control unit 153 writes the data stored in the buffer to the data store 160.
- The same writing method as in step S803 is used.
- The recovery control unit 153 then refers to the deletion flag 304 of the data management information 300 and searches for data in which "True" is stored in the deletion flag 304.
- The recovery control unit 153 deletes the retrieved data from the data management information 300. This completion step is sketched below.
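Continuing the same sketch, step S804 would flush the buffer once more and then purge the flagged records (again an assumption-laden illustration, reusing flush_buffer from the previous sketch):

```python
def finish_recovery(buffer: list, store: dict) -> None:
    flush_buffer(buffer, store)  # same writing method as step S803
    # Delete every record whose deletion flag 304 is "True".
    for key in [k for k, record in store.items() if record.deleted]:
        del store[key]
```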
- Since the process in step S706 is the same as in the first embodiment, its description is omitted.
- According to the second embodiment, the processing overhead associated with the exclusive lock control can be reduced.
- The network traffic of the computer system can also be reduced. Further, since the replication source server 100 does not need to acquire a snapshot during the recovery process, its memory usage can be reduced. Moreover, because preferentially transmitting updated data suppresses repeated transmission of the data of the same key, network traffic can be reduced further.
- In addition, data consistency can be maintained by controlling the order in which the replicated data is written. Since data is written to the replication destination server 100 without stopping the update process, there is no need to stop the system. Since the replication destination server 100 is added to the cluster based on the merge sequence number, the consistency of the entire system can be maintained and the system configuration can be changed explicitly.
- The various software illustrated in the present embodiments can be stored in various types of recording media (for example, non-transitory storage media) such as electromagnetic, electronic, and optical media, and can be downloaded to a computer through a communication network such as the Internet.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computing Systems (AREA)
- Human Computer Interaction (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Claims (14)
- A computer system in which a plurality of computers are connected via a network and which executes services using a database built from storage areas included in each of the plurality of computers, wherein
the data stored in the database includes identification information of the data, a value of the data, and a sequence number indicating the execution order of events in the database,
the data is distributed to each of the plurality of computers for each management range determined by applying a distributed placement algorithm to the identification information of the data,
each of the plurality of computers includes a data management unit that manages the placed data, a data control unit that determines the sequence number of an operation on the placed data, and a recovery control unit that transmits replicated data of the data included in a predetermined management range to a newly added computer,
the plurality of computers include a first computer that transmits a recovery request and a second computer that receives the recovery request,
the second computer executes a replication process of receiving the recovery request from the first computer, transitioning the state of the second computer to a recovery state, reading one or more pieces of data from the database based on the sequence number, and transmitting the read data to the first computer as first replicated data, and an update process of, upon receiving an update command for the data in the recovery state, determining the sequence number of the update command, updating predetermined data based on the update command, and transmitting the updated data as second replicated data,
at least one of the first computer and the second computer controls the write order of the first replicated data and the second replicated data in the first computer, and
the first computer executes a write process of writing the first replicated data and the second replicated data to the storage area constituting the database based on the write order.
- The computer system according to claim 1, wherein
each of the plurality of computers holds history information in which the sequence number is associated with the content of an event in the database,
the recovery request includes information indicating the management range to be processed,
in the replication process, the recovery control unit of the second computer,
upon receiving the recovery request, holds the latest sequence number as a replication sequence number based on the history information,
acquires an exclusive lock on the management range to be processed,
reads, from among the data included in the management range to be processed, one or more pieces of untransmitted data whose sequence number is older than the replication sequence number, and transmits the read data to the recovery control unit of the first computer as the first replicated data, and
releases the acquired exclusive lock,
in the update process, the data management unit of the second computer
acquires the exclusive lock on the management range to be processed,
updates the predetermined data based on the update command, outputs an instruction to transmit the first replicated data to the recovery control unit of the second computer, and
releases the acquired exclusive lock,
and the recovery control unit of the second computer transmits the first replicated data to the recovery control unit of the first computer based on the transmission instruction, and
in the write process, the recovery control unit of the first computer writes the first replicated data and the second replicated data to the storage area constituting the database in the order in which they were received.
- The computer system according to claim 1, wherein
each of the plurality of computers
holds history information in which the sequence number is associated with the content of an event in the database, and
has a working storage area for temporarily storing the first replicated data and the second replicated data,
the recovery request includes information indicating the management range to be processed,
in the replication process, the recovery control unit of the second computer,
upon receiving the recovery request, holds the latest sequence number as a replication sequence number based on the history information, and
reads, from among the data included in the management range to be processed, one or more pieces of untransmitted data whose sequence number is older than the replication sequence number, and transmits the read data to the recovery control unit of the first computer as the first replicated data,
in the update process,
the data management unit of the second computer updates the predetermined data based on the update command and outputs an instruction to transmit the first replicated data to the recovery control unit of the second computer, and
the recovery control unit of the second computer transmits the first replicated data to the recovery control unit of the first computer based on the transmission instruction, and
in the write process, the recovery control unit of the first computer
stores the received first replicated data and the received second replicated data in the working storage area,
searches the database and the working storage area for the first replicated data and the second replicated data that include the identification information of the same data,
selects the replicated data that includes the latest sequence number by referring to the sequence number included in the retrieved first replicated data and the sequence number included in the retrieved second replicated data, and
writes the selected replicated data to the storage area constituting the database.
- The computer system according to claim 3, wherein
the data stored in the database further includes a deletion flag indicating whether the data is data to be deleted,
in the update process, when the update command is a data deletion command, the data management unit of the second computer transmits the data to be deleted, to which the deletion flag has been assigned, to the recovery control unit of the first computer as the first replicated data, and
in the write process, the data management unit of the first computer deletes, from the storage area constituting the database, data to which the deletion flag has been assigned among the data written to the database.
- The computer system according to claim 2 or claim 3, wherein
each of the plurality of computers holds configuration information indicating the management range that each of the plurality of computers manages as a master and the management range that it manages as a slave,
in the replication process,
the recovery control unit of the second computer, after the first replicated data including the replication sequence number has been transmitted, instructs the data control unit of the second computer to determine a merge sequence number, which is the sequence number of a merge event for adding the first computer to the plurality of computers, and
the data control unit of the second computer
determines the merge sequence number by communicating with the data control unit of each of the plurality of computers based on a distributed consensus algorithm,
holds the history information in which the determined merge sequence number is associated with the content of the merge event, and
outputs the determined merge sequence number to the recovery control unit of the second computer,
in the update process, the data management unit of the second computer,
after the first replicated data has been transmitted, compares the sequence number included in the first replicated data with the merge sequence number to determine whether the merge event will occur, and
transmits the merge sequence number to the recovery control unit of the first computer when it is determined that the merge event will occur, and
in the merge event, the configuration information is updated so that the first computer becomes the master of the management range to be processed.
- The computer system according to claim 5, wherein the first computer
selects, as the second computer, at least one of a computer that manages the management range to be processed as a master and a computer that manages the management range to be processed as a slave, and
transmits the recovery request to the selected second computer.
- A computer system management method for a computer system in which a plurality of computers are connected via a network and which executes services using a database built from storage areas included in each of the plurality of computers, wherein
each of the plurality of computers has a processor, a memory connected to the processor, and a network interface connected to the processor for communicating with the other computers via the network,
the data stored in the database includes identification information of the data, a value of the data, and a sequence number indicating the execution order of events in the database,
the data is distributed to each of the plurality of computers for each management range determined by applying a distributed placement algorithm to the identification information of the data,
each of the plurality of computers includes a data management unit that manages the placed data, a data control unit that determines the sequence number of an operation on the placed data, and a recovery control unit that transmits replicated data of the data included in a predetermined management range to a newly added computer, and
the plurality of computers include a first computer that transmits a recovery request and a second computer that receives the recovery request,
the method including:
a step in which the second computer receives the recovery request from the first computer, transitions the state of the second computer to a recovery state, and executes a replication process of reading one or more pieces of data from the database based on the sequence number and transmitting the read data to the first computer as first replicated data;
a step in which the second computer, upon receiving an update command for the data in the recovery state, executes an update process of determining the sequence number of the update command, updating predetermined data based on the update command, and transmitting the updated data as second replicated data;
a step in which at least one of the first computer and the second computer controls the write order of the first replicated data and the second replicated data in the first computer; and
a step in which the first computer executes a write process of writing the first replicated data and the second replicated data to the storage area constituting the database based on the write order.
- The computer system management method according to claim 7, wherein
each of the plurality of computers holds history information in which the sequence number is associated with the content of an event in the database,
the recovery request includes information indicating the management range to be processed,
the replication process includes:
a step in which the recovery control unit of the second computer, upon receiving the recovery request, holds the latest sequence number as a replication sequence number based on the history information;
a step of acquiring an exclusive lock on the management range to be processed;
a step of reading, from among the data included in the management range to be processed, one or more pieces of untransmitted data whose sequence number is older than the replication sequence number, and transmitting the read data to the recovery control unit of the first computer as the first replicated data; and
a step of releasing the acquired exclusive lock,
the update process includes:
a step in which the data management unit of the second computer acquires the exclusive lock on the management range to be processed;
a step of updating the predetermined data based on the update command and outputting an instruction to transmit the first replicated data to the recovery control unit of the second computer;
a step of releasing the acquired exclusive lock; and
a step in which the recovery control unit of the second computer transmits the first replicated data to the recovery control unit of the first computer based on the transmission instruction, and
in the write process, the recovery control unit of the first computer writes the first replicated data and the second replicated data to the storage area constituting the database in the order in which they were received.
- The computer system management method according to claim 7, wherein
each of the plurality of computers
holds history information in which the sequence number is associated with the content of an event in the database, and
has a working storage area for temporarily storing the first replicated data and the second replicated data,
the recovery request includes information indicating the management range to be processed,
the replication process includes:
a step in which the recovery control unit of the second computer, upon receiving the recovery request, holds the latest sequence number as a replication sequence number based on the history information; and
a step of reading, from among the data included in the management range to be processed, one or more pieces of untransmitted data whose sequence number is older than the replication sequence number, and transmitting the read data to the recovery control unit of the first computer as the first replicated data,
the update process includes:
a step in which the data management unit of the second computer updates the predetermined data based on the update command and outputs an instruction to transmit the first replicated data to the recovery control unit of the second computer; and
a step in which the recovery control unit of the second computer transmits the first replicated data to the recovery control unit of the first computer based on the transmission instruction, and
the write process includes:
a step in which the recovery control unit of the first computer stores the received first replicated data and the received second replicated data in the working storage area;
a step of searching the database and the working storage area for the first replicated data and the second replicated data that include the identification information of the same data;
a step of selecting the replicated data that includes the latest sequence number by referring to the sequence number included in the retrieved first replicated data and the sequence number included in the retrieved second replicated data; and
a step of writing the selected replicated data to the storage area constituting the database.
- The computer system management method according to claim 9, wherein
the data stored in the database further includes a deletion flag indicating whether the data is data to be deleted,
the update process includes a step in which, when the update command is a data deletion command, the data management unit of the second computer transmits the data to be deleted, to which the deletion flag has been assigned, to the recovery control unit of the first computer as the first replicated data, and
the write process includes a step in which the data management unit of the first computer deletes, from the storage area constituting the database, data to which the deletion flag has been assigned among the data written to the database.
- The computer system management method according to claim 8 or claim 9, wherein
each of the plurality of computers holds configuration information indicating the management range that each of the plurality of computers manages as a master and the management range that it manages as a slave,
the replication process includes:
a step in which the recovery control unit of the second computer, after the first replicated data including the replication sequence number has been transmitted, instructs the data control unit of the second computer to determine a merge sequence number, which is the sequence number of a merge event for adding the first computer to the plurality of computers;
a step in which the data control unit of the second computer determines the merge sequence number by communicating with the data control unit of each of the plurality of computers based on a distributed consensus algorithm;
a step of holding the history information in which the determined merge sequence number is associated with the content of the merge event; and
a step of outputting the determined merge sequence number to the recovery control unit of the second computer,
the update process includes:
a step in which the data management unit of the second computer, after the first replicated data has been transmitted, compares the sequence number included in the first replicated data with the merge sequence number to determine whether the merge event will occur; and
a step of transmitting the merge sequence number to the recovery control unit of the first computer when it is determined that the merge event will occur, and
in the merge event, the configuration information is updated so that the first computer becomes the master of the management range to be processed.
- The computer system management method according to claim 11, the method including:
a step in which the first computer selects, as the second computer, at least one of a computer that manages the management range to be processed as a master and a computer that manages the management range to be processed as a slave; and
a step of transmitting the recovery request to the selected second computer.
- A program executed by a computer included in a computer system in which a plurality of computers are connected via a network and which executes services using a database built from storage areas included in each of the plurality of computers, wherein
each of the plurality of computers has a processor, a memory connected to the processor, and a network interface connected to the processor for communicating with the other computers via the network,
the data stored in the database includes identification information of the data, a value of the data, and a sequence number indicating the execution order of events in the database,
the data is distributed to each of the plurality of computers for each management range determined by applying a distributed placement algorithm to the identification information of the data, and
each of the plurality of computers includes a data management unit that manages the placed data, a data control unit that determines the sequence number of an operation on the placed data, and a recovery control unit that transmits replicated data of the data included in a predetermined management range to a newly added computer,
the program causing the computer to execute:
a procedure of receiving a recovery request from another computer, transitioning the state of the computer to a recovery state, and executing a replication process of reading one or more pieces of the data stored in the database based on the sequence number and transmitting the read data to the other computer as first replicated data;
a procedure of, upon receiving an update command for the data in the recovery state, executing an update process of determining the sequence number of the update command, updating predetermined data based on the update command, and transmitting the updated data as second replicated data; and
a procedure of controlling the write order of the first replicated data and the second replicated data in the other computer.
- The program according to claim 13, wherein
each of the plurality of computers holds history information in which the sequence number is associated with the content of an event in the database,
the recovery request includes information indicating the management range to be processed,
the replication process includes:
a procedure in which the recovery control unit of the computer, upon receiving the recovery request, holds the latest sequence number as a replication sequence number based on the history information;
a procedure of acquiring an exclusive lock on the management range to be processed;
a procedure of reading, from among the data included in the management range to be processed, one or more pieces of untransmitted data whose sequence number is older than the replication sequence number, and transmitting the read data to the recovery control unit of the other computer as the first replicated data; and
a procedure of releasing the acquired exclusive lock, and
the update process includes:
a procedure in which the data management unit of the computer acquires the exclusive lock on the management range to be processed;
a procedure of updating the predetermined data based on the update command and outputting an instruction to transmit the first replicated data to the recovery control unit of the computer;
a procedure of releasing the acquired exclusive lock; and
a procedure in which the recovery control unit of the computer transmits the first replicated data to the recovery control unit of the other computer based on the transmission instruction.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP13882090.7A EP2988220B1 (en) | 2013-04-16 | 2013-04-16 | Computer system, computer-system management method, and program |
US14/426,996 US9892183B2 (en) | 2013-04-16 | 2013-04-16 | Computer system, computer system management method, and program |
PCT/JP2013/061257 WO2014170952A1 (ja) | 2013-04-16 | 2013-04-16 | Computer system, computer system management method, and program |
JP2015512218A JP5952960B2 (ja) | 2013-04-16 | 2013-04-16 | Computer system, computer system management method, and program |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2013/061257 WO2014170952A1 (ja) | 2013-04-16 | 2013-04-16 | Computer system, computer system management method, and program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2014170952A1 (ja) | 2014-10-23 |
Family
ID=51730925
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/061257 WO2014170952A1 (ja) | Computer system, computer system management method, and program | 2013-04-16 | 2013-04-16 |
Country Status (4)
Country | Link |
---|---|
US (1) | US9892183B2 (ja) |
EP (1) | EP2988220B1 (ja) |
JP (1) | JP5952960B2 (ja) |
WO (1) | WO2014170952A1 (ja) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016117322A1 (ja) * | 2015-01-22 | 2016-07-28 | 日本電気株式会社 | Processing request device, processing device, database system, database update method, and program recording medium |
WO2016143095A1 (ja) * | 2015-03-11 | 2016-09-15 | 株式会社日立製作所 | Computer system and transaction processing management method |
JP2017130705A (ja) * | 2016-01-18 | 2017-07-27 | 日本電気株式会社 | Data management system, data management method, and data management program |
CN113822015A (zh) * | 2020-06-16 | 2021-12-21 | 北京沃东天骏信息技术有限公司 | Sequence number generation method and apparatus, electronic device, and computer-readable medium |
JP2022026178A (ja) * | 2020-07-30 | 2022-02-10 | 株式会社日立製作所 | Computer system, configuration change control device, and configuration change control method |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10333724B2 (en) | 2013-11-25 | 2019-06-25 | Oracle International Corporation | Method and system for low-overhead latency profiling |
US9553998B2 (en) | 2014-06-09 | 2017-01-24 | Oracle International Corporation | Sharing group notification |
US20150356117A1 (en) * | 2014-06-09 | 2015-12-10 | Oracle International Corporation | Eventual consistency to resolve subscriber sharing relationships in a distributed system |
US9910740B1 (en) * | 2014-06-30 | 2018-03-06 | EMC IP Holding Company LLC | Concurrent recovery operation management |
US10462218B1 (en) | 2015-07-15 | 2019-10-29 | Google Llc | System and method for sending proposals within a distributed state machine replication system |
US10007695B1 (en) * | 2017-05-22 | 2018-06-26 | Dropbox, Inc. | Replication lag-constrained deletion of data in a large-scale distributed data storage system |
US11275764B2 (en) * | 2018-10-11 | 2022-03-15 | EMC IP Holding Company LLC | Highly resilient synchronous replication with automatic recovery |
JP6972052B2 (ja) * | 2019-02-28 | 2021-11-24 | 株式会社安川電機 | Communication system, communication method, and program |
US11290390B2 (en) | 2019-11-20 | 2022-03-29 | Oracle International Corporation | Methods, systems, and computer readable media for lockless communications network resource quota sharing |
CN111209304B (zh) * | 2019-12-30 | 2023-04-07 | 华为云计算技术有限公司 | Data processing method, apparatus, and system |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5261085A (en) | 1989-06-23 | 1993-11-09 | Digital Equipment Corporation | Fault-tolerant system and method for implementing a distributed state machine |
JP2009199197A (ja) | 2008-02-20 | 2009-09-03 | Hitachi Ltd | Computer system, data matching method, and data matching processing program |
WO2012140957A1 (ja) * | 2011-04-13 | 2012-10-18 | 株式会社日立製作所 | Information storage system and data replication method thereof |
WO2013046352A1 (ja) * | 2011-09-28 | 2013-04-04 | 株式会社日立製作所 | Computer system, data management method, and data management program |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004259079A (ja) * | 2003-02-27 | 2004-09-16 | Hitachi Ltd | Data processing system |
US20060235901A1 (en) * | 2005-04-18 | 2006-10-19 | Chan Wing M | Systems and methods for dynamic burst length transfers |
US9235595B2 (en) * | 2009-10-02 | 2016-01-12 | Symantec Corporation | Storage replication systems and methods |
JP5454201B2 (ja) * | 2010-02-15 | 2014-03-26 | 富士通株式会社 | Data store switching device, data store switching method, and data store switching program |
KR101656384B1 (ko) * | 2010-06-10 | 2016-09-12 | 삼성전자주식会社 | Method of writing data in a nonvolatile memory device |
US20120284231A1 (en) * | 2011-05-06 | 2012-11-08 | International Business Machines Corporation | Distributed, asynchronous and fault-tolerant storage system |
US8676951B2 (en) * | 2011-07-27 | 2014-03-18 | Hitachi, Ltd. | Traffic reduction method for distributed key-value store |
JP5733124B2 (ja) * | 2011-09-12 | 2015-06-10 | 富士通株式会社 | Data management device, data management system, data management method, and program |
US20140019573A1 (en) * | 2012-07-16 | 2014-01-16 | Compellent Technologies | Source reference replication in a data storage subsystem |
- 2013-04-16 EP EP13882090.7A patent/EP2988220B1/en active Active
- 2013-04-16 JP JP2015512218A patent/JP5952960B2/ja active Active
- 2013-04-16 WO PCT/JP2013/061257 patent/WO2014170952A1/ja active Application Filing
- 2013-04-16 US US14/426,996 patent/US9892183B2/en active Active
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016117322A1 (ja) * | 2015-01-22 | 2016-07-28 | 日本電気株式会社 | Processing request device, processing device, database system, database update method, and program recording medium |
JPWO2016117322A1 (ja) * | 2015-01-22 | 2017-11-02 | 日本電気株式会社 | Processing request device, processing device, database system, database update method, and program |
WO2016143095A1 (ja) * | 2015-03-11 | 2016-09-15 | 株式会社日立製作所 | Computer system and transaction processing management method |
US10747777B2 (en) | 2015-03-11 | 2020-08-18 | Hitachi, Ltd. | Computer system and transaction processing management method |
JP2017130705A (ja) * | 2016-01-18 | 2017-07-27 | 日本電気株式会社 | Data management system, data management method, and data management program |
CN113822015A (zh) * | 2020-06-16 | 2021-12-21 | 北京沃东天骏信息技术有限公司 | Sequence number generation method and apparatus, electronic device, and computer-readable medium |
JP2022026178A (ja) * | 2020-07-30 | 2022-02-10 | 株式会社日立製作所 | Computer system, configuration change control device, and configuration change control method |
JP7047027B2 (ja) | 2020-07-30 | 2022-04-04 | 株式会社日立製作所 | Computer system, configuration change control device, and configuration change control method |
Also Published As
Publication number | Publication date |
---|---|
EP2988220A4 (en) | 2017-01-11 |
EP2988220B1 (en) | 2020-09-16 |
JP5952960B2 (ja) | 2016-07-13 |
US20150242481A1 (en) | 2015-08-27 |
JPWO2014170952A1 (ja) | 2017-02-16 |
US9892183B2 (en) | 2018-02-13 |
EP2988220A1 (en) | 2016-02-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5952960B2 (ja) | Computer system, computer system management method, and program | |
US10691716B2 (en) | Dynamic partitioning techniques for data streams | |
US9639589B1 (en) | Chained replication techniques for large-scale data streams | |
US9367261B2 (en) | Computer system, data management method and data management program | |
US20170249246A1 (en) | Deduplication and garbage collection across logical databases | |
US11263236B2 (en) | Real-time cross-system database replication for hybrid-cloud elastic scaling and high-performance data virtualization | |
JP6360634B2 (ja) | Computer system and data processing method | |
JP6835968B2 (ja) | Optimizing content storage through stubbing | |
JP2009157785A (ja) | Method for adding standby computer, computer, and computer system | |
JP5686034B2 (ja) | Cluster system, synchronization control method, server device, and synchronization control program | |
US9984139B1 (en) | Publish session framework for datastore operation records | |
US20190188309A1 (en) | Tracking changes in mirrored databases | |
CN113010496A (zh) | 一种数据迁移方法、装置、设备和存储介质 | |
JP6196389B2 (ja) | Distributed disaster recovery file synchronization server system | |
WO2013118270A1 (ja) | Computer system, data management method, and program | |
JP6007340B2 (ja) | Computer system, computer system management method, and program | |
KR101748913B1 (ko) | Cluster management method and data storage system for selecting a gateway in a distributed storage environment | |
US11157454B2 (en) | Event-based synchronization in a file sharing environment | |
US20210248108A1 (en) | Asynchronous data synchronization and reconciliation | |
US11687565B2 (en) | Asynchronous data replication in a multiple availability zone cloud platform | |
US20230409535A1 (en) | Techniques for resource utilization in replication pipeline processing | |
JPWO2012059976A1 (ja) | Program, stream data processing method, and stream data processing computer | |
WO2013073022A1 (ja) | Computer system and failure detection method | |
JP2015165373A (ja) | Node and program | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13882090 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 2015512218 Country of ref document: JP Kind code of ref document: A |
WWE | Wipo information: entry into national phase |
Ref document number: 14426996 Country of ref document: US |
WWE | Wipo information: entry into national phase |
Ref document number: 2013882090 Country of ref document: EP |
NENP | Non-entry into the national phase |
Ref country code: DE |