EP2353277A1 - Method and system for synchronizing a set of software modules of a computer system distributed as a cluster of servers - Google Patents

Method and system for synchronizing a set of software modules of a computer system distributed as a cluster of servers

Info

Publication number
EP2353277A1
Authority
EP
European Patent Office
Prior art keywords
software
software module
subset
synchronization
identified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP09768166A
Other languages
English (en)
French (fr)
Inventor
Dominique Vinay
Loïc LAMBERT
Philippe Motet
Soazig David
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Active Circle SA
Original Assignee
Active Circle SA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Active Circle SA filed Critical Active Circle SA
Publication of EP2353277A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1095 Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes

Definitions

  • The present invention relates to a method and a system for synchronizing a set of software modules of a computer system distributed over a plurality of interconnected networked servers. It also relates to a computer program for the implementation of this method.
  • The invention applies more particularly to a computer system in which each software module executes on a server of the computer system for the management of a set of digital data, at least part of which is replicated on several software modules, and in which the synchronization between two software modules of the set includes a synchronization of the common data that they manage.
  • The digital data are, for example, data describing a service, and the service provided by the computer system is for example a data storage service distributed between the networked interconnected servers, each server being connected to hard disk or magnetic tape storage devices.
  • In this case, the digital data comprise, for example, data describing the users of the storage service, data describing the infrastructure and the operation of the computer system for providing the service, and data describing the stored data and their storage mode.
  • The service provided by the computer system can also be a data transmission service, a data processing service, a computation service, a transaction service, or a combination of these services.
  • the description data is adapted specifically to the service provided.
  • The servers of the computer system on which the software modules run are generally interconnected by at least one LAN ("Local Area Network") and/or WAN ("Wide Area Network") type network.
  • This set of networked interconnected servers can be called a "server cluster", and the software modules are usually called the "nodes" of the cluster.
  • In known systems, a server or a particular software module is in principle dedicated to the management of the whole set of software modules, in particular for the synchronization of the replicated digital data. This poses problems when the server or the software module dedicated to the management of the set is faulty.
  • For example, in the patent application published under the number FR 2 851 709, provision is made for a service to be provided, via a communication network, to a user by a main server associated with a database.
  • Auxiliary servers connected to this main server are also provided in the communication network to make this service more quickly accessible to the user. But they must then be synchronized with the main server, including its database.
  • the communication network is provided with specific means of synchronization, for example implemented in resource servers. It thus appears that certain elements of the communication network, the main server and the resource servers, have a very particular role and their failure may have immediate consequences on the quality of service provided.
  • In another known arrangement, a computer cluster system provides that several computers can locally copy the same data from common storage means.
  • A coupling system between the computers provides for the updating of the common storage means whenever a locally copied data item is modified by a computer, so that the other computers can update their locally copied data with reference to the common storage means.
  • the architecture of the system provides a particular role for the coupling system and the common storage means.
  • The subject of the invention is therefore a method for synchronizing a set of software modules of a computer system distributed over a plurality of interconnected networked servers, each software module executing on a server of the computer system for the management of a set of digital data of which at least a part is replicated on several software modules, in which the synchronization between two software modules of the set comprises a synchronization of the common data that they manage, characterized in that it comprises the following steps:
  • the candidate software module is synchronized with the other software module found
  • the candidate software module and this other software module found are integrated in this new identified subset.
  • The choice of at least one software module of this identified subset to achieve the synchronization is a function of a workload of at least a portion of the software modules of this identified subset. It is thus possible to distribute advantageously the workload caused by the synchronization itself.
  • a synchronization method according to the invention may further comprise the following steps: detection, by a first identified subset, of a second identified subset,
  • This synchronization method tends to favor the merging of homogeneous subsets as long as the complete synchronization of all the software modules of the set has not been obtained.
  • a software module is selected to be specifically marked as an identifier of that identified subset.
  • This software module selected in each subset then acts as a marker allowing all the other software modules to recognize that they belong to an identified subset.
  • a synchronization method according to the invention may further comprise the following steps:
  • a synchronization method may further comprise the following steps:
  • For each subset thus constituted and identified, a permanent synchronization mechanism of the software modules between them is provided, which is particularly effective and not centralized.
  • The execution of an action on a first software module of a subset has as a consequence, through the transmission of a message identifying this action, the execution of this same action on all the other software modules of the same subset managing a replication of the digital data concerned by this action. Therefore, whatever the software module on which the action is first executed, it performs a function of synchronization manager in the subset and the result is the same: everything happens as if the action had been executed on all the software modules containing the digital data concerned by the action, in the subset considered.
  • No software module plays a privileged or particular role from the point of view of digital data management, which makes the entire computer system less vulnerable to service continuity failures in the event of a failure of a software module or of a server.
  • a synchronization method according to the invention may further comprise the following steps:
  • for the candidate software module, which comprises a part of the digital data: extraction of a state of the replications of this part of the digital data on at least one other software module, and registration of the candidate software module as a potential receiver of at least one synchronization message identifying an action on a replication of its digital data located on another software module,
  • the invention also relates to a system for synchronizing a set of software modules of a computer system, comprising a plurality of interconnected networked servers, each software module running on a server of the computer system for the management of a set of data, at least a part of which is replicated on several software modules, in which the synchronization between two software modules of the set comprises a synchronization of the common data that they manage, characterized in that it comprises:
  • The invention also relates to a computer program, downloadable from a communication network and/or recorded on a computer-readable medium and/or executable by a processor, characterized in that it comprises program code instructions for performing the steps of a synchronization method as defined above, when said program is run on a computer.
  • FIG. 1 schematically represents the general structure of a data storage computer system distributed over several interconnected networked servers,
  • FIG. 2 illustrates an example of distribution of description data in the computer system of FIG. 1,
  • FIG. 3 illustrates the successive steps of a synchronization method implemented in the system of FIG. 1 according to one embodiment of the invention,
  • FIG. 4 represents a diagram of the states, and of the transitions between these states, of the software and hardware modules of the computer system of FIG. 1,
  • FIGS. 5 and 6 partially illustrate the successive steps of synchronization methods according to other embodiments of the invention,
  • FIGS. 7 and 8 illustrate examples of implementation of a particular synchronization step of the synchronization method of FIG. 3,
  • FIG. 9 illustrates an exemplary implementation of another particular synchronization step of the synchronization method of FIG. 3.
  • The computer system 10 shown in FIG. 1 comprises several servers 12 1, 12 2, 12 3, 12 4 and 12 5, spread over several domains.
  • Each server is of a conventional type and will not be detailed.
  • On each server 12 1, 12 2, 12 3, 12 4 and 12 5 is installed at least one specific software and hardware module 14 1, 14 2, 14 3, 14 4 or 14 5 for the management of a service, for example a data storage service.
  • Five servers and two domains are represented in FIG. 1 for illustrative purposes only, but any other computer system structure distributed over several servers interconnected in a network may be suitable for implementing a synchronization method according to the invention.
  • Likewise, one software and hardware module per server is shown, so that a module and its respective server may be treated as one and the same in the following description, although they need not be in a more general implementation of the invention.
  • The software and hardware module 14 1 of the server 12 1 is detailed in FIG. 1. It comprises a first software layer 16 1 consisting of the operating system of the server 12 1. It comprises a second software layer 18 1 for managing the description data of the data storage service provided by the computer system 10.
  • It further comprises a third software and hardware layer 20 1 fulfilling at least two functions: a first function of storing, on an internal hard disk of the server 12 1, data of the storage service, and a second function of caching, also on this hard disk, data stored on the storage devices of the server 12 1.
  • It finally comprises a fourth software and hardware layer 22 1, 24 1 of data warehouses, comprising at least one hard disk data warehouse 22 1 and/or at least one magnetic tape data warehouse 24 1.
  • a data warehouse designates a virtual data storage space consisting of one or more disk partitions, or one or more magnetic tapes, among the storage devices of the server with which it is associated.
  • The software and hardware modules 14 2, 14 3, 14 4 and 14 5 of the servers 12 2, 12 3, 12 4 and 12 5 will not be detailed because they are similar to the software and hardware module 14 1.
  • The servers 12 1, 12 2 and 12 3 are interconnected by a first LAN-type network 26 to form a first subset or domain 28.
  • This first domain 28 corresponds, for example, to a localized geographic organization, such as a geographical site, a building or a computer room.
  • the servers 12 4 and 12 5 are interconnected by a second network 30 of the LAN type to create a second subset or domain 32.
  • This second domain 32 also corresponds, for example, to another localized geographical organization, such as a geographical site, a building or a computer room.
  • These two domains are interconnected by a WAN type network 34, such as the Internet network.
  • Such a clustered computer system distributed over several geographical sites makes it possible to envisage data storage that is all the more secure since the data can be replicated on software and hardware modules located at different geographical sites.
  • the storage service provided by this computer system 10 and the data actually stored are advantageously completely defined and described by a set of description data which will be described in their general principles with reference to FIG. 2.
  • the description data are for example grouped into several sets structured according to their nature and possibly linked together.
  • A structured set, which will be called a "catalog" in the following description, may take the form of a tree of directories, themselves containing other directories and/or description data files.
  • the representation of the description data according to a tree of directories and files has the advantage of being simple and therefore economical to design and manage. In addition, this representation is often sufficient for the service in question. It is also possible for more complex applications to represent and manage the description data in relational databases.
  • A catalog of description data may be global, that is to say relate to description data useful to the entire computer system 10, or local, that is to say relate to description data specific to one or more of the software and hardware modules 14 1, 14 2, 14 3, 14 4 or 14 5 for managing the service.
  • each catalog is replicated on several servers or software and hardware modules. When it is global, it is preferably replicated on all software and hardware modules. When it is local, it is replicated on a predetermined number of software and hardware modules, including at least those it concerns.
  • FIG. 2 represents a possible distribution of description data catalogs between the five software and hardware modules 14 1, 14 2, 14 3, 14 4 and 14 5.
  • A first global catalog C A is replicated on the five software and hardware modules 14 1, 14 2, 14 3, 14 4 and 14 5. It comprises, for example, data describing the general infrastructure and the general operation of the computer system 10 for the provision of the service, in particular the tree structure of the domains and the software and hardware modules of the computer system 10. It may also comprise data describing potential users of the data storage service and their access rights, for example previously registered users, as well as the sharing zones, the structure or the storage mode and the replication of stored data. Other catalogs are local, such as the catalog C B1, containing description data specific to the software and hardware module 14 1, such as data relating to the local infrastructure and the local operation of the server 12 1 and its peripherals.
  • The catalog C B1 can be replicated in several different domains.
  • In the example illustrated, with the complete system comprising the two domains 28 and 32, the catalog C B1 is for example saved on the modules 14 1 and 14 2 of the domain 28 and on the module 14 5 of the domain 32.
  • The software and hardware modules 14 2, 14 3, 14 4 and 14 5 are associated with respective local catalogs C B2, C B3, C B4 and C B5.
  • The catalog C B2 is saved on the modules 14 2 and 14 3 of the domain 28 and on the module 14 4 of the domain 32;
  • the catalog C B3 is saved on the module 14 3 of the domain 28 and on the modules 14 4 and 14 5 of the domain 32;
  • the catalog C B4 is saved on the module 14 4 of the domain 32 and on the modules 14 1 and 14 3 of the domain 28;
  • the catalog C B5 is saved on the module 14 5 of the domain 32 and on the modules 14 1 and 14 2 of the domain 28.
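  • As a non-limiting illustration, this replication layout of FIG. 2 could be represented in memory as a simple map from catalogs to the modules holding a replication of them; the Python sketch below uses hypothetical names (REPLICATION_MAP, modules_holding) that do not appear in the patent:

```python
# Illustrative only: a possible representation of the catalog replication layout of FIG. 2.
# Catalog and module names mirror the description above; all identifiers are hypothetical.
REPLICATION_MAP = {
    "C_A":  {"14_1", "14_2", "14_3", "14_4", "14_5"},  # global catalog, replicated on every module
    "C_B1": {"14_1", "14_2", "14_5"},
    "C_B2": {"14_2", "14_3", "14_4"},
    "C_B3": {"14_3", "14_4", "14_5"},
    "C_B4": {"14_4", "14_1", "14_3"},
    "C_B5": {"14_5", "14_1", "14_2"},
}

def modules_holding(catalog: str) -> set:
    """Return the software and hardware modules holding a replication of the given catalog."""
    return REPLICATION_MAP.get(catalog, set())

if __name__ == "__main__":
    # A synchronization message concerning catalog C_B2 would be addressed to these modules:
    print(sorted(modules_holding("C_B2")))
```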
  • The synchronization method illustrated in FIG. 3 aims to group the software and hardware modules 14 1, 14 2, 14 3, 14 4 and 14 5 of the computer system 10 into at least one identified subset in which all the software and hardware modules are activated and synchronized with each other. Its purpose may even be to result in a single synchronized subset, which then groups together all the software and hardware modules 14 1, 14 2, 14 3, 14 4 and 14 5 of the computer system 10, synchronized with each other.
  • This method therefore primarily serves to manage the start or restart of a software and hardware module of the computer system 10, for its integration into one of the existing synchronized subsets or into a new subset to be created.
  • It also aims to manage shutdowns of software and hardware modules, network breaks, detections of one synchronized subset by another, and so on: as many events likely to make the synchronized subsets of the computer system 10 evolve.
  • During a first step 100, resulting for example from the activation of a software and hardware module 14 i following the start of a new server in the computer system 10 or the restart of an existing server, this software and hardware module, activated but not yet synchronized with another software and hardware module of the computer system 10, searches for another activated software and hardware module of the computer system 10.
  • If another activated software and hardware module belonging to an identified synchronized subset is found, a step 102 of selecting at least one of the software and hardware modules of the identified subset is carried out, in order to synchronize the software and hardware module 14 i with the selected software and hardware module(s).
  • The selection is based on the digital data of the software and hardware module 14 i which must be synchronized.
  • The software and hardware modules of the identified subset concerned by this selection are therefore those that manage data in common with the software and hardware module 14 i.
  • This selection is also a function of a workload of at least a portion of the software and hardware modules of this identified subset.
  • A software and hardware module of the identified subset may thus not be selected.
  • If a solicited software and hardware module temporarily has an overload, it can indicate to the software and hardware module 14 i that it should choose another software and hardware module. If no solicited software and hardware module is available at a given time, the software and hardware module 14 i can wait until the activity peak ends before synchronizing.
  • In this way, the workload is distributed equitably between the solicited software and hardware modules if a large number of software and hardware modules start at the same time or if a software and hardware module starts while the others are very busy.
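  • As a non-limiting illustration, the workload-dependent selection described above could look like the following Python sketch; the module class, the workload scale and the overload threshold are assumptions made for the example and are not specified by the patent:

```python
from typing import Optional

# Assumed convention for this sketch: workload in [0.0, 1.0], decline above this threshold.
OVERLOAD_THRESHOLD = 0.8

class ModuleStub:
    """Minimal stand-in for a software and hardware module of an identified subset."""
    def __init__(self, name: str, workload: float, shared_data: set):
        self.name = name
        self.workload = workload        # current workload of the solicited module
        self.shared_data = shared_data  # digital data it manages in common with others

    def accepts_synchronization(self) -> bool:
        # An overloaded module may indicate that another module should be chosen.
        return self.workload < OVERLOAD_THRESHOLD

def select_partner(candidate_data: set, subset_modules: list) -> Optional[ModuleStub]:
    """Choose a synchronization partner in the identified subset.

    Only modules managing data in common with the candidate are considered, and among
    those, the least loaded module that accepts is preferred. Returns None if every
    relevant module is overloaded, in which case the candidate waits and retries later.
    """
    relevant = [m for m in subset_modules if m.shared_data & candidate_data]
    available = [m for m in relevant if m.accepts_synchronization()]
    if not available:
        return None
    return min(available, key=lambda m: m.workload)

if __name__ == "__main__":
    subset = [ModuleStub("14_2", 0.9, {"C_A", "C_B1"}), ModuleStub("14_3", 0.3, {"C_A", "C_B2"})]
    partner = select_partner({"C_A", "C_B1"}, subset)
    print(partner.name if partner else "wait and retry later")   # prints 14_3
```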
  • During a step 104, a synchronization of the software and hardware module 14 i with the selected module or modules is performed.
  • a non-limiting example of such a synchronization will be detailed with reference to FIG. 9.
  • The software and hardware module 14 i is then integrated into the identified synchronized subset, which therefore now includes one more element.
  • The method then proceeds to a step 108, during which the software and hardware modules of the identified subset are permanently maintained synchronized with each other according to a predetermined mechanism, a non-limiting example of which will be detailed with reference to FIGS. 7 and 8.
  • the computer system 10 monitors any event likely to change the identified subset: detection of another synchronized subset with which to merge, detection of a connection break between two parts of the subset then intended to separate, loss of a software and hardware module (for example by stopping the corresponding server), etc.
  • Following step 100, if another activated software and hardware module 14 j is found but does not belong to an identified synchronized subset, a step of synchronization of the software and hardware module 14 i with the software and hardware module 14 j is carried out.
  • This synchronization may be identical to that envisaged in step 104. It will be detailed with reference to FIG. 9.
  • A new synchronized subset is then created and identified, into which both software and hardware modules 14 i and 14 j are integrated.
  • The method then proceeds to a step 114, during which the two software and hardware modules 14 i and 14 j of this new identified subset are permanently maintained synchronized with each other according to a predetermined mechanism that may be identical to that envisaged in step 108 and which will be detailed with reference to FIGS. 7 and 8. Also during this step, the computer system 10 monitors any event that may change the new subset created: detection of another synchronized subset with which to merge, detection of a connection break between the two software and hardware modules 14 i and 14 j, which are then intended to separate, loss of a software and hardware module (for example by stopping the corresponding server), etc.
  • Following step 100, if no other activated software and hardware module is found, the software and hardware module 14 i cannot be synchronized and remains isolated, although activated and therefore operational.
  • The method then proceeds to a step 116, during which the computer system 10 monitors any event likely to change the synchronization situation of the software and hardware module 14 i: detection of a synchronized subset into which it could be integrated, detection of another activated but isolated software and hardware module with which it could be synchronized, shutdown of the server on which it is implemented, etc.
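  • The branching that follows step 100 can be summarized by the following non-limiting Python sketch; the classes Module and Subset and the helper on_module_start are hypothetical stand-ins, and the actual synchronization is the mechanism detailed with reference to FIG. 9:

```python
from typing import Optional, List

class Subset:
    """Minimal stand-in for an identified synchronized subset."""
    def __init__(self, identifier_module: "Module"):
        self.identifier = identifier_module     # module marked as identifier of the subset
        self.members = {identifier_module}
        identifier_module.subset = self

    def add(self, module: "Module") -> None:
        self.members.add(module)
        module.subset = self

class Module:
    """Minimal stand-in for a software and hardware module."""
    def __init__(self, name: str):
        self.name = name
        self.subset: Optional[Subset] = None

    def synchronize_with(self, other: "Module") -> None:
        # Placeholder for the synchronization detailed with reference to FIG. 9.
        print(f"{self.name} synchronizes with {other.name}")

def on_module_start(candidate: Module, activated: List[Module]) -> None:
    """Sketch of the branching after step 100 (search for another activated module)."""
    others = [m for m in activated if m is not candidate]
    if not others:
        # No other activated module found: stay isolated and keep watching (step 116).
        print(f"{candidate.name} stays isolated")
        return
    other = others[0]
    if other.subset is not None:
        # Steps 102 and 104: synchronize with a selected member, then join its subset (step 108 follows).
        candidate.synchronize_with(other)
        other.subset.add(candidate)
        print(f"{candidate.name} joins the subset identified by {other.subset.identifier.name}")
    else:
        # Step 110: synchronize with the isolated module, then create a new identified subset (step 114 follows).
        candidate.synchronize_with(other)
        new_subset = Subset(identifier_module=other)   # 'other' is selected as the identifier
        new_subset.add(candidate)
        print(f"new subset created, identified by {other.name}")

if __name__ == "__main__":
    m1, m2 = Module("14_1"), Module("14_2")
    on_module_start(m2, [m1, m2])   # 14_1 is activated but isolated: a new subset is created
```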
  • each synchronized subset includes a software and hardware module selected to be specifically marked as an identifier of that subset.
  • For example, it is the software and hardware module 14 j which can be selected to identify the new subset created following its detection by the software and hardware module 14 i.
  • A subset is then identified by this selected software and hardware module, so that any event causing the exclusion of this selected software and hardware module from the subset causes the disappearance of this subset and the possible creation of one or more new subsets.
  • Any software and hardware module 14 of the computer system 10 can therefore be in four different states E1, E2, E3 and E4 shown in FIG. 4 with their transitions.
  • In a first state E1, the software and hardware module 14 is stopped.
  • In a second state E2, it is activated but isolated, that is to say without being synchronized with another software and hardware module of the computer system 10.
  • In a third state E3, it is a member of an identified synchronized subset.
  • In a fourth state E4, it is a member of an identified synchronized subset and is marked as the identifier of this subset.
  • A first transition t1 switches the software and hardware module 14 from its stopped state E1 to the step 100 of searching for another activated software and hardware module of the computer system 10.
  • the software and hardware module 14 is activated but in search of a synchronization of the digital data that it manages. This situation is caused for example by starting or restarting the corresponding server.
  • A second transition t2 switches the software and hardware module 14 from step 100 to its activated but isolated state E2. This is the state it is in if, following step 100, it proceeds to step 116 because it has not detected any other activated software and hardware module.
  • A third transition t3 switches the software and hardware module 14 from step 100 to its state E3, member of an identified synchronized subset. This is the state it is in if, following step 100, it goes to step 102 (it joins an existing synchronized subset) or to step 110 (it joins a synchronized subset created by the isolated software and hardware module that it has detected).
  • A fourth transition t4 switches the software and hardware module 14 from its activated but isolated state E2 to its state E4, identifier of a synchronized subset.
  • A fifth transition t5 switches the software and hardware module 14 from its state E3, member of an identified synchronized subset, to its state E4, identifier of a synchronized subset. This is the state it can be in if the subset to which it belongs has lost its identifier module (shutdown of the corresponding server, for example), or if the software and hardware module 14 has itself lost contact with the identifier module of the subset in which it is located, after a connection break. It can then become the identifier of a newly created subset, provided it is selected.
  • Indeed, a new identifier module must be selected for all the software and hardware modules of the subset considered that have lost contact with the initial identifier module: a solution is to create a new subset for all these software and hardware modules and to select one of them to be the identifier of this new subset, for example the software and hardware module 14 i.
  • a sixth transition t6 switches the software and hardware module 14 from its activated but isolated state E2 to its stopped state E1. This transition occurs in two situations:
  • a second situation is the detection by the software and hardware module 14 of an identified synchronized subset or of another software and hardware module in the state E2.
  • A seventh transition t7 switches the software and hardware module 14 from its state E3 or E4, member or identifier of an identified synchronized subset, to its stopped state E1. This transition occurs in two situations:
  • a first situation is the shutdown of the server on which the software and hardware module 14 is located;
  • a second situation is the detection by the subset in which it is located of another identified synchronized subset with which it can merge.
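  • The four states E1 to E4 and the transitions t1 to t7 of FIG. 4 can be summarized by a simple transition table, as in the following non-limiting Python sketch (the intermediate "searching" situation of step 100 is modelled as a pseudo-state, which is an assumption of the sketch):

```python
from enum import Enum

class State(Enum):
    E1 = "stopped"
    E2 = "activated but isolated"
    E3 = "member of an identified synchronized subset"
    E4 = "member and identifier of an identified synchronized subset"

# Pseudo-state used only in this sketch for the intermediate situation of step 100.
SEARCHING = "step 100: searching for another activated module"

# Transition table of FIG. 4: (name, source state(s), target).
TRANSITIONS = [
    ("t1", (State.E1,), SEARCHING),           # start or restart of the corresponding server
    ("t2", (SEARCHING,), State.E2),           # no other activated module detected (step 116)
    ("t3", (SEARCHING,), State.E3),           # joins an existing or newly created subset (step 102 or 110)
    ("t4", (State.E2,), State.E4),            # isolated module becomes identifier of a new subset
    ("t5", (State.E3,), State.E4),            # identifier module lost; this module is selected instead
    ("t6", (State.E2,), State.E1),            # e.g. detection of a subset or of another isolated module
    ("t7", (State.E3, State.E4), State.E1),   # server shutdown, or merge with another detected subset
]

def transitions_from(state):
    """List the transitions available from a given state or pseudo-state."""
    return [(name, target) for name, sources, target in TRANSITIONS if state in sources]

if __name__ == "__main__":
    for name, target in transitions_from(State.E2):
        print(name, "->", target)
```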
  • FIG. 5 illustrates the successive steps implemented optionally by a synchronization method according to the invention when a synchronized subset of the computer system 10 detects another one, in accordance with the second situation of the transition t7 described above.
  • During a first step 200, a first synchronized subset S1 detects a second synchronized subset S2 with which it is possible to merge.
  • Then, each software and hardware module of the subset Sj synchronizes with at least one software and hardware module of the subset Si.
  • A non-limiting example of such a synchronization will be detailed with reference to FIG. 9.
  • The subset Sj is then deleted.
  • All the software and hardware modules initially in this subset are integrated into the subset Si.
  • Steps 204, 206 and 208 apply to all the software and hardware modules of the subset Sj and accompany their successive transitions t7, t1 and t3.
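  • A non-limiting Python sketch of this merging of two identified subsets is given below; the Subset class and the synchronize placeholder are hypothetical, and the convention that Sj denotes the subset that disappears simply follows the description above:

```python
class Subset:
    """Minimal stand-in for an identified synchronized subset."""
    def __init__(self, name: str, modules):
        self.name = name
        self.modules = set(modules)

def synchronize(module: str, partner: str) -> None:
    # Placeholder for the synchronization detailed with reference to FIG. 9.
    print(f"{module} synchronizes with {partner}")

def merge(s_i: Subset, s_j: Subset) -> Subset:
    """Merge the subset Sj into the subset Si.

    Each software and hardware module of Sj synchronizes with at least one module
    of Si, then Sj is deleted and all of its modules are integrated into Si
    (following, for each of them, the successive transitions t7, t1 and t3).
    """
    partner = next(iter(s_i.modules))
    for module in sorted(s_j.modules):
        synchronize(module, partner)
        s_i.modules.add(module)
    s_j.modules.clear()          # the subset Sj disappears
    return s_i

if __name__ == "__main__":
    s_i = Subset("Si", {"14_1", "14_2"})
    s_j = Subset("Sj", {"14_4", "14_5"})
    print(sorted(merge(s_i, s_j).modules))
```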
  • FIG. 6 illustrates the successive steps implemented optionally by a synchronization method according to the invention when a connection break within a synchronized subset of the computer system 10 generates two complementary parts of this subset, each comprising at least one software and hardware module, which can no longer communicate with each other, in accordance with the situation of the transition t5 described above.
  • a synchronized subset S1 detects a connection break between two complementary parts of S1 each comprising at least one software module.
  • One of the two complementary parts of S1 necessarily includes its identifier module. This part then takes over the identity of S1.
  • the other of the two parts comprises software and hardware modules having lost contact with the identifier module of S1.
  • A new subset S2 is created, into which all the software and hardware modules of this other part are integrated.
  • In this case, no synchronization is necessary. Only a software and hardware module of this new subset S2 must be selected to be its identifier.
  • This software and hardware module selected then follows the transition t5.
  • A modification of a description data item can be completely defined by a determined action A on this description data item.
  • For example, a modification of a description data item relating to a user may be defined by an action on that user's access rights to the computer system 10, selected from among a set of rights comprising system administrator rights, data administrator rights, operator rights and simple user rights.
  • the action A identifies precisely the description data to which it applies and the new value of this description data (in this case: system administrator, data administrator, operator or simple user).
  • Action A is identified by a unique universal identifier and can be saved, so that the current state of a description datum can be retrieved by knowing the initial state of this description datum and the series of actions that have been performed on it since its creation.
  • Each local replication of a description data item D is also associated with a version V that comprises a version number N and a signature S.
  • Any modification or creation made by an action A on a replication of the description data item D also modifies its version V as follows: N ← N + 1, and S is updated using a signature increment Incr(A), where Incr(A) is a random value generated at the execution of the action A on the replication of the relevant description data item.
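  • A non-limiting Python sketch of this versioning rule is given below; since the text only specifies that the signature is updated with the random increment Incr(A), the sketch assumes a commutative combination by addition, which is an assumption of the sketch and not a statement of the patent:

```python
import random
from dataclasses import dataclass

@dataclass
class Replication:
    """Local replication of a description data item D (hypothetical layout)."""
    value: object
    version_number: int = 0   # N
    signature: int = 0        # S

def execute_action(replication: Replication, new_value: object) -> int:
    """Execute an action on a replication and update its version V = (N, S).

    Assumption of this sketch: the signature increment Incr(A) is combined with S
    by addition; addition is commutative, so replications that apply the same set
    of actions in any order end up with the same signature.
    The increment is returned so that it can be carried in the synchronization message.
    """
    incr_a = random.getrandbits(64)     # Incr(A): random value generated at execution time
    replication.value = new_value
    replication.version_number += 1     # N <- N + 1
    replication.signature += incr_a     # assumed rule: S <- S + Incr(A)
    return incr_a

if __name__ == "__main__":
    d_i = Replication(value="operator")
    incr = execute_action(d_i, "data administrator")
    print(d_i.version_number, d_i.signature == incr)   # 1 True
```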
  • An action A is executed on a replication Di of the description data item D, this replication Di being stored by the server 12 i.
  • Before the execution of the action A, the replication Di of the description data item D has a value val, a version number N and a signature S.
  • During the execution of the action A, the replication Di of the description data item D is protected so that other actions on this replication cannot be executed. These other possible actions are put on hold in a list provided for this purpose and are executed sequentially as soon as the action A has finished.
  • A synchronization message M is then generated by the software and hardware module 14 i.
  • This message M comprises the universal identifier of the action A, or a complete description of this action A, as well as the value of the signature increment Incr(A).
  • The message M is transmitted to the software and hardware modules 14 j and 14 k belonging to the same subset as the software and hardware module 14 i and also comprising a replication of the description data item D.
  • Upon receipt of the message M, the software and hardware module 14 j executes the action A on the replication Dj of the description data item D, so as to update its value, its version number and its signature, which then take the respective values val', N' and S'.
  • The update of the version number N is done by applying the same rule as that applied by the software and hardware module 14 i, and the update of the signature is done using the signature increment Incr(A) generated and transmitted by the software and hardware module 14 i.
  • Similarly, the software and hardware module 14 k executes the action A on the replication Dk of the description data item D, so as to update its value, its version number and its signature, which then take the respective values val', N' and S'.
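  • The propagation just described can be sketched as follows in Python; the message layout (SyncMessage) is hypothetical, and the action is reduced, for the example, to the new value that it sets:

```python
import uuid
from dataclasses import dataclass

@dataclass
class Replication:
    value: object
    version_number: int = 0   # N
    signature: int = 0        # S

@dataclass
class SyncMessage:
    """Hypothetical layout of the synchronization message M."""
    action_id: str            # universal identifier of the action A
    new_value: object         # stands in here for a complete description of the action A
    signature_increment: int  # Incr(A), generated by the emitting module

def apply_remote_action(replication: Replication, message: SyncMessage) -> None:
    """Apply, on a local replication, an action first executed on another module.

    The version number follows the same rule as on the emitting module (N <- N + 1)
    and the signature is updated with the transmitted increment Incr(A), so that both
    replications end up with the same version (N', S').
    """
    replication.value = message.new_value
    replication.version_number += 1
    replication.signature += message.signature_increment   # same assumed additive rule as above

if __name__ == "__main__":
    d_j = Replication(value="operator", version_number=3, signature=111)
    m = SyncMessage(str(uuid.uuid4()), "data administrator", 42)
    apply_remote_action(d_j, m)
    print(d_j.version_number, d_j.signature)   # 4 153
```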
  • In the scenario illustrated in FIG. 8, an action A is executed on a first instance of a replication Di of the description data item D, this replication Di being stored by the server 12 i.
  • Before the execution of the action A, the replication Di of the description data item D has a value val, a version number N and a signature S.
  • During a step 502, an action B is executed on one of the other software and hardware modules comprising a replication of the description data item D, namely the software and hardware module 14 j.
  • the action B is executed on a second instance of the replication Dj of the description data D.
  • Before the execution of the action B, the replication Dj of the description data item D also has the value val, the version number N and the signature S.
  • A synchronization message MA is generated by the software and hardware module 14 i.
  • This message MA comprises the universal identifier of the action A, or a complete description of this action A, as well as the value of the signature increment Incr(A).
  • The message MA is transmitted in particular to the software and hardware module 14 j, which comprises the replication Dj.
  • Likewise, a synchronization message MB is generated by the software and hardware module 14 j.
  • This message MB comprises the universal identifier of the action B, or a complete description of this action B, as well as the value of the signature increment Incr(B). During this same step, the message MB is transmitted in particular to the software and hardware module 14 i, which comprises the replication Di.
  • During a step 508, upon receipt of the synchronization message MB, the software and hardware module 14 i executes the action B on the replication Di of the description data item D, so as to update its value, its version number and its signature, which then take the respective values val''', N'' and S'''.
  • The value val''' results from the action B on val', that is to say from the combination of the actions A and B on the value val of the description data item D.
  • Similarly, upon receipt of the synchronization message MA, the software and hardware module 14 j executes the action A on the replication Dj of the description data item D, so as to update its value, its version number and its signature, which then take the same values val''', N'' and S''' as for Di in step 508.
  • The value val''' results from the action A on val'', that is to say from the combination of the actions A and B on the value val of the description data item D.
  • The value N'' is equal to N' + 1, i.e. N + 2.
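  • The convergence described for FIG. 8 can be reproduced with the following non-limiting Python sketch; it keeps the additive-signature assumption already mentioned and, so that the values themselves converge, it uses two actions that commute (each one grants an access right), in line with the example of rights given above:

```python
import random
from dataclasses import dataclass, field

@dataclass
class Replication:
    rights: set = field(default_factory=set)   # value of the description data item (a set of rights)
    version_number: int = 0                     # N
    signature: int = 0                          # S

def execute_local(replication: Replication, right: str) -> int:
    """Execute an action locally (grant a right) and return its signature increment."""
    incr = random.getrandbits(64)
    replication.rights.add(right)
    replication.version_number += 1
    replication.signature += incr               # assumed commutative rule S <- S + Incr
    return incr

def apply_remote(replication: Replication, right: str, incr: int) -> None:
    """Apply the same action on another replication, upon receipt of the synchronization message."""
    replication.rights.add(right)
    replication.version_number += 1
    replication.signature += incr

if __name__ == "__main__":
    d_i = Replication(rights={"operator"}, version_number=1, signature=10)
    d_j = Replication(rights={"operator"}, version_number=1, signature=10)

    incr_a = execute_local(d_i, "data administrator")     # action A executed on Di
    incr_b = execute_local(d_j, "system administrator")   # action B executed on Dj (step 502)

    apply_remote(d_i, "system administrator", incr_b)     # Di receives MB (step 508)
    apply_remote(d_j, "data administrator", incr_a)       # Dj receives MA

    # Both replications converge: same value, same version number N + 2, same signature.
    print(d_i.rights == d_j.rights,
          d_i.version_number == d_j.version_number == 3,
          d_i.signature == d_j.signature)
```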
  • According to this mechanism, during a first step 600 in which the software and hardware module 14 i is in search of another software and hardware module for the synchronization of at least one of its description data catalogs, it selects the software and hardware module 14 j. It of course selects one of the software and hardware modules managing a replication of the catalog that it wishes to update.
  • The software and hardware module 14 i then transmits to the software and hardware module 14 j its identifier and information on the versions of each of the description data items of its catalog (i.e. version number and signature).
  • The software and hardware module 14 j establishes a fixed representation of the contents of its catalog and creates a waiting list for the receipt of any new synchronization message concerning this catalog.
  • The software and hardware module 14 i is registered as possessor of a replication of the catalog and recipient of any synchronization message concerning this catalog. During this step, it also creates a waiting list for receiving any new synchronization message concerning this catalog.
  • The software and hardware module 14 j then compares the versions of the description data of the software and hardware module 14 i with its own.
  • To this end, each node of the tree can be associated with a global signature which represents the sum of the signatures of its "child" data, that is to say the description data located downstream of this node in the tree.
  • The search for differences is made by traversing the tree from its root to its leaves, in other words from upstream to downstream: each time a node of the tree has the same global signature in the two replications of the catalog, this means that this node and the set of "child" data of this node are identical, so that it is not useful to explore further the subtree defined from this node.
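  • This pruned comparison is analogous to a hash-tree traversal; the following non-limiting Python sketch illustrates it with global signatures computed as the sum of the signatures of the child data, all class and function names being hypothetical:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Node:
    """Node of a catalog tree: a description data item (leaf) or a directory (hypothetical layout)."""
    signature: int = 0                                  # own signature, meaningful for a leaf
    children: Dict[str, "Node"] = field(default_factory=dict)

    def global_signature(self) -> int:
        """Global signature of a node: sum of the signatures of its "child" description data."""
        if not self.children:
            return self.signature
        return sum(child.global_signature() for child in self.children.values())

def find_differences(local: Node, remote: Node, path: str = "") -> List[str]:
    """Traverse both catalog replications from root to leaves, pruning identical subtrees."""
    if local.global_signature() == remote.global_signature():
        return []                                       # identical subtree: no need to explore further
    if not local.children and not remote.children:
        return [path]                                   # differing leaf data item
    diffs: List[str] = []
    for name in set(local.children) | set(remote.children):
        diffs += find_differences(local.children.get(name, Node()),
                                  remote.children.get(name, Node()),
                                  f"{path}/{name}")
    return diffs

if __name__ == "__main__":
    local = Node(children={"users": Node(children={"alice": Node(signature=5), "bob": Node(signature=7)})})
    remote = Node(children={"users": Node(children={"alice": Node(signature=5), "bob": Node(signature=9)})})
    print(find_differences(local, remote))              # ['/users/bob']
```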
  • The software and hardware module 14 j then constitutes a first list of description data comprising the values and versions of the description data items for which the version it holds is more recent than that of the software and hardware module 14 i. It also possibly constitutes a second list of description data comprising the identifiers of the description data items for which the version it holds is less recent than that of the software and hardware module 14 i. It then transmits these two lists to the software and hardware module 14 i.
  • The software and hardware module 14 i processes the first list so as to update, in its replication of the catalog, the relevant description data.
  • The software and hardware module 14 i also returns the values and versions of the description data items identified in the second list, and the software and hardware module 14 j processes them so as to update, in its replication of the catalog, the relevant description data. Each time it processes an update of a description data item, it transmits a synchronization message, in accordance with the method described with reference to FIG. 7, to any software and hardware modules of its subset comprising a replication of this description data item, except the software and hardware module 14 i.
  • The frozen representation of the contents of the catalog is deactivated on the software and hardware module 14 j side during a step 614, and the software and hardware module 14 i is informed of this during a step 616.
  • During a step 618, the software and hardware modules 14 i and 14 j are released to process, if appropriate, the synchronization messages received in their respective waiting lists during the entire duration of steps 606 to 616, in order to absorb and delete these waiting lists, and then to be in a position again to carry out the synchronization steps described with reference to FIGS. 7 and 8 when the situation arises.
  • Steps 600 to 618 are repeated as many times as necessary on the software and hardware module 14 i, for the updating of all of its description data catalogs.
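  • The overall exchange of steps 600 to 618 can be sketched as follows in Python; the catalog layout, the way "more recent" is decided (here simply by comparing version numbers) and the handling of the waiting lists (here simply cleared) are simplifying assumptions of the sketch:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

Version = Tuple[int, int]       # (version number N, signature S)

@dataclass
class CatalogHolder:
    """Module-side view of one catalog replication (hypothetical layout)."""
    name: str
    catalog: Dict[str, Tuple[object, Version]] = field(default_factory=dict)
    pending: List[object] = field(default_factory=list)   # waiting list for messages received during the sync

def synchronize_catalog(candidate: CatalogHolder, partner: CatalogHolder) -> None:
    """Sketch of steps 600 to 618: the candidate sends its versions, the partner answers with two lists."""
    frozen = dict(partner.catalog)                         # fixed representation of the partner's catalog

    # The candidate transmits the version (N, S) of each description data item of its catalog.
    candidate_versions = {k: v for k, (_, v) in candidate.catalog.items()}

    # The partner compares the versions with its own and builds the two lists.
    newer_on_partner = {k: frozen[k] for k in frozen
                        if k not in candidate_versions or frozen[k][1][0] > candidate_versions[k][0]}
    newer_on_candidate = [k for k, v in candidate_versions.items()
                          if k not in frozen or v[0] > frozen[k][1][0]]

    # First list: the candidate updates its own replication of the catalog.
    candidate.catalog.update(newer_on_partner)

    # Second list: the candidate returns the corresponding values, which the partner applies
    # (and would rebroadcast to the other modules of its subset, as in FIG. 7).
    for key in newer_on_candidate:
        partner.catalog[key] = candidate.catalog[key]

    # End of the exchange: the frozen representation is dropped and the queued messages
    # would now be processed; they are simply cleared in this sketch.
    candidate.pending.clear()
    partner.pending.clear()

if __name__ == "__main__":
    m_i = CatalogHolder("14_i", {"users/alice": ("operator", (2, 11))})
    m_j = CatalogHolder("14_j", {"users/alice": ("operator", (1, 7)), "users/bob": ("simple user", (3, 21))})
    synchronize_catalog(m_i, m_j)
    print(m_i.catalog == m_j.catalog)   # True: both replications now agree
```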

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Hardware Redundancy (AREA)
  • Multi Processors (AREA)
EP09768166A 2008-11-10 2009-11-10 Verfahren und system zum synchronisieren einer menge von softwaremodulen eines datenverarbeitungssystems, das als cluster von servern verteilt ist Withdrawn EP2353277A1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FR0806252A FR2938356B1 (fr) 2008-11-10 2008-11-10 Procede et systeme de synchronisation d'un ensemble de modules logiciels d'un systeme informatique distribue en grappe de serveurs
PCT/FR2009/052158 WO2010052441A1 (fr) 2008-11-10 2009-11-10 Procede et systeme de synchronisation d'un ensemble de modules logiciels d'un systeme informatique distribue en grappe de serveurs

Publications (1)

Publication Number Publication Date
EP2353277A1 true EP2353277A1 (de) 2011-08-10

Family

ID=40897687

Family Applications (1)

Application Number Title Priority Date Filing Date
EP09768166A Withdrawn EP2353277A1 (de) 2008-11-10 2009-11-10 Verfahren und system zum synchronisieren einer menge von softwaremodulen eines datenverarbeitungssystems, das als cluster von servern verteilt ist

Country Status (5)

Country Link
US (1) US20110218962A1 (de)
EP (1) EP2353277A1 (de)
JP (1) JP2012508412A (de)
FR (1) FR2938356B1 (de)
WO (1) WO2010052441A1 (de)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9230084B2 (en) * 2012-10-23 2016-01-05 Verizon Patent And Licensing Inc. Method and system for enabling secure one-time password authentication
US9411868B2 (en) * 2013-08-23 2016-08-09 Morgan Stanley & Co. Llc Passive real-time order state replication and recovery

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5408506A (en) * 1993-07-09 1995-04-18 Apple Computer, Inc. Distributed time synchronization system and method
US6952741B1 (en) 1999-06-30 2005-10-04 Computer Sciences Corporation System and method for synchronizing copies of data in a computer system
US7181539B1 (en) * 1999-09-01 2007-02-20 Microsoft Corporation System and method for data synchronization
FR2851709A1 (fr) 2002-12-31 2004-08-27 Activia Networks Systeme, serveur et procede fournissant des ressources de synchronisation a un reseau de communications comprenant des serveurs de services
GB0323780D0 (en) * 2003-10-10 2003-11-12 Ibm A data brokering method and system
US7203687B2 (en) * 2004-02-26 2007-04-10 International Business Machines Corporation Peer-to-peer replication member initialization and deactivation
US8051170B2 (en) * 2005-02-10 2011-11-01 Cisco Technology, Inc. Distributed computing based on multiple nodes with determined capacity selectively joining resource groups having resource requirements
US7543020B2 (en) * 2005-02-10 2009-06-02 Cisco Technology, Inc. Distributed client services based on execution of service attributes and data attributes by multiple nodes in resource groups
US7457835B2 (en) * 2005-03-08 2008-11-25 Cisco Technology, Inc. Movement of data in a distributed database system to a storage location closest to a center of activity for the data
US7437601B1 (en) * 2005-03-08 2008-10-14 Network Appliance, Inc. Method and system for re-synchronizing an asynchronous mirror without data loss
US7735051B2 (en) * 2006-08-29 2010-06-08 International Business Machines Corporation Method for replicating and synchronizing a plurality of physical instances with a logical master
US7805503B2 (en) * 2007-05-10 2010-09-28 Oracle International Corporation Capability requirements for group membership
FR2932289B1 (fr) * 2008-06-06 2012-08-03 Active Circle Procede et systeme de synchronisation de modules logiciels d'un systeme informatique distribue en grappe de serveurs, application au stockage de donnees.

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2010052441A1 *

Also Published As

Publication number Publication date
WO2010052441A1 (fr) 2010-05-14
US20110218962A1 (en) 2011-09-08
FR2938356B1 (fr) 2011-06-24
JP2012508412A (ja) 2012-04-05
FR2938356A1 (fr) 2010-05-14


Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20110510

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO SE SI SK SM TR

DAX Request for extension of the european patent (deleted)
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN

18D Application deemed to be withdrawn

Effective date: 20130601