WO2007088728A1 - Multilayer Distributed Processing System - Google Patents
Multilayer Distributed Processing System
- Publication number
- WO2007088728A1 (PCT/JP2007/050587)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- node
- layer
- transaction
- processing system
- distributed processing
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/466—Transaction processing
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/5055—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering software capabilities, i.e. software resources associated or available to the machine
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5061—Partitioning or combining of resources
Definitions
- the present invention relates to a multilayer distributed processing system that executes a transaction by dividing it into a plurality of predetermined layers.
- Multi-layer distributed processing systems that execute a transaction divided into a plurality of layers are known.
- It is known to process each layer of a transaction with a two-phase commit protocol, guaranteeing ACID (Atomicity, Consistency, Isolation, Durability) properties for each layer and thereby ensuring the atomicity of the transaction as a whole (see Patent Document 1).
- The present invention has been made against the background described above, and an object of the present invention is to provide a multi-layer distributed processing system that can easily increase the independence of the plurality of layers into which transactions are divided and executed.
- The multi-layer distributed processing system according to the present invention divides one or more transactions into a plurality of predetermined layers, provides one or more nodes for each of the divided layers, and executes the one or more transactions according to the combination of the provided nodes.
- In this system, an upper node belonging to an upper layer monitors, for a transaction, the execution status of a lower node belonging to a lower layer.
- The lower node has monitoring-enabling means that allows the upper node to monitor the lower node's own execution status, and a functional unit that provides services to the upper node, according to the execution status, via the monitoring-enabling means.
- The upper node has first communication means for monitoring the lower node, and the lower node has second communication means for providing a service to the upper node.
- The upper node has node specifying means for specifying a lower node based on information unique to the lower node and the execution status monitored through the monitoring-enabling means.
- The specified lower node provides a service to the upper node.
- The lower node has means for transmitting, in response to a request from the upper node, information for identifying itself in accordance with an attribute and the execution status of the lower node's functional unit.
- The lower node further includes a storage unit that stores the information for identifying itself, and an update unit that dynamically updates that information.
- The system further comprises a version limiting unit that limits the version of one or more functional units according to a transaction, and control means for controlling the node specifying means so that a node having a functional unit of the version limited by the version limiting unit is specified.
- A plurality of nodes whose functional units have different versions may be included in the same layer.
- The management apparatus includes means for collecting the usage status of the nodes and means for controlling the functions of the nodes in specific situations.
- the management apparatus includes node updating means for dynamically updating the node specifying means of all the nodes based on the usage status of the nodes.
- the management device further includes a version update unit that dynamically identifies the node and updates the version of the node based on the usage status of the node.
- According to the present invention, the independence of the plurality of layers into which transactions are divided and executed can be easily increased.
- FIG. 1 is a diagram illustrating a configuration of a multilayer distributed processing system according to the present invention.
- FIG. 2 is a diagram illustrating the configuration of the management device, the first layer node host, the second layer node host, and the third layer node host shown in FIG. 1.
- FIG. 3 is a diagram illustrating a configuration of a transaction that can be executed by a computer or the like
- FIG. 4 is a diagram illustrating an example of the configuration of a first layer node host, a second layer node host, and a third layer node host.
- FIG. 5 is a diagram showing a configuration of a distributed transaction management unit.
- FIG. 6 is a diagram exemplifying node assignment for a plurality of transactions.
- FIG. 7 is a diagram showing a configuration of a management device.
- FIG. 8 is a conceptual diagram illustrating the association of a node with a transaction.
- FIG. 9 is a diagram showing a transaction execution cycle by the multi-layer distributed processing system.
- FIG. 10 is a state transition diagram of a related node list managed by the distributed transaction management unit.
- FIG. 11 is a state transition diagram showing the execution state of the distributed transaction manager.
- FIG. 12 is a flowchart showing the process (S10) in which the distributed transaction management unit of each node searches for lower layer nodes for one or more requested transactions in the node allocation phase.
- FIG. 13 is a flowchart showing a process (S20) of assigning a node to one or more transactions requested by each distributed transaction manager of each node in the node assignment phase.
- FIG. 14 is a flowchart illustrating an example of the process (S30) in which the management device manages node versions.
- FIG. 1 is a diagram illustrating a configuration of a multi-layer distributed processing system 1 according to the present invention.
- The multilayer distributed processing system 1 is a network system comprising a management device 2, first layer node hosts 3-1 and 3-2, second layer node hosts 4-1 and 4-2, and third layer node hosts 5-1 and 5-2, connected to one another via a network 10 using, for example, TCP/IP.
- One or more transactions are executed divided, for example, into the first through third layers.
- The management device 2 is a component that provides assistance to the administrator of the multilayer distributed processing system 1 and is optional in the multilayer distributed processing system 1.
- When it is not necessary to specify which of a plurality of like components is meant, such as the first layer node hosts 3-1 and 3-2, they may be abbreviated simply as "first layer node host 3".
- FIG. 2 is a diagram illustrating the configuration of the management device 2, the first layer node host 3, the second layer node host 4, and the third layer node host 5 shown in FIG.
- Each of the management device 2, the first layer node host 3, the second layer node host 4, and the third layer node host 5 includes a main body 60 including a CPU 600 and a memory 602, a display device, a keyboard, and the like.
- That is, the management device 2, the first layer node host 3, the second layer node host 4, and the third layer node host 5 each have the components of a computer.
- FIG. 3 is a diagram showing an example of the configuration of a transaction T100 that can be executed by a computer or the like.
- The transaction T100 includes a function layer T102, a process layer T104, a service layer T106, an information layer T108, an application layer T110, and a resource layer T112.
- The transaction T100 is, for example, a business process: a long-duration transaction (gross transaction) in which a plurality of processes are serialized.
- The function layer T102 provides, for example, a UI (user interface) and management tools.
- The process layer T104 is a business-process workflow that realizes the functions of the function layer T102 and defines the transaction T100.
- The process layer T104 executes the transaction T100 by serializing and calling a plurality of services provided by the service layer T106 in accordance with transaction-specific characteristics.
- the service layer T106 not only provides services, but also maintains security, such as whether a process can access a particular service.
- Each layer below the service layer T106 is distributed.
- The information layer T108 is a middleware layer that provides the information necessary for the services of the service layer T106 by combining the functions of the application layer T110, which are executed for the transaction T100, into predetermined outputs.
- the resource layer T112 is a database, for example.
- FIG. 8 is a conceptual diagram illustrating the association of nodes with transactions.
- The multi-layer distributed processing system 1 establishes the relationship between a transaction X and the nodes of each layer by identifying, for transaction X, a node in each layer by its node host name and its function (by port number).
- A node drawn with a dotted line can be a read-only shared node or, depending on the characteristics of its function, a writable shared node.
- FIG. 4 is a diagram illustrating the configuration of the first layer node host 3, the second layer node host 4, and the third layer node host 5.
- The first layer node host 3, the second layer node host 4, and the third layer node host 5 each have, for example, four nodes 30-1 to 30-4, each with the same configuration.
- The nodes 30-1 to 30-4 perform different functions, or the same function, at the same level in each predetermined layer of each transaction.
- The first layer node host 3, the second layer node host 4, and the third layer node host 5 are not limited to four nodes 30 and may each have a different number of nodes 30.
- Each of the nodes 30-1 to 30-4 includes a node port 32, whose value differs for each node, a distributed transaction management unit (DTM: Distributed Transaction Manager) 34, and a function node 36.
- the node port 32 is a unique port number corresponding to a unique function that each of the function nodes 36 of the nodes 30-1 to 30-4 has.
- The distributed transaction management unit 34 has information for managing its own node 30 (described later with reference to FIG. 5), and calls the function node 36 of another node 30 to be combined, via the node port 32.
- The distributed transaction management unit 34 is used to execute processing from the service layer T106 to the resource layer T112.
- The function node 36 is a service execution unit that executes different services, or the same service, at the same level in each predetermined layer of a transaction.
- In the function node 36, an allocation interface (I/F) 360 and a function interface (I/F) 362 are implemented.
- The allocation I/F 360 is an interface whose methods are called, via the node port 32, by the distributed transaction management unit 34 of another node 30.
- When the allocation I/F 360 is called with a request for a function that requires allocation, it returns, for example, true if the function that the function node 36 can provide matches the requested function.
- The allocation I/F 360 is also set to return true, for example, when the node 30 can be shared.
- The function I/F 362 is an interface through which the function node 36 inputs and outputs services.
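As a rough illustration of this call-and-return behavior (not part of the patent text), a minimal Python sketch of an allocation interface might look as follows; the names `FunctionNode` and `allocate` are hypothetical:

```python
# Minimal sketch of the allocation I/F behavior described above.
# All names are illustrative, not taken from the patent.

class FunctionNode:
    def __init__(self, provided_function: str, shareable: bool = False):
        self.provided_function = provided_function
        self.shareable = shareable
        self.in_use = False

    def allocate(self, required_function: str) -> bool:
        """Return True if this node's function matches the requested function
        and the node is free, or if the node may be shared."""
        if self.provided_function != required_function:
            return False
        if not self.in_use:
            self.in_use = True
            return True
        return self.shareable  # already in use, but sharing is allowed

# Example: a shareable service node accepts a second allocation.
node = FunctionNode("pricing", shareable=True)
assert node.allocate("pricing") is True   # first allocation succeeds
assert node.allocate("pricing") is True   # shared allocation also returns True
assert node.allocate("billing") is False  # wrong function
```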
- FIG. 5 is a diagram showing a configuration of the distributed transaction management unit 34.
- The distributed transaction management unit 34 has a status flag 340, an assigned (designated) transaction ID 342, a shared transaction ID list 344, a node search classification 346, a related node list 348, a version ID 350, a version time classification 352, a node selection algorithm 354, and an execution status monitor address 356.
- The status flag 340 is a flag indicating the status in which the own node 30 is used by upper layer nodes 30, and takes one of three values: assigned (the own node 30 is used, including occupied, by an upper layer node 30 for one transaction), shared (the own node 30 is shared by a plurality of transactions), or released (the own node 30 is unused).
- The assigned transaction ID (ATID) 342 is a component that stores the transaction ID identifying the transaction to which the own node 30 is assigned when the status flag 340 is "assigned".
- The transaction ID is composed of, for example, the node host address (IP address or host name) of the node host, such as the first layer node host 3, that executes the processing of the process layer T104, and a node ID (process ID).
- The shared transaction ID list (STL) 344 is a component that stores, as a shared list, the transaction IDs of the transactions sharing the own node 30 when the status flag 340 is "shared".
- The node search classification (NSS: Node Search Segmentation) 346 has a classification list (not shown) including one or more combinations of an address identifying one of the node hosts and the port number of a node 30, and configures a dedicated communication channel through which the distributed transaction management units 34 exchange information and assign nodes 30.
- The node search classification 346 is used to search for available nodes 30 in the lower layer using the classification list.
- The classification list can be, for example, a list of combinations of the IP address of the node host that declares the start and end of a transaction and the port number of the node port 32, a list of node host names containing wildcards, or a list of node host names.
- With a wildcard, for example, "host3?" denotes host30 to host39, host3a to host3z, and so on.
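A minimal sketch of how such a wildcard entry could be expanded into concrete host names, assuming a single trailing "?" wildcard as in the example above; the helper name is hypothetical:

```python
import string

def expand_wildcard_host(pattern: str) -> list[str]:
    """Expand a single trailing '?' wildcard in a node host name, e.g.
    'host3?' -> host30..host39, host3a..host3z (digits, then letters),
    mirroring the example in the text."""
    if not pattern.endswith("?"):
        return [pattern]
    stem = pattern[:-1]
    return [stem + c for c in string.digits + string.ascii_lowercase]

hosts = expand_wildcard_host("host3?")
print(hosts[0], hosts[9], hosts[10], hosts[-1])  # host30 host39 host3a host3z
```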
- The related node list (RNL) 348 is a component that stores a node list of the lower layer nodes 30 used by the own node 30.
- The node list holds the node host address and node port 32 of each node, in the order in which the nodes 30 are committed.
- The version ID 350 indicates the function version of the function node 36 of the own node 30.
- The version time classification 352 is a component that stores information expressing the function version of the function node 36 as a time, for classifying the transactions that can share the node 30 according to the type of transaction.
- The version time classification 352 includes a pair of transaction start and end times.
- When the transaction start time, the end time, or both are null, the version time classification 352 imposes no restriction for that boundary.
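A small sketch of the null-means-unrestricted window check described above; the function name and the use of Python datetime values are illustrative assumptions:

```python
from datetime import datetime
from typing import Optional

def version_window_allows(start: Optional[datetime],
                          end: Optional[datetime],
                          tx_time: datetime) -> bool:
    """Version time classification check: a (start, end) pair bounds the
    transactions that may share the node; a null (None) boundary imposes
    no restriction on that side."""
    if start is not None and tx_time < start:
        return False
    if end is not None and tx_time > end:
        return False
    return True

# A window with a null end time accepts any later transaction.
print(version_window_allows(datetime(2006, 1, 1), None, datetime(2007, 1, 17)))  # True
```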
- The node selection algorithm 354 is a component that stores an algorithm describing how to select one node 30 from a plurality of unused nodes 30 having the same function in the lower layer.
- the execution status monitor address 356 is a component that stores an address for the management apparatus 2 to monitor the execution status of the node 30.
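Gathering the components 340-356 listed above, one possible in-memory representation is sketched below; all field names and types are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Status(Enum):  # the three values of the status flag 340
    RELEASED = "released"
    ASSIGNED = "assigned"
    SHARED = "shared"

@dataclass
class DistributedTransactionManager:
    """One possible layout of the components 340-356 described above."""
    status: Status = Status.RELEASED                                     # status flag 340
    assigned_tx_id: Optional[str] = None                                 # assigned transaction ID 342
    shared_tx_ids: list[str] = field(default_factory=list)               # shared transaction ID list 344
    node_search_classes: list[tuple[str, int]] = field(default_factory=list)  # NSS 346: (address, port)
    related_nodes: list[tuple[str, int]] = field(default_factory=list)   # RNL 348, in commit order
    version_id: str = "1"                                                # version ID 350
    version_window: tuple = (None, None)                                 # version time classification 352
    node_choice: str = "first-available"                                 # node selection algorithm 354
    monitor_address: tuple[str, int] = ("", 0)                           # execution status monitor address 356
```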
- FIG. 6 is a diagram illustrating allocation of the node 30 to a plurality of transactions.
- FIG. 6 illustrates a case provided with first layer node hosts 3-1 and 3-2, second layer node hosts 4-1 and 4-2, and third layer node hosts 5-1 and 5-2.
- the node 30 assigns (occupies) a lower layer node 30 to a predetermined transaction as indicated by a solid line arrow.
- A node 30 shares a lower layer node 30 with other nodes 30 for a predetermined transaction, as indicated by the dashed arrows.
- A node 30 shared by a plurality of nodes 30 is a shared node that is read-only, or writable, according to the characteristics of its function.
- A node 30 may be shared by a plurality of nodes 30 when sharing is approved according to information exchanged with the other nodes 30.
- When starting a new transaction, each node 30 searches for a new node 30 in the lower layer and assigns it to the new transaction.
- In the first layer node host 3, the second layer node host 4, and the third layer node host 5, the distributed transaction management unit 34 confirms and stores which transaction each node 30 uses, or by which transaction it is used.
- the management device 2 is a component that provides assistance to the administrator of the multi-layer distributed processing system 1, and provides the following assistance to the administrator.
- FIG. 7 is a diagram showing a configuration of the management apparatus 2. As shown in FIG. 7, it has an administrator UI unit 20, an execution status monitor 22, a node search category update unit 24, a node selection algorithm update unit 26, and a function node update unit 28.
- The administrator UI unit (Administrator Console) 20 is a UI (User Interface) for the administrator.
- the execution status monitor 22 has a public address and a port 220, collects the execution status of each unit constituting the multi-layer distributed processing system 1, and reports it to the administrator.
- The public address and port 220 is known to the distributed transaction management units 34 of the first layer node host 3, the second layer node host 4, and the third layer node host 5.
- Each of the distributed transaction management units 34 sends its execution status to the public address and port 220 as the destination, so that the execution status monitor 22 can collect the execution statuses of the first layer node host 3, the second layer node host 4, and the third layer node host 5.
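A minimal sketch of a status push of this kind; the patent does not specify the transport or message format, so the UDP datagram and JSON payload here are purely illustrative assumptions:

```python
import json
import socket

def report_status(monitor: tuple, host: str, port: int, status: str) -> None:
    """Push this node's execution status to the execution status monitor's
    public address and port 220. Transport and payload are assumptions."""
    payload = json.dumps({"host": host, "port": port, "status": status})
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload.encode("utf-8"), monitor)

# Example (assumes a monitor is listening at this hypothetical address):
# report_status(("192.168.0.10", 2200), "host31", 9001, "assigned")
```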
- When the situation of the multi-layer distributed processing system 1 becomes one defined in advance by the administrator, the node search classification updater 24 broadcasts node search classification update information, used by the nodes 30 to search for lower layer nodes 30, to all the distributed transaction management units 34.
- The situation defined in advance by the administrator is, for example, one in which the number of unusable hosts, or of newly added hosts, exceeds a set number within a certain classification.
- The broadcast of node search classification update information is intended to improve the performance of the multi-layer distributed processing system 1; even when it is not broadcast, each distributed transaction management unit 34 is configured with a sufficiently broad classification to be able to discover nodes 30.
- The node selection algorithm updater (Node Choice Algorithm Updater) 26 has an algorithm library 260 that holds all execution packages; when the situation of the multi-layer distributed processing system 1 becomes one defined in advance by the administrator, it transmits a node selection algorithm execution package to each distributed transaction management unit 34 in order to improve performance.
- The node selection algorithm is, for example, an algorithm set to keep using the same node 30 by means of a cache.
- The node selection algorithm updater 26 transmits, for example, an algorithm set so that the same node continues to be used, or a simpler one.
- A node replaces its algorithm with the transmitted one and executes it.
- Because one usable lower node is selected and unusable lower nodes are not accessed, overall performance can be improved.
- The means for updating the node search classification and the means for updating the node selection algorithm are together called node updating means (a cache-based selection is sketched below).
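A minimal sketch of a cache-based node selection algorithm of the kind described above, assuming a hypothetical chooser that prefers the previously used node while it remains available:

```python
import random
from typing import Callable, Optional

Node = tuple[str, int]  # (host address, node port); illustrative

def make_cached_chooser() -> Callable[[list[Node]], Optional[Node]]:
    """Return a chooser that keeps using the same node 30 while it remains
    available, otherwise picks a fresh one from the unused pool."""
    cache: dict = {"node": None}

    def choose(available: list[Node]) -> Optional[Node]:
        if cache["node"] in available:
            return cache["node"]          # reuse the cached node
        cache["node"] = random.choice(available) if available else None
        return cache["node"]

    return choose

choose = make_cached_chooser()
pool = [("host31", 9001), ("host32", 9001)]
first = choose(pool)
assert choose(pool) == first  # the same node is chosen again while available
```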
- The function node updater (Function Node Updater) 28 updates the function node 36 of a node 30 to a new version in response to an instruction from the administrator via the administrator UI unit 20, and changes the version ID 350 of the distributed transaction management unit 34.
- FIG. 9 is a diagram showing the transaction execution cycle of the multilayer distributed processing system 1. As shown in FIG. 9, the multilayer distributed processing system 1 executes a transaction through a node allocation phase (P100), a function execution phase (P102), and a commit process phase (P104).
- The management device 2 groups the addresses of the first layer node host 3, the second layer node host 4, the third layer node host 5, and so on.
- For example, the IP addresses are divided as follows:
- Resource layer: 192.168.0.160-250
- When node hosts are grouped by node host name, they are divided as follows:
- Application layer: app host 1 to app host 50
- Resource layer: resource host 1 to resource host 80
- Each node 30 uses its distributed transaction management unit 34 to search for a lower layer node 30 having a function required for the transaction, and selects one within the predetermined address and port classification of the lower layer.
- In the commit process phase, each node 30 performs two stages of processing.
- First, the node 30 confirms that the entire tree of distributed transaction management units 34 for the transaction to be completed is in the prepared state.
- Each node 30 then commits the transaction, and rolls back if the transaction fails.
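These two stages follow the prepare-then-commit pattern of the two-phase commit mentioned in the background. A minimal sketch over a tree of participants, with hypothetical class and method names:

```python
class Participant:
    """One distributed transaction management unit in the commit tree."""
    def __init__(self, name: str, can_prepare: bool = True):
        self.name = name
        self.can_prepare = can_prepare

    def prepare(self) -> bool:   # stage 1: enter the prepared state
        return self.can_prepare

    def commit(self) -> None:    # stage 2a: finalize
        print(f"{self.name}: commit")

    def rollback(self) -> None:  # stage 2b: undo on failure
        print(f"{self.name}: rollback")

def two_phase_commit(tree: list) -> bool:
    """Confirm the whole tree is prepared, then commit; roll back on any
    failure. A sketch, not the patent's literal protocol."""
    if all(p.prepare() for p in tree):
        for p in tree:
            p.commit()
        return True
    for p in tree:
        p.rollback()
    return False

two_phase_commit([Participant("layer1"), Participant("layer2"), Participant("layer3")])
```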
- FIG. 10 is a state transition diagram of the related node list 348 managed by the distributed transaction management unit 34.
- As described above, the related node list 348 holds the node list (node host address and node port 32) of the lower layer nodes 30 used by the own node 30.
- When a transaction ends, the distributed transaction management unit 34 does not immediately leave the state in which the related node list 348 stores the node list (a1).
- That is, the related node list 348 is not cleared immediately.
- The distributed transaction management unit 34 accesses the nodes 30 included in the related node list 348 before searching the address classification.
- For a new transaction, the distributed transaction management unit 34 first searches among the nodes 30 stored in the related node list 348 and assigns lower layer nodes 30 from there. When a node 30 included in the related node list 348 has been accessed, the distributed transaction management unit 34 shifts from the state in which the related node list 348 stores the node list to the state in which the related node list 348 is empty (node search; a2).
- FIG. 11 is a state transition diagram showing the execution state of the distributed transaction manager 34.
- The distributed transaction management unit 34 shifts from the released state to the assigned state (b1).
- The distributed transaction management unit 34 shifts from the assigned state to the released state (b2).
- The distributed transaction management unit 34 shifts from the released state to the shared state (b3).
- The distributed transaction management unit 34 shifts from the shared state to the released state (b4).
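The four transitions (b1) to (b4) can be summarized as a small state machine; the event names in this sketch are illustrative, not from the patent:

```python
# Transition table for the execution states of FIG. 11.
TRANSITIONS = {
    ("released", "assign"): "assigned",   # b1: allocated to a transaction
    ("assigned", "release"): "released",  # b2: the transaction finished
    ("released", "share"): "shared",      # b3: a first sharing transaction
    ("shared", "release"): "released",    # b4: the last sharing transaction ends
}

def step(state: str, event: str) -> str:
    """Apply one event; events not in the table leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

s = "released"
s = step(s, "assign")   # -> assigned (b1)
s = step(s, "release")  # -> released (b2)
print(s)
```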
- FIG. 12 is a flowchart showing the process (S10) in which the distributed transaction management unit 34 of each node 30 searches for lower layer nodes 30 for one or more requested transactions in the node allocation phase (P100).
- In step 100 (S100), the distributed transaction management unit 34 checks whether or not the related node list 348 is empty; if it is empty, the process proceeds to S106, and if it is not empty, the process proceeds to S102.
- In step 102 (S102), the distributed transaction management unit 34 searches the related node list 348 for an unused lower layer node 30 having the necessary function, or a shareable node 30.
- In step 104 (S104), the distributed transaction management unit 34 proceeds to S114 if it has found a node 30 to which the necessary function can be assigned, and proceeds to S106 if it has not.
- In step 106 (S106), the distributed transaction management unit 34 determines each node search classification.
- In step 108 (S108), the distributed transaction management unit 34 starts searching for available nodes 30 by inquiring of each node 30 about its execution status (execution status check).
- In step 110 (S110), the distributed transaction management unit 34 proceeds to S114 if it has found a node 30 to which the necessary function can be assigned, and proceeds to S112 if it has not (on failure).
- In step 112 (S112), the distributed transaction management unit 34 transmits to the execution status monitor 22 of the management device 2 a message indicating that discovery of a node 30 has failed.
- In step 114 (S114), the distributed transaction management unit 34 sets the related node list 348.
- In step 116 (S116), the distributed transaction management unit 34 adds the found node 30 to the related node list 348.
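A compact sketch of this search order (related node list first, then the node search classification), with hypothetical parameter names; `probe()` stands in for the allocation I/F call of the lower node:

```python
def search_lower_node(related_nodes: list, search_classes: list, probe) -> tuple:
    """Sketch of the S10 search: try the related node list 348 first
    (S100-S104); if that fails, fall back to the node search classification
    (S106-S110), leaving the old RNL behind (transition a2). Returns the
    found node and the new related node list, or (None, []) on failure."""
    for node in related_nodes:                 # S102: reuse known nodes first
        if probe(node):
            return node, related_nodes         # S114/S116: keep the list
    for node in search_classes:                # S108: search the classification
        if probe(node):
            return node, [node]                # RNL rebuilt with the found node
    return None, []                            # S112: report failure upstream

# Example: the cached related node is gone, so the search falls back.
found, rnl = search_lower_node([("host30", 9001)], [("host31", 9001)],
                               probe=lambda n: n[0] == "host31")
print(found, rnl)  # ('host31', 9001) [('host31', 9001)]
```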
- FIG. 13 is a flowchart showing the process (S20) in which the distributed transaction management unit 34 of each node 30 assigns nodes 30 to one or more requested transactions in the node allocation phase (P100).
- In step 200 (S200), the distributed transaction management unit 34 determines whether or not its own lower layer node 30 is assigned to an upper layer node 30; if it is assigned, the process proceeds to S206, and if it is released, the process proceeds to S202.
- In step 202 (S202), the upper layer node 30 calls a method of the allocation I/F 360 of the function node 36 having the function required for the transaction.
- In step 204 (S204), the upper layer node 30 determines whether or not the return from the lower layer node 30 is true (function available); if the return is true, the process proceeds to S208, and if not, the process proceeds to S206.
- In step 206 (S206), the lower layer node 30 transmits information indicating failure to the upper layer node 30 that requested the function and to the execution status monitor 22.
- In step 208 (S208), the upper layer node 30 determines whether or not the transaction requests a particular function version; if there is such a request, the process proceeds to S210, and if not, the process proceeds to S212.
- In step 210 (S210), the upper layer node 30 determines whether or not the requested version of the lower layer function is the same as the version ID stored in the lower layer's version ID 350; if they are the same, the process proceeds to S214, and if not, the process proceeds to S206.
- In step 212 (S212), the upper layer node 30 refers to the time stamps included in the lower layer's version time classification 352 and determines whether the lower layer function is of a sharable version; if it is, the process proceeds to S214, and if not, the process proceeds to S206.
- In step 214 (S214), the upper layer node 30 calls a method of the allocation I/F 360 of the lower layer node 30 having the sharable function.
- In step 216 (S216), the upper layer node 30 receives a return from the lower layer node 30; if the return is true, the process proceeds to S218, and if not, the process proceeds to S220.
- In step 218 (S218), the lower layer node 30 adds the transaction ID to the shared transaction ID list 344.
- In step 220 (S220), the lower layer node 30 stores the transaction ID in the assigned transaction ID 342.
- In step 222 (S222), the lower layer node 30 responds to the upper layer node 30 that requested the function with information indicating the discovery of an available node 30.
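A minimal sketch of the lower-node side of this assignment handshake, folding in the function check of S202-S204 and the version check of S208-S210; the dict layout and function name are illustrative assumptions, not a patent API:

```python
from typing import Optional

def assign_node(lower: dict, tx_id: str, required_function: str,
                required_version: Optional[str] = None) -> bool:
    """Reject if already assigned (S200/S206), check the function
    (S202-S204) and, when the transaction requests one, the version ID 350
    (S208-S210); record the transaction ID on success (S220)."""
    if lower["status"] == "assigned":                                 # S200 -> S206
        return False
    if lower["function"] != required_function:                        # S204 -> S206
        return False
    if required_version and lower["version_id"] != required_version:  # S210 -> S206
        return False
    lower["status"] = "assigned"                                      # S220
    lower["assigned_tx_id"] = tx_id
    return True

node = {"status": "released", "function": "pricing",
        "version_id": "2", "assigned_tx_id": None}
print(assign_node(node, "host3-1:4711", "pricing", required_version="2"))  # True
```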
- In the multi-layer distributed processing system 1, the management device 2 updates the versions of the nodes 30.
- The management device 2 controls how many nodes 30 are updated to, or maintained at, which version. For example, the management device 2 updates all the nodes 30 of the first layer node host 3-1 to the latest version.
- Alternatively, the management device 2 may, for example, maintain three nodes 30 at version 1 and five nodes 30 at version 2 while updating the other nodes 30 to the latest version, or maintain 10% of the nodes 30 at version 1 and 20% of the nodes 30 at version 2 while updating the other nodes 30 to the latest version.
- The management device 2 classifies nodes 30 of a predetermined version using time stamps.
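A small sketch of generating such a percentage-based version update plan; the rounding and assignment order are illustrative choices, not specified by the patent:

```python
def version_plan(nodes: list[str], fractions: dict[str, float]) -> dict[str, str]:
    """Map each node name to a version: the given fractions of the pool stay
    on older versions, and every remaining node goes to the latest."""
    plan, start = {}, 0
    for version, frac in fractions.items():
        count = int(len(nodes) * frac)
        for name in nodes[start:start + count]:
            plan[name] = version
        start += count
    for name in nodes[start:]:
        plan[name] = "latest"
    return plan

nodes = [f"node{i}" for i in range(10)]
print(version_plan(nodes, {"v1": 0.10, "v2": 0.20}))
# node0 -> v1, node1/node2 -> v2, the rest -> latest
```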
- FIG. 14 is a flowchart illustrating an example of a process (S30) in which the management apparatus 2 manages the version of the node 30.
- In step 300 (S300), the management device 2 manages the nodes 30 of each layer.
- In step 302 (S302), the management device 2 confirms the function of each node 30.
- In step 304 (S304), the management device 2 adds all the nodes 30 having a predetermined function to the target node list for the transaction.
- In step 306 (S306), the management device 2 determines, for example in accordance with an instruction from the administrator input via the administrator UI unit 20, whether or not to update all the nodes 30 having the predetermined function to the latest version; if so, it updates the versions of all the nodes 30.
- In step 308 (S308), the management device 2 generates a version update plan indicating which version each node 30 is to be set to, for example in accordance with an administrator instruction input via the administrator UI unit 20.
- In step 310 (S310), the management device 2 starts a node confirmation loop.
- The node confirmation loop repeats the processing up to S318, described later, for each node 30, with a variable N starting at 1 and increasing by 1 until it reaches the number of nodes.
- In step 312 (S312), the management device 2 determines whether or not the node 30 is in use; if it is not in use, the process proceeds to S314, and if it is in use, the process proceeds to S318, so that busy nodes remain in the target node list and are retried later.
- In step 314 (S314), the management device 2 performs the version update and sets the time classification for the node 30.
- In step 316 (S316), the management device 2 deletes the node 30 whose version was updated and whose time classification was set in S314 from the target node list.
- In step 318 (S318), the management device 2 ends the node confirmation loop when the condition shown in S310 is satisfied.
- In step 320 (S320), the management device 2 determines whether or not the target node list has become empty; if it has not, the process proceeds to S322, and if it has, the process ends.
- In step 322 (S322), the management device 2 sets a timer status in the execution status monitor 22 so that the processing of S30 can be performed again later.
- As described above, the multi-layer distributed processing system 1 can assign nodes 30 of different versions depending on the type of transaction, and nodes 30 of multiple versions can be executed for different transactions within one system.
- the present invention can be used in a multilayer distributed processing system that executes a transaction by dividing it into a plurality of predetermined layers.
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Multi Processors (AREA)
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB0815639A GB2449037B (en) | 2006-01-31 | 2007-01-17 | Multilayer distributed processing system |
US12/162,979 US9015308B2 (en) | 2006-01-31 | 2007-01-17 | Multilayer distributed processing system |
JP2007556809A JPWO2007088728A1 (ja) | 2006-01-31 | 2007-01-17 | Multilayer distributed processing system
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006-022841 | 2006-01-31 | ||
JP2006022841 | 2006-01-31 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2007088728A1 true WO2007088728A1 (ja) | 2007-08-09 |
Family
ID=38327306
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2007/050587 WO2007088728A1 (ja) | Multilayer distributed processing system
Country Status (4)
Country | Link |
---|---|
US (1) | US9015308B2 (ja) |
JP (1) | JPWO2007088728A1 (ja) |
GB (1) | GB2449037B (ja) |
WO (1) | WO2007088728A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013190971A (ja) * | 2012-03-13 | 2013-09-26 | Nomura Research Institute Ltd | 統合アクセス制御システム |
JP2015534308A (ja) * | 2012-08-22 | 2015-11-26 | オラクル・インターナショナル・コーポレイション | ミドルウェアマシン環境でインターネットプロトコル(ip)アドレスおよびノード名の整合性を確実にするためのシステムおよび方法 |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2009044589A1 (ja) * | 2007-10-03 | 2009-04-09 | Nec Corporation | 階層型負荷推定システム、方法およびプログラム |
KR102436426B1 (ko) * | 2017-03-17 | 2022-08-26 | 콘비다 와이어리스, 엘엘씨 | 네트워크 서비스 계층에서의 분산형 트랜잭션 관리 |
RU2718215C2 (ru) | 2018-09-14 | 2020-03-31 | Общество С Ограниченной Ответственностью "Яндекс" | Система обработки данных и способ обнаружения затора в системе обработки данных |
RU2714219C1 (ru) | 2018-09-14 | 2020-02-13 | Общество С Ограниченной Ответственностью "Яндекс" | Способ и система для планирования передачи операций ввода/вывода |
RU2731321C2 (ru) | 2018-09-14 | 2020-09-01 | Общество С Ограниченной Ответственностью "Яндекс" | Способ определения потенциальной неисправности запоминающего устройства |
RU2721235C2 (ru) | 2018-10-09 | 2020-05-18 | Общество С Ограниченной Ответственностью "Яндекс" | Способ и система для маршрутизации и выполнения транзакций |
RU2714602C1 (ru) | 2018-10-09 | 2020-02-18 | Общество С Ограниченной Ответственностью "Яндекс" | Способ и система для обработки данных |
RU2711348C1 (ru) | 2018-10-15 | 2020-01-16 | Общество С Ограниченной Ответственностью "Яндекс" | Способ и система для обработки запросов в распределенной базе данных |
RU2714373C1 (ru) | 2018-12-13 | 2020-02-14 | Общество С Ограниченной Ответственностью "Яндекс" | Способ и система для планирования выполнения операций ввода/вывода |
RU2749649C2 (ru) | 2018-12-21 | 2021-06-16 | Общество С Ограниченной Ответственностью "Яндекс" | Способ и система для планирования обработки операций ввода/вывода |
RU2720951C1 (ru) | 2018-12-29 | 2020-05-15 | Общество С Ограниченной Ответственностью "Яндекс" | Способ и распределенная компьютерная система для обработки данных |
RU2746042C1 (ru) | 2019-02-06 | 2021-04-06 | Общество С Ограниченной Ответственностью "Яндекс" | Способ и система для передачи сообщения |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH05173988A (ja) * | 1991-12-26 | 1993-07-13 | Toshiba Corp | 分散処理方式および該分散処理に適用されるトランザクション処理方式 |
JPH05307478A (ja) * | 1992-04-30 | 1993-11-19 | Nippon Telegr & Teleph Corp <Ntt> | データベース管理システムの構成法 |
JPH07302242A (ja) * | 1994-04-30 | 1995-11-14 | Mitsubishi Electric Corp | 負荷分散方式 |
JPH1069418A (ja) * | 1996-07-02 | 1998-03-10 | Internatl Business Mach Corp <Ibm> | 階層化トランザクション処理方法 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7613801B2 (en) * | 1996-07-12 | 2009-11-03 | Microsoft Corporation | System and method for monitoring server performance using a server |
US20030005068A1 (en) * | 2000-12-28 | 2003-01-02 | Nickel Ronald H. | System and method for creating a virtual supercomputer using computers working collaboratively in parallel and uses for the same |
US7568023B2 (en) * | 2002-12-24 | 2009-07-28 | Hewlett-Packard Development Company, L.P. | Method, system, and data structure for monitoring transaction performance in a managed computer network environment |
US20050022202A1 (en) * | 2003-07-09 | 2005-01-27 | Sun Microsystems, Inc. | Request failover mechanism for a load balancing system |
-
2007
- 2007-01-17 GB GB0815639A patent/GB2449037B/en active Active
- 2007-01-17 JP JP2007556809A patent/JPWO2007088728A1/ja active Pending
- 2007-01-17 WO PCT/JP2007/050587 patent/WO2007088728A1/ja active Application Filing
- 2007-01-17 US US12/162,979 patent/US9015308B2/en active Active - Reinstated
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH05173988A (ja) * | 1991-12-26 | 1993-07-13 | Toshiba Corp | 分散処理方式および該分散処理に適用されるトランザクション処理方式 |
JPH05307478A (ja) * | 1992-04-30 | 1993-11-19 | Nippon Telegr & Teleph Corp <Ntt> | データベース管理システムの構成法 |
JPH07302242A (ja) * | 1994-04-30 | 1995-11-14 | Mitsubishi Electric Corp | 負荷分散方式 |
JPH1069418A (ja) * | 1996-07-02 | 1998-03-10 | Internatl Business Mach Corp <Ibm> | 階層化トランザクション処理方法 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013190971A (ja) * | 2012-03-13 | 2013-09-26 | Nomura Research Institute Ltd | 統合アクセス制御システム |
JP2015534308A (ja) * | 2012-08-22 | 2015-11-26 | オラクル・インターナショナル・コーポレイション | ミドルウェアマシン環境でインターネットプロトコル(ip)アドレスおよびノード名の整合性を確実にするためのシステムおよび方法 |
Also Published As
Publication number | Publication date |
---|---|
GB2449037B (en) | 2011-04-13 |
GB2449037A (en) | 2008-11-05 |
US20090013154A1 (en) | 2009-01-08 |
JPWO2007088728A1 (ja) | 2009-06-25 |
GB0815639D0 (en) | 2008-10-08 |
US9015308B2 (en) | 2015-04-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2007088728A1 (ja) | Multilayer distributed processing system | |
CN106993019B (zh) | 分布式任务调度方法和系统 | |
US6732166B1 (en) | Method of distributed resource management of I/O devices in a network cluster | |
JP4515314B2 (ja) | 計算機システムの構成再現方法 | |
US6931640B2 (en) | Computer system and a method for controlling a computer system | |
US7177935B2 (en) | Storage area network methods and apparatus with hierarchical file system extension policy | |
US8387037B2 (en) | Updating software images associated with a distributed computing system | |
US7685148B2 (en) | Automatically configuring a distributed computing system according to a hierarchical model | |
US8055735B2 (en) | Method and system for forming a cluster of networked nodes | |
US20060173993A1 (en) | Management of software images for computing nodes of a distributed computing system | |
US20030093509A1 (en) | Storage area network methods and apparatus with coordinated updating of topology representation | |
US20100138540A1 (en) | Method of managing organization of a computer system, computer system, and program for managing organization | |
JPH0944342A (ja) | コンピュータネットワークシステム及びそのオペ レーティングシステムの版数管理方法 | |
WO2012068867A1 (zh) | 虚拟机管理系统及其使用方法 | |
CA2177020A1 (en) | Customer information control system and method in a loosely coupled parallel processing environment | |
CN114070822B (zh) | 一种Kubernetes Overlay IP地址管理方法 | |
EP3232609A1 (en) | Locking request processing method and server | |
WO2015100973A1 (zh) | 锁管理方法及系统、锁管理系统的配置方法及装置 | |
US11822970B2 (en) | Identifier (ID) allocation in a virtualized computing environment | |
US20080270697A1 (en) | Storage system and information transfer method for the same | |
JP2005100387A (ja) | 計算機システム及びクラスタシステム用プログラム | |
JP5235751B2 (ja) | 仮想計算機を有する物理計算機 | |
JP2000105722A (ja) | デ―タ構造割当ての結果をプレビュ―する方法及び装置 | |
US7558858B1 (en) | High availability infrastructure with active-active designs | |
CN114490186A (zh) | 数据备份规则分配方法、节点、系统及存储介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
ENP | Entry into the national phase |
Ref document number: 2007556809 Country of ref document: JP Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 12162979 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 0815639 Country of ref document: GB Kind code of ref document: A Free format text: PCT FILING DATE = 20070117 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 815639 Country of ref document: GB Ref document number: 0815639.0 Country of ref document: GB |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 07706902 Country of ref document: EP Kind code of ref document: A1 |