JP4036661B2 - Replicated data management method, node, program, and recording medium

Publication number: JP4036661B2 (granted; published as application JP2003256256A)
Application number: JP2002055757A
Authority: Japan
Legal status: Expired - Fee Related
Inventors: 元紀 中村, 稔 久保田, 知洋 井上
Applicant: 日本電信電話株式会社 (Nippon Telegraph and Telephone Corporation)
Description

[0001]
BACKGROUND OF THE INVENTION
The present invention relates to a replicated data management method in a radio network that is composed of a plurality of nodes having radio interfaces and in which a plurality of data items, each identified by a unique identifier, are distributed and arranged among the plurality of nodes.
[0002]
[Prior art]
Conventionally, replicated data has been managed either by allowing only a single master copy in the network to be modified, with the modifications reflected in the replicas sequentially by communication, or by not guaranteeing that the replicas are up to date. However, in order to avoid load concentration on the master data, to reduce traffic in the network, or to allow data modification when a topology change makes the master data unreachable, it is necessary to be able to modify the replicated data itself directly.
[0003]
For this reason, for example, in the "replica database matching apparatus and replica database matching method" described in Japanese Patent Application Laid-Open No. 11-3263, a database is replicated and distributed across a plurality of sites, and the database can be updated at an arbitrary site, with consistency verified by exchanging packets between the sites. However, data cannot be synchronized with sites that cannot communicate, and data inconsistencies may arise between the time data is updated at one site and the time the update is reflected at the other sites.
[0004]
In distributed file systems, part of a file is cached at the client that uses the file service in order to increase the availability of the file. For example, in the replication management method of the distributed file system Coda, described in Kistler, J. J. and Satyanarayanan, M., "Disconnected Operation in the Coda File System," ACM Transactions on Computer Systems, Feb. 1992, Vol. 10, No. 1, pp. 3-25, a client that cannot reach the server (the disconnected state) can still operate on its cache. When the client later reconnects to the server, the updates made to the cache are compared with the information on the server. If a data mismatch is detected as a result of the comparison, the user is notified and repairs it manually. In other words, temporary data inconsistencies are allowed.
[0005]
Furthermore, mobile databases with mobile terminals as clients allow data to be modified at a client that has left the network while maintaining strong data consistency. For example, in the method of Wang and Paris described on page 114 of Daniel Barbara, "Mobile Computing and Databases - A Survey," IEEE Transactions on Knowledge and Data Engineering, Jan. 1999, Vol. 11, No. 1, pp. 108-117, when copies of a given data item are distributed, an update to any one replica is notified to a predetermined set of replicas (CSCR), and the update is regarded as successful when confirmations are received from all members of the CSCR. If some copies do not return confirmations, the copies that did return confirmations are registered with a separate referee entity as the new CSCR. If this registration succeeds, the update succeeds; if it fails, the update fails. When a mobile terminal leaves the network, it can no longer communicate with any CSCR member, so the copy on the terminal cannot be updated. Therefore, before leaving, the mobile terminal registers itself with the referee as the sole member of a new CSCR, after which it can update its replica without communicating with other nodes.
[0006]
FIG. 18 shows an example of a processing sequence of a replicated data management method using such a referee. Assume that the client 600 and the replica 602 exist on the same node, a mobile terminal, and that the other replicas 604, 606, and 608 and the referee 610 all exist on different nodes. Assume further that the replicas 602, 604, 606, and 608 are the members of the CSCR.
[0007]
When the client 600 updates a certain data item, it transmits an update request to the replica 602 (step 501), and the replica 602 transmits an update request to the replicas 604, 606, and 608, which are members of the CSCR (steps 502 to 504). Each of the replicas 604, 606, and 608 that receives the update request updates its own data and then returns a confirmation response to the replica 602 (steps 505 to 507). Upon receiving confirmation responses from all members of the CSCR, the replica 602 sends a commit request to all members of the CSCR (steps 508 to 510) and returns a confirmation response to the client 600 (step 511).
[0008]
Next, the sequence when a failure occurs at the replica 608 or in communication with it will be described. The client 600 transmits an update request to the replica 602 (step 512), and the replica 602 transmits an update request to the replicas 604, 606, and 608, which are members of the CSCR (steps 513 to 515). This time the replicas 604 and 606 each return a confirmation response (steps 516 and 517), but because of the failure the replica 602 cannot receive a confirmation response from the replica 608. Accordingly, the replica 602 transmits a registration request to the referee 610 to register the replicas 604 and 606, which returned confirmation responses, and itself as the new CSCR (step 518). Upon receiving this, the referee 610 returns a change confirmation response to the replica 602 (step 519). Since the replica 602 has now received confirmation responses from all members of the CSCR, it transmits a commit request to the replicas 604 and 606 (step 520) and returns a confirmation response to the client 600 (step 522). In this way, updating is possible even when some member of the CSCR cannot be reached.
[0009]
Next, when the mobile terminal on which the client 600 and the replica 602 exist leaves the network, the replica 602 first registers itself with the referee 610 as the sole member of the CSCR (step 523). When the referee 610 accepts this change, it returns a change confirmation response to the replica 602 (step 524). After the mobile terminal leaves the network (step 525), when the client 600 sends an update request to the replica 602 (step 526), the replica 602 can update the data and return a confirmation response (step 527), because it is the only member of the CSCR.
[0010]
[Problems to be solved by the invention]
As described above, conventional replicated data management methods assume reliable access to a server and to all replication sites, allow temporary inconsistencies between replicas in exchange for higher data availability, or manage the update right of a replica on the assumption that the mobile terminal's departure from the network is known in advance. Consequently, in a network whose topology changes frequently and unpredictably, it is impossible to update data at a node disconnected from the network while ensuring consistency strict enough to prevent even temporary inconsistencies between the replicas.
[0011]
On the other hand, in a network of the kind expected in the future, in which many terminals are connected via wireless links, frequent topology changes are assumed, such as the network being partitioned by the movement or power-off of a single terminal, and such changes are generally unpredictable from the other nodes. For applications on such networks that do not tolerate even temporary inconsistencies, such as the circulation of electronic money or the use of copyright-protected content with a limited number of concurrent users, data must be updatable at the mobile terminal while strict consistency between the data is guaranteed.
[0012]
The object of the present invention is to solve these conventional problems and to provide a replicated data management method, and a wireless network, that maintain strict consistency between the replicas of distributed data while preventing a decrease in the availability of data updates, in a network whose topology changes dynamically and unpredictably.
[0013]
[Means for Solving the Problems]
In order to achieve the above object, the replicated data management method of the present invention comprises the following:
at the time of data generation, core data on which update processing (correction or deletion of the data) can be executed is generated and placed at the data generation node, and replicated data that can only be referenced is generated and placed at peripheral nodes that can communicate directly, via a wireless link, with the node where the core data exists;
the node where the core data exists holds the addresses of all nodes where the replicated data exists, and when the core data is moved to another node, it notifies all nodes holding replicated data known before the move of the address of the destination node;
each node where replicated data exists holds the address of the node where the core data exists, and manages the communication status between that node and itself;
if an update request or a reference request for the data occurs at the node where the core data exists, that node executes the request on the core data;
if a data update request occurs at a node where replicated data exists and the communication status is good, that node cooperates with the node where the core data exists to move the core data to itself and then executes the update processing on the core data; if the communication status is not good, the update request is forwarded to the node where the core data exists, and that node, having received the update request, executes the update processing on the core data and sends the result to the node where the update request occurred;
if a request to reference the latest data occurs at a node where replicated data exists, that node forwards a latest-information reference request to the node where the core data exists, and the node where the core data exists, having received the request, sends the latest information on the data to the node where the reference request occurred;
if a normal reference request for the data occurs at a node where replicated data exists, that node sends the data information held in the node to the requester; and
if a data reference request occurs at a node where neither core data nor replicated data exists, that node cooperates with the node where the core data exists to generate and place replicated data on itself, and then sends the data information held in the node to the request source.
[0014]
Here, the operational significance of each step is as follows: "at the time of data generation..." allows subsequent data updates at the generating node and data reads at the peripheral nodes to be executed locally; "when the core data moves from node to node..." lets the replicated data know the location of the latest core data; "when a data update request occurs at a node where replicated data exists..." allows subsequent data updates occurring at that node to be executed locally; and "when a request to reference the latest data occurs at a node where replicated data exists..." restricts inter-node communication to the cases where the latest data is actually needed, so that other references can be served locally. In short, the node that generated or updated a data item is highly likely to keep updating it, and data reads are likely to occur near the data-generating node; the aim of the present invention is to exploit this to reduce the number of inter-node communications required.
[0015]
In the present invention, for each data item, one core data that can be updated and a plurality of replicated data that can only be read are distributed in the network, which provides high availability while maintaining data consistency. The core data is not placed at a fixed location but is moved dynamically according to data usage and network conditions, and when the topology changes, consistency is maintained by keeping valid only the data that can communicate with the core data.
[0016]
DETAILED DESCRIPTION OF THE INVENTION
Next, embodiments of the present invention will be described with reference to the drawings.
[0017]
FIG. 1 is a configuration diagram of a wireless network according to an embodiment of the present invention.
[0018]
The wireless network of the present embodiment is composed of nodes 10, 20, 30, 40, and 50, each having a wireless interface (for example, a wireless LAN card). Node 10 can communicate directly with nodes 20, 30, and 40; node 20 with nodes 10 and 50; nodes 30 and 40 only with node 10; and node 50 only with node 20. In the initial state, each of the nodes 10, 20, 30, 40, and 50 has a data manager 13, 23, 33, 43, 53 that receives data operation requests (data generation, update, reference) from clients; the core data 17 and the replicated data 27, 37, 47, 57 do not yet exist. Clients 15, 45, and 55, which issue the data operation requests, exist on node 10, node 40, and node 50, respectively. A routing agent that manages packet route information also exists on each node; FIG. 1 shows only the routing agents 16, 46, and 56 on nodes 10, 40, and 50.
[0019]
In this embodiment, the clients 15, 45, 55, the data managers 13, 23, 33, 43, 53, the routing agents 16, 46, 56, the core data 17, and the replicated data 27, 37, 47, 57 can communicate with one another by message, asynchronously and independently of location, by specifying the object ID of the other party.
[0020]
Furthermore, in the present embodiment it is assumed that broadcast to all nodes within radio reach of a given node, multicast to several nodes, and unicast to a specific node are all possible, and that unicast and multicast communication can also reach nodes outside direct radio range through relaying by intermediate nodes.
[0021]
FIG. 2 shows how data is implemented by the data managers 13, 23, 33, 43, and 53 (hereinafter collectively referred to as the data manager 61). The data manager 61 manages a data correspondence table 62, which maps a data name (a unique identifier) to the object ID of the data object (hereinafter simply "data") corresponding to that name, and a created data table 63, which records the data names generated on the node 60. A data name consists of a node ID and an arbitrary character string, and an object ID consists of a node ID and an arbitrary number string. The data manager 61 receives data generation requests, data update requests, and data reference requests from other nodes or from other objects on its own node 60; each request includes the corresponding data name.
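The two tables and the two identifier formats can be summarized in a short sketch. This is a minimal illustration in Python; the class and field names (DataManager, data_table, created_names) and the identifier separators are assumptions, since the patent fixes only the logical structure (node ID plus character string for data names, node ID plus number string for object IDs):

```python
# Minimal sketch of the data manager's tables and identifiers (FIG. 2).
# Names and identifier separators are assumptions, not from the patent.

class DataManager:
    def __init__(self, node_id: str):
        self.node_id = node_id
        self.data_table = {}        # data correspondence table 62:
                                    #   data name -> object ID
        self.created_names = set()  # created data table 63: names
                                    #   generated on this node
        self._seq = 0

    def make_data_name(self, suffix: str) -> str:
        # A data name is a node ID plus an arbitrary character string.
        return f"{self.node_id}:{suffix}"

    def make_object_id(self) -> str:
        # An object ID is a node ID plus an arbitrary number string, so
        # the hosting node is recoverable from any object ID.
        self._seq += 1
        return f"{self.node_id}#{self._seq}"

def node_of(object_id: str) -> str:
    # Recover the node ID embedded in an object ID (used when a replica
    # asks its routing agent about the route to the core's node).
    return object_id.split("#", 1)[0]
```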
[0022]
FIG. 3 is a flowchart showing the operation of the data manager 61.
[0023]
The data manager waits for a message (step 101), receives a message (step 102), and determines the message type (step 103).
[0024]
When a data generation request is received, if an entry corresponding to the data name in the request already exists in the created data table 63 or the data correspondence table 62, an error is returned to the request source (steps 104 to 106). Otherwise, the data 64 is generated in the own node, the data 64 is notified of the request source of the generation request, the object ID of the data 64 is registered in the data correspondence table 62 as the object ID corresponding to the data name (step 107), and the data name is registered in the created data table 63 (step 108).
[0025]
When the data manager 61 receives a data update request or a data reference request, if an entry containing the data name exists in the data correspondence table 62, the request is transferred to the corresponding object ID (steps 109 and 110). If no such entry exists, a core data search request is broadcast to the surrounding nodes (step 111); the search request contains the search source object ID and a sequence number. If no core data notification is received within a predetermined time, an error is returned to the source of the update or reference request (steps 112 and 114). If a core data notification is received in time, the data 64 is created in the own node and is notified of the object ID of the core data contained in the notification and of the request source of the update or reference request (steps 112 and 113).
[0026]
When the data manager 61 receives a core data search request, it discards the request if the search source object ID matches its own object ID, or if the pair of search source object ID and sequence number has already been received (steps 115 to 117). Otherwise, if the data name in the search request exists in the data correspondence table 62, the object ID of the corresponding entry is returned to the search source object ID (steps 118 and 119). If neither condition holds, the pair of search source object ID and sequence number is stored and the search request is rebroadcast to the surrounding nodes (steps 118 and 120).
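The search in steps 115 to 120 is flooding with duplicate suppression, keyed on the pair (search source object ID, sequence number). A minimal sketch, assuming hypothetical broadcast()/unicast() transport helpers and the table fields from the sketch above:

```python
# Sketch of core data search handling (FIG. 3, steps 115-120).
# broadcast() and unicast() are assumed transport helpers; mgr is assumed
# to expose data_table and its own object ID my_oid.

seen_searches = set()   # pairs (search_source_object_id, sequence_number)

def on_core_search(mgr, req, broadcast, unicast):
    key = (req["source_oid"], req["seq"])
    # Steps 115-117: drop our own request echoed back, or a duplicate.
    if req["source_oid"] == mgr.my_oid or key in seen_searches:
        return
    if req["name"] in mgr.data_table:
        # Steps 118-119: we know the core's object ID; answer directly.
        unicast(req["source_oid"], {"type": "core_notification",
                                    "oid": mgr.data_table[req["name"]]})
    else:
        # Steps 118, 120: remember the request and flood it onward.
        seen_searches.add(key)
        broadcast(req)
```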
[0027]
It is assumed that the data receives data update requests and data reference requests from other nodes or from other objects on its own node, and that each request includes the corresponding data name.
[0028]
FIG. 4 shows the sequence of the data generation operation when a data generation request occurs at the node 10 in the wireless network of FIG. 1.
[0029]
First, the client 15 on the node 10 transmits a data generation request to the data manager 13 on the node 10 in order to save a file in memory (step 121). The data manager 13 generates the core data 17 on the node 10 (step 122); the core data 17 generates its own local data (step 123), returns a generation response to the client 15 (step 124), and broadcasts a replication generation request containing the local data information (steps 125 to 127).
[0030]
The data managers 23, 33, and 43 that receive the generation request generate the replicated data 27, 37, and 47 in their respective nodes (steps 128, 129, and 130), and the replicated data 27, 37, and 47 return confirmation responses to the core data 17 (steps 131, 132, and 133). In steps 125 to 127 and 128 to 130, the core data 17 notifies the replicated data, via the data manager of each node at generation time, of its own object ID, which serves as its address.
[0031]
The core data 17 waits a predetermined time for confirmation responses, stores the object IDs (addresses) of the replicated data that returned them (step 134), and multicasts an execution request to all the stored replicated data (steps 135 to 137). The replicated data 27, 37, and 47 that receive the execution request generate their local data (steps 138, 139, and 140).
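As an illustration of this placement procedure, here is a minimal sketch in Python. The Core class, the message fields, and the broadcast/wait_confirms/multicast helpers are all assumptions for illustration (make_object_id is from the data manager sketch above); the patent does not prescribe an implementation:

```python
# Minimal sketch of initial data placement (FIG. 4, steps 121-140).
# All names and helpers here are assumptions, not from the patent.

class Core:
    def __init__(self, oid, name, value):
        self.oid, self.name = oid, name
        self.local_data = value   # step 123: the core's own local data
        self.replicas = []        # object IDs of confirmed replicas

def generate_data(mgr, name, value, broadcast, wait_confirms, multicast):
    core = Core(mgr.make_object_id(), name, value)       # step 122
    # Steps 125-127: ask reachable neighbours to create reference-only
    # replicas, passing the core's address so each replica can reach it.
    broadcast({"type": "replicate", "name": name,
               "core_oid": core.oid, "data": core.local_data})
    # Step 134: remember only the replicas that confirmed within the wait.
    core.replicas = wait_confirms(timeout=1.0)
    # Steps 135-137: tell the confirmed replicas to commit their local data.
    multicast(core.replicas, {"type": "execute"})
    return core
```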
[0032]
FIG. 5 shows the operation sequence when data read and update requests occur at the node 10 in the network of FIG. 1.
[0033]
When the client 15 on the node 10 reads data, a reference request is transmitted to the core data 17 (step 141). The core data 17 reads local data (step 142), and returns the result to the client 15 as a reference response (step 143).
[0034]
When the client 15 updates the data, an update request is transmitted to the core data 17 (step 144). The core data 17 updates the local data (step 145), and returns an update response to the client 15 based on the result (step 146).
[0035]
FIG. 6 shows an operation sequence when a data read and update request is generated in the node 40 in the wireless network of FIG.
[0036]
When the client 45 on the node 40 performs a normal data read, a reference request is transmitted to the replicated data 47 (step 151). The replicated data 47 reads its local data (step 152) and returns the result to the client 45 as a reference response (step 153).
[0037]
Next, when the client 45 reads the latest data, a reference request is transmitted to the replicated data 47 (step 154). To obtain the latest data, the replicated data 47 transmits a reference request to the core data 17 whose address it holds (step 155). The core data 17 that receives the reference request reads its local data (step 156) and returns the result to the replicated data 47 as a reference response (step 157). The replicated data 47 returns the result to the client 45 (step 158) and saves it as its local data (step 159).
[0038]
When the client 45 updates data, an update request is transmitted to the replicated data 47 (step 160). The replicated data 47 transmits a core movement request to the core data 17 whose address it holds (step 161). The core data 17 that receives the core movement request becomes replicated data itself (step 162) and returns a core movement response to the replicated data 47 (step 163). At this time, the object IDs of the replicated data it holds are also returned in the core movement response.
[0039]
The replicated data 47 that receives the core movement response itself becomes core data (step 164), updates its own local data (step 165), and returns an update response to the client 45 based on the result (step 166). Furthermore, it multicasts to the replicated data notified in the core movement response an update notification containing the fact that the core has moved, the address after the move, and the update contents of the data (steps 167 to 169). The core data 17, which has become replicated data, and the replicated data 27 and 37 update themselves according to the contents of the message when they receive the update notification (steps 170, 171, and 172).
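The role exchange in steps 160 to 172 can be read as a small handover protocol: the replica asks the core to demote itself, receives the core's replica list in the response, promotes itself, applies the update locally, and multicasts an update notification carrying the new core address. A minimal sketch under those assumptions; the message names, the synchronous send() helper, and apply_update() are illustrative, not from the patent:

```python
# Sketch of the core movement handover (FIG. 6, steps 160-172).
# send() is assumed to deliver a message and return the reply; the patent
# itself uses asynchronous messages.

def apply_update(data, update):          # placeholder update semantics
    return update

def replica_handle_update(replica, update, send, multicast):
    # Steps 161-163: ask the current core to hand over the core role.
    reply = send(replica.core_oid, {"type": "core_move_request",
                                    "from": replica.object_id})
    replica.is_core = True                       # step 164
    replica.replicas = reply["replica_oids"]     # list from the old core
    replica.local_data = apply_update(replica.local_data, update)  # step 165
    # Steps 167-169: tell every replica (including the former core) that
    # the core moved, where it now lives, and what changed.
    multicast(replica.replicas, {"type": "update_notification",
                                 "new_core": replica.object_id,
                                 "update": update})

def core_handle_move_request(core, msg):
    core.is_core = False            # step 162: former core becomes a replica
    core.core_oid = msg["from"]     # it will now track the new core
    return {"type": "core_move_response", "replica_oids": core.replicas}
```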
[0040]
FIG. 7 shows the operation sequence when a data read request occurs at the node 50, where no replicated data exists, in the wireless network of FIG. 1.
[0041]
Since the client 55 on the node 50 does not know the object ID of the replicated data or the core data for the data, it sends a reference request for the data to the data manager 53 (step 181). The data manager 53 broadcasts a core search request to find the core data for the data (step 182). In this embodiment, only the data manager 23 receives the core search request. If the data manager 23 did not know the object ID of the core data 17, it would rebroadcast the core search request; in this embodiment it does know it, so it returns the object ID of the core data 17 to the data manager 53 in a core notification (step 183).
[0042]
The data manager 53 that receives the core notification message generates the replicated data 57 and notifies it of the object IDs of the core data 17 and the client 55 (step 184). The replicated data 57 transmits a data request to the core data 17 (step 185). The core data 17 reads its local data (step 186), returns the data contents to the replicated data 57 in a data notification (step 187), and records the object ID of the replicated data 57 (step 188). The replicated data 57 generates local data from the notified information (step 189) and returns a reference response to the client 55 (step 190).
[0043]
FIG. 8 shows the operation sequence when a data update request occurs at the node 50, where no replicated data exists, in the wireless network of FIG. 1.
[0044]
Since the client 55 on the node 50 does not know the object ID of the replicated data or the core data for the data, it sends an update request for the data to the data manager 53 (step 201). The data manager 53 broadcasts a core search request to find the core data for the data (step 202). In this embodiment, only the data manager 23 receives the core search request. If the data manager 23 did not know the object ID of the core data 17, it would rebroadcast the core search request; in this embodiment it does know it, so it returns the object ID of the core data 17 to the data manager 53 in a core notification (step 203).
[0045]
The data manager 53 that receives the core notification message generates the replicated data 57 and notifies it of the object IDs of the core data 17 and the client 55 (step 204). The replicated data 57 transmits a data request to the core data 17 (step 205). The core data 17 returns the data contents to the replicated data 57 in a data notification (step 206) and records the object ID of the replicated data 57 (step 207). The replicated data 57 generates local data from the notified information (step 208). At this point, replicated data exists on the node 50.
[0046]
Next, the replicated data 57 transmits a core movement request to the core data 17 in order to update the data (step 209). The core data 17 that receives the core movement request becomes replicated data itself (step 210) and returns a core movement response to the replicated data 57 (step 211). At this time, the object IDs of the replicated data it holds are also returned in the core movement response.
[0047]
The replicated data 57 that receives the core movement response itself becomes core data (step 212) and executes the update processing (step 213). It then returns an update response to the client 55 (step 214) and multicasts an update notification to all the replicated data 17, 27, 37, and 47 notified in the core movement response (steps 215 to 218). The core data 17, which has become replicated data, and the replicated data 27, 37, and 47 update themselves according to the contents of the message when they receive the update notification (steps 219 to 222).
[0048]
In this embodiment, the replicated data 57 makes a data request to the core data 17 before moving the core data (step 205), but it is also possible to request the data and the core movement together in a single message.
[0049]
FIG. 9 is a sequence diagram showing an operation of accessing the replicated data in the wireless network of FIG. 1.
[0050]
The replicated data 47 on the node 40 periodically transmits a communication status confirmation message to the core data 17 whose address it holds (step 231). The core data 17 that receives the message returns a communication status notification message to the replicated data 47 (step 232). The replicated data 47 then judges whether the communication status is good by whether the round-trip time from transmitting the confirmation message to receiving the notification message exceeds a certain threshold (step 233). In this embodiment, the round-trip time exceeds the threshold, so the communication status is assumed to be judged "bad".
[0051]
At this time, when the client 45 on the node 40 performs a data update operation, an update request is transmitted to the replicated data 47 (step 234). Since the communication status between the replicated data 47 and the core data 17 whose address it holds is "bad", the update request is forwarded to the core data 17 together with the update contents (step 235). The core data 17 that receives the update request updates its local data (step 236) and returns an update response (step 237). The replicated data 47 returns the result to the client 45 as an update response (step 238) and saves it as its local data (step 239).
[0052]
Next, when the replicated data 47 transmits its periodic communication status confirmation message to the core data 17 (step 240) and receives the communication status notification message (step 241), assume that the round-trip time no longer exceeds the threshold and the communication status is changed to "good" (step 242). At this time, when the client 45 on the node 40 transmits a data update request (step 243), the replicated data 47 transmits a core movement request to the core data 17 (step 244). The core data 17 that receives the core movement request becomes replicated data itself (step 245) and returns a core movement response to the replicated data 47 (step 246). At this time, the object IDs of the replicated data it holds are also returned in the core movement response.
[0053]
The replicated data 47 that receives the core movement response itself becomes core data (step 247), updates its own local data (step 248), returns an update response based on the result to the client 45 (step 249), and multicasts an update notification to all the replicated data 17, 27, and 37 notified in the core movement response (steps 250 to 252). The core data 17, which has become replicated data, and the replicated data 27 and 37 update themselves according to the contents of the message when they receive the update notification (steps 253 to 255).
[0054]
FIG. 10 is a sequence diagram showing an operation of accessing the replicated data in the wireless network of FIG. 1.
[0055]
The replicated data 47 on the node 40 extracts the node ID of the node 10 from the object ID of the core data 17 whose address it holds, transmits a route information request message to the routing agent 46 of its own node (step 261), and obtains the number of hops of the route to the node 10. The routing agent 46 returns a route information notification message to the replicated data 47 (step 262). The replicated data 47 that receives the notification judges whether the communication status is good by whether the acquired hop count exceeds a certain threshold (step 263). In this embodiment, the hop count to the node 10 exceeds the threshold, so the communication status is assumed to be judged "bad". In the network of FIG. 1 the number of hops between the node 40 and the node 10 is always 1, but a case is also conceivable in which, for example, the node 40 has moved to a place where it can communicate only with the node 50, so that the route from the node 40 to the node 10 becomes 3 hops.
[0056]
At this time, when the client 45 on the node 40 performs a data update operation, an update request is transmitted to the replicated data 47 (step 264). Since the communication status between the replicated data 47 and the core data 17 is "bad", the update request is forwarded to the core data 17 together with the update contents (step 265). The core data 17 that receives the update request updates its local data (step 266) and returns an update response (step 267). The replicated data 47 returns the result to the client 45 as an update response (step 268) and saves it as its local data (step 269).
[0057]
Next, when the replicated data 47 sends its periodic route information request message to the routing agent 46 (step 270) and the routing agent 46 returns a route information notification message (step 271), assume that the number of hops to the node 10 is now below the threshold and the communication status is changed to "good"; for example, the route from the node 40 to the node 10 has become 1 hop again because the node 40 has returned to the position shown in FIG. 1.
[0058]
At this time, when the client 45 on the node 40 transmits a data update request (step 273), the duplicated data 47 transmits a core movement request to the core data 17 (step 274). The core data 17 that has received the core move request becomes copy data itself (step 275), and returns a core move response to the copy data 47 (step 276). At this time, the object ID of the duplicate data stored by itself is also returned in the core movement response.
[0059]
The replicated data 47 that receives the core movement response itself becomes core data (step 277), updates its own local data (step 278), returns an update response based on the result to the client 45 (step 279), and multicasts an update notification to all the replicated data 17, 27, and 37 notified in the core movement response (steps 280 to 282). The core data 17, which has become replicated data, and the replicated data 27 and 37 update themselves according to the contents of the message when they receive the update notification (steps 283 to 285).
[0060]
FIG. 11 and FIG. 12 are sequence diagrams showing an access operation to replicated data in the wireless network of FIG. 1.
[0061]
The replicated data 57 on the node 50 periodically transmits a communication status confirmation message to the core data 17 whose address it holds (step 301). The core data 17 that receives the message returns a communication status notification message to the replicated data 57 (step 302), and the replicated data 57 stores the round-trip time from transmission of the confirmation to reception of the notification. Next, the replicated data 57 extracts the node ID of the node 10 from the object ID of the core data 17, transmits a route information request message to the routing agent 56 of its own node (step 303), and obtains the number of hops of the route to the node 10. The routing agent 56 returns a route information notification message to the replicated data 57 (step 304), which stores the acquired hop count. The communication status with the core data 17 is then judged by whether the stored round-trip time exceeds a certain threshold and by whether the stored round-trip time divided by the stored hop count exceeds another threshold (step 305). In this embodiment, the stored round-trip time exceeds the threshold, so the communication status is assumed to be judged "bad".
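FIGS. 9 to 11 use three variants of the same test: round-trip time alone, hop count alone, and, as here, round-trip time combined with round-trip time per hop. A minimal sketch of the combined decision; the threshold values are illustrative assumptions, since the patent does not specify any:

```python
# Sketch of the communication status decision (step 305). The threshold
# values are assumed for illustration only.

RTT_THRESHOLD_MS = 500.0          # limit on raw round-trip time (assumed)
RTT_PER_HOP_THRESHOLD_MS = 200.0  # limit on round-trip time per hop (assumed)

def communication_status(rtt_ms: float, hops: int) -> str:
    """Return 'good' or 'bad' for the path to the core data's node."""
    if rtt_ms > RTT_THRESHOLD_MS:
        return "bad"
    if hops > 0 and rtt_ms / hops > RTT_PER_HOP_THRESHOLD_MS:
        return "bad"
    return "good"
```

A replica then branches on the result: "good" means it may pull the core to itself and update locally; "bad" means it forwards the update request to the core instead (steps 306 to 311).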
[0062]
Assume that the client 55 on the node 50 transmits a data update request to the replicated data 57 (step 306). Since the communication status of the replicated data 57 is not "good", an update request is first transmitted to the core data 17 (step 307). The core data 17 that receives the update request updates its local data (step 308) and returns an update response (step 309). The replicated data 57 that receives the update response returns an update response to the client 55 (step 310) and updates its own local data (step 311).
[0063]
Next, the replicated data 57 transmits a sequential movement request to the core data 17 (step 312). The core data 17 that receives the request determines the node ID of the node 50 from the object ID of the requesting replicated data 57 and transmits a route information request to the routing agent 16 of its own node (step 313), asking for the next node on the route to the node 50. The routing agent 16 consults its route information and returns a route information notification (step 314) indicating that the next node is the node 20. The core data 17 that receives the notification then transmits a movement request to the replicated data 27 on the node 20 (step 315). If no replicated data exists on the node 20, the data manager 23 on the node 20 is asked to generate it.
[0064]
The replicated data 27 that receives the sequential movement request transmits a core movement request to the core data 17 (step 316). The core data 17 that receives the core movement request becomes replicated data itself (step 317) and transmits a core movement response to the replicated data 27 (step 318); the response contains the object IDs of the replicated data held by the core data 17. The replicated data 27 that receives the core movement response itself becomes core data (step 319) and multicasts an update notification to all the replicated data 17, 37, 47, and 57 notified in the response (steps 320 to 323), announcing that the core data has moved. Each of the replicated data (including the former core data 17) that receives the update notification updates its stored object ID of the core data (steps 324 to 327). In this way, the core data moves one step, onto the node 20. Note that this movement takes place after the update request from the client 55 has been processed, so the processing time of that update request is not affected.
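The sequential movement of steps 312 to 327 moves the core one hop at a time along the route toward the update source, instead of in one long-distance jump. A minimal sketch of the core-side step; the routing-agent query and the other helper names are assumptions:

```python
# Sketch of one sequential-movement step (FIG. 11, steps 312-319).
# node_of() extracts the node ID embedded in an object ID (see the data
# manager sketch); next_hop(), replica_on(), and send() are assumed helpers.

def core_handle_sequential_move(core, requester_oid, routing_agent, send):
    target_node = node_of(requester_oid)             # node of update source
    next_node = routing_agent.next_hop(target_node)  # steps 313-314
    replica_oid = core.replica_on(next_node)         # assumed lookup
    if replica_oid is None:
        # No replica on the next-hop node: ask its data manager for one.
        replica_oid = send(next_node, {"type": "generate_replica"})
    # Step 315: the next-hop replica then performs the same core movement
    # handover as in FIG. 6, and the update notification (steps 320-323)
    # tells every replica the core's new address.
    send(replica_oid, {"type": "sequential_move"})
```

Repeating this each time the replica forwards an update gradually brings the core to the node where updates actually occur, while each individual move crosses only one hop, which keeps the chance of losing the core movement response small.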
[0065]
Next, the replicated data 57 on the node 50 transmits its periodic communication status confirmation message to the core data 27 (step 331). The core data 27 that receives the message returns a communication status notification message to the replicated data 57 (step 332), and the replicated data 57 stores the round-trip time from transmission to reception. Next, the replicated data 57 extracts the node ID of the node 20 from the object ID of the core data 27 and transmits a route information request message to the routing agent 56 of its own node (step 333) to obtain the number of hops of the route to the node 20. The routing agent 56 returns a route information notification message to the replicated data 57 (step 334), which stores the acquired hop count. In this embodiment, the stored round-trip time does not exceed the threshold, and the round-trip time divided by the stored hop count does not exceed the other threshold, so the communication status is assumed to be judged "good" (step 335).
[0066]
At this time, assume that the client 55 on the node 50 transmits an update request to the replicated data 57 again (step 336). Since the communication status is now good, the replicated data 57 transmits a core movement request to the core data 27 (step 337). The core data 27 that receives the core movement request becomes replicated data itself (step 338) and returns a core movement response to the replicated data 57 (step 339). The replicated data 57 that receives the core movement response itself becomes core data (step 340), updates its local data (step 341), returns an update response to the client 55 (step 342), and multicasts an update notification to all the replicated data 17, 27, 37, and 47 notified in the core movement response (steps 343 to 346), announcing the updated data contents and the fact that the core data has moved. Each of the replicated data 17, 27, 37, and 47 that receives the update notification updates its stored object ID of the core data and its local data (steps 347 to 350).
[0067]
In the embodiment shown in FIG. 13 and subsequent figures, the execution request (steps 135 to 137) in the data generation sequence of FIG. 4 additionally carries the validity period information of the data (the period during which the data is guaranteed to be the latest), and the replicated data 27, 37, and 47 that receive the execution request store this validity period information. In this case, the core data 17 calculates and stores (current time + a predetermined time) as the validity period in step 123 of FIG. 4.
[0068]
FIG. 13 shows the sequence of operations when data read and update requests occur at the node 10 in the wireless network of FIG. 1.
[0069]
When the client 15 on the node 10 reads data, a reference request is transmitted to the core data 17 (step 351). The core data 17 reads local data (step 352), and returns the result to the client 15 as a reference response (step 353).
[0070]
When the client 15 on the node 10 updates data, an update request is transmitted to the core data 17 (step 354). The core data 17 multicasts an update request to all the replicated data 27, 37, and 47 whose addresses it holds (steps 355 to 357). The replicated data 27, 37, and 47 that receive the update request each set a timer for receipt of the update execution request and return a confirmation response (steps 358 to 360). When the core data 17 has received confirmation responses from all the replicated data 27, 37, and 47, it updates its own local data and stores (current time + a predetermined time) as the new validity period information (step 361), returns an update response to the client 15 (step 362), and multicasts an execution request to all the replicated data 27, 37, and 47, notifying them of the newly stored validity period information (steps 363 to 365). Each of the replicated data 27, 37, and 47 that receives the execution request cancels the timer it set, executes the update on its local data, and stores the validity period information (steps 366 to 368).
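Steps 354 to 368 form a two-phase exchange: the core multicasts the update, each replica arms a timer and confirms, and only after all confirmations does the core apply the update, set a new validity period, and multicast the execution request. A condensed sketch; the message names, helper functions, and the validity length are assumptions:

```python
# Sketch of the two-phase update with validity periods (FIG. 13,
# steps 354-368). Helper names and the validity length are assumptions.
import time

VALIDITY_SECONDS = 60.0   # "predetermined time"; no value in the patent

def apply_update(data, update):            # placeholder update semantics
    return update

def core_update(core, update, multicast, wait_all_confirms):
    multicast(core.replicas, {"type": "update_request", "update": update})
    if not wait_all_confirms(core.replicas):   # phase 1: steps 358-360
        return False                           # failure path: see FIG. 17
    core.local_data = apply_update(core.local_data, update)   # step 361
    core.valid_until = time.time() + VALIDITY_SECONDS
    multicast(core.replicas, {"type": "execute_request",      # steps 363-365
                              "update": update,
                              "valid_until": core.valid_until})
    return True

def replica_on_update_request(replica, msg, start_timer, send):
    # Steps 358-360: arm a timer; if no execution request ever arrives,
    # the data is discarded (FIG. 17, steps 462-463), so a half-finished
    # update can never survive as stale state.
    replica.pending = msg["update"]
    replica.timer = start_timer(replica.discard_data)
    send(replica.core_oid, {"type": "confirm"})

def replica_on_execute_request(replica, msg):
    replica.timer.cancel()                                     # steps 366-368
    replica.local_data = apply_update(replica.local_data, msg["update"])
    replica.valid_until = msg["valid_until"]
```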
[0071]
Thereafter, while no update request occurs for the data, the core data 17 multicasts a data extension request to the replicated data 27, 37, and 47 before the validity period of the data expires (steps 369 to 371). The replicated data 27, 37, and 47 that accept the extension return extension responses (steps 372, 373, and 374). By repeating this, the validity of the data is maintained.
[0072]
FIG. 14 shows the operation sequence when a data read request occurs at the node 40 in the wireless network of FIG. 1.
[0073]
When the client 45 on the node 40 reads data, a reference request is transmitted to the replicated data 47 (step 381). If the data is within its validity period, the replicated data 47 reads its local data (step 382), whether the request is a normal read or a latest-data read, and returns the result to the client 45 as a reference response (step 383).
[0074]
On the other hand, when the client 45 transmits a reference request to the replicated data 47 (step 384) and the validity period of the data held by the replicated data 47 has passed, a reference request for the latest data is transmitted to the core data 17 (step 385). The core data 17 that receives the reference request reads its local data (step 386) and returns to the replicated data 47 a reference response including the validity period information of the data, the same information that is notified to the other replicated data (step 387).
[0075]
The replicated data 47 that receives the reference response updates its local data and validity period (step 388) and returns the result to the client 45 as a reference response (step 389). It also returns a confirmation response to the core data 17 (step 390). The core data 17 that receives the confirmation response stores the object ID of the replicated data 47 (step 391) and thereafter transmits periodic validity period extension notifications to the replicated data 47 as well.
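With validity periods, the replica's read path reduces to a freshness check. A minimal sketch covering FIG. 14; the field and message names are assumptions:

```python
# Sketch of the read path under validity periods (FIG. 14).
# send() is assumed to deliver a message and return the reply.
import time

def replica_read(replica, send):
    if time.time() < replica.valid_until:
        # Steps 381-383: within the validity period, even a latest-data
        # read is served purely locally.
        return replica.local_data
    # Steps 384-390: validity expired; fetch the latest data and a fresh
    # validity period from the core, cache both, and confirm so the core
    # re-registers this replica for periodic extension notifications.
    reply = send(replica.core_oid, {"type": "latest_reference"})
    replica.local_data = reply["data"]
    replica.valid_until = reply["valid_until"]
    send(replica.core_oid, {"type": "confirm"})                # step 390
    return replica.local_data
```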
[0076]
FIG. 15 shows the operation sequence when a data update request occurs at the node 40 in the wireless network of FIG. 1.
[0077]
When the client 45 on the node 40 updates data, an update request is transmitted to the replicated data 47 (step 401). The replicated data 47 transmits a core movement request to the core data 17 (step 402).
[0078]
The core data 17 that receives the core movement request becomes replicated data itself (step 403) and returns a core movement response to the replicated data 47 (step 404); the response contains the object IDs of the replicated data held by the core data 17. The replicated data 47 that receives the core movement response itself becomes core data (step 405) and multicasts an update request to the replicated data included in the core movement response and to the previous core data, that is, to the replicated data 17, 27, and 37 (steps 406 to 408).
[0079]
The replicated data 17, 27, and 37 that receive the update request each set a timer for receipt of the update execution request and return a confirmation response (steps 409 to 411). When the core data 47 has received confirmation responses from all the replicated data 17, 27, and 37, it updates its own local data (step 412), returns an update response to the client 45 (step 413), and multicasts an execution request to the replicated data 17, 27, and 37 (steps 414 to 416). Each of the replicated data 17, 27, and 37 that receives the execution request executes the update on its local data (steps 417 to 419).
[0080]
FIG. 16 shows the operation sequence when a data update request occurs at the node 50, where no replicated data exists, in the wireless network of FIG. 1. A data reference request at the node 50 follows the same sequence as in FIG. 7.
[0081]
Since the client 55 on the node 50 does not know the object ID of the replicated data or the core data for the data, it transmits an update request for the data to the data manager 53 (step 421). The data manager 53 broadcasts a core search request to find the core data for the data (step 422). In this embodiment, only the data manager 23 receives the core search request. If the data manager 23 did not know the object ID of the core data 17, it would rebroadcast the core search request; in this embodiment it does know it, so it returns the object ID of the core data 17 to the data manager 53 in a core notification (step 423).
[0082]
The data manager 53 that receives the core notification message generates the replicated data 57 and notifies it of the object IDs of the core data 17 and the client 55 (step 424). The replicated data 57 transmits a data request to the core data 17 (step 425). The core data 17 returns the data contents to the replicated data 57 in a data notification (step 426) and records the object ID of the replicated data 57 (step 427). The replicated data 57 generates local data from the notified information (step 428). At this point, replicated data exists on the node 50.
[0083]
Next, the replicated data 57 transmits a core movement request to the core data 17 in order to update the data (step 429). The core data 17 that receives the core movement request becomes replicated data itself (step 430) and returns a core movement response to the replicated data 57 (step 431). At this time, the object IDs of the replicated data it holds are also returned in the core movement response.
[0084]
The replicated data 57 that receives the core movement response itself becomes core data (step 432) and transmits an update request to the replicated data included in the core movement response and to the previous core data, that is, to the replicated data 17, 27, 37, and 47 (steps 433 to 436).
[0085]
The replicated data 17, 27, 37, and 47 that receive the update request each set a timer for receipt of the update execution request and return a confirmation response (steps 437 to 440). When the core data 57 has received confirmation responses from all the replicated data 17, 27, 37, and 47, it updates its own local data (step 441), returns an update response to the client 55 (step 442), and transmits an execution request to all the replicated data (steps 443 to 446). Each of the replicated data 17, 27, 37, and 47 that receives the execution request executes the update on its local data (steps 447 to 450).
[0086]
In this embodiment, the replicated data 57 makes a data request to the core data 17 before moving it (step 425), but it is also possible to request the data and the core movement together in a single message.
[0087]
In the present invention, data consistency can be maintained even if a failure such as the loss of a message occurs in any of the sequences described so far. Examples of this are described below with reference to the drawings.
[0088]
FIG. 17 shows the sequence when a failure occurs in some of the message communication of the operation shown in FIG. 13.
[0089]
Receiving an update request from the client 15 on the node 10 (step 451), the core data 17 multicasts an update request to all the replicated data 27, 37, and 47 whose addresses it holds (steps 452, 453, and 454). Assume, however, that this update request did not reach the replicated data 47 because of a failure. The replicated data 27 and 37 that receive the update request set timers for receipt of the update execution request and return confirmation responses (steps 455 and 456), but the replicated data 47 naturally transmits no confirmation response.
[0090]
At the core data 17, the timer waiting for the confirmation response from the replicated data 47 times out (step 457), so the object ID of the replicated data 47 is deleted from the managed list of replicated data (step 458), the failure of the update is reported to the client 15 in an update response (step 459), and an update cancellation is transmitted to the replicated data 27 and 37 that returned confirmation responses (steps 460 and 461), notifying them that the update request is cancelled. Assume here that the update cancellation does not reach the replicated data 37 because of a failure.
[0091]
The replicated data 27 that receives the update cancellation does not execute the update and serves subsequent reference requests. The replicated data 37, on the other hand, discards its data (step 463) because its timer waiting for the execution request from the core data 17 times out (step 462). At this point every node either holds the pre-update information or holds no data at all, so data consistency is maintained. The same processing as for the replicated data 37 also occurs if the replicated data 47 receives the update request but the confirmation response it transmits fails to reach the core data 17.
[0092]
Thereafter, if an update request arrives (step 464) before the validity period of the data at the core data 17 expires (step 470), an attempt to update would fail, so the failure is immediately reported in an update response (step 465).
[0093]
If more time elapses and the validity period of the replicated data 27 or 47 passes (steps 466 and 467), the data becomes invalid and is discarded (steps 468 and 469). When a reference request or an update request from a client arrives at such replicated data after the data has been discarded, processing is requested of the core data, as in the case of access from a node without a replica.
[0094]
On the other hand, when the validity period expires at the core data (step 470), the core data waits for a certain time to absorb clock differences between the nodes and then notifies all the replicated data 27 and 37 it holds of the latest data and the newly set validity period (steps 471 and 472). The replicated data 27 and 37 that receive the data notification generate new data (steps 473 and 474) and return confirmation responses (steps 475 and 476). If the core data 17 does not receive a confirmation response within a certain time, the corresponding replicated data is deleted from the held list. The replicated data 47, for its part, disappears if no notification arrives from the core data within a certain time after its data was discarded (step 477). Thereafter, requests from clients can be processed normally.
[0095]
If the core movement response message sent when the core moves (for example, step 404 in FIG. 15) is lost, the core data ceases to exist on the network and the data can no longer be updated; since no data is updated, no data inconsistency arises, and a plurality of core data can never exist. Furthermore, since the core data is moved only when the communication status with it is good, the possibility that the core movement response message is lost is small, and moving the core data sequentially, hop by hop, reduces that possibility further.
[0096]
The node described above need not be realized by dedicated hardware; a program realizing its functions may be recorded on a computer-readable recording medium and read and executed by a computer system. The computer-readable recording medium refers to a recording medium such as a floppy disk, a magneto-optical disk, or a CD-ROM, or a storage device such as a hard disk built into the computer system. It further includes media that hold the program dynamically for a short time (transmission media or transmission waves), as when the program is transmitted via the Internet, and media that hold the program for a certain period, such as the volatile memory inside the computer system acting as the server in that case.
[0097]
[Effects of the invention]
As described above, according to the present invention, data consistency can be maintained across the entire network by concentrating the update right, for each data item, in a single core data that can exist at an arbitrary node.
[0098]
In the invention according to claim 1, since the core data is moved to the node where the update request occurred and the latest data is then read from the core data, no data inconsistency arises after an update. Because data reference requests are likely to occur at that node after the update, no inter-node communication is needed for them, which makes the invention efficient. In particular, in a network whose topology is likely to change unpredictably, the ability to reference data without inter-node communication increases data availability.
[0099]
Further, in the inventions according to claims 2 to 6, the access status to the core data is judged using the round-trip time of communication to the core data, the number of hops, and values calculated from them; if the access status is bad when a data update request occurs, the core data is not moved and the update is instead requested of the core data. This reduces the possibility of failures such as the core data being lost in transit and subsequent updates becoming impossible.
[0100]
On the other hand, when the path between the core data and the update request source has many hops, moving the core across all of them at once makes message loss likely, with a high risk that the core data disappears and later updates become impossible. In the invention according to claim 3, this risk is kept low by moving the core data sequentially, one step toward the update source each time the data is changed.
[0101]
According to the seventh aspect of the present invention, because a valid period is attached to the data, a reference request at a node where duplicate data exists can be executed without inter-node communication. Data consistency remains strict even in the event of unpredictable network disconnections, because replicated data whose valid period cannot be renewed is simply discarded.
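The read path under the valid-period mechanism might look like the following sketch; ReplicaStore and its parameters are illustrative names, and the fetch_from_core callback stands in for the latest-information reference request sent to the node where the core data exists.

```python
import time

class ReplicaStore:
    """Replicated data held at one node, with its valid period."""
    def __init__(self, data, valid_until):
        self.data = data
        self.valid_until = valid_until

    def read(self, want_latest=False, fetch_from_core=None):
        # Normal reference: answer locally while the valid period holds,
        # with no inter-node communication.
        if not want_latest and time.time() < self.valid_until:
            return self.data
        # Latest-information reference (or expired data): ask the core.
        return fetch_from_core()

store = ReplicaStore("cached", valid_until=time.time() + 30)
print(store.read())                                   # served locally
print(store.read(want_latest=True,
                 fetch_from_core=lambda: "fresh-from-core"))
```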
[0102]
Furthermore, as shown in the embodiments, the present invention maintains data consistency even if a message related to a data update is lost. Even if communication between nodes is interrupted partway through, normal updates and references resume after a certain time with consistency preserved, so the decrease in data availability is suppressed.
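Much of this loss tolerance comes from the notify/acknowledge/execute sequence of claim 6, which behaves like a small two-phase protocol. A minimal sketch follows; ReplicaHandle, propagate_update, and the reachable flag are assumptions introduced here, with an unreachable handle modeling a node whose acknowledgment never arrives.

```python
class ReplicaHandle:
    """Proxy for one node where replicated data exists (illustrative)."""
    def __init__(self, reachable=True):
        self.reachable = reachable
        self.pending = None
        self.data = None
        self.valid_until = 0.0

    def notify_update(self, update):
        if not self.reachable:
            raise ConnectionError("no acknowledgment")
        self.pending = update        # store the contents, then acknowledge

    def execute_update(self, valid_until):
        self.data, self.valid_until = self.pending, valid_until

def propagate_update(replicas, update, new_valid_until):
    # Phase 1: notify the update contents and collect acknowledgments.
    acked = {}
    for name, handle in replicas.items():
        try:
            handle.notify_update(update)
            acked[name] = handle
        except ConnectionError:
            pass
    if len(acked) == len(replicas):
        # Phase 2a: every replica acknowledged, so send the execution
        # request together with the new valid period information.
        for handle in acked.values():
            handle.execute_update(new_valid_until)
        return True
    # Phase 2b: delete non-acknowledging replicas from the held list and
    # report the failure of the update to the remaining nodes.
    replicas.clear()
    replicas.update(acked)
    return False

nodes = {"node20": ReplicaHandle(), "node40": ReplicaHandle(reachable=False)}
print(propagate_update(nodes, "v3", 12345.0))   # False: node40 dropped
```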
[0103]
The present invention is therefore effective when it is desired to reduce the cost of inter-node communication and to suppress the loss of data availability while robustly maintaining consistency between replicas of data distributed over a network whose topology is likely to change unpredictably.
[Brief description of the drawings]
FIG. 1 is a configuration diagram of a wireless network according to an embodiment of the present invention.
FIG. 2 is an explanatory diagram of a data realization method in the embodiment of FIG. 1.
FIG. 3 is a flowchart showing the operation of the data manager.
FIG. 4 is a sequence diagram illustrating an example of a data generation operation.
FIG. 5 is a sequence diagram illustrating an example of an access operation to core data.
FIG. 6 is a sequence diagram illustrating an example of an access operation to replicated data.
FIG. 7 is a sequence diagram illustrating an example of a reference operation in a node where no copy exists.
FIG. 8 is a sequence diagram showing an example of an update operation in a node where no copy exists.
FIG. 9 is a sequence diagram illustrating an example of an access operation to replicated data.
FIG. 10 is a sequence diagram illustrating an example of an operation of accessing data.
FIG. 11 is a sequence diagram illustrating an example of a data update operation when the communication state is not good.
FIG. 12 is a sequence diagram illustrating an example of a data update operation when the communication state is good.
FIG. 13 is a sequence diagram illustrating an example of an operation of accessing core data.
FIG. 14 is a sequence diagram illustrating an example of a data read operation with replicated data.
FIG. 15 is a sequence diagram illustrating an example of a data update operation with replicated data.
FIG. 16 is a sequence diagram illustrating an example of a data update operation in a node where no copy exists.
FIG. 17 is a sequence diagram illustrating an example of an operation when a message is lost.
FIG. 18 is a sequence diagram showing an example of a data update operation in the conventional method.
[Explanation of symbols]
10, 20, 30, 40, 50, 60 nodes
13, 23, 33, 43, 53, 61 Data manager
15, 45, 55 clients
16, 46, 56 Routing agent
17 Core data
27, 37, 47, 57 Replicated data
62 Data correspondence table
63 Created data table
64 Object data
65 Local data
101-120, 121-140, 141-146, 151-172 steps
181-190, 201-222, 231-255, 261-285 steps
301-327, 331-350, 351-374, 381-391 steps
401-419, 421-450, 451-477, 501-527 steps

Claims (13)

  1. In a wireless network configured by a plurality of nodes having a wireless interface, in which data identified by a unique identifier and copies of the data are distributed over the plurality of nodes and a user accesses the data by using the identifier, the access being directed to the data or to any of its replicated data, a replicated data management method wherein:
    at the time of data generation, core data capable of executing update processing, that is, correction or deletion of the data, is generated and placed at the data generation node, and replicated data capable of executing only reference processing of the data is generated and placed at the peripheral nodes that can communicate directly over a wireless link with the node where the core data exists;
    the node where the core data exists holds the addresses of all nodes where replicated data exists and, before moving the core data to another node, notifies the address of the destination node to all nodes where replicated data was known to exist before the move;
    each node where replicated data exists holds the address of the node where the core data exists and manages the communication status between the node where the core data exists and its own node;
    when an update request or a reference request to the data occurs at the node where the core data exists, that node executes the update processing or reference processing on the core data;
    when an update request to the data occurs at a node where replicated data exists, that node, if the communication status is good, moves the core data to its own node in cooperation with the node where the core data exists and then executes the update processing on the core data, and, if the communication status is not good, forwards the update request to the node where the core data exists, whereupon the node where the core data exists executes the update processing on the core data and transmits the result to the node where the update request occurred;
    when a reference request for the latest data occurs at a node where replicated data exists, that node forwards the latest-information reference request to the node where the core data exists, and the node where the core data exists, on receiving the latest-information reference request, transmits the latest information of the core data to the node where the request occurred;
    when a normal reference request to the data occurs at a node where replicated data exists, that node transmits the data information held in its own node to the request source; and
    when a reference request to the data occurs at a node where neither the core data nor replicated data exists, that node generates and places replicated data in its own node in cooperation with the node where the core data exists, and then transmits the data information held in its own node to the request source.
  2. The replicated data management method according to claim 1, wherein, when an update request to the data occurs at a node where replicated data exists, that node, if the communication status is not good, forwards the update request to the node where the core data exists and transmits to that node a sequential movement request that requests transfer of the core data to the next node on the communication path between the node where the core data exists and its own node; and
    the node where the core data exists, on receiving the sequential movement request, moves the core data to the next node in cooperation with the routing agent of its own node.
  3. The replicated data management method according to claim 1 or 2, wherein each node where replicated data exists periodically transmits to the node where the core data exists a packet for grasping the communication status, the packet containing a time stamp indicating its transmission time;
    the node where the core data exists, on receiving the packet for grasping the communication status, returns a response packet containing the time stamp of that packet to the node where the replicated data exists; and
    the node where the replicated data exists, on receiving the response packet, compares the time stamp in the packet with the current time to measure the round trip time of communication to the node where the core data exists, and manages the communication status by judging it to be good when the round trip time is smaller than a certain threshold.
  4. The replicated data management method according to claim 1 or 2, wherein each node where replicated data exists ascertains, in cooperation with the routing agent of its own node, the number of hops of the route to the node where the core data exists, and manages the communication status by judging it to be good when the number of hops is smaller than a certain threshold.
  5. The replicated data management method according to claim 1 or 2, wherein each node where replicated data exists periodically transmits to the node where the core data exists a packet for grasping the communication status, the packet containing a time stamp indicating its transmission time;
    the node where the core data exists, on receiving the packet for grasping the communication status, returns a response packet containing the time stamp of that packet to the node where the replicated data exists; and
    the node where the replicated data exists, on receiving the response packet, compares the time stamp in the packet with the current time to measure the round trip time of communication to the node where the core data exists, ascertains, in cooperation with the routing agent of its own node, the number of hops of the route to that node, and manages the communication status by judging it to be good when the round trip time is smaller than a threshold and the value obtained by dividing the round trip time by the number of hops is smaller than another threshold.
  6. The replicated data management method according to any one of claims 1 to 5, wherein the node where the core data exists and each node where replicated data exists hold valid period information, which indicates the period during which the data is guaranteed to be the latest;
    when an update request to the data occurs at the node where the core data exists, that node notifies the update contents to all nodes where replicated data exists;
    each node where replicated data exists, on being notified of the update contents by the node where the core data exists, stores the update contents and notifies its acceptance to the node where the core data exists;
    if the node where the core data exists receives the acceptance notification from all nodes where replicated data exists within a certain time after notifying the update contents, it notifies all nodes where replicated data exists of an update execution request together with new valid period information;
    if there is a node where replicated data exists from which the acceptance notification cannot be received within the certain time, the node where the core data exists deletes from its own node the addresses of the nodes whose acceptance notification could not be received, and notifies the failure of the update to all nodes where replicated data exists that transmitted the acceptance notification;
    a node where replicated data exists, having been notified of the update contents by the node where the core data exists, updates its data with the stored update contents and stores the valid period information notified in the execution request if it receives the update execution request within a certain time, and deletes its data if the update execution request does not arrive within the certain time; and
    when a reference request for the latest data occurs at a node where replicated data exists, that node references its own data if the request occurrence time is within the period indicated by the valid period information of the data held in its own node, and forwards the latest-information reference request to the node where the core data exists if the request occurrence time has passed that period.
  7. A node that, together with other nodes having a wireless interface, constitutes a wireless network in which data identified by a unique identifier and copies of the data are distributed over a plurality of nodes and a user accesses the data by using the identifier, the access being directed to the data or to any of its replicated data, the node being configured to:
    when the data is generated at its own node, place in its own node core data capable of executing update processing, that is, correction or deletion of the data, and transmit to the peripheral nodes that can communicate directly with its own node over a wireless link a replication generation request that requests the generation and placement of replicated data capable of executing only reference processing of the data;
    when the replication generation request is received from a node where core data exists, generate and place in its own node the replicated data of that core data;
    when core data exists in its own node, hold the addresses of all nodes where replicated data exists and, when moving the core data to another node, notify the address of the destination node to all nodes where replicated data was known to exist before the move;
    when replicated data exists in its own node, hold the address of the node where the core data exists and manage the communication status between the node where the core data exists and its own node;
    when core data exists in its own node and an update request or reference request to the data occurs at its own node, execute the update processing or reference processing on the core data;
    when replicated data exists in its own node and an update request to the data occurs at its own node, if the communication status is good, move the core data to its own node in cooperation with the node where the core data exists and then execute the update processing on the core data, and, if the communication status is not good, forward the update request to the node where the core data exists;
    when core data exists in its own node and the update request is received from a node where replicated data exists, execute the update processing on the core data and transmit the result to the node where the update request occurred;
    when replicated data exists in its own node and a reference request for the latest data occurs at its own node, forward the latest-information reference request to the node where the core data exists;
    when core data exists in its own node and the latest-information reference request is received from a node where replicated data exists, transmit the latest information of the core data to the node where the request occurred;
    when replicated data exists in its own node and a normal reference request to the data occurs at its own node, transmit the data information held in its own node to the request source; and
    when neither core data nor replicated data exists in its own node and a reference request to the data occurs at its own node, generate and place replicated data in its own node in cooperation with the node where the core data exists, and then transmit the data information held in its own node to the request source.
  8. The node according to claim 7, further configured, when replicated data exists in its own node and an update request to the data occurs at its own node while the communication status is not good, to forward the update request to the node where the core data exists and to transmit to that node a sequential movement request that requests transfer of the core data to the next node on the communication path between the node where the core data exists and its own node; and,
    when core data exists in its own node and the sequential movement request is received from a node where replicated data exists, to move the core data to the next node in cooperation with the routing agent of its own node.
  9. The node according to claim 7 or 8, further configured, when replicated data exists in its own node, to transmit to the node where the core data exists a packet for grasping the communication status, the packet containing a time stamp indicating its transmission time;
    when core data exists in its own node and the packet for grasping the communication status is received from a node where replicated data exists, to return a response packet containing the time stamp of that packet to the node where the replicated data of the packet's source exists; and,
    when replicated data exists in its own node and the response packet is received from the node where the core data exists, to measure the round trip time of communication to the node where the core data exists from the time stamp in the packet and the current time, and to judge the communication status to be good if the round trip time does not exceed a threshold.
  10. The node according to claim 7 or 8, further configured, when replicated data exists in its own node, to ascertain, in cooperation with the routing agent of its own node, the number of hops of the route to the node where the core data exists, and to judge the communication status to be good if the number of hops is smaller than a threshold.
  11. The node according to claim 7 or 8, further configured, when replicated data exists in its own node, to periodically transmit to the node where the core data exists a packet for grasping the communication status, the packet containing a time stamp indicating its transmission time;
    when core data exists in its own node and the packet for grasping the communication status is received from a node where replicated data exists, to return a response packet containing the time stamp of that packet to the node where the replicated data of the packet's source exists; and,
    when replicated data exists in its own node and the response packet is received from the node where the core data exists, to measure the round trip time of communication to the node where the core data exists from the time stamp in the packet and the current time, to ascertain, in cooperation with the routing agent of its own node, the number of hops of the route to the node where the core data exists, and to judge the communication status to be good if the round trip time is smaller than a threshold and the value obtained by dividing the round trip time by the number of hops is smaller than another threshold.
  12. An information processing program for causing a computer to execute the processing of the node according to any one of claims 7 to 11 .
  13. A recording medium on which is recorded an information processing program for causing a computer to execute the processing of the node according to any one of claims 7 to 11.
JP2002055757A 2002-03-01 2002-03-01 Replicated data management method, node, program, and recording medium Expired - Fee Related JP4036661B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2002055757A JP4036661B2 (en) 2002-03-01 2002-03-01 Replicated data management method, node, program, and recording medium

Publications (2)

Publication Number Publication Date
JP2003256256A JP2003256256A (en) 2003-09-10
JP4036661B2 true JP4036661B2 (en) 2008-01-23

Family

ID=28666521

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2002055757A Expired - Fee Related JP4036661B2 (en) 2002-03-01 2002-03-01 Replicated data management method, node, program, and recording medium

Country Status (1)

Country Link
JP (1) JP4036661B2 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4568576B2 (en) * 2004-10-26 2010-10-27 株式会社デンソーアイティーラボラトリ Data sharing system, communication terminal, and data sharing method
US8516137B2 (en) * 2009-11-16 2013-08-20 Microsoft Corporation Managing virtual hard drives as blobs
EP2548135A2 (en) * 2010-03-18 2013-01-23 NUODB Inc. Database management system
JP5649457B2 (en) * 2011-01-05 2015-01-07 株式会社東芝 Database system and its clients
US9501363B1 (en) 2013-03-15 2016-11-22 Nuodb, Inc. Distributed database management system with node failure detection
US10037348B2 (en) 2013-04-08 2018-07-31 Nuodb, Inc. Database management system with database hibernation and bursting
US10067969B2 (en) 2015-05-29 2018-09-04 Nuodb, Inc. Table partitioning within distributed database systems
US10180954B2 (en) 2015-05-29 2019-01-15 Nuodb, Inc. Disconnected operation within distributed database systems

Also Published As

Publication number Publication date
JP2003256256A (en) 2003-09-10

Legal Events

Date Code Title Description
RD04 Notification of resignation of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7424

Effective date: 20040121

RD03 Notification of appointment of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7423

Effective date: 20040121

A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20040121

RD04 Notification of resignation of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7424

Effective date: 20050610

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20070614

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20070627

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20070821

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20071024

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20071030

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20101109

Year of fee payment: 3

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20111109

Year of fee payment: 4

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20121109

Year of fee payment: 5

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20131109

Year of fee payment: 6

LAPS Cancellation because of no payment of annual fees