GB2522200A - Multi cluster synchronization method within an ad-hoc network - Google Patents

Multi cluster synchronization method within an ad-hoc network

Info

Publication number
GB2522200A
Authority
GB
United Kingdom
Prior art keywords
node
cluster
synchronization
nodes
clusters
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
GB1400667.0A
Other versions
GB201400667D0 (en)
GB2522200B (en)
Inventor
Arnaud Closset
Pascal Lagrange
Pascal Viger
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc
Priority to GB1400667.0A: GB2522200B (en)
Publication of GB201400667D0 (en)
Publication of GB2522200A (en)
Application granted
Publication of GB2522200B (en)
Current legal status: Active
Anticipated expiration: (date not listed)


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04W - WIRELESS COMMUNICATION NETWORKS
    • H04W 56/00 - Synchronisation arrangements
    • H04W 56/001 - Synchronization between nodes
    • H04W 56/0015 - Synchronization between nodes; one node acting as a reference for the others

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Synchronisation In Digital Transmission Systems (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

A method of synchronizing nodes belonging to separate respective node clusters is disclosed. The invention aims to provide better synchronization in multi cluster networks. The method comprises: locally synchronizing the nodes of each node cluster according to synchronization elements regularly transmitted within said each node cluster, and upon a cluster association event, forwarding previously received synchronization elements, by nodes of said node clusters to respective neighbour nodes, receiving, by a first node of a first cluster, a synchronization element from a second node of a neighbour second cluster, computing a synchronization offset element between said first and second clusters and synchronizing said first and second nodes based at least on said synchronization offset element.

Description

Multi cluster synchronization method within an ad-hoc network

The present invention relates to synchronization of nodes of multi-cluster based ad-hoc networks.
Large-scale networks may require accurate synchronization, in particular networks of small, wireless, low-power sensors. The synchronization may need to be more accurate than for Internet applications. For example, a very precise mapping of gathered (multimedia) sensor data to the time of the corresponding events may be required in applications such as tracking and surveillance, or multi-projector rendering systems.
Ad-hoc sub-network environments generally experience node mobility as well as dynamic changes in transmission capability and quality.
In order to overcome such drawbacks, a multi clustering approach is often considered. The ad-hoc network is partitioned into clusters that may share a common characterization factor. The characterization factor may be the transmission capability (for instance, the maximum number of "hops" between two nodes), the transmission quality (for example, the guaranteed Bit Error Rate using more or less complex transmission schemes), the synchronization capability (for example, the same level of synchronization accuracy) or other capabilities.
The multi clustering approach is generally a spatial approach as far as the cluster organization is considered.
However, when a common real-time streaming or sensor application needs to be shared between several nodes within the network, it is necessary to share the same application events in association with a common time reference.
The IEEE 802.11 standard includes a master-slave protocol for clock synchronization which provides a limited accuracy due to instantaneous synchronization. In the case of instantaneous synchronization, a node computes a local clock error and adjusts its clock using this computation. This results in abrupt changes in local clock time, which can cause time discontinuity. Time discontinuity can lead to serious faults in distributed systems, such as a node missing important events (e.g., deadlines) or recording the same event several times.
Continuous clock synchronization avoids such discrepancies by spreading the correction over a finite interval. The local clock time is corrected by gradually speeding up or slowing down the clock rate. However, this approach suffers from a high run-time overhead since clocks need to be adjusted extremely regularly (up to every clock tick). As an example, RBS ("Reference Broadcast Synchronization") is a synchronization method of the receiver-to-receiver type, which is well suited for offering better accuracy in distributed environments (it is not vulnerable to delays between senders and receivers). An RBS scheme is described in the document "Un Mécanisme de Synchronisation pour les Réseaux Sans Fil Multi-Sauts" ("A Synchronization Mechanism for Multi-Hop Wireless Networks", NOTERE'07 white paper, June 2007). According to this RBS scheme, a reference node periodically sends beacon messages (serving as synchronization event pulses) to the neighbour nodes using the wireless network's physical layer broadcast.
The receiving nodes use the message's arrival time as a point of reference for comparing their clocks, and further perform clock correction based on a linear transformation and the successive set of comparisons.
However, this mechanism does not address mobility and assumes a static cluster-based topology within the ad-hoc network.
Also, regular message flooding is necessary to maintain reference clock difference monitoring between all network nodes.
The IEEE P1394.1 D3.0 Draft Standard for High Performance Serial Bus Bridges (May 2004) describes a multi-bus network synchronization scheme, and means to control application synchronization between nodes.
On an IEEE 1394 serial bus, synchronization is controlled by a cycle master node in charge of generating a synchronization packet at a 125 µs mean period (the IEEE 1394 reference clock period), from a nominal 24.576 MHz reference clock oscillator. This synchronization packet carries absolute reference time information: a seconds counter, a 125 µs cycle counter, and a 24.576 MHz tick counter.
This information is used by all nodes on the bus to refresh local reference time with master reference time.
Each time a node is plugged into or unplugged from an IEEE 1394 bus, a bus reset occurs on the bus until the bus topology is determined and monitoring is performed.
Communication between buses is performed using specific dual portal bridge nodes. These nodes are used for transferring application data between upper and lower IEEE 1394 buses.
A synchronization tree is computed to define the relative master/slave scheme between IEEE 1394 buses. Except for the bus declared as network cycle master, each bus is slave to exactly one neighbor bus, which acts as its master bus. Consequently, for each bridge, one portal is on the master bus relative to the other, slave, portal. The IEEE 1394 bridge standard requires that all IEEE 1394 reference cycle periods are kept synchronized with null phase.
Each bridge regularly computes an offset between IEEE 1394 cycle events generated by cycle master nodes on each portal. This offset is then delivered to the cycle master node on the slave bus portal, as a cycle period adjustment packet, in order to lock in phase and frequency.
Application packets crossing bridges experience a constant pre-defined delay within bridge fabric, expressed in IEEE 1394 cycle counts. This constant delay is added to all application timestamps of application packets before being issued on the output bridge portal.
A drawback of the IEEE 1394 synchronization scheme is that a synchronization tree must be established across the IEEE 1394 bridges.
Another drawback is that node mobility is not assumed.
Any plug/unplug of a node resets all applications initially running on the bus.
Also, another drawback is that the delay experienced by application data accumulates across bridge crossings, therefore requiring intermediate memory and contributing to unguaranteed end-to-end latency in case of node mobility.
Thus, there is a need for enhancing synchronization in multi cluster networks.
The invention lies within this context.
According to a first aspect of the invention there is provided a method of synchronizing nodes belonging to separate respective node clusters, the method comprising the following steps:
- locally synchronizing the nodes of each node cluster according to synchronization elements regularly transmitted within said each node cluster, and, upon a cluster association event:
- forwarding previously received synchronization elements, by nodes of said node clusters, to respective neighbour nodes belonging to neighbour clusters,
- receiving, by a first node of a first cluster, a synchronization element from a second node of a neighbour second cluster,
- computing a synchronization offset element between said first and second clusters, and
- synchronizing said first and second nodes based at least on said synchronization offset element.
A method according to the first aspect makes it possible to sustain application synchronization between nodes independently from the mobility of the nodes between clusters, and without requiring a global network synchronization scheme.
A dynamic cluster association / dissociation map may be used to optimize message flooding between nodes, whereas local clock synchronization may dynamically switch between a free-running mode and a locked mode, transparently to running applications.
For example, said synchronizing step is performed based on a reference offset table built based at least on synchronization offset elements relating to respective synchronization offsets between said first node cluster and neighbour node clusters.
According to embodiments, the method further comprises the following steps:
- receiving, by a master node of said first node cluster, synchronization offset elements relating to respective synchronization offsets between said first node cluster and neighbour node clusters,
- building said reference offset table, based at least on said offset elements received, and
- broadcasting said reference offset table to nodes of said first node cluster.
For example, the method further comprises the following steps:
- receiving, by a master node of said first node cluster, synchronization offset elements relating to respective synchronization offsets between said first node cluster and neighbour node clusters,
- updating said reference offset table, based at least on said offset elements received, and
- broadcasting said reference offset table to nodes of said first node cluster.
The nodes may be synchronized with reference to a master cluster designated among said separate respective node clusters.
The synchronizing elements may comprise at least a cluster identification of the node cluster to which belongs the node from which they originate.
For example, said synchronizing elements comprise at least a reference count associated with a reference clock period within a node cluster to which belongs the node from which they originate.
The synchronizing elements may comprise at least a local count associated with a local clock period of the node from which they originate.
For example, the cluster association event is triggered by an execution of an application involving several nodes belonging to several respective node clusters.
According to a second aspect of the invention there are provided computer programs and computer program products comprising instructions for implementing methods according to the first aspect of the invention, when loaded and executed on computer means of a programmable apparatus.
According to a third aspect of the invention, there is provided a device configured for implementing methods according to the first aspect.
According to a fourth aspect of the invention, there is provided a system comprising devices according to the third aspect.
Other features and advantages of the invention will become apparent from the following description of non-limiting exemplary embodiments, with reference to the appended drawings, in which:
- Figure 1 illustrates an ad-hoc network according to embodiments,
- Figure 2 illustrates a device according to embodiments,
- Figure 3a illustrates a synchronization timestamp format according to embodiments,
- Figure 3b illustrates a reference offset format according to embodiments,
- Figure 4 illustrates a reference offset map table for synchronizing clusters and applications according to embodiments,
- Figure 5 illustrates reference offset map table generation by a cluster head,
- Figures 6a to 6c are flowcharts of steps performed, according to embodiments, by nodes during synchronization set-up and running phases, and
- Figures 7a to 7c are flowcharts of steps performed, according to embodiments, by protocol adaptation layers connected to applications.
In what follows, methods and devices according to embodiments are described.
A network comprising nodes partitioned according to clusters is considered. A default local synchronization mode is independently managed within each cluster, relying on the regular transmission of a synchronization timestamp message by a selected master node in each cluster (the "cluster head" in what follows) to the other non-master nodes populating its cluster.
The synchronization timestamp messages may contain a cluster identifier and timestamp information representative of the time at which the message has been transmitted. For example, the time at which the message has been transmitted is defined with reference to a nominal clock period (called the "reference clock period" in what follows) used within each node of the system.
The timestamp information may concatenate an integer value containing a counter index incremented at a constant divider factor of a nominal clock oscillator, plus an offset value containing the number of nominal clock oscillator periods ("ticks" hereinafter) elapsed within the current counter index value.
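As an illustration of this concatenated format, the sketch below (in Python, with assumed names such as TICKS_PER_CYCLE and split_ticks; the patent does not prescribe any implementation language) splits a free-running oscillator tick count into the counter index and the tick offset:

    # Illustrative sketch only: names and the divider value are assumptions.
    TICKS_PER_CYCLE = 3072  # e.g. a 24.576 MHz oscillator divided to 125 us

    def split_ticks(total_ticks):
        """Split a free-running tick count into (counter index, tick offset)."""
        counter_index = total_ticks // TICKS_PER_CYCLE
        tick_offset = total_ticks % TICKS_PER_CYCLE
        return counter_index, tick_offset

    # Example: 10000 oscillator ticks -> counter index 3, tick offset 784.
    assert split_ticks(10000) == (3, 784)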
When some nodes sharing common application data and events need to be synchronized, a cluster association may be triggered between corresponding clusters, in order to start multi-cluster synchronization.
Some nodes within each cluster may act as gateways, by having capability to exchange messages between clusters.
Among the concerned pool of clusters partitioning the network, one is designated as "cluster master", the other ones as "non-cluster masters".
The nodes within the clusters start forwarding previous synchronization timestamp messages to their neighborhood. These nodes may be gateway nodes as well as ordinary nodes, according to how transmission concurrency is managed between clusters.
During an association set-up phase, when any node from a cluster receives, for the first time, a remote synchronization timestamp message generated by any node from another, neighbor, cluster, it computes an offset difference between the information inside the remote synchronization timestamp message and the time at which the message has been received, with reference to the local reference clock period of its local cluster. This offset difference thus contains the difference in terms of counter index and the difference in terms of clock ticks. This offset difference is then transmitted to the cluster head.
Next, the cluster head builds a reference offset map table and broadcasts it to all nodes in its local cluster. All nodes within a cluster thus locally store all computed offset differences between the local reference clock period and the remote reference clock periods of each neighbor cluster.
During a subsequent association running phase, each time any node from a cluster receives a subsequent remote synchronization timestamp message generated by any node from another neighbor cluster, it computes a new offset difference in the same way as during the set-up phase. This new offset difference is then compared to the initial offset difference stored in the reference offset map table. Next, the corresponding deviation is delivered to the master node of the local cluster, thereby enabling the master node to adjust the cluster's local reference clock period.
Over a pre-defined synchronization window, each cluster head except the master cluster head adjusts its local reference period only once, according to the offset deviations received from cluster nodes. These adjustments are therefore inherently reflected to the other nodes in the local cluster through the local reference clock period synchronization means described hereinabove.
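A minimal sketch of the two phases described above, assuming timestamps are (cycle count, tick count) pairs and that offsets may be compared as plain tick totals (an assumption made here for brevity):

    # Hypothetical sketch of the association set-up and running phases.
    TICKS_PER_CYCLE = 3072  # assumed nominal divider, as in the Figure 3a example

    def to_ticks(cycle_count, ticks_count):
        return cycle_count * TICKS_PER_CYCLE + ticks_count

    reference_offsets = {}  # remote cluster id -> first offset (set-up phase)

    def on_remote_timestamp(cluster_id, remote_ts, local_ts):
        """Return the drift to report to the cluster head, or None at set-up."""
        offset = to_ticks(*local_ts) - to_ticks(*remote_ts)
        if cluster_id not in reference_offsets:
            reference_offsets[cluster_id] = offset  # set-up: store reference offset
            return None                             # (it is sent to the cluster head)
        return offset - reference_offsets[cluster_id]  # running: deviation

    # Set-up: the first remote timestamp fixes the reference offset (here 10 ticks).
    assert on_remote_timestamp(2, (100, 0), (100, 10)) is None
    # Running: a later timestamp yields the deviation used for period adjustment.
    assert on_remote_timestamp(2, (200, 0), (200, 14)) == 4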
Concurrently to the association running phase, synchronization of the applications run by the nodes is advantageously performed irrespective of the local time reference synchronization adjustments experienced between clusters, and irrespective of the number of clusters through which the timestamps are transmitted.
A node connected to a source application computes application timestamps from application events plus an expected regeneration latency, with reference to the reference clock period of its local cluster. These application timestamps have the same format as the cluster synchronization timestamp messages, and are transmitted to the neighboring nodes.
Each node receiving an application timestamp modifies it and forwards a new message with an updated timestamp, whether or not it shares the associated application with the node from which the application timestamp is received. For example, the application timestamp is modified by replacing the cluster identifier with the local cluster identifier (namely the identifier of the cluster to which the receiving node belongs), and by adjusting the initial timestamp information with the reference offset difference from the association map table.
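For example, under the same assumptions as above (timestamps as (cluster id, cycle count, tick count) triples and a per-cluster reference offset split into cycle and tick parts), the per-hop rewrite might look like this sketch:

    # Illustrative only: normalising the tick field back into one reference
    # clock period is an assumption of this sketch.
    TICKS_PER_CYCLE = 3072

    def rewrite_app_timestamp(ts, local_cluster_id, ref_offset):
        """Re-express a received application timestamp in the local time base."""
        _cluster_id, cycle, ticks = ts
        d_cycle, d_ticks = ref_offset  # cycle and tick parts of the stored offset
        total = (cycle + d_cycle) * TICKS_PER_CYCLE + ticks + d_ticks
        return (local_cluster_id, total // TICKS_PER_CYCLE, total % TICKS_PER_CYCLE)

    # A timestamp from cluster 2, shifted by cluster 1's stored reference offset:
    assert rewrite_app_timestamp((2, 500, 3000), 1, (3, 100)) == (1, 504, 28)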
Figure 1 illustrates an exemplary multi cluster wireless ad-hoc network 140 comprising three node clusters 100, 110 and 120.
Each cluster is controlled by a node acting as cluster head (or master node). For example, the cluster head is in charge of sharing synchronization means for supporting the communication between the nodes within the cluster.
In the exemplary network of Figure 1, the nodes 101, 111 and 121 are cluster heads for clusters 100, 110 and 120 respectively.
Inter-cluster communications are performed through gateway nodes, whether clusters overlap or not. In the case of overlapping clusters (e.g. clusters 100 and 110), gateway node visibility (like that of node 102b in Figure 1) allows managing packet transmission between clusters thanks to knowledge of the medium access transmission scheme. In the case of non-overlapping clusters (e.g. clusters 100 and 120, or 110 and 120), gateway nodes (like nodes 102a, 122a, 122b and 112) may rather communicate in a point-to-point manner, using a beam-forming transmission scheme in association with antenna directivity.
The number of gateway nodes is not necessarily limited within a same cluster. This number may be dynamically reconsidered according to communication needs between clusters.
Cluster nodes which are not cluster heads and which do not communicate with other cluster nodes are referred to as "ordinary nodes".
The mapping of the network between gateway and ordinary nodes can be dynamically controlled by cluster heads, using state of the art topology management schemes.
In network 140, node 103c of cluster 100 is connected to an input application, for instance a real-time video stream. This input application is shared within a first scheme with node 102a in the same cluster 100, and with node 123a in cluster 120. The application may dynamically be shared within a second scheme with node 113a of cluster 110.
An exemplary node device is described with reference to Figure 2.
The nodes of network 140 (see Figure 1) may have an architecture as described hereinafter.
The device in Figure 2 comprises a wireless interface 200 for communicating within the network to which it belongs. The device also has connectivity means 210 and 220 for communicating with input and output applications respectively.
A local oscillator 209 is used to generate a nominal system local reference clock period 207 issued from a generator 202, to support the timestamp computation used for synchronizing communication and application layers.
Reference clock period 207 can be adjusted according to a "Go fast/Go slow" order issued from a medium access controller 204.
The medium access controller 204 controls:
* synchronization message transmission / reception using wireless interface 200,
* reference offset table elaboration (cluster head),
* reference offset computation and local reference clock period drift computation / correction,
* on-the-fly modification of application synchronization timestamps,
* transmission / reception of application data and timestamps.
A protocol adaptation layer 203 is in charge of encapsulating application data and computing application synchronization timestamps for input applications, as well as regenerating data and application events to output applications according to application timestamps information.
A reference offset map table 201 is used for storing reference offset values with neighbour clusters, with reference to the local reference clock period. This table is aggregated by the cluster head and broadcasted to all nodes in the cluster. The table is used for defining a local reference clock period lock during cluster association and for modifying application timestamps.
A synchronization timestamp format 300 according to embodiments is described with reference to Figure 3a. The synchronization timestamp may be used for managing cluster synchronization. The application timestamps may have the same format.
A first field of the timestamp "Cluster_ID" 301 identifies uniquely the cluster of network 140 from which originates the message containing the timestamp.
A second field of the timestamp "Cycle_count" 302 comprises a counter value incremented on every event of the local reference clock period.
According to the cluster formation and the cluster heads set-up process, the cycle count is incremented independently from one cluster to the other. When running in association mode, the increment periods dynamically lock to any neighbour cluster.
A third field of the timestamp, "Ticks_count" 303, contains a counter value incremented on every event of local clock oscillator 209, modulo the number of ticks necessary to complete a local reference clock period. For example, assuming a local reference clock period of 125 µs and a local clock oscillator of 24.576 MHz, the local reference clock period event is generated in self-running mode every 3072 ticks of the local clock oscillator.
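The relation between oscillator frequency, reference clock period and tick modulus in this example can be checked directly; a small sketch follows (field names track Figure 3a, while integer widths are not specified in the text and are left as plain integers):

    from dataclasses import dataclass

    @dataclass
    class SyncTimestamp:          # format 300 of Figure 3a
        cluster_id: int           # field 301
        cycle_count: int          # field 302: one increment per reference period
        ticks_count: int          # field 303: modulo ticks per reference period

    # 24.576 MHz oscillator x 125 us reference period = 3072 ticks per period.
    assert round(24.576e6 * 125e-6) == 3072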
With reference to Figure 3b, a reference offset format 310 used for elaborating table 201 is described.
A first field of the reference offset "Cluster_ID" 301 identifies uniquely the cluster of network 140 from which originates the message containing the reference offset.
Second fields of the reference offset, "CC_Sign" 312a and "Cycle_count" 312b, indicate a signed difference in cycle counts between the reference clock periods.
Third fields of the reference offset, "IC_Sign" 313a and "Ticks_count" 313b, indicate a signed difference in tick counts between the reference clock periods.
An internal structure of a reference offset table is described with reference to Figure 4.
Table 201 comprises a map 401 associating states and the corresponding reference offsets (field 403).
Within association map 401:
* a first column 404 contains the list of available clusters in the network,
* a second column 405 indicates association requests with neighbor clusters,
* a third column 406 indicates the association states with each of the neighbor clusters with which association has been requested.
Nodes having the field of column 405 set to "YES" perform the processes described in what follows with reference to Figures 5 to 7c: they perform synchronization when they receive timestamps associated with a cluster identifier for which the "association request" field is set to "YES" in the reference table. A node receiving a synchronization message, or a message containing application timestamps, associated with a cluster identifier for which the field of column 405 is set to "NO" (that is, a cluster with which it is not associated) discards the message; the message can even be destroyed.
In the third column, the "IDLE" state indicates that no offset has been computed yet between the local reference clock period and the reference clock period of the corresponding remote cluster identifier. The "SYNC" state indicates that a first offset has been computed between the local reference clock period and the reference clock period of the corresponding remote cluster identifier. This offset is used to compute any further drift between a subsequent offset computation and the first value computed and stored in the table. One drift per reference synchronization period is used to lock the local reference clock period to one neighbor cluster. The reference synchronization period value is shared between all clusters and is an integer multiple of a pre-defined nominal local reference clock period value.
Back to the association map 401, the fourth column 407 indicates whether a drift with regard to the reference offset has already been computed within the current synchronization period. This information is used to freeze synchronization adjustment on local reference clock period for current synchronization period.
The field 403 stores the first offset computation between the local and the remote reference clock periods, for each of the associated clusters (a sketch of one table row is given below):
* the field 408 contains the signed offset difference from the "Cycle_count" field 302,
* the field 402 contains the signed offset difference from the "Ticks_count" field 303.
Figure 5 is a flowchart of steps performed by a cluster head when elaborating a reference offset table according to embodiments.
After an initialization step 500, the cluster head enters a waiting state 501, waiting for the receipt of a message from a node in the cluster containing a reference offset computation.
When such a message is received, a test 502 is performed in order to check whether association is requested with the cluster identified in the message and, in the affirmative, whether a reference offset has already been received. For example, the "association request" field 405 is checked ("YES" or "NO") for the cluster identified in the message.
In case association is requested and no reference offset has been registered yet, the information received in the message is registered in fields 408 and 402 of table 201 during step 503.
The reference offset and the cluster identifier are then broadcasted during step 504 to all other nodes within the local cluster. Next, association status 406 is validated as "SYNC" during step 505 before going back to the waiting step 501.
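A condensed sketch of this Figure 5 loop (steps 502 to 505), under the assumption that the table is a simple dictionary and that broadcasting is delegated to a caller-supplied function:

    # Hypothetical rendering of the cluster head behaviour of Figure 5.
    table = {}  # cluster id -> (cycle_offset, ticks_offset), i.e. fields 408/402

    def on_reference_offset(cluster_id, offset, association_requested, broadcast):
        # Test 502: association must be requested and no offset registered yet.
        if not association_requested or cluster_id in table:
            return
        table[cluster_id] = offset      # step 503: register the reference offset
        broadcast(cluster_id, offset)   # step 504: to all other local nodes
        # Step 505: association status 406 is validated as "SYNC".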
Computation of synchronization timestamps is described with reference to Figure 6a.
After an initialization step 600, the node computing the synchronization timestamp enters a waiting step 601 during which it waits for the detection of a local reference clock period event 207.
Once the event is detected, the adjustment status 407, as described with reference to Figure 4, is reset to "pending", in order to reconsider the local reference clock period adjustment for the current reference clock period.
Next, it is tested during step 603 whether the node is a cluster head.
In case it is a cluster head, the message containing the timestamp information corresponding to the time at which the message is delivered to the wireless interface is computed during step 604 and is broadcasted to all nodes in the cluster during step 605. In case the node is not a cluster head, the process goes back to step 601.
Alternatives may be considered for delivering the timestamp information. For example, the timestamp information may be delivered in a subsequent message, with reference to the initial transmission.
The process is recursively executed on each event of local reference clock period.
Figure 6b is a flowchart of steps performed by all nodes for performing multi-cluster synchronization.
After an initialization step 610, the node enters a waiting step 611 during which it waits for receipt of a message containing a synchronization timestamp. The "association request" field 405 is checked ("YES" or "NO") in the reference table for the cluster identified in the message. In case the field is set to "NO", the message is discarded. In case the field is set to "YES", the process is continued.
The message can be issued from either a node in same cluster (cluster head, gateway node, ordinary node), or a node in a remote cluster (gateway node).
When such a message is received by a gateway node, it is first relayed during step 612 to the neighbouring nodes, according to the gateway node configuration. This configuration defines the scheme used to relay messages to local or remote nodes.
When the cluster identifier in the timestamp indicates the local cluster, the timestamp corresponds to a synchronization timestamp computed by the cluster head and is used to adjust the local reference clock period, by computing the difference between the timestamp in the message and the locally computed timestamp at the time the triggering message was received (step 621).
The corresponding drift is used to request adjustment of the local reference clock period 207 to module 202 (see Figure 2), using a "Go fast/Go slow" signal 208.
For the timestamps generated by a remote cluster, it is tested during step 614 whether a reference offset already exists in table 201 for the identified remote cluster.
In case the reference offset is not present in the table, the reference offset is computed during step 619 using the timestamp locally computed at the time the triggering message was received. Next, the status 406 for the identified cluster is marked as "SYNC" during step 620, in order to indicate that association set-up has been performed with this cluster. This reference offset is then delivered to the cluster head, for consolidating table 201 and updating fields 403 of table 201 in all cluster nodes, during step 625.
In case the reference offset is present in the table, it is determined during step 615 whether a drift computation with any neighbour cluster has already been performed within the current synchronization period, by checking status 407 in table 201.
In case a drift has not been computed, it is computed relative to the existing reference offset for the identified cluster, by using the timestamp locally computed at the time the triggering message was received and comparing it with the reference offset value stored in fields 408 and 402 of table 201 (step 616). Next, the computed drift is delivered to the cluster head during step 617, and any additional drift computation for the current synchronization period is inhibited by freezing the state 407 in table 201 as "YES" (step 618).
In case a drift has already been computed, the process goes back to step 611.
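The decision chain of Figure 6b can be summarized by the following sketch; the helper names, the dictionary-based table row and the pure-tick arithmetic are assumptions of this illustration:

    TICKS_PER_CYCLE = 3072

    def ticks(cycle, t):
        return cycle * TICKS_PER_CYCLE + t

    def on_sync_timestamp(msg_cluster, msg_ts, local_cluster, local_ts, entry):
        """entry is the table 201 row for msg_cluster; timestamps are
        (cycle_count, ticks_count) pairs taken at message receipt."""
        if not entry["association_request"]:
            return None                                  # field 405 "NO": discard
        if msg_cluster == local_cluster:                 # step 621: local drift
            return ("adjust_local_period", ticks(*local_ts) - ticks(*msg_ts))
        if entry["state"] == "IDLE":                     # step 614: no offset yet
            entry["offset"] = ticks(*local_ts) - ticks(*msg_ts)   # step 619
            entry["state"] = "SYNC"                      # step 620
            return ("offset_to_cluster_head", entry["offset"])    # step 625
        if not entry["drift_done"]:                      # step 615
            drift = ticks(*local_ts) - ticks(*msg_ts) - entry["offset"]  # step 616
            entry["drift_done"] = True                   # step 618: freeze 407
            return ("drift_to_cluster_head", drift)      # step 617
        return None                                      # back to waiting step 611

    entry = {"association_request": True, "state": "IDLE", "drift_done": False}
    assert on_sync_timestamp(2, (10, 0), 1, (10, 5), entry)[1] == 5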
Figure 6c is a flowchart of steps performed by a cluster head in order to lock a local reference period during a cluster association.
Step 630 is an initialization step that is followed by a waiting step 631, during which the node waits for receipt of a message containing a drift computation based on any reference offset of table 201. The "association request" field 405 is checked ("YES" or "NO") in the reference table for the cluster identified in the message. In case the field is set to "NO", the message is discarded. In case the field is set to "YES", the process is continued.
Next, it is determined during step 632, whether previous drift information had already been received within the current synchronization period, by analyzing status 407 and therefore whether synchronization has already been performed during the current synchronization period.
In case synchronization has already been performed, the process goes back to step 631.
In case synchronization has not been performed yet, the drift information is used to request adjustment of the local reference clock period 207 to module 202 (see Figure 2), using a "Go fast/Go slow" signal 208. Next, status 407 is set to "YES", in order to inhibit any further adjustment of the local reference clock period during the current synchronization period.
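In sketch form, the cluster head side of Figure 6c reduces to a once-per-period latch (names assumed):

    def on_drift_report(drift, state, go_fast_go_slow):
        """state["adjusted"] mirrors status 407 for the current period."""
        if state["adjusted"]:
            return                       # step 632: already adjusted, ignore
        go_fast_go_slow(drift)           # request adjustment of period 207
        state["adjusted"] = True         # inhibit further adjustment this period

    # The latch is reset on every local reference clock period event (step 602).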
Figure 7a is a flowchart of steps performed by a node connected to an input application.
After an initialization step 700, the protocol adaptation layer 203 of the node waits, during step 701, for the next application event that shall be time-stamped.
Once the event is detected, an application timestamp is computed during step 702, containing the cycle count value and the tick count value at the time the application event has been received.
Next, the application timestamp is completed by the medium access controller with the cluster identifier associated with the node, during step 703.
During step 704, the corresponding application timestamp is then forwarded to either a node in the same cluster (cluster head, gateway node, ordinary node), or a node in a remote cluster (gateway node).
Figure 7b is a flowchart of steps performed by a node receiving an application timestamp.
After an initialization step 710, the node waits for receipt of a new application timestamp (step 711). Next, the cluster identifier is extracted and the corresponding reference offset is retrieved from fields 408 and 402 of table 201, during step 712.
The "association request" field 405 is checked ("YES" or "NO") in the reference table for the cluster identified in the message. In case the field is set to "NO", the message is discarded. In case the field is set to "YES", the process is continued.
Next, the application timestamp is modified with the reference offset content, by addition of the reference offset to the application timestamp during step 713. The cluster identifier is also replaced by the local cluster identifier, during step 714.
During step 715, the corresponding application timestamp is then forwarded to either a node in the same cluster (cluster head, gateway node, ordinary node), or a node in a remote cluster (gateway node).
Next, if the node shall also regenerate the application on interface 220 (see Figure 2), the modified timestamp is delivered to protocol adaptation layer 203.
Figure 7c is a flowchart of steps performed by a node having to regenerate application data and events on interface 220 (see Figure 2).
After an initialization step 730, it is waited during step 731 for the receipt of a new application timestamp from the medium access controller 204.
During step 732, the node waits for the correlation between the local reference clock period (207) and the cycle count field, and for the correlation between the local oscillator tick count (209) and the ticks count field.
When correlation is reached, the corresponding application event is generated on interface 220.
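A polling-style sketch of this correlation wait (the patent implies counter correlation, plausibly in hardware; the busy-wait form below is purely illustrative):

    def wait_for_correlation(get_local_counters, target_cycle, target_ticks):
        """Block until the local (cycle, ticks) counters match the timestamp."""
        while get_local_counters() != (target_cycle, target_ticks):
            pass  # step 732: wait for correlation of counters 207 and 209
        # Correlation reached: the application event is generated on interface 220.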
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive, the invention being not restricted to the disclosed embodiment. Other variations to the disclosed embodiment can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure and the appended claims.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that different features are recited in mutually different dependent claims does not indicate that a combination of these features cannot be advantageously used.

Claims (15)

  1. A method of synchronizing nodes belonging to separate respective node clusters, the method comprising the following steps: - locally synchronizing the nodes of each node cluster according to synchronization elements regularly transmitted within said each node cluster, and upon a cluster association event - forwarding previously received synchronization elements, by nodes of said node clusters to respective neighbour nodes belonging to neighbour clusters, - receiving, by a first node of a first cluster, a synchronization element from a second node of a neighbour second cluster, - computing a synchronization offset element between said first and second clusters, and - synchronizing said first and second nodes based at least on said synchronization offset element.
  2. A method according to claim 1, wherein said synchronizing step is performed based on a reference offset table built based at least on synchronization offset elements relating to respective synchronization offsets between said first node cluster and neighbour node clusters.
  3. A method according to claim 2, further comprising the following steps: - receiving, by a master node of said first node cluster, synchronization offset elements relating to respective synchronization offsets between said first node cluster and neighbour node clusters, - building said reference offset table, based at least on said offset elements received, and - broadcasting said reference offset table to nodes of said first node cluster.
  4. A method according to claim 2, further comprising the following steps: - receiving, by a master node of said first node cluster, synchronization offset elements relating to respective synchronization offsets between said first node cluster and neighbour node clusters, - updating said reference offset table, based at least on said offset elements received, and - broadcasting said reference offset table to nodes of said first node cluster.
  5. A method according to any one of the preceding claims, wherein the nodes are synchronized with reference to a master cluster designated among said separate respective node clusters.
  6. A method according to any one of the preceding claims, wherein the synchronizing elements comprise at least a cluster identification of the node cluster to which belongs the node from which they originate.
  7. A method according to any one of the preceding claims, wherein said synchronizing elements comprise at least a reference count associated with a reference clock period within a node cluster to which belongs the node from which they originate.
  8. A method according to any one of the preceding claims, wherein said synchronizing elements comprise at least a local count associated with a local clock period of the node from which they originate.
  9. A method according to any one of the preceding claims, wherein the cluster association event is triggered by an execution of an application involving several nodes belonging to several respective node clusters.
  10. A node device for a node cluster of a network, capable of synchronizing with another node of another node cluster of said network, the node device comprising: - a control unit configured to locally synchronize the node device according to synchronization elements regularly transmitted within the node cluster to which it belongs, - a communication unit configured to forward, upon a cluster association event, previously received synchronization elements to respective neighbour nodes belonging to neighbour clusters, the communication unit being further configured to receive a synchronization element from a node device of a neighbour node cluster, the control unit being further configured to: - compute a synchronization offset element between the node cluster to which the node device belongs and said neighbour node cluster, and - synchronize the node device to said node device of the neighbour node cluster, based at least on said synchronization offset element.
  11. A node device according to claim 10, further configured to implement a method according to any one of claims 2 to 9.
  12. A system comprising a plurality of node devices according to claim 11.
  13. A non-transitory information storage means readable by a computer or a microprocessor storing instructions of a computer program, for implementing a method according to any one of claims 1 to 9, when the program is loaded and executed by the computer or microprocessor.
  14. A device substantially as hereinbefore described with reference to, and as shown in, Figures 1 and 2 of the accompanying drawings.
  15. A method substantially as hereinbefore described with reference to, and as shown in, Figures 5, 6a-6c and 7a-7c of the accompanying drawings.
GB1400667.0A 2014-01-15 2014-01-15 Multi cluster synchronization method within an ad-hoc network Active GB2522200B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1400667.0A GB2522200B (en) 2014-01-15 2014-01-15 Multi cluster synchronization method within an ad-hoc network


Publications (3)

Publication Number Publication Date
GB201400667D0 GB201400667D0 (en) 2014-03-05
GB2522200A (en) 2015-07-22
GB2522200B GB2522200B (en) 2016-01-06

Family

ID=50238992

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1400667.0A Active GB2522200B (en) 2014-01-15 2014-01-15 Multi cluster synchronization method within an ad-hoc network

Country Status (1)

Country Link
GB (1) GB2522200B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2404121A (en) * 2003-07-18 2005-01-19 Motorola Inc Inter-network synchronisation
US20090122782A1 (en) * 2007-11-09 2009-05-14 Qualcomm Incorporated Synchronization of wireless nodes
WO2009101550A1 (en) * 2008-02-14 2009-08-20 Nxp B.V. Method of correction of network synchronisation

Also Published As

Publication number Publication date
GB201400667D0 (en) 2014-03-05
GB2522200B (en) 2016-01-06

Similar Documents

Publication Publication Date Title
EP2515501B1 (en) Media clock negotiation
JP5358813B2 (en) Network node, time synchronization method, and network system
US9973601B2 (en) Fault tolerant clock network
US9690674B2 (en) Method and system for robust precision time protocol synchronization
CN103916950A (en) Time synchronization method and system
EP3140933B1 (en) System and method to dynamically redistribute timing and synchronization in a packet switched network
WO2015196685A1 (en) Clock synchronization method and apparatus
CN103595494B (en) A kind of non-stop layer time division multiple access synchronous method being applicable to wireless self-networking
WO2020135279A1 (en) Clock synchronization method and apparatus and storage medium
JP6555445B1 (en) Time synchronization system, time master, management master, and time synchronization method
CN107959537B (en) State synchronization method and device
JP5891142B2 (en) Communications system
US11336687B2 (en) System and method for providing security for master clocks
US20240048261A1 (en) Robust time distribution and synchronization in computer and radio access networks
JP2018088644A (en) Time synchronization method and time synchronization system between wirelessly-connected terminals
US20230024329A1 (en) Apparatus and method for supporting precision time synchronization protocol of stealth clock type
GB2522200A (en) Multi cluster synchronization method within a ad-hoc network
US10554319B2 (en) Wireless communications with time synchronization processing
CN102769905B (en) Dynamic synchronous method of heterogeneous network system
CN107959968B (en) High-precision low-overhead wireless sensor network clock synchronization method
US20160309435A1 (en) Segment synchronization method for network based display
EP4068659A1 (en) Clock port attribute recovery method, device, and system
Chand Gautam et al. Quantitative and qualitative analysis of time synchronization protocols for wireless sensor networks
CA3093341A1 (en) Method and system for synchronizing a mesh network
GB2512605A (en) Method and apparatus for synchronisation of network nodes