WO2017157459A1 - Configuration management in a communication network - Google Patents

Configuration management in a communication network

Info

Publication number
WO2017157459A1
WO2017157459A1 · PCT/EP2016/055962 · EP2016055962W
Authority
WO
WIPO (PCT)
Prior art keywords
node
synchronisation
communication network
configuration data
congestion
Prior art date
Application number
PCT/EP2016/055962
Other languages
English (en)
Inventor
Orazio Toscano
Silvia PEDEMONTE
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/EP2016/055962
Publication of WO2017157459A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/10 Active monitoring, e.g. heartbeat, ping or trace-route
    • H04L43/103 Active monitoring, e.g. heartbeat, ping or trace-route with adaptive polling, i.e. dynamically adapting the polling rate
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0803 Configuration setting
    • H04L41/0813 Configuration setting characterised by the conditions triggering a change of settings
    • H04L41/0816 Configuration setting characterised by the conditions triggering a change of settings the condition being an adaptation, e.g. in response to network events
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0852 Delays
    • H04L43/0858 One way delays
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0876 Network utilisation, e.g. volume of load or congestion level
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/10 Active monitoring, e.g. heartbeat, ping or trace-route
    • H04L43/106 Active monitoring, e.g. heartbeat, ping or trace-route using time related information in packets, e.g. by adding timestamps

Definitions

  • the present disclosure is generally related to management of configuration data in a communication network.
  • node data can comprise configuration data, performance data, fault data and other logging and security data.
  • a first approach is to directly access configuration data on each node in the network.
  • An example is Command Line Interface (CLI).
  • a disadvantage of this approach is that it places a heavy load on the Operations and Maintenance (O&M) interface of a node and creates a large processing overhead in the OSS.
  • a second approach is to store a copy of the configuration data from each node in a data store at the OSS. Storing configuration data at the OSS can improve performance because a query requiring access to configuration data can use the local store of configuration data at the OSS. This is especially useful in large networks. It also simplifies accessing CM data for one or more nodes and makes it easier to design applications.
  • the data stored at the OSS can be maintained in an up-to-date state using notifications from the nodes.
  • the node sends a notification to the OSS and the OSS can update its local store of data for that node.
  • the OSS can periodically perform a synchronisation or discovery to ensure that the configuration data held in the OSS store is up-to-date.
  • the OSS store of configuration data can be maintained in a synchronised state with the data at the node by notifications.
  • the OSS store is updated within a few seconds of a change in configuration data occurring at a node.
  • the OSS store is not up-to-date. For example, when the OSS first connects to a node, it requires time to acquire "sync" with the node. This could happen if the node is newly added to OSS or if the OSS itself starts or restarts. Another example is when the node is restarted.
  • the OSS may not receive notifications for some time and when a connection is re-established with the node the OSS requires time to get back "in sync".
  • Another example is when the notification processing capability of the OSS becomes over-loaded and notifications from nodes must be discarded by the OSS. The OSS must treat the node as if it had restarted and consequently, needs time to get back "in sync". Another example is when the notification sending capability of the node is over-loaded and it has to stop sending notifications to the OSS. The OSS must detect this condition and then treat the node as if it had restarted. Another example is when software bugs either in the OSS or in the node cause the notification sending or receiving to break down. The OSS attempts to detect this fault and then treats the node as if it had restarted. All of the above scenarios involve a synchronisation operation.
  • the OSS supports two types of synchronisation operations: Delta and Full.
  • a Delta synchronisation is performed if the node is capable of telling the OSS how many changes (i.e. notifications) occurred since it was last in contact. In this case, the node provides the OSS with the list of changes, which the OSS applies to the data to bring it up-to-date.
  • a Full synchronisation is performed if a Delta synchronisation is not supported, not possible (e.g. an excessive amount of changes have taken place or the node has restarted), or if the user requests a manual synchronisation.
  • in a Full synchronisation, a full audit and comparison of all CM data for the node is executed by the OSS; differences are detected and the appropriate changes are applied to the copy of the data in the OSS.
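  • To make the choice between these two operations concrete, the sketch below renders the selection logic as code. It is a hypothetical illustration of the behaviour described above, not the patented implementation; the change-count limit and all names are assumptions.

```python
from typing import Optional

MAX_DELTA_CHANGES = 1000  # assumed limit beyond which a delta sync is not worthwhile

def choose_sync_type(supports_delta: bool,
                     pending_changes: Optional[int],
                     node_restarted: bool,
                     manual_request: bool) -> str:
    """Pick 'full' or 'delta' following the conditions described above."""
    if manual_request or node_restarted:
        return "full"    # a manual request or a node restart forces a full audit
    if not supports_delta or pending_changes is None:
        return "full"    # node cannot report the changes since last contact
    if pending_changes > MAX_DELTA_CHANGES:
        return "full"    # an excessive amount of changes has taken place
    return "delta"       # apply only the reported list of changes
```

  • For example, choose_sync_type(True, 12, False, False) returns "delta", while a restarted node always triggers a full synchronisation.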
  • a full synchronisation operation is an intensive operation both for the node and for the OSS. Even a delta sync can be intensive for the OSS if a large number of changes have occurred.
  • the length of the synchronisation operation increases with the size of the node (more configuration data) and with the number of nodes requiring synchronisation at the same time.
  • a node may be configured to periodically send a heartbeat notification to the OSS, or the OSS may send a stimulus to the node (e.g. a message) and the node replies by sending a heartbeat notification to the OSS. If a heartbeat notification is not received within a predetermined waiting time, the OSS can determine that the node is out of contact, and the data held at the OSS about that node is now out of sync.
  • the OSS may not receive a heartbeat notification from one or more of the nodes in the network. Therefore, the OSS determines that it is out of sync with those nodes and performs a synchronisation operation with those nodes. This can cause a significant delay before the OSS once again holds current data about those nodes.
  • An aspect of the disclosure provides a method of operating a control entity of a communication network.
  • the method comprises storing configuration data for a node of the communication network.
  • the method comprises determining a congestion level of the communication network.
  • the method comprises setting a duration of a heartbeat signal waiting period based on the determined congestion.
  • the duration of the heartbeat signal waiting period increases with increased congestion.
  • the method comprises determining if synchronisation of configuration data is required with the node based on whether a heartbeat signal, or a predetermined number of heartbeat signals, is received from the node within the heartbeat signal waiting period.
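  • Taken together, these steps amount to a simple control loop. The following sketch is a hypothetical rendering of the claimed steps; the congestion levels, the duration values and the helper names are assumptions, not part of the claim.

```python
import time

# Assumed mapping from congestion level to heartbeat waiting period (seconds);
# the duration increases with increased congestion, as described above.
WAITING_PERIODS = {"low": 30.0, "medium": 120.0, "high": 600.0}

class ControlEntity:
    def __init__(self, required_heartbeats: int = 1):
        self.config_store = {}                   # configuration data per node
        self.required_heartbeats = required_heartbeats

    def store_config(self, node_id: str, data: dict) -> None:
        self.config_store[node_id] = data        # step 1: store configuration data

    def sync_required(self, heartbeats_in_window: int, window_start: float,
                      congestion_level: str) -> bool:
        # steps 2-3: the waiting period duration follows the congestion level
        waiting_period = WAITING_PERIODS[congestion_level]
        window_expired = time.monotonic() - window_start >= waiting_period
        # step 4: synchronisation is required only if too few heartbeats
        # arrived within the (congestion-dependent) waiting period
        return window_expired and heartbeats_in_window < self.required_heartbeats
```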
  • An advantage of at least one example is that it is possible to avoid the need for a synchronisation operation (e.g. delta sync or full sync) when the network is congested. Increasing the length of the heartbeat waiting period can allow the control entity to receive the delayed heartbeat signal.
  • An advantage of at least one example is preventing deterioration of an already congested network.
  • An advantage of at least one example is minimising the time required to restore configuration data at a control entity. Fewer iterations of a synchronisation operation are required to achieve synchronisation with a node.
  • An advantage of at least one example is improved network scalability.
  • the method may determine synchronisation of configuration data is not required by receiving a heartbeat signal from the node, or a predetermined number of heartbeat signals from the node, within the heartbeat signal waiting period.
  • the method may determine synchronisation of configuration data is required by not receiving a heartbeat signal from the node, or not receiving a predetermined number of heartbeat signals from the node within the heartbeat signal waiting period. If it is determined that a synchronisation of configuration data is required with the node the method may comprise sending a synchronisation request to the node.
  • the method may comprise suppressing sending a synchronisation request to the node.
  • the synchronisation request may be a delta synchronisation request to synchronise a part of the configuration data with the control entity.
  • the synchronisation request may be a full synchronisation request to synchronise all of the configuration data with the control entity.
  • there may be a set of possible heartbeat waiting period durations.
  • the method may comprise determining if the congestion level has changed. If the congestion level has changed, the method may increment the duration of the heartbeat signal waiting period to a next duration in the set of possible heartbeat waiting period durations.
  • the method may comprise determining a magnitude of a change in the congestion level.
  • the method may increment the duration of the heartbeat signal waiting period to a different duration in the set of possible heartbeat waiting period durations based on the magnitude of the change in the congestion level.
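  • A sketch of this stepping behaviour is given below, assuming an ordered set of four durations T1 < T2 < T3 < T4 and an arbitrary threshold separating a "small" change from a large one; none of these values come from the disclosure.

```python
# Illustrative only: step through an ordered set of heartbeat waiting periods,
# moving further through the set when the congestion change is larger.
DURATIONS = [30.0, 60.0, 300.0, 600.0]   # assumed values for T1..T4, in seconds
SMALL_CHANGE = 0.1                        # assumed threshold for a "small" change

def next_duration_index(index: int, congestion_delta: float) -> int:
    """Return the new index into DURATIONS after a change in congestion level."""
    if congestion_delta == 0:
        return index                      # congestion unchanged: keep the duration
    steps = 1 if abs(congestion_delta) <= SMALL_CHANGE else len(DURATIONS) - 1
    direction = 1 if congestion_delta > 0 else -1
    return max(0, min(len(DURATIONS) - 1, index + direction * steps))
```

  • Under this rule a small rise in congestion moves the period from T1 to T2, while a large rise jumps directly from T1 to T4, matching the heavily congested example given later.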
  • Determining a congestion level of the communication network may be based on delay times of configuration notifications received from at least one of the nodes in the network.
  • Determining a congestion level of the communication network may be based only on delay times of configuration notifications received from the node.
  • Determining a congestion level of the communication network may be based on delay times of configuration notifications received from a plurality of nodes in the network.
  • Determining a congestion level of the communication network based on transit times of configuration notifications may comprise determining an arrival time of a notification received from a node.
  • the method may inspect a timestamp in the notification.
  • the method may determine a delay time of the notification based on the timestamp and the arrival time.
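  • A minimal sketch of this delay measurement, assuming the node and control entity clocks are synchronised (as discussed later) and that the timestamp is carried as seconds since the epoch:

```python
import time

def notification_delay(send_timestamp: float) -> float:
    """Delay of a notification: its arrival time minus the send time it carries.

    Only meaningful if the node and control entity clocks are synchronised,
    e.g. via PTP or NTP as discussed later in this description.
    """
    arrival_time = time.time()   # arrival time noted by the control entity
    return arrival_time - send_timestamp
```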
  • the method may comprise distributing sending of synchronisation requests to a plurality of nodes over a period of time.
  • An aspect of the disclosure provides apparatus for a control entity of a communication network.
  • the apparatus comprises means for storing configuration data for a node of the communication network.
  • the apparatus comprises means for determining a congestion level of the communication network.
  • the apparatus comprises means for setting a duration of a heartbeat signal waiting period based on the determined congestion, wherein the duration of the heartbeat signal waiting period increases with increased congestion.
  • the apparatus comprises means for determining if synchronisation of configuration data is required with the node based on whether a heartbeat signal, or a predetermined number of heartbeat signals, is received from the node within the heartbeat signal waiting period.
  • the control entity can be an Operations Support System (OSS), Network Management System (NMS) or other control entity.
  • An aspect of the disclosure provides a control apparatus for a communication network.
  • the apparatus comprises a processor and a memory, the memory containing instructions that when executed by the processor cause the processor to store configuration data for a node of the communication network.
  • the instructions cause the processor to determine a congestion level of the communication network.
  • the instructions cause the processor to set a duration of a heartbeat signal waiting period based on the determined congestion, wherein the duration of the heartbeat signal waiting period increases with increased congestion.
  • the instructions cause the processor to determine if synchronisation of configuration data is required with the node based on whether a heartbeat signal, or a predetermined number of heartbeat signals, is received from the node within the heartbeat signal waiting period.
  • An aspect of the disclosure provides a method of operating a control entity of a communication network.
  • the method comprises storing configuration data for a node of the communication network.
  • the method comprises determining if synchronisation of configuration data is required with the node.
  • the method comprises determining a congestion level of the communication network. If it is determined that synchronisation of configuration data is required, the method comprises determining when to send a synchronisation request to the node based on the congestion level.
  • An advantage of at least one example is preventing deterioration of an already congested network.
  • An advantage of at least one example is improved network scalability.
  • Determining when to send a synchronisation request to the node may comprise delaying a sending of a synchronisation request to the node if the congestion level is above a threshold congestion level.
  • the method may comprise sending a synchronisation request to the node when the congestion level is below the threshold congestion level.
  • the method may comprise distributing sending of synchronisation requests to a plurality of nodes over a period of time.
  • An aspect of the disclosure provides apparatus for a control entity of a communication network.
  • the apparatus comprises means for storing configuration data for a node of the communication network.
  • the apparatus comprises means for determining if synchronisation of configuration data is required with the node.
  • the apparatus comprises means for determining a congestion level of the communication network. If it is determined that synchronisation of configuration data is required, the apparatus is configured to determine when to send a synchronisation request to the node based on the congestion level.
  • An aspect of the disclosure provides a control apparatus for a communication network.
  • the apparatus comprises a processor and a memory, the memory containing instructions that when executed by the processor cause the processor to store configuration data for a node of the communication network.
  • the instructions cause the processor to determine if synchronisation of configuration data is required with the node.
  • the instructions cause the processor to determine a congestion level of the communication network. If it is determined that synchronisation of configuration data is required, the instructions cause the processor to determine when to send a synchronisation request to the node based on the congestion level.
  • the functionality described here can be implemented in hardware, software executed by a processing apparatus, or by a combination of hardware and software.
  • the processing apparatus can comprise a computer, a processor, a state machine, a logic array or any other suitable processing apparatus.
  • the processing apparatus can be a general-purpose processor which executes software to cause the general-purpose processor to perform the required tasks, or the processing apparatus can be dedicated to perform the required functions.
  • Another aspect of the invention provides machine-readable instructions (software) which, when executed by a processor, perform any of the described methods.
  • the machine-readable instructions may be stored on an electronic memory device, hard disk, optical disk or other machine-readable storage medium.
  • the machine-readable medium can be a non-transitory machine-readable medium.
  • the term "non-transitory machine-readable medium" comprises all machine-readable media except for a transitory, propagating signal.
  • the machine-readable instructions can be downloaded to the storage medium via a network connection.
  • Figure 1 shows a network and a control entity
  • Figure 2 shows a method of operating the control entity
  • Figures 3 to 5 show an example of varying heartbeat waiting period with congestion in the network
  • Figure 6 shows an example time line of operation
  • Figure 7 shows another example time line of operation where the control entity sends a stimulus to a node to prompt a response
  • Figure 8 shows an example of determining congestion using delay times of notifications
  • Figure 9 shows a method of operating the control entity
  • Figures 10 to 14 show an example of synchronising with nodes following congestion in the network
  • Figure 15 shows a network and a control entity with a separate congestion detector
  • Figure 16 shows an example of delay in a network
  • Figure 17 shows a method of determining congestion level
  • Figure 18 shows apparatus for a computer-based implementation.
  • Figure 1 shows a communication network 5 and a control entity 30 for the network 5.
  • the control entity 30 can be an Operations Support System (OSS), Network Management System (NMS) or other control entity.
  • the communication network 5 comprises a plurality of nodes 10 and communication links 11 which connect the nodes 10.
  • the network of nodes 10 and links 11 can have any suitable topology, such as a ring, mesh, star or tree topology.
  • the OSS 30 comprises a configuration data manager 31.
  • the configuration data manager 31 acquires configuration data from nodes 10 in the network. One of the nodes 10 is shown in detail. A store 16 holds configuration data 17 about the node. The configuration data 17 may be accessed via an O&M interface 18 of the node.
  • the configuration data manager 31 stores configuration data 33 for the nodes 10 of the network in a store 32. The configuration data 33 may be used by applications (not shown) which require data about nodes in the network. Storing configuration data at the OSS allows the OSS to service requests more efficiently.
  • the configuration data manager 31 is configured to receive notifications 36 from nodes 10. Notifications 36 carry information about configuration changes at the nodes 10.
  • a notification 36 may comprise: any change of an attribute at a node; the creation or the deletion of a port on the node; the creation or deletion of a Virtual Local Area Network (VLAN).
  • the OSS may communicate with the nodes 10 using Network Configuration Protocol (NETCONF).
  • NETCONF is defined in the Internet Engineering Task Force (IETF) document RFC 6241.
  • the OSS 30 is configured to determine congestion in the network 5.
  • the OSS 30 comprises a congestion detector 34 which is configured to determine a congestion level of the network.
  • the congestion detector 34 can determine congestion level based on delay times of the notifications 36 received from nodes 10. An increase in the delay times is indicative of an increase in congestion level of the network.
  • the congestion detector 34 may determine congestion level for an entire network, for a part of a network, for a group of nodes, or for a single node.
  • the node 10 can comprise a switch 12 with input ports 13 and output ports 14. Input queues 15 are associated with the input ports 13.
  • the switch may operate in the electrical domain or in the optical domain.
  • the node may receive packets, frames, or other protocol data units (PDUs), queue the packets at an input port and forward them to an output port.
  • when the network becomes congested, a delay time for an end-to-end path between nodes can increase. For example, traffic may be queued for a longer period at nodes along the path.
  • the OSS uses heartbeat signals to determine if a node 10 in the network 5 is still in contact with the OSS.
  • a node may be configured to periodically send a heartbeat notification to the OSS, or the OSS may send a stimulus to the node (e.g. a message) and the node replies by sending a heartbeat notification to the OSS. If a heartbeat notification is not received within a predetermined heartbeat waiting period, the OSS can determine that the node is out of contact, and the data held at the OSS about that node is now out of sync.
  • the OSS may expect to receive at least one heartbeat notification within a waiting period before declaring an out of sync condition, but the number can be more than one.
  • the configuration data sent by a node may also become delayed due to congestion in the network. This may prolong or increase congestion in the network. It is also possible that the synchronisation operation will fail, and that the OSS may again try to begin a synchronisation operation with that node. This situation can repeat over a long period of time. During this time the OSS does not hold current configuration data for the node.
  • the configuration data manager can dynamically vary the heartbeat waiting time based on the congestion level of the network.
  • Figure 2 shows a method of operating a control entity, such as an OSS 30 of Figure 1.
  • the method stores configuration data for a node of the communication network.
  • the OSS stores a full set of configuration data: when the OSS starts up; when a node joins the network; or when a full synchronisation operation is performed.
  • the OSS stores a subset of configuration data: when a delta synchronisation operation is performed; or when a notification is received from a node.
  • the OSS determines a congestion level of the communication network. For example, the OSS may determine congestion level based on delay times of notifications received from nodes, or from delay times of notifications received from a particular node.
  • Block 103 sets a duration of a heartbeat signal waiting period based on the determined congestion. The duration of the heartbeat signal waiting period is variable. The duration of the heartbeat signal waiting period increases with increased congestion. Block 104 determines if synchronisation of configuration data is required with the node. This determination is based on whether a heartbeat signal, or a predetermined number of heartbeat signals, is received from the node within the heartbeat signal waiting period.
  • Receiving a heartbeat signal (or a predetermined number of heartbeat signals) from the node within the heartbeat signal waiting period indicates that no synchronisation of configuration data is required with the node.
  • the OSS does not need to perform a synchronisation operation for the node, and suppresses sending a synchronisation request, 106. If a heartbeat signal (or a predetermined number of heartbeat signals) is not received from the node within the heartbeat signal waiting period at block 104, it indicates that a synchronisation of configuration data is required with the node, 105.
  • the OSS can send a synchronisation request, such as a delta synchronisation request or a full synchronisation request.
  • the OSS may delay sending the synchronisation request until congestion falls below a threshold level. If the OSS needs to synchronise with a plurality of nodes it may distribute the times at which it sends synchronisation requests to the plurality of nodes over a period of time.
  • the method of Figure 2 repeatedly determines the congestion level at block 102, so the heartbeat waiting time can rise and fall over a period of time.
  • Figures 3 to 5 show an example of operating a network over a period of time.
  • the congestion detector 34 determines that the congestion level is low.
  • the configuration data manager 31 sets a relatively short heartbeat waiting period T1 based on the low congestion level.
  • the congestion detector 34 determines that the congestion level is high.
  • the configuration data manager 31 increases the heartbeat waiting period to T2, where T2 > T1.
  • the congestion detector 34 determines that the congestion level has returned to a low level.
  • the configuration data manager 31 reduces the heartbeat waiting period to T1.
  • the heartbeat waiting period can be set to one of two possible values: T1 or T2. In other examples the heartbeat waiting period can be set to one of a larger range of possible values.
  • the set of possible values can be a linear or a non-linear (e.g. logarithmic) set of values.
  • the number of possible values and the distribution of the values in the set can be based on factors such as the required granularity of operation, performance and so on.
  • the method may increment the heartbeat waiting period by one increment each time the method is iterated. For example, if the method uses the set of heartbeat waiting periods T1, T2, T3, T4 (where T1 is the shortest and T4 is the longest), the method may increment to the next length of heartbeat waiting period at each iteration of the method.
  • the heartbeat waiting period may change from T1 → T2, and during the next iteration the heartbeat waiting period may either: remain at T2, increase to T3 or reduce to T1.
  • the method may determine if the congestion level has changed and, if the congestion level has changed, increment (up or down) the duration of the heartbeat signal waiting period to a next duration in the set of possible heartbeat waiting period durations.
  • the method may change the heartbeat waiting period to another value based on the congestion level, without necessarily incrementing through the set of values. For example, if the method uses the set of heartbeat waiting periods T1, T2, T3, T4 (where T1 is the shortest and T4 is the longest), then at a heavily congested time the heartbeat waiting period can change directly from T1 → T4.
  • the method may determine a magnitude of a change in the congestion level and increment the duration of the heartbeat signal waiting period to a different duration in the set of possible heartbeat waiting period durations based on the magnitude of the change in the congestion level.
  • Setting the heartbeat waiting period duration may use one or more other factors, such as duration of the congestion (e.g. if congestion remains at a high level for a long period, increase the heartbeat waiting period duration).
  • the heartbeat waiting period duration may vary from the order of tens of seconds during a state of no congestion up to tens of minutes during a state of congestion.
  • Figure 6 shows an example time line of a heartbeat mechanism in which a node N1 sends heartbeat notifications to the OSS without requiring a stimulus from the OSS.
  • this drawing only shows heartbeat notifications sent from the node.
  • the OSS determines that the node is in contact with the OSS and therefore that the configuration data stored for node N1 is up-to-date.
  • the OSS maintains the heartbeat waiting period at length T1 because the congestion state has not changed.
  • congestion occurs.
  • the congestion detector of the OSS detects the start of congestion.
  • the OSS increases the length of the heartbeat waiting period to T2 (T2 > T1).
  • the heartbeat waiting period is set to T1. Shortly after the start of the third heartbeat waiting period, the OSS detects the start of the increased congestion.
  • the OSS changes the heartbeat waiting period from T1 to T2.
  • the third heartbeat waiting period now has a new length of T2.
  • the OSS receives a single heartbeat notification from node N1.
  • the OSS determines that the node is still in contact with the OSS and therefore that the configuration data stored for node N1 is up-to-date.
  • congestion ends. There is a short delay before the OSS detects the end of congestion.
  • the OSS has not yet detected the end of the congestion.
  • the heartbeat waiting period remains at length T2. Shortly after the start of the fifth heartbeat waiting period, the OSS detects the end of the congestion.
  • the OSS reduces the heartbeat waiting period from T2 to T1.
  • Figure 7 shows an example time line of a heartbeat mechanism in which the OSS sends a stimulus (e.g. a signal or message) to node N1, and node N1 sends, in reply to that stimulus, a heartbeat notification to the OSS.
  • a heartbeat waiting period begins with the sending of a stimulus from the OSS.
  • the OSS receives a heartbeat notification from node N1 during the first heartbeat waiting period.
  • the OSS determines that the node is in contact with the OSS and therefore that the configuration data stored for node N1 is up-to-date.
  • the OSS maintains the heartbeat waiting period at length T1 because the congestion state has not changed.
  • congestion occurs.
  • the congestion detector of the OSS detects the start of congestion.
  • the OSS increases the length of the heartbeat waiting period to T2 (T2 > T1).
  • the heartbeat waiting period is set to T1. Shortly after the start of the third heartbeat waiting period, the OSS detects the start of the increased congestion.
  • the OSS changes the heartbeat waiting period from T1 to T2.
  • the third heartbeat waiting period now has a new length of T2.
  • the OSS receives a heartbeat notification from node N1.
  • the OSS determines that the node is still in contact with the OSS and therefore that the configuration data stored for node N1 is up-to-date.
  • congestion ends. There is a short delay before the OSS detects the end of congestion.
  • the heartbeat waiting period remains at length T2. Shortly after the start of the fifth heartbeat waiting period, the OSS detects the end of the congestion.
  • the OSS reduces the heartbeat waiting period from T2 to T1.
  • Figure 8 shows an example of determining congestion in the network.
  • a node N1 10 sends a notification (e.g. packet 40) to the OSS 30.
  • the notification 40 carries a timestamp TS 41 of the time at which the notification is sent by the node N1.
  • in a synchronised network, nodes 10 are synchronised to within a predetermined level of accuracy and are able to provide an accurate timestamp.
  • Networks may use mechanisms such as Precision Time Protocol (IEEE 1588) or Network Time Protocol version 4 to acquire synchronisation with another node in the network.
  • a node may use Global Positioning System (GPS) or a similar system to obtain an accurate time.
  • GPS Global Positioning System
  • the OSS 30 notes the arrival time of the notification.
  • the difference between the arrival time of the notification and the send time of the notification is the delay time.
  • Delay times are applied to a congestion calculation function 45.
  • the congestion calculation function 45 calculates a congestion level based on a plurality of delay times.
  • the congestion calculation function 45 can use a filter 46 to provide an average of the delay times.
  • the delay time of notifications can vary according to congestion in the network. Generally, as a network becomes more congested nodes along the path between the sending node and the OSS take an increased time to forward the notification. Therefore the notification takes longer to reach the OSS. The increased transit delay can be due to factors such as longer queues at nodes. An example of congestion calculation is provided later in this specification.
  • the delay time of notifications has been found to be a useful indicator of network congestion, as it will also affect the heartbeat notifications.
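  • The sketch below shows one plausible shape for this calculation: a moving average stands in for filter 46, and the window size and congestion threshold are assumed values, not taken from the description.

```python
from collections import deque

class DelayBasedCongestionDetector:
    """Averages recent notification delay times and compares to a threshold."""

    def __init__(self, window: int = 50, threshold_s: float = 1.0):
        self.delays = deque(maxlen=window)   # most recent delay times
        self.threshold_s = threshold_s

    def add_delay(self, delay_s: float) -> None:
        self.delays.append(delay_s)

    def average_delay(self) -> float:
        return sum(self.delays) / len(self.delays) if self.delays else 0.0

    def congested(self) -> bool:
        # an increase in the average delay time indicates increased congestion
        return self.average_delay() > self.threshold_s
```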
  • Figure 9 shows a method of operating a control entity, such as an OSS 30 of Figure 1.
  • the method stores configuration data for a node of the communication network. There are various times when the OSS stores configuration data for a node.
  • the OSS stores a full set of configuration data when the OSS starts up, when a node joins the network, or when a full synchronisation operation is performed.
  • the OSS stores a subset of configuration data when a delta synchronisation operation is performed or when a notification is received from a node.
  • the OSS determines if a synchronisation operation is required for a node.
  • the OSS may determine that synchronisation of configuration data is required if the OSS has not received a heartbeat notification within a heartbeat waiting period as described previously. This can be caused by congestion in the network.
  • the use of a variable heartbeat waiting period is optional in this aspect of the disclosure.
  • the OSS determines a congestion level of the communication network. For example, the OSS may determine congestion level based on delay times of notifications received from nodes, or from delay times of notifications received from a particular node, 204. Block 203 can be performed in parallel to the remainder of the method, or between blocks 202 and 205.
  • the OSS sends a synchronisation request to a node based on congestion level of the network.
  • One method for performing this block is shown in Figure 9.
  • the OSS compares the congestion level (calculated by block 203) with a threshold level, 206. If congestion is below the threshold level, the method determines that the network is not congested and proceeds to block 207.
  • the method sends a synchronisation request to the node. If congestion is above the threshold level, it is determined that the network is still congested.
  • the method proceeds to block 208 and delays sending a synchronisation request to the node.
  • the method continues to compare the congestion level (calculated on an on-going basis by block 203) with a threshold level until it is determined that the congestion has reduced to allow synchronisation to take place.
  • Block 207 can include a block 209 of distributing the sending of synchronisation requests.
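  • A sketch of blocks 206 to 209, under stated assumptions: the detector offers a congested() check (as in the detector sketch above), request_sync stands in for the real synchronisation request, and the polling interval and spacing values are arbitrary.

```python
import time

def request_sync(node_id: str) -> None:
    print(f"synchronisation request -> {node_id}")   # stand-in for the real request

def synchronise_when_clear(node_ids, detector, spacing_s: float = 5.0) -> None:
    """Delay synchronisation while congested, then spread requests over time."""
    while detector.congested():      # blocks 206/208: still congested, delay
        time.sleep(1.0)              # re-check the congestion level periodically
    for node_id in node_ids:         # blocks 207/209: send, distributed in time
        request_sync(node_id)
        time.sleep(spacing_s)        # avoid synchronising all nodes at once
```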
  • Figures 10 to 14 show an example of operating a network over a period of time.
  • the OSS is synchronised with each of the nodes N1-N5 in the network 5.
  • the OSS has lost synchronisation with all of the nodes N1-N5. This can occur due to congestion in the network, or for other reasons.
  • the congestion detector 34 determines that the congestion level is high. Therefore, the OSS delays sending synchronisation requests to nodes N1-N5 as they are unlikely to be successful, and may prolong or even increase the congestion.
  • the congestion detector 34 determines that the congestion level is low. The OSS has begun to synchronise with nodes N1-N5.
  • because the OSS has to synchronise with five nodes N1-N5, it distributes sending synchronisation requests to the nodes N1-N5 over a period of time.
  • the OSS has synchronised with two of the nodes: N1 and N2.
  • the OSS has synchronised with four of the nodes: N1-N4.
  • the OSS delayed sending synchronisation requests to nodes N3 and N4 until it had synchronised (or at least begun to synchronise) with nodes N1 and N2.
  • the OSS has synchronised with all five of the nodes: N1-N5.
  • Figure 15 shows another example of a network with an OSS 30.
  • a congestion detector 134 is located in the network, separately from the OSS 30.
  • the congestion detector 134 determines congestion in the network using data obtained from nodes.
  • the congestion detector 134 may inspect timestamped packets such as those used in Precision Time Protocol (IEEE 1588) or Network Time Protocol.
  • the congestion detector 134 sends a congestion indication 37 to the OSS 30.
  • the congestion indication 37 is stored at the OSS 30.
  • the congestion indication 37 indicates a congestion level detected by the external congestion detector 134.
  • the configuration data manager 31 can set a heartbeat waiting period based on the congestion indication 37.
  • the OSS 30 comprises a configuration data manager 31.
  • the configuration data manager 31 acquires configuration data from nodes 10 in the network, such as via notifications 36 received from the nodes 10.
  • the delay experienced by a notification is caused by several stochastic factors.
  • the delay comprises three major elements:
  • Transit Time is the physical time required to pass through the network, i.e. the delay when the load of the network is zero and all the queues are empty.
  • Figure 16 shows an example of the total delay 56 over a period of time.
  • the bottom curve 51 represents the transit time due to physical delays (fibers, cables, connectors and so on). It may change slowly during the day on account of slow context changes (e.g. temperature changes during daylight).
  • superimposed on the curve are spikes 52 due to the queue delay.
  • the queue delay can have large values and large variability, dependent on the network congestion state. It is possible to isolate the different components by filtering. Filtering the response shown in Figure 16 obtains the minimum delay time values (points on curve 51) and discards the spikes 52.
  • Figure 17 shows a method of determining congestion level.
  • Block 301 determines a total delay time of a plurality of notifications. A suitable method is shown in Figure 8.
  • Block 302 determines maximum and minimum values of the total delay time. In the example of Figure 16, the maximum values are the heights of the peaks 52 and the minimum values form the base line curve 51.
  • Block 303 filters the minimum values of the total delay time to find the transit time. The filtering can use interpolation to find the base line curve 51.
  • block 304 filters the maximum values of the total delay time to find a curve which joins the peaks 52.
  • Block 305 subtracts transit time from the maximum values to find queueing time. Queueing time is indicative of congestion level.
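  • A compact sketch of blocks 301 to 305 follows, assuming the total delay samples arrive as a list and using rolling minima and maxima as the filtering step; the window size is an assumption.

```python
# Illustrative rendering of Figure 17. Rolling minima approximate the
# transit-time base line (curve 51); rolling maxima follow the spikes 52;
# their difference approximates the queueing time, i.e. the congestion level.

def rolling(values, window, fn):
    return [fn(values[max(0, i - window + 1):i + 1]) for i in range(len(values))]

def queueing_times(total_delays, window=10):
    base_line = rolling(total_delays, window, min)    # blocks 302-303: transit time
    peaks = rolling(total_delays, window, max)        # block 304: peak envelope
    return [p - b for p, b in zip(peaks, base_line)]  # block 305: queueing time

# Example: a steady 5 ms transit time with two congestion spikes.
samples = [0.005, 0.005, 0.020, 0.005, 0.006, 0.050, 0.005]
print(queueing_times(samples))   # large values flag the congested samples
```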
  • the method described above can be performed for notifications received from: all nodes in a network; a sub-set of nodes in the network; a single node.
  • calculation of the congestion level can be based on notifications received from a plurality of nodes in the network. This congestion level can be used to set the heartbeat waiting period for a plurality of nodes in the network (N1, N2, ...).
  • calculation of the congestion level for a node N1 can be based only on notifications received from node N1.
  • the heartbeat waiting period can be set for node N1 based only on the congestion level determined for node N1.
  • calculation of the congestion level for other nodes in the network, and the setting of heartbeat waiting periods for other nodes in the network, can be independent of node N1.
  • Figure 18 shows an example of processing apparatus 400 which may be implemented as any form of computing and/or electronic device, and in which embodiments of the system and methods described above may be implemented.
  • Processing apparatus may implement all, or part of, the methods.
  • Processing apparatus 400 comprises one or more processors 401 which may be microprocessors, controllers or any other suitable type of processors for executing instructions to control the operation of the device.
  • the processor 401 is connected to other components of the device via one or more buses 406.
  • Processor-executable instructions 403 may be provided using any computer-readable media, such as memory 402.
  • the processor-executable instructions 403 can comprise instructions for implementing the functionality of the described methods.
  • the memory 402 is of any suitable type such as read-only memory (ROM), random access memory (RAM), a storage device of any type such as a magnetic or optical storage device. Additional memory 404 can be provided to store data 405 used by the processor 401 .
  • the processing apparatus 400 comprises one or more network interfaces 408 for interfacing with other network entities.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Environmental & Geological Engineering (AREA)
  • Health & Medical Sciences (AREA)
  • Cardiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A method of operating a control entity of a communication network comprises storing (101) configuration data for a node (10) of the communication network. The control entity determines (102) a congestion level of the communication network. The control entity sets (103) a duration of a heartbeat signal waiting period based on the determined congestion. The duration of the heartbeat signal waiting period increases with increased congestion. The control entity determines (104) whether or not synchronisation of configuration data with the node is required based on whether or not a heartbeat signal, or a predetermined number of heartbeat signals, is received from the node within the heartbeat signal waiting period. The control entity may determine (102) the congestion level of the communication network based on delay times of configuration notifications received from a node (10), or a plurality of nodes (10), in the network.
PCT/EP2016/055962 2016-03-18 2016-03-18 Configuration management in a communication network WO2017157459A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2016/055962 WO2017157459A1 (fr) 2016-03-18 2016-03-18 Configuration management in a communication network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2016/055962 WO2017157459A1 (fr) 2016-03-18 2016-03-18 Configuration management in a communication network

Publications (1)

Publication Number Publication Date
WO2017157459A1 (fr)

Family

ID=55589835

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2016/055962 WO2017157459A1 (fr) 2016-03-18 2016-03-18 Configuration management in a communication network

Country Status (1)

Country Link
WO (1) WO2017157459A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115333983A (zh) * 2022-08-16 2022-11-11 超聚变数字技术有限公司 Heartbeat management method and node

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2081325A2 (fr) * 2008-01-17 2009-07-22 Nec Corporation Monitoring control method and monitoring control device
US20130031253A1 (en) * 2011-07-29 2013-01-31 Cisco Technology, Inc. Network management system scheduling for low power and lossy networks

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2081325A2 (fr) * 2008-01-17 2009-07-22 Nec Corporation Monitoring control method and monitoring control device
US20130031253A1 (en) * 2011-07-29 2013-01-31 Cisco Technology, Inc. Network management system scheduling for low power and lossy networks

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CARMELO LANTOSCA ET AL., "Synchronising IEEE 1588 clocks under the presence of stochastic network delays", 2005 Conference on IEEE 1588, Winterthur, CH, Institute of Data Analysis and Process Design, October 2005
GAVIN D. MCCULLAGH, "Exploring delay-based TCP congestion control", February 2008

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115333983A (zh) * 2022-08-16 2022-11-11 超聚变数字技术有限公司 Heartbeat management method and node
CN115333983B (zh) * 2022-08-16 2023-10-10 超聚变数字技术有限公司 Heartbeat management method and node

Similar Documents

Publication Publication Date Title
US11252060B2 (en) Data center traffic analytics synchronization
US10320635B2 (en) Methods and apparatus for providing adaptive private network centralized management system timestamp correlation processes
US8401007B2 (en) Network synchronization over IP networks
JP2012170076A (ja) Method for time-synchronising free-running nodes in an avionics network
CN110932814B (zh) Software-defined network timing security protection method, device and system
Zhang et al. Tuning the aggressive TCP behavior for highly concurrent HTTP connections in intra-datacenter
EP2749968A1 (fr) Time control device, time control method, and program
EP2512048A2 (fr) Système et procédé pour éviter l'accumulation de gigue à basse fréquence en vue d'obtenir une distribution d'horloge de précision dans de vastes réseaux
US9331804B2 (en) Using multiple oscillators across a sub-network for improved holdover
US10069583B2 (en) Faster synchronization time and better master selection based on dynamic accuracy information in a network of IEEE 1588 clocks
Popescu et al. Measuring network conditions in data centers using the precision time protocol
US9385930B2 (en) Method to detect suboptimal performance in boundary clocks
WO2017157459A1 (fr) Configuration management in a communication network
US10334539B2 (en) Metered interface
US10042384B2 (en) System and methods for computer clock synchronization without frequency error estimation
EP3420655A1 (fr) Procédés et systèmes d'estimation de précision de synchronisation de fréquence
WO2019042102A1 (fr) Procédé et appareil d'évaluation de la qualité d'environnement d'exécution de logiciel d'un dispositif
Li et al. A high-accuracy clock synchronization method in distributed real-time system
WO2016177240A1 (fr) Frequency synchronisation method and device
JP2018046488A (ja) Network quality measurement device, network quality measurement method and network quality measurement program
Deshpande et al. Towards a Network Aware Model of the Time Uncertainty Bound in Precision Time Protocol
US11853114B1 (en) Virtualized hardware clocks for providing highly accurate time information in hosted machine instances
US11855757B1 (en) Highly accurate time information in hosted machine instances using dedicated timing network
KR100911512B1 (ko) Synchronisation method and apparatus using timing packets
Liu et al. How many planet-wide leaders should there be?

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16711586

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 16711586

Country of ref document: EP

Kind code of ref document: A1