WO2009152855A1 - Network configuration - Google Patents

Network configuration

Info

Publication number
WO2009152855A1
Authority
WO
WIPO (PCT)
Prior art keywords
network
node
configuration data
time period
configuration
Prior art date
Application number
PCT/EP2008/057718
Other languages
French (fr)
Inventor
Kieran Nash
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/EP2008/057718 priority Critical patent/WO2009152855A1/en
Publication of WO2009152855A1 publication Critical patent/WO2009152855A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/08 Configuration management of networks or network elements
    • H04L41/0803 Configuration setting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W24/00 Supervisory, monitoring or testing arrangements
    • H04W24/04 Arrangements for maintaining operational condition

Definitions

  • the invention relates to improvements in or relating to configuration of a communications network, and in particular to a method, a node, a communications packet, and a computer program product for configuring a communications network.
  • a typical communications network comprising nodes in different geographical locations connected by links may be required to be reconfigured. This may be due to changing network requirements, or due to the addition of new network equipment.
  • An example of such a required reconfiguration would be a new cell deployment plan for a mobile radio access network.
  • a new configuration of the network may be described by configuration data which is applied to the network by the management node.
  • the configuration data may describe how one or more nodes of the network, or one or more network links between nodes, or groups of nodes having links are to communicate with one another.
  • the management node is a centralized control point for coordinating these configuration changes.
  • One problem associated with configuring a network is that the nodes and links of the network may not experience the same operating load conditions prior to the new network configuration being implemented. This means that changes to the network may not be implemented at the same time.
  • This problem may be made worse by the geographical spread of the nodes of the network, which may create additional delays in implementing a desired configuration. In some cases some or all of the desired configuration data may be lost due to the delay introduced by the different load conditions and/or geographical spread of the nodes of the network.
  • a consequence of failure to implement a desired network configuration is that only a partial implementation of the desired configuration may be applied. In some cases the whole network or portions of the network may be corrupted such that they do not function correctly. Furthermore the network configuration prior to the reconfiguration may also have been lost or corrupted. This can lead to a reduction of performance and the level of service offered by the network without providing the benefits of the desired configuration.
  • a further consequence of failure to implement the complete desired configuration is a complicated and time consuming effort by engineers to return the communications network back to its original configuration. This may involve many steps to determine which configuration changes have been made so that they can be undone requiring additional management traffic in the network. This process wastes time and the additional management traffic may introduce an additional latency into the network. Such additional management traffic may also succumb to the same problems that caused the initial failure to change to the desired configuration which may further waste time in returning the network back to its original configuration.
  • An object of the present invention is to provide a way of improving the implementation of network changes whilst reducing or minimising the above-mentioned problems.
  • a method of configuring a communications network, node or link comprising storing current configuration data relating to a current configuration.
  • the method comprising creating desired configuration data relating to a desired configuration.
  • the method further comprising defining a time period for which the desired configuration data is intended to be valid.
  • the method further comprising configuring the communications network, node or link with the desired configuration data within the time period.
  • Such a method has the advantage of setting a time period by when the desired network configuration is to be implemented. If the time period expires before configuration with the desired configuration data, the data becomes invalid such that it cannot be used for configuration.
  • the method further includes configuring the communications network, node or link with the current configuration data if the time period has expired without successfully configuring the network, node or link with the desired configuration data.
  • the method further includes transmitting a failure message if the time period has expired without configuring the network, node or link with the desired configuration data.
  • the method further includes transmitting a success message when the network, node or link has been configured with the desired configuration data. This provides the advantage of being able to know when the desired configuration has or has not been implemented.
  • the method further includes implementing the time period as a time-to-live counter value.
  • a time-to-live counter value is a convenient way of implementing the time period in a header of a communications packet.
  • the method further includes configuring the communications network, node or link with the current configuration data to revert back to the current configuration after a further time period.
  • the method further includes implementing the further time period as a time-to-live counter value.
  • a node to configure a communications network, node or link.
  • the node comprising a memory to store current configuration data relating to a current configuration and desired configuration data relating to a desired configuration.
  • the node operable to define a time period for which the desired configuration data is intended to be valid.
  • the node being further operable to configure the communications network, node or link with the desired configuration data within the time period.
  • Such a node has the advantage of setting a time period by when the desired network configuration is to be implemented. If the time period expires before configuration with the desired configuration data, the data becomes invalid such that it cannot be used for configuration.
  • the node is operable to configure the network, node or link with the current configuration data if the time period has expired without successfully configuring the network, node or link with the desired configuration data.
  • the node is operable to transmit a failure message if the time period has expired without configuring the network, node or link with the desired configuration data.
  • the node is operable to transmit a success message when the network, node or link has been configured with the desired configuration data.
  • the time period is implemented as a time-to-live counter value, which is a convenient way of implementing the time period in the header of a communications packet.
  • the node is operable to configure the communications network, node or link with the current configuration data to revert back to the current configuration after a further time period.
  • the further time period is implemented as a time-to-live counter value.
  • a communications packet operable to configure a communications network, node or link.
  • the packet including desired configuration data relating to a desired configuration.
  • the packet further including a counter value for defining a time period for which the desired configuration data is intended to be valid.
  • the communications packet being operable to configure the communications network, node or link to the desired configuration data within the time period.
  • Such a packet has the advantage of setting a time period by when the desired network configuration is to be implemented. If the time period expires before configuration with the desired configuration data, the data becomes invalid such that it cannot be used for configuration.
  • the counter value may be arranged to increment up to a predetermined value, or decrement from a predetermined value.
  • the counter value may be linked to a timing clock or timing cycle of the network which may be a convenient way to define the time period.
  • the communications packet further includes prior configuration data relating to a configuration of the network, node or link prior to the desired configuration data, and is operable to configure the network, node or link with the current configuration data if the time period has expired without successfully configuring the network, node or link with the desired configuration data.
  • the communications packet is operable to transmit a failure message if the time period has expired without configuring the network, node or link with the desired configuration data.
  • the communications packet is operable to transmit a success message when the network, node or link has been configured with the desired configuration data.
  • the time period is implemented as a time-to-live counter value, which is a convenient way of implementing the time period in the header of the communications packet.
  • the communications packet is operable to configure the communications network, node or link with the prior configuration data after a further time period.
  • the further time period is implemented as a time-to-live counter value.
  • a computer program product operable to perform the method according to the first aspect, or operable to control the node of the second aspect, or operable to implement the communications packet of the third aspect.
  • a communications network configured using the method according to the first aspect, or using a node according to the second aspect, or operable with a communication packet according to the third aspect, or arranged to implement a computer program product according to the fourth aspect.
  • Figure 1 shows a network undergoing network reconfiguration according to an embodiment of the invention
  • Figure 2 shows a first part of a flow diagram illustrating the configuration actions performed by a management node.
  • Figure 3 shows a second part of a flow diagram illustrating the configuration actions performed by a network element
  • Figure 4 shows a flow diagram illustrating a method according to an embodiment of the present invention.
  • FIG. 1 shows a network undergoing network reconfiguration according to an embodiment of the invention, generally designated 10.
  • the network 10 is a simplified version of a real-life network comprising three network elements, which are Base Transceiver Stations (BTS) 12, 14, 16, each having a respective antenna 18, 20, 22, and is used to give a broad overview of how an embodiment of the invention is implemented.
  • the network 10 also has a management node 24 for controlling the operations of each BTS 12, 14, 16.
  • solid lines represent physical links along which traffic can flow, whereas dashed lines are used to show the management messages travelling between each BTS 12, 14, 16 and the management node 24. It will be appreciated that there may be many nodes between each BTS 12, 14, 16 and the management node 24, which have been omitted for the purposes of clarity.
  • the BTS 16 represents a new item of network equipment that has been added to the network 10.
  • the two existing BTSs 12, 14 were in communication with one another according to an existing network configuration, and defined by existing network configuration data.
  • a memory 26 of the management node 24 is arranged to store the existing network configuration data for later use as described below.
  • Such existing network configuration data is typically readily available to the management node 24 as the entity in control of the network 10.
  • various management probe packets can be transmitted into the existing network 10 to determine the existing network configuration.
  • the memory 26 could be located in one or more of the BTSs 12, 14, 16.
  • Configuration data relating to a desired network configuration which includes the new BTS 16 is defined, for example by an engineer, and held in the memory 26.
  • the desired configuration data is transmitted into the network 10 in the form of a communications packet 30, as shown at 28, to implement the desired network configuration.
  • the existing network configuration data may include details of any dependencies that exist between different network elements. Such existing dependencies can be flagged in the packet 30 as required. It will be appreciated that the packet 30 may alternatively be arranged to be first transmitted from any network element and then propagated through the network from any other network element.
  • the communications packet 30 has a header 32 which has a field to define a time period for which the desired network configuration data is intended to be valid. This may be most conveniently implemented using a Time To Live (TTL) field in the header 32 of the communications packet 30.
  • the communications packet 30 is multicast to each BTS 12, 14, 16 of the network 10 to describe how each BTS 12, 14, 16 is to communicate with one another according to the desired configuration data. Once the communications packet has been multicast to each BTS 12, 14, 16 various management messages 34, 36, 38 are sent between them to configure the network 10 according to the desired network configuration, and to verify that the desired network configuration has been implemented.
  • the TTL field in the header 32 is a number or counter that represents a limit on the period of time, or number of iterations, or transmissions in the network 10 that the packet 30 can experience before it should be discarded or declared to be invalid.
  • under the Internet Protocol (IP), the TTL field is the 9th octet of 20 in the header 32, and is 8 bits long.
  • the TTL field can be thought of as an upper bound on the time that the packet 30 can exist in the network 10. After expiry of this time the packet cannot be used to reconfigure the network 10 with the desired configuration.
  • the time period may alternatively be a predefined parameter modified at the management node 24 or at the individual network elements such as the BTSs 12, 14, 16.
  • a success or fail message 40 is transmitted from each BTS 12, 14, 16 to the management node 24 to verify if the change has taken place. If the management node 24 receives at least one fail message 40 this is an indication that one or more portions of the network 10 are not functioning correctly. Receipt of at least one failure message by the management node 24 is arranged to trigger the management node 24 to revert back to the existing network configuration. Such reverting back to the existing network configuration is achieved by multicasting the stored existing network configuration data in the memory 26 of the node 24 in the form of another packet 30 containing the existing network configuration data. The fail message 40 may be propagated throughout the network 10 as required, to initiate the BTSs 12, 14, 16 to revert back to the existing configuration.
  • Reverting back to the existing network configuration provides a way of minimising the disruption to network services in the event of a full or partial failure of the implementation of the desired network configuration. Such reverting back may be thought of as an automatic process in the event of failure of the network reconfiguration to the desired network configuration.
  • the management node 24 is arranged to revert back to the existing network configuration after a further period of time.
  • This further period of time may be implemented as a specific point in time in the future, for example one day or one week.
  • the further period of time may be implemented as a counter value that represents a further period of time before reverting back to the existing network configuration. It will be appreciated that the further period of time may also be implemented as a TTL field in a packet.
  • the management node 24 is arranged to cause the network 10 to revert back to the existing network configuration, which is stored in the memory 26 of the management node 24. This may be thought of as an automatic process which may not require the further input of an engineer.
  • a prompt message may be provided before reverting back to the existing network configuration.
  • Reverting back to the existing network configuration after a further period of time has the advantage of allowing the possibility to implement a planned temporary reconfiguration of the network, such as redirecting of network resources for mass call events, sports tournaments, conferences, disaster scene handling and special broadcast events.
  • the various counters described herein could be arranged to increment up to a predetermined value, or decrement from a predetermined value.
  • each BTS 12, 14, 16 may operate locally to store the existing configuration, apply the desired changes locally, and then verify the configuration changes locally.
  • each BTS 12, 14, 16 may operate under control of the management node 24.
  • each BTS 12, 14, 16 then verifies the changes with other BTSs 12, 14, 16.
  • These changes and verifications are all performed within the time period set to perform the desired configuration changes. If one of the BTSs 12, 14, 16 has not successfully implemented the desired configuration change locally within the time period, each BTS 12, 14, 16 is informed by a management message or a failure message, and then each BTS 12, 14, 16 reverts back to the stored existing configuration. Upon expiry of a further time period, indicating a planned temporary configuration change, each BTS 12, 14, 16 reverts back to the stored existing configuration.
  • Figures 2 and 3 represent one diagram showing a more detailed description of the process for implementing an embodiment of the invention.
  • Figure 2 shows a first part of a flow diagram illustrating the configuration actions performed by a management node, generally designated 42.
  • Figure 3 shows a second part of a flow diagram illustrating the configuration actions performed by a network element, such as the BTS 12, 14, 16, and generally designated 80.
  • An ambition level 46 is selected which may be any one of three ambition levels 48, 50, 52.
  • Ambition level one 48 refers to a configuration change to only one network element, such as one of the BTSs 12, 14, 16, and where there is no dependency with another network element.
  • Ambition level two 50 refers to a configuration change to more than one network element, such as two or more of the BTSs 12, 14, 16, and where there is no dependency between them.
  • Ambition level three 52 refers to a configuration change where there are dependencies between network elements for part, or all, of the desired configuration. Such a dependency might relate to a particular way in which the particular network element is allowed to operate with another network element, for example to provide a guaranteed service in a bidirectional link.
  • the network change required is then defined by determining the configuration data 54, based on the ambition level 48, 50, 52 selected. If ambition level one 48 has been selected the configuration data includes only one network element. If ambition level two 50 has been selected the configuration data includes a list of network elements. If ambition level three 52 has been selected the configuration data includes any data change dependencies between network elements. The definition of a dependency is flexible to permit customized roll back to the existing configuration rather than enforcing roll back on complete network reconfigurations.
  • a validity period 56 is then selected, which may be any one of three validity periods 58, 60, 62.
  • the validity periods 58, 60, 62 are implemented as a TTL value, as described above.
  • the validity period one 58 relates to a time period within which the desired network configuration must be made. For example, the validity period one 58 may be up to five minutes, or five hours, or longer depending on the complexity of the desired network configuration. If the desired network configuration is not implemented within the validity period one 58, the network reverts back to the existing network configuration as described below. This has the advantage of being able to revert back to the existing network configuration if the desired network configuration has not been successfully implemented.
  • the validity period one 58 may be a static period set for a particular network, or may be defined per configuration request.
  • the validity period two 60 relates to reverting back to the existing network configuration following a successful implementation of the desired network configuration after a predetermined period of time.
  • the validity period two 60 may be up to two weeks which is the length of time for which the desired network change is intended to be valid.
  • the validity period two 60 has the advantage of allowing a planned temporary network configuration to be implemented for mass call events, sports tournaments, conferences, disaster scene handling and special broadcast events.
  • the validity period two 60 may be defined per configuration request.
  • the validity period three 62 relates to reverting back to the existing network configuration at a fixed period of time in the future following a successful implementation of the desired network configuration. For example, the validity period three 62 may be set to a particular time on a particular day in the future.
  • the validity period three 62 also has the advantage of allowing a planned temporary network configuration to be implemented.
  • the validity period three 62 may be defined per configuration request.
  • a request is then issued 64 to the network 10 to change the network to the desired configuration data.
  • the request is sent to the network 10 in the form of a packet represented by the arrow 66.
  • the request also includes a start time for the configuration to commence, which is sent to a request timer 82 discussed below in connection with Figure 3.
  • the request is also sent to a request state device 68 which sets the status of the request state device 68 to "Request Sent".
  • the request state device 68 is also capable of receiving the result 70 of the attempted configuration from the network 10 in the form of a "Success" or "Fail" message represented by the arrow 72. On receipt of the success/fail result messages, the request state device 68 may be set to "Success" or "Fail" as discussed below.
  • the request to change the network to the desired configuration data represented by the arrow 66 is received at the network element 80 as shown at 84.
  • the validity period(s) included in the request are checked 86, and any time periods are started by the request timer 82.
  • the request timer 82 is a local device of the network element 80, but it will be appreciated that a general network clock generally used for timing operations of the network could be used as the request timer 82.
  • the time periods are checked to determine if they are either a fixed time period, or have a fixed end time.
  • any further time periods are checked to determine whether the desired network configuration should only be a temporary configuration before reverting back to a previous network configuration. Any further time periods may be either a fixed time period, or have a fixed end time.
  • a data backup is then performed 88 to store the existing configuration of the network element 80 for later use.
  • the data backup may be performed at the node 42.
  • the ambition level is checked 90 to verify the level of functionality required in the desired network configuration. If the ambition level one 92 is to be implemented the local network element 80 changes are implemented and then verified. If the ambition level two 94 is to be implemented the local network element 80 changes are implemented and then verified with other network elements. If the ambition level three 96 is to be implemented the local network element 80 changes are implemented and verified locally before verifying the changes with other dependent network elements illustrated by the arrow 97. Such verification with other dependent network elements is performed only where the desired configuration data has flagged that a dependency with another network element exists.
  • If the desired network configuration has been successfully implemented, a "Success" message 98 is generated and transmitted to the request timer 82 to stop it. If the request timer 82 notes that a time period has been reached without receiving the "Success" message 98 the configuration process is stopped 100, and a "Fail" message 102 is generated. The "Success" or "Fail" message 98, 102 is then transmitted 104 to the management node 42 of Figure 2 indicated by the arrow 72 where it is received 70 and passed to the request state device 68 which is correspondingly set to "Success" or "Fail". If the ambition level three was selected as shown at 106, the "Fail" result is also transmitted to dependent network elements indicated by the arrow 108.
  • the request timer 82 also notes if any further time periods have been reached after a successful implementation of the desired network configuration. If any such further time periods have been reached the request timer 82 implements reverting to a previous configuration 110 based on the data backup performed 88 to store the existing configuration. If reverting back to the previous configuration is based on ambition level one 112, or ambition level two 114 then a further "Success" message 98 or "Fail" message 102 is generated depending on whether reverting to the previous configuration was implemented or not, and the change is verified.
  • If reverting back to the previous configuration is based on ambition level three 116, then a further "Success" message 98 or "Fail" message 102 is generated depending on whether reverting to the previous configuration was implemented or not.
  • the change based on ambition level three 116 is verified locally at the network element 80, and with other dependent network elements indicated by the arrow 118. It will be appreciated that such reverting back to a previous configuration 110 may be considered to be an automated process to restore the previous configuration, which is initiated upon expiry of any further time periods. Alternatively, such reverting back to the existing configuration may require confirmation from a user such as an engineer.
  • ambition level three 52 allows bulk configuration changes to be made. Such bulk configuration changes may require a dependency model to be used when applying the ambition level three, shown at 96 and 106 in Figure 3, to identify if there is interdependency between different parts in the new configuration.
  • a dependency model is intended to dictate that the network 10 will roll back the configuration for the parts for which a dependency has been identified with the network elements that have failed to realize or verify the required configuration, rather than enforcing a complete roll back to the previous configuration (see the sketch after this list). This gives rise to a self organising network in terms of configuration deployment and roll back.
  • Figure 4 shows a flow diagram illustrating a method according to an embodiment of the present invention.
  • the method includes storing 130 existing configuration data relating to an existing configuration of a communications network, node or link. Desired configuration data is then created 132 relating to a desired configuration of the communications network, node or link. A time period for which the desired configuration data is intended to be valid is defined 134 and the communications network, node or link is configured 136 with the desired configuration data within the time period.
  • If the time period expires without the desired configuration data being successfully applied, the communications network, node or link is configured 138 with the existing configuration data.
  • In that case a failure message 140 is transmitted, for example, by the management node 24 or other network elements 12, 14, 16.
  • When the desired configuration data has been successfully applied, a success message is transmitted 142.
  • the method may further include configuring the communications network, node or link with the existing configuration data 138 to revert back to the existing configuration after a further time period 144.
  • There may be many ways to implement the time period or the further time period, but in a preferred embodiment these are implemented as a TTL counter value 146.
  • the embodiments presented above have the advantage that reverting back to a previous configuration, on failure to implement a desired configuration within the time period or on expiry of the further time period, is quicker. This is due to the direct communication between affected network elements rather than relying on reporting back to the management node 24, 42 and waiting for the management node 24, 42 to make a decision and provide subsequent instructions. This minimizes the negative effect on network operation due to failed configuration changes.
  • Another advantage of the above embodiments is that any temporary planned change that is made to the network 10 will be removed when the network reverts back to a previous configuration. This cuts down on manual intervention and complexity that must be administered by an engineer or network management system.
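The dependency model mentioned above might, purely as an illustrative Python sketch under assumed data shapes, determine the roll-back scope as follows: only the elements with an identified dependency on a failed element are reverted, rather than the whole network.

    def roll_back_scope(dependencies, failed_elements):
        # 'dependencies' maps each element to the elements it depends on.
        scope = set(failed_elements)
        changed = True
        while changed:
            changed = False
            for element, needs in dependencies.items():
                if element not in scope and scope.intersection(needs):
                    scope.add(element)
                    changed = True
        return scope   # elements whose configuration is rolled back

    # BTSs 12 and 14 depend on the new BTS 16; if BTS 16 fails to verify, only
    # 12, 14 and 16 are reverted, leaving unrelated elements untouched.
    print(roll_back_scope({12: [16], 14: [16], 15: []}, failed_elements=[16]))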

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to improvements in or relating to configuration of a communications network. A method, a node, a communications packet, and a computer program product for configuring a communications network are described. Current configuration data relating to a current configuration is stored and desired configuration data relating to a desired configuration is created. A time period for which the desired configuration data is intended to be valid is also defined such that configuration with the desired configuration data is performed within the time period.

Description

Network Configuration
Technical Field
The invention relates to improvements in or relating to configuration of a communications network, and in particular to a method, a node, a communications packet, and a computer program product for configuring a communications network.
Background
A typical communications network comprising nodes in different geographical locations connected by links may be required to be reconfigured. This may be due to changing network requirements, or due to the addition of new network equipment. An example of such a required reconfiguration would be a new cell deployment plan for a mobile radio access network.
It is known to reconfigure a network using a management node acting in a network management role. A new configuration of the network may be described by configuration data which is applied to the network by the management node. The configuration data may describe how one or more nodes of the network, or one or more network links between nodes, or groups of nodes having links are to communicate with one another. Typically in order to realize these configuration changes in the network the nodes affected by the configuration data must make the desired configuration at the same time. The management node is a centralized control point for coordinating these configuration changes. One problem associated with configuring a network is that the nodes and links of the network may not experience the same operating load conditions prior to the new network configuration being implemented. This means that changes to the network may not be implemented at the same time. This problem may be made worse by the geographical spread of the nodes of the network, which may create additional delays in implementing a desired configuration. In some cases some or all of the desired configuration data may be lost due to the delay introduced by the different load conditions and/or geographical spread of the nodes of the network.
A consequence of failure to implement a desired network configuration is that only a partial implementation of the desired configuration may be applied. In some cases the whole network or portions of the network may be corrupted such that they do not function correctly. Furthermore the network configuration prior to the reconfiguration may also have been lost or corrupted. This can lead to a reduction of performance and the level of service offered by the network without providing the benefits of the desired configuration.
A further consequence of failure to implement the complete desired configuration is a complicated and time consuming effort by engineers to return the communications network back to its original configuration. This may involve many steps to determine which configuration changes have been made so that they can be undone requiring additional management traffic in the network. This process wastes time and the additional management traffic may introduce an additional latency into the network. Such additional management traffic may also succumb to the same problems that caused the initial failure to change to the desired configuration which may further waste time in returning the network back to its original configuration.
Summary
An object of the present invention is to provide a way of improving the implementation of network changes whilst reducing or minimising the above-mentioned problems.
According to a first aspect of the invention there is provided a method of configuring a communications network, node or link. The method comprising storing current configuration data relating to a current configuration. The method comprising creating desired configuration data relating to a desired configuration. The method further comprising defining a time period for which the desired configuration data is intended to be valid. The method further comprising configuring the communications network, node or link with the desired configuration data within the time period.
Such a method has the advantage of setting a time period by when the desired network configuration is to be implemented. If the time period expires before configuration with the desired configuration data, the data becomes invalid such that it cannot be used for configuration.
Preferably the method further includes configuring the communications network, node or link with the current configuration data if the time period has expired without successfully configuring the network, node or link with the desired configuration data. Such a method allows reverting back to the current configuration and is possible because it has previously been stored. This has the advantage of avoiding wasted time, and avoiding the reduction of level of service that may be caused by additional management traffic in trying to revert back to the previous configuration if the desired network configuration was not successfully implemented, or only partially implemented.
Preferably the method further includes transmitting a failure message if the time period has expired without configuring the network, node or link with the desired configuration data. Preferably the method further includes transmitting a success message when the network, node or link has been configured with the desired configuration data. This provides the advantage of being able to know when the desired configuration has or has not been implemented.
Preferably the method further includes implementing the time period as a time-to-live counter value. Such a time-to-live counter value is a convenient way of implementing the time period in a header of a communications packet.
Preferably the method further includes configuring the communications network, node or link with the current configuration data to revert back to the current configuration after a further time period.
This has the advantage of allowing temporary configuration changes to be implemented. Such a temporary configuration has a duration from when the desired configuration has been implemented to when the further time period has expired. Since the current configuration data has been stored it is possible to revert back to the current configuration which may be advantageous to plan for network configuration in a coordinated manner, for temporary planned events such as redirecting of network resources for mass call events, sports tournaments, conferences, disaster scene handling and special broadcast events.
Preferably the method further includes implementing the further time period as a time-to-live counter value.
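By way of illustration only, the following Python sketch shows how the first-aspect method might be realised in software. The element interface (backup, apply, verify, send_message) and the use of a monotonic clock are assumptions made for this sketch, not part of the described embodiments.

    import time

    def apply_configuration(element, desired_config, validity_period, further_period=None):
        # Store the current configuration data before any change is attempted.
        current_config = element.backup()
        deadline = time.monotonic() + validity_period

        # Attempt the desired configuration within the defined time period.
        element.apply(desired_config)
        configured = element.verify() and time.monotonic() <= deadline

        if not configured:
            # The time period expired (or verification failed), so the desired
            # configuration data is no longer valid: revert and report failure.
            element.apply(current_config)
            element.send_message("Fail")
            return False

        element.send_message("Success")
        if further_period is not None:
            # Planned temporary configuration: revert after the further time period.
            time.sleep(further_period)
            element.apply(current_config)
        return True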
According to a second aspect of the invention there is provided a node to configure a communications network, node or link. The node comprising a memory to store current configuration data relating to a current configuration and desired configuration data relating to a desired configuration. The node operable to define a time period for which the desired configuration data is intended to be valid. The node being further operable to configure the communications network, node or link with the desired configuration data within the time period.
Such a node has the advantage of setting a time period by when the desired network configuration is to be implemented. If the time period expires before configuration with the desired configuration data, the data becomes invalid such that it cannot be used for configuration. Preferably the node is operable to configure the network, node or link with the current configuration data if the time period has expired without successfully configuring the network, node or link with the desired configuration data.
This has the advantage of avoiding wasted time, and avoiding reducing network performance due to trying to revert back to the previous configuration if the desired network configuration was not successfully implemented.
Preferably the node is operable to transmit a failure message if the time period has expired without configuring the network, node or link with the desired configuration data. Preferably the node is operable to transmit a success message when the network, node or link has been configured with the desired configuration data.
Preferably the time period is implemented as a time-to-live counter value, which is a convenient way of implementing the time period in the header of a communications packet.
Preferably the node is operable to configure the communications network, node or link with the current configuration data to revert back to the current configuration after a further time period.
This has the advantage of allowing temporary configuration changes to be implemented, such as redirecting of network resources for mass call events, sports tournaments, conferences, disaster scene handling and special broadcast events. Preferably the further time period is implemented as a time-to-live counter value.
According to a third aspect of the invention there is provided a communications packet operable to configure a communications network, node or link. The packet including desired configuration data relating to a desired configuration. The packet further including a counter value for defining a time period for which the desired configuration data is intended to be valid. The communications packet being operable to configure the communications network, node or link to the desired configuration data within the time period.
Such a packet has the advantage of setting a time period by when the desired network configuration is to be implemented. If the time period expires before configuration with the desired configuration data, the data becomes invalid such that it cannot be used for configuration.
The counter value may be arranged to increment up to a predetermined value, or decrement from a predetermined value. The counter value may be linked to a timing clock or timing cycle of the network which may be a convenient way to define the time period.
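As an illustrative sketch only, a counter of the kind described above might be modelled as follows; the class and method names are assumptions, and the counter is driven by whatever timing clock or timing cycle the network provides.

    class ValidityCounter:
        def __init__(self, limit, count_down=True):
            # Counts down from a predetermined value (like an IP TTL) or up towards it.
            self.limit = limit
            self.count_down = count_down
            self.value = limit if count_down else 0

        def tick(self):
            # Called once per timing cycle of the network clock.
            self.value += -1 if self.count_down else 1

        def expired(self):
            # The desired configuration data stops being valid once the counter expires.
            return self.value <= 0 if self.count_down else self.value >= self.limit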
Preferably the communications packet further includes prior configuration data relating to a configuration of the network, node or link prior to the desired configuration data, and is operable to configure the network, node or link with the current configuration data if the time period has expired without successfully configuring the network, node or link with the desired configuration data.
This allows reverting back to the previous configuration if the desired network configuration was not successfully implemented which has the advantage of avoiding wasted time, and avoiding reducing network performance due to trying to undo a previous failed attempt at configuration.
Preferably the communications packet is operable to transmit a failure message if the time period has expired without configuring the network, node or link with the desired configuration data. Preferably the communications packet is operable to transmit a success message when the network, node or link has been configured with the desired configuration data.
Preferably the time period is implemented as a time-to-live counter value, which is a convenient way of implementing the time period in the header of the communications packet.
Preferably the communications packet is operable to configure the communications network, node or link with the prior configuration data after a further time period.
This has the advantage of allowing temporary configuration changes to be implemented, such as redirecting of network resources for mass call events, sports tournaments, conferences, disaster scene handling and special broadcast events. Preferably the further time period is implemented as a time-to-live counter value.
According to a fourth aspect of the invention there is provided a computer program product operable to perform the method according to the first aspect, or operable to control the node of the second aspect, or operable to implement the communications packet of the third aspect.
According to a fifth aspect there is provided a communications network configured using the method according to the first aspect, or using a node according to the second aspect, or operable with a communication packet according to the third aspect, or arranged to implement a computer program product according to the fourth aspect.
It will be appreciated that any preferred or optional features of one aspect of the invention may also be preferred or optional features of other aspects of the invention.
Brief Description of the Drawings
Other features of the invention will be apparent from the following description of preferred embodiments shown by way of example only with reference to the accompanying drawings, in which:
Figure 1 shows a network undergoing network reconfiguration according to an embodiment of the invention;
Figure 2 shows a first part of a flow diagram illustrating the configuration actions performed by a management node;
Figure 3 shows a second part of a flow diagram illustrating the configuration actions performed by a network element; and
Figure 4 shows a flow diagram illustrating a method according to an embodiment of the present invention.
Detailed Description
Figure 1 shows a network undergoing network reconfiguration according to an embodiment of the invention, generally designated 10. The network 10 is a simplified version of a real-life network comprising three network elements, which are Base Transceiver Stations (BTS) 12, 14, 16, each having a respective antenna 18, 20, 22, and is used to give a broad overview of how an embodiment of the invention is implemented. The network 10 also has a management node 24 for controlling the operations of each BTS 12, 14, 16. In Figure 1 solid lines represent physical links along which traffic can flow, whereas dashed lines are used to show the management messages travelling between each BTS 12, 14, 16 and the management node 24. It will be appreciated that there may be many nodes between each BTS 12, 14, 16 and the management node 24, which have been omitted for the purposes of clarity.
In Figure 1 the BTS 16 represents a new item of network equipment that has been added to the network 10. Prior to the addition of the new BTS 16, the two existing BTSs 12, 14 were in communication with one another according to an existing network configuration, as defined by existing network configuration data. According to an embodiment of the invention a memory 26 of the management node 24 is arranged to store the existing network configuration data for later use as described below. Such existing network configuration data is typically readily available to the management node 24 as the entity in control of the network 10. Alternatively, various management probe packets can be transmitted into the existing network 10 to determine the existing network configuration. It will also be appreciated that the memory 26 could be located in one or more of the BTSs 12, 14, 16.
Configuration data relating to a desired network configuration which includes the new BTS 16 is defined, for example by an engineer, and held in the memory 26. The desired configuration data is transmitted into the network 10 in the form of a communications packet 30, as shown at 28, to implement the desired network configuration. The existing network configuration data may include details of any dependencies that exist between different network elements. Such existing dependencies can be flagged in the packet 30 as required. It will be appreciated that the packet 30 may alternatively be arranged to be first transmitted from any network element and then propagated through the network from any other network element.
The communications packet 30 has a header 32 which has a field to define a time period for which the desired network configuration data is intended to be valid. This may be most conveniently implemented using a Time To Live (TTL) field in the header 32 of the communications packet 30. The communications packet 30 is multicast to each BTS 12, 14, 16 of the network 10 to describe how each BTS 12, 14, 16 is to communicate with one another according to the desired configuration data. Once the communications packet has been multicast to each BTS 12, 14, 16 various management messages 34, 36, 38 are sent between them to configure the network 10 according to the desired network configuration, and to verify that the desired network configuration has been implemented.
The TTL field in the header 32 is a number or counter that represents a limit on the period of time, or number of iterations, or transmissions in the network 10 that the packet 30 can experience before it should be discarded or declared to be invalid. Under the Internet Protocol (IP) the TTL field is the 9th octet of 20 in the header 32, and is 8 bits long. The TTL field can be thought of as an upper bound on the time that the packet 30 can exist in the network 10. After expiry of this time the packet cannot be used to reconfigure the network 10 with the desired configuration. It will be shown below with reference to Figures 2 and 3 that the time period may alternatively be a predefined parameter modified at the management node 24 or at the individual network elements such as the BTSs 12, 14, 16.
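For illustration, the following Python sketch reads and decrements the TTL octet of a plain 20-byte IPv4 header; the function names are assumptions, and a real router would also have to update the header checksum after changing the TTL.

    def read_ttl(ip_header: bytes) -> int:
        # The TTL is the 9th octet of the 20-byte IPv4 header, i.e. zero-based offset 8.
        if len(ip_header) < 20:
            raise ValueError("need at least a 20-byte IPv4 header")
        return ip_header[8]

    def decrement_ttl(ip_header: bytearray) -> int:
        # One hop (or one timing iteration) consumes one unit of TTL; a value of 0
        # means the packet, and the configuration data it carries, is no longer valid.
        ip_header[8] = max(ip_header[8] - 1, 0)
        return ip_header[8]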
Depending on whether the network 10 has been successfully or unsuccessfully reconfigured with the desired configuration data within the time period defined in the TTL field, a success or fail message 40 is transmitted from each BTS 12, 14, 16 to the management node 24 to verify if the change has taken place. If the management node 24 receives at least one fail message 40 this is an indication that one or more portions of the network 10 are not functioning correctly. Receipt of at least one failure message by the management node 24 is arranged to trigger the management node 24 to revert back to the existing network configuration. Such reverting back to the existing network configuration is achieved by multicasting the stored existing network configuration data in the memory 26 of the node 24 in the form of another packet 30 containing the existing network configuration data. The fail message 40 may be propagated throughout the network 10 as required, to initiate the BTSs 12, 14, 16 to revert back to the existing configuration.
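A minimal sketch of this behaviour at the management node 24 is given below; the memory, results and multicast parameters are assumptions introduced only for illustration.

    def handle_results(memory, results, multicast):
        # 'results' maps each BTS (12, 14, 16) to "Success" or "Fail";
        # 'multicast' sends a packet to every BTS of the network 10.
        if any(outcome == "Fail" for outcome in results.values()):
            # At least one fail message 40: revert by multicasting the stored
            # existing network configuration data held in the memory 26.
            multicast({"configuration_data": memory["existing_configuration"]})
            return "reverted to existing configuration"
        return "desired configuration confirmed"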
Reverting back to the existing network configuration provides a way of minimising the disruption to network services in the event of a full or partial failure of the implementation of the desired network configuration. Such reverting back may be thought of as an automatic process in the event of failure of the network reconfiguration to the desired network configuration.
In one embodiment the management node 24 is arranged to revert back to the existing network configuration after a further period of time. This further period of time may be implemented as a specific point in time in the future, for example one day or one week. Alternatively the further period of time may be implemented as a counter value that represents a further period of time before reverting back to the existing network configuration. It will be appreciated that the further period of time may also be implemented as a TTL field in a packet. After expiry of the further time period the management node 24 is arranged to cause the network 10 to revert back to the existing network configuration, which is stored in the memory 26 of the management node 24. This may be thought of as an automatic process which may not require the further input of an engineer. Alternatively a prompt message may be provided before reverting back to the existing network configuration. Reverting back to the existing network configuration after a further period of time has the advantage of allowing the possibility to implement a planned temporary reconfiguration of the network, such as redirecting of network resources for mass call events, sports tournaments, conferences, disaster scene handling and special broadcast events. It will be appreciated that the various counters described herein could be arranged to increment up to a predetermined value, or decrement from a predetermined value.
In summary, each BTS 12, 14, 16 may operate locally to store the existing configuration, apply the desired changes locally, and then verify the configuration changes locally. Alternatively each BTS 12, 14, 16 may operate under control of the management node 24. In both embodiments each BTS 12, 14, 16 then verifies the changes with other BTSs 12, 14, 16. These changes and verifications are all performed within the time period set to perform the desired configuration changes. If one of the BTSs 12, 14, 16 has not successfully implemented the desired configuration change locally within the time period, each BTS 12, 14, 16 is informed by a management message or a failure message, and then each BTS 12, 14, 16 reverts back to the stored existing configuration. Upon expiry of a further time period, indicating a planned temporary configuration change, each BTS 12, 14, 16 reverts back to the stored existing configuration.
Figures 2 and 3 represent one diagram showing a more detailed description of the process for implementing an embodiment of the invention. Figure 2 shows a first part of a flow diagram illustrating the configuration actions performed by a management node, generally designated 42. Figure 3 shows a second part of a flow diagram illustrating the configuration actions performed by a network element, such as the BTS 12, 14, 16, and generally designated 80.
Referring firstly to Figure 2 a user, such as a telecoms engineer, performs a scheduled action 44 to prepare a configuration change request. An ambition level 46 is selected which may be any one of three ambition levels 48, 50, 52. Ambition level one 48 refers to a configuration change to only one network element, such as one of the BTSs 12, 14, 16, and where there is no dependency with another network element. Ambition level two 50 refers to a configuration change to more than one network element, such as two or more of the BTSs 12, 14, 16, and where there is no dependency between them. Ambition level three 52 refers to a configuration change where there are dependencies between network elements for part, or all, of the desired configuration. Such a dependency might relate to a particular way in which the particular network element is allowed to operate with another network element, for example to provide a guaranteed service in a bidirectional link.
The network change required is then defined by determining the configuration data 54, based on the ambition level 48, 50, 52 selected. If ambition level one 48 has been selected the configuration data includes only one network element. If ambition level two 50 has been selected the configuration data includes a list of network elements. If ambition level three 52 has been selected the configuration data includes any data change dependencies between network elements. The definition of a dependency is flexible to permit customized roll back to the existing configuration rather than enforcing roll back on complete network reconfigurations.
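The configuration data determined at 54 might, purely as an illustrative assumption, be represented as follows, with the dependency map populated only for ambition level three 52.

    from dataclasses import dataclass, field

    @dataclass
    class ConfigurationRequest:
        ambition_level: int                   # 1, 2 or 3 (items 48, 50, 52)
        elements: list                        # one element (level one) or a list (levels two and three)
        configuration_data: dict = field(default_factory=dict)
        dependencies: dict = field(default_factory=dict)  # level three only: element -> dependent elements

    # Ambition level three 52: BTSs 12 and 14 depend on the new BTS 16, so a failure
    # at BTS 16 need only roll back the parts that depend on it.
    request = ConfigurationRequest(
        ambition_level=3,
        elements=[12, 14, 16],
        configuration_data={"cell_plan": "new deployment"},
        dependencies={16: [12, 14]},
    )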
A validity period 56 is then selected, which may be any one of three validity periods 58, 60, 62. The validity periods 58, 60, 62 are implemented as a TTL value, as described above. The validity period one 58 relates to a time period within which the desired network configuration must be made. For example, the validity period one 58 may be up to five minutes, or five hours, or longer depending on the complexity of the desired network configuration. If the desired network configuration is not implemented within the validity period one 58, the network reverts back to the existing network configuration as described below. This has the advantage of being able to revert back to the existing network configuration if the desired network configuration has not been successfully implemented. The validity period one 58 may be a static period set for a particular network, or may be defined per configuration request.
The validity period two 60 relates to reverting back to the existing network configuration following a successful implementation of the desired network configuration after a predetermined period of time. For example, the validity period two 60 may be up to two weeks which is the length of time for which the desired network change is intended to be valid. The validity period two 60 has the advantage of allowing a planned temporary network configuration to be implemented for mass call events, sports tournaments, conferences, disaster scene handling and special broadcast events. The validity period two 60 may be defined per configuration request. The validity period three 62 relates to reverting back to the existing network configuration at a fixed period of time in the future following a successful implementation of the desired network configuration. For example, the validity period three 62 may be set to a particular time on a particular day in the future. The validity period three 62 also has the advantage of allowing a planned temporary network configuration to be implemented. The validity period three 62 may be defined per configuration request.
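As a sketch, the three validity periods 58, 60, 62 could be captured in a small structure such as the following; the field names are assumptions, and the five-minute and two-week values are simply the examples given above.

    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from typing import Optional

    @dataclass
    class ValidityPeriods:
        apply_within: timedelta                    # validity period one 58: deadline for making the change
        revert_after: Optional[timedelta] = None   # validity period two 60: duration of a temporary change
        revert_at: Optional[datetime] = None       # validity period three 62: fixed end time for the change

    periods = ValidityPeriods(
        apply_within=timedelta(minutes=5),
        revert_after=timedelta(weeks=2),
    )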
Once the ambition level has been selected and the validity period has been defined, a request is then issued 64 to the network 10 to change the network to the desired configuration. The request is sent to the network 10 in the form of a packet represented by the arrow 66. The request also includes a start time for the configuration to commence, which is sent to a request timer 82 discussed below in connection with Figure 3. In Figure 2, the request is also sent to a request state device 68 which sets the status of the request state device 68 to "Request Sent". The request state device 68 is also capable of receiving the result 70 of the attempted configuration from the network 10 in the form of a "Success" or "Fail" message represented by the arrow 72. On receipt of the success or fail result message, the request state device 68 is set to "Success" or "Fail" as discussed below.
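Purely as an illustration of the request state device 68, the following sketch tracks the three states mentioned above; the class and method names are assumptions, not part of the application.

```python
class RequestStateDevice:
    """Tracks the last known state of an issued configuration request."""

    def __init__(self) -> None:
        self.state = "Idle"

    def request_issued(self) -> None:
        # Set when the request packet 66 is sent towards the network.
        self.state = "Request Sent"

    def result_received(self, success: bool) -> None:
        # Set when the result 70 arrives back from the network (arrow 72).
        self.state = "Success" if success else "Fail"
```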
In Figure 3 the request to change the network to the desired configuration data, represented by the arrow 66, is received at the network element 80 as shown at 84. The validity period(s) included in the request are checked 86, and any time periods are started by the request timer 82. In the embodiment of Figure 3 the request timer 82 is a local device of the network element 80, but it will be appreciated that a network clock generally used for timing operations of the network could be used as the request timer 82. On starting of the request timer 82 the time periods are checked to determine whether they are a fixed time period or have a fixed end time. In addition any further time periods are checked to determine whether the desired network configuration should only be a temporary configuration before reverting back to a previous network configuration. Any further time periods may also be either a fixed time period or have a fixed end time.
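One possible, simplified form of the request timer 82 is sketched below, assuming the system clock of the network element is used and that a fixed time period and a fixed end time are distinguished by their type. The names are illustrative, not taken from the application.

```python
import time
from datetime import datetime, timedelta
from typing import Optional, Union


class RequestTimer:
    """Tracks a single validity period as an absolute deadline."""

    def __init__(self) -> None:
        self._deadline: Optional[float] = None

    def start(self, period: Union[timedelta, datetime]) -> None:
        # A fixed time period becomes "now + duration"; a fixed end time is used directly.
        if isinstance(period, timedelta):
            self._deadline = time.time() + period.total_seconds()
        else:
            self._deadline = period.timestamp()

    def expired(self) -> bool:
        return self._deadline is not None and time.time() >= self._deadline

    def stop(self) -> None:
        # Called when a "Success" message is received before expiry.
        self._deadline = None
```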
A data backup is then performed 88 to store the existing configuration of the network element 80 for later use. In an alternative embodiment the data backup may be performed at the node 42. The ambition level is checked 90 to verify the level of functionality required in the desired network configuration. If the ambition level one 92 is to be implemented the local network element 80 changes are implemented and then verified. If the ambition level two 94 is to be implemented the local network element 80 changes are implemented and then verified with other network elements. If the ambition level three 96 is to be implemented the local network element 80 changes are implemented and verified locally before verifying the changes with other dependent network elements illustrated by the arrow 97. Such verification with other dependent network elements is performed only where the desired configuration data has flagged that a dependency with another network element exists.
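The checks at 90 to 96 can be summarised, under the assumption that verification reduces to boolean checks, by the following sketch. The callables stand in for the element-specific backup, apply and verify behaviour described above; none of the names come from the application.

```python
from typing import Callable, Iterable


def apply_and_verify(backup_existing: Callable[[], None],
                     apply_change: Callable[[], None],
                     verify_locally: Callable[[], bool],
                     verify_with_peer: Callable[[str], bool],
                     ambition_level: int,
                     peers: Iterable[str]) -> bool:
    """Back up the existing data (88), apply the change, then verify it (90-96).

    `peers` is the full list of other affected elements at ambition level two,
    or only the elements flagged as dependent at ambition level three.
    """
    backup_existing()
    apply_change()
    if not verify_locally():
        return False
    if ambition_level == 1:
        return True
    return all(verify_with_peer(peer) for peer in peers)
```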
If the desired network configuration has been successfully implemented a "Success" message 98 is generated and transmitted to the request timer 82 to stop it. If the request timer 82 notes that a time period has been reached without receiving the "Success" message 98, the configuration process is stopped 100 and a "Fail" message 102 is generated. The "Success" or "Fail" message 98, 102 is then transmitted 104 to the management node 42 of Figure 2, indicated by the arrow 72, where it is received 70 and passed to the request state device 68, which is correspondingly set to "Success" or "Fail". If ambition level three was selected, as shown at 106, the "Fail" result is also transmitted to dependent network elements, indicated by the arrow 108.
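Assuming the transmission paths are available as simple callables, the result handling just described might look as follows; this is a sketch only and the function names are hypothetical.

```python
from typing import Callable


def report_result(success: bool,
                  stop_timer: Callable[[], None],
                  send_to_management_node: Callable[[str], None],
                  notify_dependents: Callable[[str], None],
                  ambition_level: int) -> None:
    """Stop the timer on success and report the outcome (98, 102, 104, 106, 108)."""
    if success:
        stop_timer()
    message = "Success" if success else "Fail"
    send_to_management_node(message)
    # At ambition level three a failure is also forwarded to dependent elements.
    if not success and ambition_level == 3:
        notify_dependents(message)
```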
The request timer 82 also notes if any further time periods have been reached after a successful implementation of the desired network configuration. If any such further time periods have been reached the request timer 82 implements reverting to a previous configuration 110 based on the data backup performed 88 to store the existing configuration. If reverting back to the previous configuration is based on ambition level one 112 or ambition level two 114, then a further "Success" message 98 or "Fail" message 102 is generated depending on whether reverting to the previous configuration was implemented or not, and the change is verified. If reverting back to the previous configuration is based on ambition level three 116, then a further "Success" message 98 or "Fail" message 102 is likewise generated depending on whether reverting to the previous configuration was implemented or not. The change based on ambition level three 116 is verified locally at the network element 80, and with other dependent network elements indicated by the arrow 118. It will be appreciated that such reverting back to a previous configuration 110 may be considered to be an automated process to restore the previous configuration, initiated upon expiry of any further time periods. Alternatively, such reverting back to the previous configuration may require confirmation from a user such as an engineer.
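As a sketch of this revert step, assuming the same callable style as above and treating the optional user confirmation as a further callable, the behaviour could be expressed as:

```python
from typing import Callable, Optional


def revert_if_due(further_period_expired: bool,
                  restore_backup: Callable[[], None],
                  verify_locally: Callable[[], bool],
                  verify_with_dependents: Callable[[], bool],
                  ambition_level: int,
                  confirm: Optional[Callable[[], bool]] = None) -> str:
    """Restore the backed-up configuration 110 when the further time period expires."""
    if not further_period_expired:
        return "Not due"
    if confirm is not None and not confirm():
        return "Awaiting confirmation"        # manual confirmation variant
    restore_backup()
    ok = verify_locally()
    if ambition_level == 3:
        ok = ok and verify_with_dependents()  # arrow 118
    return "Success" if ok else "Fail"
```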
It will be appreciated that ambition level three 52 allows bulk configuration changes to be made. Such bulk configuration changes may require a dependency model to be used when applying ambition level three, shown at 96 and 106 in Figure 3, to identify whether there is interdependency between different parts of the new configuration. Such a dependency model dictates that the network 10 rolls back the configuration for the parts for which a dependency has been identified with the individual network elements that have failed to realize or verify the required configuration, rather than enforcing a complete roll back to the previous configuration. This gives rise to a self-organising network in terms of configuration deployment and roll back.
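One conceivable way of deriving such a dependency-scoped roll back is sketched below: starting from a failed element, the set of elements that declared a dependency on it, directly or transitively, is rolled back, while the rest of the network keeps the new configuration. The function and its arguments are illustrative assumptions.

```python
from typing import Dict, List, Set


def rollback_scope(failed_element: str,
                   dependencies: Dict[str, List[str]]) -> Set[str]:
    """Return the failed element plus every element transitively dependent on it.

    `dependencies` maps an element to the elements it depends on, as flagged
    in an ambition level three request.
    """
    # Invert the map so we can walk from the failed element to its dependants.
    dependants: Dict[str, Set[str]] = {}
    for element, deps in dependencies.items():
        for dep in deps:
            dependants.setdefault(dep, set()).add(element)

    scope, frontier = {failed_element}, [failed_element]
    while frontier:
        for nxt in dependants.get(frontier.pop(), set()):
            if nxt not in scope:
                scope.add(nxt)
                frontier.append(nxt)
    return scope


# Example: if BSC14 depends on BSC12 and BSC16 depends on BSC14, a failure at
# BSC12 rolls back all three, while unrelated elements are left untouched.
print(rollback_scope("BSC12", {"BSC14": ["BSC12"], "BSC16": ["BSC14"]}))
```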
Whereas Figure 3 describes reverting back to a previous configuration, it will be appreciated that this is implemented by each network element, such as each of the BSCs 12, 14, 16.
Figure 4 shows a flow diagram illustrating a method according to an embodiment of the present invention. The method includes storing 130 existing configuration data relating to an existing configuration of a communications network, node or link. Desired configuration data is then created 132 relating to a desired configuration of the communications network, node or link. A time period for which the desired configuration data is intended to be valid is defined 134, and the communications network, node or link is configured 136 with the desired configuration data within the time period.
If the communications network, node or link has not been successfully configured with the desired configuration data within the time period, the communications network, node or link is configured 138 with the existing configuration data. In the event of such reverting back to the existing configuration data due to expiry of the time period, a failure message 140 is transmitted to, for example, the management node 24 or other network elements 12, 14, 16. In the event of successful configuration with the desired configuration data a success message is transmitted 142. The method may further include configuring the communications network, node or link with the existing configuration data 138 to revert back to the existing configuration after a further time period 144.
It will be appreciated that there may be many ways to implement the time period or the further time period, but in a preferred embodiment these are implemented as a TTL counter value 146.
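For illustration only, such a time period could be realised as a simple counter that is decremented on each timer tick and treated as expired at zero; the sketch below, with hypothetical names, also shows how such a counter might drive the configure-or-revert decision of Figure 4.

```python
from typing import Callable


class TtlCounter:
    """Time period implemented as a time-to-live counter value (146)."""

    def __init__(self, ticks: int) -> None:
        self.remaining = ticks

    def tick(self) -> None:
        if self.remaining > 0:
            self.remaining -= 1

    @property
    def expired(self) -> bool:
        return self.remaining == 0


def configure_within_ttl(apply_desired: Callable[[], None],
                         verify: Callable[[], bool],
                         restore_existing: Callable[[], None],
                         ttl: TtlCounter) -> str:
    """Apply the desired data (136) and revert (138) if the TTL expires first."""
    apply_desired()
    while not verify():
        ttl.tick()
        if ttl.expired:
            restore_existing()
            return "Fail"       # failure message 140
    return "Success"            # success message 142
```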
The embodiments presented above have the advantage that reverting back to a previous configuration, on failure to implement a desired configuration within the time period or on expiry of the further time period, is quicker. This is due to the direct communication between affected network elements, rather than relying on reporting back to the management node 24, 42 and waiting for the management node 24, 42 to make a decision and provide subsequent instructions. This minimizes the negative effect on network operation due to failed configuration changes.
Another advantage of the above embodiments is that any temporary planned change that is made to the network 10 will be removed when the network reverts back to a previous configuration. This cuts down on manual intervention and complexity that must be administered by an engineer or network management system.

Claims
1. A method (129) of configuring a communications network, node or link comprising: storing current configuration data (130) relating to a current configuration; creating desired configuration data (132) relating to a desired configuration; defining a first time period (134) for which the desired configuration data is intended to be valid; and configuring (136) the communications network, node or link with the desired configuration data within the first time period.
2. A method according to claim 1 and further including configuring (138) the communications network, node or link with the current configuration data if the first time period has expired without successfully configuring the network, node or link with the desired configuration data.
3. A method according to claim 1 or 2 and further including transmitting a failure message (140) if the first time period has expired without configuring the network, node or link with the desired configuration data.
4. A method according to claim 1 and further including transmitting a success message (142) when the network, node or link has been configured with the desired configuration data.
5. A method according to any preceding claim and further including implementing the first time period as a time-to-live counter value (146).
6. A method according to claim 1 and further including configuring the communications network, node or link with the current configuration data to revert back to the current configuration after a second time period (144).
7. A method according to claim 6 and further including implementing the second time period as a time-to-live counter value (146).
8. A node (24) to configure a communications network, node or link comprising a memory (26) to store current configuration data (130) relating to a current configuration and desired configuration data (132) relating to a desired configuration, and operable to define a first time period (134) for which the desired configuration data is intended to be valid, wherein the node is further operable to configure (136) the communications network, node or link with the desired configuration data within the first time period.
9. A node according to claim 8 operable to configure (138) the network, node or link with the current configuration data if the first time period has expired without successfully configuring the network, node or link with the desired configuration data.
10. A node according to claim 8 or 9 operable to transmit a failure message (140) if the first time period has expired without configuring the network, node or link with the desired configuration data.
11. A node according to claim 8 operable to transmit a success message (142) when the network, node or link has been configured with the desired configuration data.
12. A node according to any of claims 8 - 11 wherein the first time period is implemented as a time-to-live counter value (146).
13. A node according to claim 8 operable to configure the communications network, node or link with the current configuration data to revert back to the current configuration after a second time period (144).
14. A node according to claim 13 wherein the second time period is implemented as a time-to-live counter value (146).
15. A communications packet (30) operable to configure a communications network, node or link including: desired configuration data (132) relating to a desired configuration; and a counter value (134) for defining a first time period for which the desired configuration data is intended to be valid; wherein the communications packet is operable to configure (136) the communications network, node or link to the desired configuration data within the first time period.
16. A communications packet according to claim 15 wherein the first time period is implemented as a time-to-live counter value (146).
17. A communications packet according to claim 15 or 16 and further including prior configuration data (130) relating to a configuration of the network, node or link prior to the desired configuration data, and being operable to configure (138) the communications network, node or link with the prior configuration data.
18. A computer program product operable to perform the method according to any of claims 1 - 7, or operable to control the node of any of claims 8 - 14, or operable to implement the communications packet of any of claims 15 - 17.
19. A communications network configured using the method according to any of claims 1 - 7, or using a node according to any of claims 8 - 14, or operable with a communication packet of any of claims 15 - 17, or arranged to implement a computer program product according to claim 18.
PCT/EP2008/057718 2008-06-18 2008-06-18 Network configuration WO2009152855A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2008/057718 WO2009152855A1 (en) 2008-06-18 2008-06-18 Network configuration

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2008/057718 WO2009152855A1 (en) 2008-06-18 2008-06-18 Network configuration

Publications (1)

Publication Number Publication Date
WO2009152855A1 true WO2009152855A1 (en) 2009-12-23

Family

ID=40578811

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2008/057718 WO2009152855A1 (en) 2008-06-18 2008-06-18 Network configuration

Country Status (1)

Country Link
WO (1) WO2009152855A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9152522B2 (en) 2010-10-22 2015-10-06 Hewlett-Packard Development Company, L.P. Methods for configuration management using a fallback configuration

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998028880A1 (en) * 1996-12-20 1998-07-02 Mci Communications Corporation System and method for time-based real-time reconfiguration of a network
US20020157018A1 (en) * 2001-04-23 2002-10-24 Tuomo Syvanne Method of managing a network device, a management system, and a network device
WO2006064364A1 (en) * 2004-12-14 2006-06-22 Nokia Siemens Networks Oy Indicating a configuring status
EP1841251A1 (en) * 2006-03-31 2007-10-03 Nokia Siemens Networks Gmbh & Co. Kg Reconfiguration of radio networks

Similar Documents

Publication Publication Date Title
CN103209439B (en) The method of a kind of data traffic monitoring, device and equipment
US9258736B2 (en) Broadcasting of data files and file repair procedure with regards to the broadcasted data files
US7145897B2 (en) Method and device for improving the transmission efficiency in a communication system with a layered protocol stack
EP2293637B1 (en) Method and apparatus for performing buffer status reporting
CN109714399A (en) Method for pushing and device, storage medium, the electronic device of notification message
EP2614614B1 (en) Dynamic configuration of interconnected devices for measuring performance characteristics in a network
CN113383505A (en) Signaling of de-jitter buffer capability for TSN integration
CN101686144A (en) Method and system for processing data and node device
WO2009152855A1 (en) Network configuration
CN104038557A (en) Batch upgrading method of equipment software in optical fiber connection tree shape network structure
Dimitriou et al. Sensenet: a wireless sensor network testbed
CN107483146A (en) The remote upgrade and information transmitting methods of a kind of wireless terminal
JP4638510B2 (en) Receiver and receiver control method
Olsen et al. Qrp01-5: Quantitative analysis of access strategies to remote information in network services
CN105634852A (en) Check processing method and device
EP2023524A2 (en) Communication control method, transmission device and computer program
US20090044068A1 (en) Method and device for counting transmission times of data unit, transmission device, and computer program
CN100401677C (en) Method for realizing clock synchronization of fixed network short message platform
Knezic et al. Towards extending the OMNeT++ INET framework for simulating fault injection in Ethernet-based Flexible Time-Triggered systems
Barroso-Fernández et al. Optimizing Gossiping for Asynchronous Fault-Prone IoT Networks with Memory and Battery Constraints
Yang Design of the application-level protocol for synchronized multimedia sessions
CN111064623B (en) Message processing method and device
Wen et al. A system architecture for managing complex experiments in wireless sensor networks
CN116236792A (en) Application program installation method, device, equipment and medium in cloud game scene
Storch et al. TACS Central Control Facility

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08761170

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08761170

Country of ref document: EP

Kind code of ref document: A1