EP2997700B1 - Method for assured network state configuration and rollback in link-state packet networks - Google Patents

Method for assured network state configuration and rollback in link-state packet networks

Info

Publication number
EP2997700B1
Authority
EP
European Patent Office
Prior art keywords
configuration
network
network node
path
result
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Not-in-force
Application number
EP13780197.3A
Other languages
English (en)
French (fr)
Other versions
EP2997700A1 (de)
Inventor
András CSÁSZÁR
János FARKAS
András Kern
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Telefonaktiebolaget LM Ericsson AB
Original Assignee
Telefonaktiebolaget LM Ericsson AB
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Telefonaktiebolaget LM Ericsson AB filed Critical Telefonaktiebolaget LM Ericsson AB
Publication of EP2997700A1 publication Critical patent/EP2997700A1/de
Application granted granted Critical
Publication of EP2997700B1 publication Critical patent/EP2997700B1/de
Not-in-force legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/28 Routing or path finding of packets in data switching networks using route fault recovery
    • H04L45/42 Centralised routing
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0805 Monitoring or testing based on specific metrics, by checking availability
    • H04L43/0817 Monitoring or testing based on specific metrics, by checking availability, by checking functioning
    • H04L45/02 Topology update or discovery
    • H04L45/03 Topology update or discovery by updating link state protocols
    • H04L45/22 Alternate routing
    • H04L45/302 Route determination based on requested QoS

Definitions

  • the present invention relates generally to link state protocols for packet-switched communication networks and, more particularly, to link state protocol extensions to enable network nodes to configure explicit paths and to determine whether the explicit path configuration is successful.
  • Link-state control protocols such as the Intermediate System to Intermediate System (IS-IS) or the Open Shortest Path First (OSPF) are distributed protocols that are most often used for the control of data packet routing and forwarding within a network domain.
  • IS-IS Intermediate System to Intermediate System
  • OSPF Open Shortest Path First
  • Each network node in the link state domain collects information on its adjacent neighbors by exchanging "Hello" messages with its neighbors. The network nodes in the link state domain then distribute the information on their neighbors by means of flooding link state messages.
  • Each network node in the link state domain maintains a topology database that provides a "map" of the network. Using the network "map", each network node determines the path to each possible destination on its own, which is typically the shortest path computed by the Dijkstra algorithm often referred to as Shortest Path First (SPF).
  • SPF Shortest Path First
  • Each network node sets a local forwarding entry to the port through which a given destination is reachable according to the result of the path computation. This mechanism ensures that packets are forwarded along the computed shortest paths throughout the domain.
  • SPB Shortest Path Bridging
  • TLVs Type Length Values
  • the existing IS-IS features have been maintained in IEEE 802.1aq, and new features have been added for the control of Ethernet.
  • SPB uses shortest paths for forwarding and is also able to leverage multiple shortest paths. However, in certain cases it is desired to deviate from a default shortest path and to explicitly define the route of a path.
  • MAC Media Access Control
  • EPD Explicit Path Descriptor
  • RSVP Resource reSerVation Protocol
  • TE Traffic Engineering
  • GMPLS Generalized MPLS
  • IEEE 802.1Qca also considers a Path Computation Element (PCE), which participates in the link state routing protocol domain.
  • PCE Path Computation Element
  • the PCE is also aware of the topology of the Ethernet network. According to this standard, the PCE can also specify explicit paths.
  • PCEP Path Computation Element Protocol
  • PCE communicates with the head-end of the path to be provisioned and this head-end will signal the path using RSVP-TE protocol.
  • PCR Path Control and Reservation
  • IEEE 802.1Qca provides a mechanism to signal an explicit path using a link state protocol.
  • IP Internet Protocol
  • PCEP uses the Transmission Control Protocol (TCP), which is in turn based on IP. In order to use them, an auxiliary IP network, used only for carrying the configuration messages, must be provisioned. This raises significant capital and operational expenditure issues.
  • an RSVP-TE protocol entity must be deployed at all network nodes and the RSVP-TE protocol entities must be appropriately configured. This not only increases the costs of the network devices but also adds administrative burden.
  • PCEP notifies PCE about the success of path configurations, but it requires RSVP-TE to signal the Path Computation Client (PCC).
  • PCC Path Computation Client
  • the present disclosure enables a desired configuration of a switching network to be signaled by a requesting network node using the link state protocol and provides a mechanism that enables the requesting network node to determine whether the configuration is completed.
  • the requesting node may be any network node in the link state protocol domain, such as a PCE or a switching network node in the switching network.
  • the techniques can be used to configure an explicit path and to determine whether the configuration of the explicit path is successful.
  • a network node initiates a desired configuration by sending a link state message containing a configuration descriptor specifying the desired configuration and a predetermined type value.
  • the configuration descriptor may describe an explicit path for routing data traffic through a switching network.
  • the configuration message is propagated through the network by flooding.
  • Each network node receiving the configuration message is instructed to take appropriate action to implement the specified configuration and send a result report indicating a result of the configuration action.
  • the result report may be included in a link state message and propagated by flooding so that all network nodes are able to determine whether the configuration was successfully completed.
  • the explicit path may comprise strict hops or a combination of strict hops and loose hops. If the explicit path comprises only strict hops, it is enough to receive result reports from the network nodes involved in the explicit path. If the explicit path comprises loose hops, each network node should generate and send a result report. In this case, a network node involved in the explicit path sends a result report indicating the outcome (e.g., successful or failed) of any configuration actions taken. A network node that is not involved in the explicit path may send a result report indicating that no action was taken (other than determining that it was not part of the explicit path). The requesting node may determine from the result reports how the loose hops were resolved and which network nodes are involved in the explicit path.
  • Exemplary embodiments of the disclosure comprise a method implemented by a requesting network node in a communication network implementing a link state protocol for requesting a desired network configuration.
  • the method comprises sending 150 a configuration descriptor in a link state message to one or more peer network nodes in the link state domain of the requesting network node.
  • the configuration descriptor describes a desired network configuration requested by the requesting network node.
  • the method further comprises receiving 154, by the requesting network node, result reports from one or more of the peer network nodes.
  • the result reports are received in second link state messages and indicate the results of configuration actions by the peer network nodes responsive to the configuration descriptor.
  • the method further comprises determining 156, by the requesting network node, from the result reports whether the requested network configuration is successfully completed.
  • the requesting network node, after determining that the requested network configuration was not successfully completed, autonomously cancels 158 any configuration changes made based on the configuration descriptor.
  • the requesting network node, after disseminating a configuration descriptor of the desired network configuration, disseminates 152 a result report contained in a second link state message.
  • the requesting network node comprises a path computation entity.
  • the requesting network node comprises a network controller for a switching network.
  • the requesting network node comprises a switching node in a switching network.
  • the configuration descriptor describes an explicit path in a switching network.
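  • As an illustration of the requesting-node behaviour summarized above (steps 150, 154, 156 and 158), the following is a minimal sketch in Python. The message layout, the node identifiers, the flood() callback and the result codes are assumptions made only for this example and are not the encodings defined by the disclosure.

```python
# Minimal sketch of the requesting-node logic (steps 150, 154, 156, 158).
from dataclasses import dataclass, field
from enum import Enum

class Result(Enum):
    SUCCESSFUL = "SUCCESSFUL"
    FAILED = "FAILED"
    NO_ACTIONS_TAKEN = "NO ACTIONS TAKEN"

@dataclass
class ConfigDescriptor:
    descriptor_id: str   # identifies the requested network configuration
    payload: dict        # e.g. an explicit path description

@dataclass
class ResultReport:
    descriptor_id: str
    reporter: str
    result: Result

@dataclass
class RequestingNode:
    node_id: str
    domain_nodes: set                      # all peer nodes in the link state domain
    reports: dict = field(default_factory=dict)

    def request_configuration(self, flood, descriptor: ConfigDescriptor):
        """Step 150: flood the configuration descriptor in a link state message."""
        flood({"type": "CONFIG", "descriptor": descriptor})

    def on_result_report(self, report: ResultReport):
        """Step 154: collect result reports received in link state messages."""
        self.reports.setdefault(report.descriptor_id, {})[report.reporter] = report.result

    def outcome(self, descriptor_id: str):
        """Step 156: completed only when every peer answered and none failed."""
        seen = self.reports.get(descriptor_id, {})
        if any(r is Result.FAILED for r in seen.values()):
            return "FAILED"        # step 158: caller rolls back local changes
        if self.domain_nodes <= set(seen):
            return "COMPLETED"
        return "PENDING"
```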
  • Other embodiments of the disclosure, depicted in Fig. 1C, comprise a method implemented by a receiving network node in a communication network implementing a link state protocol for installing a network configuration specified by a requesting network node in the same link state domain as the receiving network node.
  • the receiving network node receives 160 a configuration descriptor for a requested network configuration in a first link state message.
  • the receiving network node performs 162 appropriate configuration actions based on the configuration descriptor and sends 164 a result report to its peer network nodes (including the requesting network node) in a second link state message.
  • the result report indicates to the peer network nodes a result of the configuration action taken by the receiving network node.
  • the receiving network node stores the configuration descriptor, or a reference to the configuration descriptor in memory.
  • the receiving network node further receives 166 result reports from its peer network nodes indicating the results of configuration actions taken by the peer network nodes responsive to the configuration descriptor.
  • the receiving network node correlates the received result reports with the configuration descriptor and determines 168 based on the correlated result reports whether the requested network configuration was successfully completed.
  • the receiving network node, after determining that the requested network configuration was not successfully completed, autonomously cancels 170 any configuration changes made based on the configuration descriptor.
  • the network node comprises an interface circuit for communicating with peer network nodes over a communication network, and a control circuit coupled with the interface circuit and configured to carry out the method according to any one of the method claims.
  • Fig. 1 illustrates an exemplary packet-switched communication network 10.
  • the communication network 10 comprises a switching network 15 having a plurality of switching nodes 20-1, such as routers or switches, interconnected by communication links (not shown) for routing data traffic through the network 10 from a source to a destination.
  • the communication network 10 may further include one or more external nodes 20-2, such as Path Computation Elements (PCEs) or network controllers for Software Defined Networks (SDNs).
  • PCEs Path Computation Elements
  • SDNs Software Defined Networks
  • a PCE is an external node 20-2 that determines a suitable route for conveying data traffic between a source and a destination.
  • a network controller for an SDN network is an external node 20-2 that manages flow control to enable intelligent networking.
  • SDN controllers are based on protocols, such as OpenFlow, that allow the SDN controller to tell switching nodes 20-1 in the switching network where to send packets.
  • the switching nodes 20-1 and external nodes 20-2 are referred to herein generically as network nodes 20.
  • the communication network 10 may, for example, comprise an Internet Protocol (IP) network, Ethernet network, or other type of packet-switched network.
  • IP Internet Protocol
  • the communication network 10 uses a link state routing protocol, such as Open Shortest Path First (OSPF) or Intermediate System to Intermediate System (IS-IS), for calculating routes for forwarding of data packets or frames.
  • OSPF Open Shortest Path First
  • IS-IS Intermediate System to Intermediate System
  • each network node 20 in the link state protocol domain maintains a link state database (LSDB) describing the topology of the communication network 10.
  • Link state protocols use a process known as flooding to synchronize the LSDBs maintained by each network node 20.
  • Each network node 20 determines its local state, i.e. its usable ports and reachable neighbors, by exchanging "hello" messages with its immediate neighbors.
  • a network node 20 When a network node 20 detects a change in the network topology or the state of a link, it generates and sends a link state advertisement to each of its neighbors.
  • a network node 20 that receives a link state advertisement from a neighbor determines whether the link state advertisement contains a new or updated link state. If so, the receiving network node 20 updates its own LSDB and forwards the link state advertisement to its neighbors, except the one from which the link state advertisement message was received.
  • the flooding process ensures that, within a reasonable time, all network nodes 20 will receive the link state advertisement and thus have the same LSDB.
  • the LSDB provides each network node 20 with the same "map" of the network topology.
  • Each network node 20 involved in packet routing and forwarding, i.e., each switching node, independently determines how to route packets through the network 10 based on this network map.
  • the network node 20 computes the shortest path to the destination using, for example, the Dijkstra shortest path algorithm.
  • the network node 20 then generates a forwarding rule for each possible destination according to the shortest path and saves the forwarding rule in a forwarding table. This mechanism ensures that packets will be routed over the shortest path.
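  • The per-node shortest path computation described above can be sketched as follows; the adjacency-map input, the link costs, and the use of the next hop as the local forwarding entry are simplifications assumed for the example.

```python
import heapq

def build_forwarding_table(graph, source):
    """Dijkstra shortest paths from `source`; returns destination -> next hop.

    `graph` maps a node to {neighbor: link_cost}. The next hop stands in for
    the local forwarding entry (the port towards that neighbor)."""
    dist = {source: 0}
    next_hop = {}
    heap = [(0, source, None)]          # (cost, node, first hop on the path)
    visited = set()
    while heap:
        cost, node, first = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if first is not None:
            next_hop[node] = first
        for neigh, w in graph.get(node, {}).items():
            if neigh not in visited and cost + w < dist.get(neigh, float("inf")):
                dist[neigh] = cost + w
                heapq.heappush(heap, (cost + w, neigh, neigh if first is None else first))
    return next_hop

# e.g. build_forwarding_table({"A": {"B": 1, "C": 4}, "B": {"C": 1}, "C": {}}, "A")
# -> {"B": "B", "C": "B"}
```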
  • the present disclosure enables a desired configuration to be signaled by a requesting network node 20 using the link state protocol and provides a mechanism that enables the requesting network node 20 to determine when the configuration is completed.
  • the requesting node may be any network node in the link state protocol domain, including an external node 20-2 (e.g., PCE) or a switching node 20-1 in the switching network.
  • a network node 20 initiates the desired configuration by sending a link state message containing a configuration descriptor specifying the desired configuration.
  • the configuration descriptor may comprise a network wide configuration descriptor that is sent to one or more network nodes 20.
  • the link state message containing the configuration descriptor, referred to herein as a configuration message, is propagated through the network 10 by flooding and contains a type value indicating that the message is a configuration message.
  • Each network node 20 receiving the configuration message is instructed to take appropriate action to implement the specified configuration and send a result report indicating a result of the configuration action.
  • the result report may indicate that the configuration action was successful, that the configuration action was unsuccessful, or that no action was required.
  • the result report may be included in a link state message and propagated by flooding so that all network nodes 20 are able to determine whether the configuration was successfully completed.
  • a link state message including a result report is referred to herein as a result report message.
  • the present disclosure thus enables an explicit path to be signaled by the requesting network node 20 (e.g., PCE) using the link state protocol and enables the requesting network node 20 to determine when the configuration of the explicit path is completed.
  • the explicit path may contain strict hops and loose hops.
  • a network node 20 initiates the configuration of an explicit path by sending a configuration message containing an Explicit Path Descriptor (EPD) to its neighbors.
  • the EPD is a network wide configuration descriptor sent to two or more network nodes 20 describing an explicit path to be configured.
  • the EPD is propagated by flooding to each of the network nodes 20 in the switching network.
  • MAC Media Access Control
  • SPB Shortest Path Bridge
  • the draft specification Media Access Control (MAC) Bridges and Virtual Bridged Local Area Networks-Amendment: Path Control and Reservation, P802.1Qca/D0.0, undated (IEEE 802.1Qca), describes techniques for generating and distributing a configuration message in a Shortest Path Bridging (SPB) network using the IS-IS protocol. Relevant portions of IEEE 802.1Qca are attached hereto as Appendix A and incorporated herein by reference.
  • Each network node 20 receiving the configuration message with the EPD takes appropriate action to configure the explicit path and sends a result report indicating a result of the configuration action.
  • the result report may indicate that the explicit path configuration was successful, that the explicit path configuration was unsuccessful, or that no action was required.
  • the result report may be included in a link state message and propagated by flooding so that all network nodes 20 are able to determine whether the configuration changes were successfully completed.
  • the link state messages containing a configuration descriptor or result report comprise link state (LS) packet data units (PDUs) (LSPs).
  • the link state messages may comprise link state requests, link state updates, or link state acknowledgements.
  • the result report message comprises an appropriate identifier of the network wide configuration descriptor and expresses the result of the configuration actions conducted by the network node 20.
  • the result report message may include: (1) the full configuration descriptor itself; (2) a compact representation of the configuration descriptor, e.g., a digest; or (3) an identifier of the link state message that carried the configuration descriptor.
  • the second realization applies a method similar to how the digest of a link descriptor is generated in Shortest Path Bridging (SPB) (IEEE 802.1aq, clause 28.4.6).
  • SPB Shortest Path Bridging
  • a plurality of databases are synchronized using a link state protocol.
  • an adequate identifier of the database in which the network wide configuration is included, can provide additional information about the encoded descriptor.
  • An adequate database identifier can be generated in a manner similar to how the topology digest is generated in SPB (IEEE 802.1aq, clause 28.4.6).
  • the scope of the result report is the whole content of the link state message. This means that if several configuration descriptors are included in a single link state message, the result is applied to all of them.
  • the network node 20 that initiates an explicit path configuration is allowed to insert one configuration descriptor per link state message.
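  • A compact identifier of a descriptor, or of a whole database of descriptors, can be realized as a digest. The following sketch assumes a canonical JSON serialization and SHA-256 purely for illustration; it is not the digest computation of IEEE 802.1aq clause 28.4.6.

```python
import hashlib
import json

def descriptor_digest(descriptor: dict) -> bytes:
    """Digest of a single configuration descriptor (its compact identifier)."""
    # Canonical serialization so that every node derives the same digest.
    encoded = json.dumps(descriptor, sort_keys=True).encode("utf-8")
    return hashlib.sha256(encoded).digest()

def database_digest(descriptors: list) -> bytes:
    """Digest over a whole database (e.g. the EPDB) of stored descriptors."""
    h = hashlib.sha256()
    for dig in sorted(descriptor_digest(d) for d in descriptors):
        h.update(dig)
    return h.digest()
```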
  • a network node 20 may express the result of a configuration action in two ways: implicitly (implicit result encoding) or explicitly (explicit result encoding).
  • with implicit result encoding, the advertisement of the result report implies that the network node 20 sending the result report did not fail the configuration action, i.e., it was successful or did not need to take any action.
  • the failure to advertise a result report by a network node 20 is interpreted to mean that the configuration action failed.
  • with explicit result encoding, an explicit result code is included in the result report and/or the result is explicitly encoded in an attached data structure.
  • the result report may be added as a new information element into existing link state messages as described below.
  • a new link state message may be defined for the result report.
  • the link state messages are propagated (flooded) as usual, i.e., no new mechanism is required.
  • the result report information elements can be carried within a Multi-Topology Capability (MT-Capability) Type Length Value (TLV), which is a top level TLV for Link State Packet Data Units (LSPs) that provides Multi-Topology context. It identifies the Multi-Topology Identifier for sub-TLVs in LSPs.
  • MT-Capability Multi-Topology Capability
  • LSPs Link State Packet Data Units
  • ISIS-SPB is an add-on to IS-IS; therefore, the TLVs used in IS-IS to carry the digest can also be used for ISIS-SPB.
  • the network state digest should be created in a new (sub) Type Length Value (TLV) advertised by routers.
  • TLV Type Length Value
  • the network state digest could be put in a new Type-Length-Value (TLV) advertised in a Router Link State Advertisement (LSA).
  • TLV Type-Length-Value
  • LSA Router Link State Advertisement
  • a network node 20 receiving a configuration descriptor disseminates a result report in the following cases: (1) when it has executed all actions prescribed by the configuration instructions; or (2) when no actions needed to be executed. All network nodes 20, including the one initiating the configuration descriptor, receive the result report. Upon receiving a result report, a network node 20 determines that no error occurred during configuration at the reporting network node 20. After observing result reports from all other network nodes 20 of the domain, a network node 20 determines that the configuration specified by the configuration descriptor is completed.
  • the network node 20 initiating the configuration descriptor is not able to detect from these reports alone whether other network nodes 20 failed to configure themselves according to the configuration descriptor. To determine whether other network nodes 20 failed in executing the configuration, the initiating network node 20 starts a timer for each stored configuration descriptor. The network node 20 starts this timer when the configuration descriptor is stored in the local database, setting it to a timeout value. This value can be predefined or dynamically calculated based on measurements of link state protocol message related delays.
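  • A minimal sketch of this timer-based completion check for implicit result encoding is given below; the class, field names and timeout handling are illustrative assumptions, and the caller is expected to poll status() or drive it from its own event loop.

```python
import time

class ImplicitResultTracker:
    """Tracks implicit result reports for one stored configuration descriptor.

    With implicit encoding a report means 'did not fail'; silence from any
    node until the timer expires is interpreted as a configuration failure."""

    def __init__(self, descriptor_id, domain_nodes, timeout_s, now=time.monotonic):
        self.descriptor_id = descriptor_id
        self.expected = set(domain_nodes)
        self.responded = set()
        self.deadline = now() + timeout_s   # timer started when the descriptor is stored
        self._now = now

    def on_implicit_report(self, reporter):
        self.responded.add(reporter)

    def status(self):
        if self.expected <= self.responded:
            return "COMPLETED"
        if self._now() > self.deadline:
            return "FAILED"      # roll back local changes, drop the descriptor
        return "PENDING"
```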
  • a network node 20 that receives a configuration descriptor to be applied by the network node 20 disseminates an explicit result report when it has executed all actions prescribed by the configuration instructions.
  • the result report includes an adequate identifier of the configuration descriptor and the result of the configuration.
  • the network node 20 stores a copy of the result report in its local database as well.
  • the result can be one of the following: "SUCCESSFUL", "FAILED", or "NO ACTIONS TAKEN".
  • Each network node 20 that receives an explicit result report executes the procedure shown in Figure 2.
  • the network node 20 saves the identifier of the network node 20 that advertised the result report, and the content of the result report.
  • the network node 20 looks for a matching configuration descriptor among the already received ones, to which the result report refers. If the network node 20 does not find a matching configuration descriptor, the network node 20 assumes that this configuration descriptor will arrive later, so it only needs to store the result report in a local store. As the result report has been already stored in the first step, the network node 20 finishes the process.
  • when the network node 20 receives a configuration descriptor, it looks for matching result reports in the local store and applies the rules below to the matching reports. For example, assume that the network node 20 has already received a result report with the result code set to "FAILED". Because the network node 20 already stored the result report, it immediately applies the result report when the configuration descriptor arrives. This means that, without further processing, the descriptor will be considered and stored as failed.
  • after finding the matching descriptor, the network node 20 checks whether all network nodes 20 of the domain have responded with a result report referring to the matching descriptor. When responses from all network nodes 20 have been collected and the local configuration procedure has finished, the network node 20 determines whether any of the received result reports referring to the configuration descriptor has the result code set to "FAILED". If so, the network node 20 declares the whole configuration failed. On the other hand, if all result codes are either "NO ACTIONS TAKEN" or "SUCCESSFUL", i.e., no node has responded with the FAILED result code, the network node 20 considers the configuration described in the configuration descriptor completed.
  • each network node 20 that collects and processes the result reports is able to detect whether a configuration was successfully carried out at other network nodes 20, or whether the configuration failed at one or more network nodes 20. Because the result reports are disseminated using link state messages, all network nodes 20 should be aware of the status of the overall configuration. Therefore, it is possible to implement a distributed rollback mechanism based on the distributed detection of a network configuration failure.
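  • The report-processing procedure outlined above (cf. Figure 2), including the case where a result report arrives before its configuration descriptor, could look roughly as follows; the data structures and return values are assumptions made for the sketch.

```python
class ResultReportProcessor:
    """Sketch of explicit result report handling (cf. Figure 2)."""

    def __init__(self, domain_nodes):
        self.domain_nodes = set(domain_nodes)
        self.descriptors = {}    # descriptor_id -> descriptor (local database)
        self.reports = {}        # descriptor_id -> {reporter: result_code}

    def on_result_report(self, descriptor_id, reporter, result_code):
        # Always store the report first; the descriptor may arrive later.
        self.reports.setdefault(descriptor_id, {})[reporter] = result_code
        if descriptor_id in self.descriptors:
            return self._evaluate(descriptor_id)
        return "WAITING_FOR_DESCRIPTOR"

    def on_descriptor(self, descriptor_id, descriptor):
        # Apply any reports that were buffered before the descriptor arrived.
        self.descriptors[descriptor_id] = descriptor
        return self._evaluate(descriptor_id)

    def _evaluate(self, descriptor_id):
        seen = self.reports.get(descriptor_id, {})
        if "FAILED" in seen.values():
            return "FAILED"       # declare the whole configuration failed, roll back
        if self.domain_nodes <= set(seen):
            return "COMPLETED"    # only SUCCESSFUL / NO ACTIONS TAKEN were seen
        return "PENDING"
```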
  • the triggers and procedures depend on how the result is encoded.
  • the expiration of a timer associated with the configuration descriptor indicates the failure of implementing the configuration descriptor.
  • the network node 20 rolls back all local configurations dictated by the failed configuration descriptor and removes the configuration descriptor from its local database.
  • after collecting the responses of all network nodes 20 of the communication network 10, the network node 20 declares the configuration failed if any of the network nodes 20 reported a "failure" to implement the configuration. After declaring a configuration failure, the node rolls back all local configurations dictated by the failed configuration descriptor and removes the configuration descriptor from its local database.
  • the network node 20 that initiated the network wide configuration specified by a configuration descriptor may want to withdraw the configuration.
  • the configuration initiating network node 20 withdraws a network wide configuration by sending a second explicit result report message with the result code set to "FAILED". Any network node 20 receiving this result report declares the matching configuration descriptor failed and rolls back.
  • the initial draft version of the IEEE 802.1Qca describes a method for an Ethernet network in which a path initiating network node 20 constructs an EPD that lists all network nodes 20 along which the explicit path must be routed.
  • the path initiating node may comprise a switching node 20-1 or an external node 20-2 (e.g., PCE).
  • the path initiating network node 20 then disseminates this EPD in a link state message using IS-IS as a link state routing protocol.
  • the link state message is referred to as a link state (LS) packet data unit (PDU) (LSP).
  • LS link state
  • PDU packet data unit
  • IEEE 802.1Qca does not provide any mechanism for the path initiating network node 20 to discover whether the configuration of the explicit path was successful. Applying the result reporting techniques described above, the path initiating network node 20 will be able to detect whether other network nodes 20 along the path were able to configure the explicit path according to the EPD.
  • the EPD is implemented as a configuration descriptor.
  • the format and the details of the result reports depend on the particular embodiment/implementation.
  • a full result advertisement comprises an updated version of the EPD.
  • the EPD is extended to include a network node originator field and a status field.
  • An example of the result report used to implement full result advertisement is depicted in Figure 3.
  • Another possible implementation of the result report for path configuration comprises a newly defined sub-TLV that is included in and advertised as part of a link state message (e.g., an LSP for IS-IS).
  • the format of that sub-TLV is shown in Figure 4.
  • the first digest encodes the Explicit Path Database (EPDB) stored by the network node 20 sending the result report, and the second digest encodes the EPD to which the report refers.
  • the result code field indicates the result of any configuration actions.
  • A further possible implementation of the result report for path configuration comprises a newly defined sub-TLV that is included in and advertised as part of an LSP.
  • the format of that sub-TLV is shown in Figure 5.
  • the first digest encodes the Explicit Path Database (EPDB) stored by the network node 20 sending the result report, and the second digest encodes the EPD to which the report refers.
  • the result code field indicates the result of any configuration actions.
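  • The following sketch shows how such a compact result report sub-TLV could be serialized. The sub-TLV type number, the one-byte result codes and the 32-byte digest length are invented placeholders and do not reproduce the exact formats of Figures 3 to 5.

```python
import struct

# Hypothetical layout: 1-byte sub-TLV type, 1-byte length,
# 32-byte EPDB digest, 32-byte EPD digest, 1-byte result code.
RESULT_CODES = {"SUCCESSFUL": 1, "FAILED": 2, "NO ACTIONS TAKEN": 3}
SUBTLV_TYPE_COMPACT_RESULT = 42   # placeholder type value

def encode_compact_result(epdb_digest: bytes, epd_digest: bytes, result: str) -> bytes:
    value = epdb_digest + epd_digest + struct.pack("!B", RESULT_CODES[result])
    return struct.pack("!BB", SUBTLV_TYPE_COMPACT_RESULT, len(value)) + value

def decode_compact_result(data: bytes):
    sub_type, length = struct.unpack("!BB", data[:2])
    value = data[2:2 + length]
    epdb_digest, epd_digest = value[:32], value[32:64]
    result = {v: k for k, v in RESULT_CODES.items()}[value[64]]
    return sub_type, epdb_digest, epd_digest, result
```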
  • all network nodes 20 that receive the configuration descriptor will respond with a result report. Also, the network nodes 20 collect the result reports from all other network nodes 20 before declaring the configuration completed or failed.
  • the network node 20 that receives an EPD executes the procedure shown in Figure 6. After receiving the EPD, the network node 20 checks whether there are loose hops in the explicit path. If the explicit path is formed of only strict hops, the network node 20 checks if it is involved in the path as a strict hop. If yes, the network node 20 starts configuring itself according to the EPD; otherwise it disseminates an implicit compact result report and finishes processing the EPD.
  • the network node 20 resolves all loose hops, determines path segments implementing the loose hops, and checks if it is along the path including the resolved path segment. If yes, the network node 20 starts configuring itself according to the EPD; otherwise, the network node 20 disseminates an implicit compact result report and finishes processing the EPD.
  • the network node 20 determines the configuration instructions and executes them. If all configuration instructions were successfully executed, e.g., the forwarding information base (FIB)/filtering database (FDB) and the port configuration actions were successfully completed, the network node 20 disseminates an implicit compact result report message. If any failure occurs during the configuration, the network node 20 rolls back the local configuration and finishes the processing without generating a result report message.
  • FIB forwarding information base
  • FDB filtering database
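  • The involvement check of Figure 6 can be sketched as below; the EPD is simplified to a list of (node, kind) hops and the loose hop resolution is delegated to a caller-supplied SPF/CSPF routine returning the intermediate nodes of a segment, both being assumptions of the example.

```python
def resolve_explicit_path(epd_hops, shortest_path):
    """Expand a simplified EPD hop list into a concrete node sequence.

    `epd_hops` is a list of (node, kind) tuples with kind in {"strict", "loose"}.
    A loose hop is resolved with `shortest_path(prev, node)`, assumed to return
    only the intermediate nodes of a locally computed SPF/CSPF segment."""
    path = []
    for node, kind in epd_hops:
        if kind == "loose" and path:
            path.extend(shortest_path(path[-1], node))  # fill in the unspecified segment
        path.append(node)
    return path

def node_is_on_path(my_id, epd_hops, shortest_path):
    """Figure 6 style check: configure only if this node lies on the (resolved) path."""
    if all(kind == "strict" for _, kind in epd_hops):
        return any(node == my_id for node, _ in epd_hops)
    return my_id in resolve_explicit_path(epd_hops, shortest_path)
```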
  • Each network node 20 in the link state protocol domain receives the implicit compact result report messages.
  • upon receipt of an implicit compact result report, a network node 20 executes the procedure shown in Figure 7. First, it stores the received report and looks for a matching EP Descriptor. It then checks whether the original EP Descriptor contains loose hops in order to determine from which network nodes 20 to expect report messages: if the explicit path is comprised only of strict hops, only the network nodes 20 of those hops are expected to respond; otherwise, all network nodes 20 must send a result report. The network node 20 then checks whether the strict hop network nodes 20, or all network nodes 20, have already responded with a result report. If yes, the network node 20 declares the path configuration completed.
  • the network node 20 that receives an EPD performs the procedure shown in Figure 8. After receiving the EPD, the network node 20 checks whether there are loose hops in the path. If the path is formed of only strict hops, the network node 20 checks if it is listed as a strict hop. If yes, the network node 20 starts configuring itself according to the EPD; otherwise it generates an explicit result report message (e.g., a full path result report or an explicit compact result report) with the result code set to "NO ACTIONS TAKEN".
  • explicit result report message e.g. full path result report or explicit compact result report
  • the network node 20 resolves all loose hops, determines path segments implementing the loose hops, and checks if it is along the path including the resolved path segments. If yes, the network node 20 starts configuring itself according to EPD; otherwise it generates an explicit result report with result code set to "NO ACTIONS TAKEN".
  • the network node 20 determines the configuration instructions and executes them. If all configuration instructions are successfully executed, e.g., the FIB/FDB and the port configuration actions were done successfully, the network node 20 disseminates an explicit result report message with the result code set to "SUCCESSFUL". If a failure occurs during the path configuration, the network node 20 rolls back the local configuration and generates an explicit result report message with result code set to "FAILED".
  • after generating the EP report message, the network node 20 finishes the procedure.
  • Each network node 20 in the link state protocol domain receives the explicit result report messages.
  • when a network node 20 receives an explicit result report, it performs the procedure shown in Figure 9.
  • the network node 20 stores the received report and looks for a matching EPD. Then, the network node 20 checks whether the original EPD contains a loose hop in order to determine from which network nodes 20 it expects result report messages. If the explicit path is comprised only of strict hops, only the network nodes 20 along those hops are expected to respond; otherwise all network nodes 20 must respond.
  • the network node 20 checks whether the strict hop network nodes 20, or all network nodes 20, have already responded with a result report.
  • the network node 20 checks if the result code of any of the result reports was set to "FAILED". If yes, the network node 20 declares that the path installation has failed, rolls back the local configuration (if needed), and removes the EPD from the EPDB. The network node 20 then finishes the procedure.
  • the explicit result report message may be implemented by an explicit compact result report sub-TLV (see Figure 5) in the case of explicit compact result reporting, or by a full path result report sub-TLV (Figure 3) for a full path advertisement.
  • the path initiating network node 20, which may comprise a control node (e.g., PCE), is allowed to loosely specify the explicit path, i.e., it does not determine all hops along the path and lets the intermediate network nodes 20 fill in the unspecified segments of the path. Since the intermediate network nodes 20 run a Shortest Path First (SPF) or Constrained Shortest Path First (CSPF) algorithm locally to determine the exact route for the loose hops in the EPD, path segments implementing the loose hops are calculated in a distributed fashion. Therefore, the explicit path requesting network node 20 may not be aware of loose hop path segments.
  • SPF Shortest Path First
  • CSPF Constrained Shortest Path First
  • One embodiment of the disclosure provides a method for the path requesting network node 20 to determine the exact path that has been configured in the network even if the path is not fully specified.
  • the network nodes 20 which provided an implicit compact result report have conducted local configuration as a consequence of an EPD. This means that these network nodes 20 are along the explicit path. By collecting these responses, a network node 20 becomes aware of the explicit path.
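  • A requesting node could derive the resolved path from the collected reports roughly as sketched below. The sketch assumes the explicit reporting variant, in which nodes on the path answer "SUCCESSFUL" and other nodes answer "NO ACTIONS TAKEN", and the ordering step additionally assumes a loop-free point-to-point path; both are illustrative assumptions.

```python
def nodes_on_installed_path(reports):
    """Return the set of nodes that actually installed the explicit path.

    `reports` maps reporter -> result code for one EPD. Nodes reporting
    SUCCESSFUL performed local configuration, so they lie on the resolved path;
    NO ACTIONS TAKEN reporters do not."""
    return {node for node, code in reports.items() if code == "SUCCESSFUL"}

def ordered_path(reports, epd_endpoints, adjacency):
    """Order the involved nodes by walking adjacencies from the first endpoint
    (assumes a simple, loop-free point-to-point path)."""
    involved = nodes_on_installed_path(reports)
    path, current = [epd_endpoints[0]], epd_endpoints[0]
    while current != epd_endpoints[1]:
        nxt = next(n for n in adjacency[current] if n in involved and n not in path)
        path.append(nxt)
        current = nxt
    return path
```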
  • FIG 10 illustrates the main functional elements in a network node 20 according to one exemplary embodiment.
  • the network node 20 comprises an interface circuit 25, a control circuit 30, and memory 40.
  • the interface circuit 25 connects the network node 20 to a communication network 15.
  • the interface circuit 25 may comprise, for example, an Ethernet interface or other IP-based interface.
  • the control circuit 30 controls the operation of the network node 20 as previously described.
  • the control circuit 30 may comprise one or more processors, microcontrollers, hardware circuits, firmware, or a combination thereof.
  • the local memory 40 may comprise random access memory (RAM), read-only memory (ROM), Flash memory, or other type of memory.
  • the local memory may also include internal memory such as register files, L1 cache, L2 cache or other memory array in a microprocessor.
  • the local memory 40 stores computer program code and data used by the network node 20 to perform the operations as described herein.
  • the data stored by the local memory 40 includes, for example, the LSDB, routing tables, EPDB, FIB, FDB, and other data used for configuring paths and for routing data traffic.
  • the present disclosure enables a network node 20 in a link state domain, including a PCE or other external node 20-2, to be aware of the result of a configuration instruction relevant for a plurality of network nodes 20, making use of link state protocol messages. Additionally, the disclosure provides a mechanism for a network node 20 to autonomously roll back previously installed updates dictated by a configuration instruction relevant for a plurality of network nodes 20 without involving other protocols.
  • the methods described in this disclosure can be applied during the explicit path configuration specified by IEEE 802.1Qca. This allows the network node 20 that requests the explicit path to determine whether the configuration of the path was accomplished. Furthermore, in some embodiments the network node 20 or external entity becomes aware of the exact explicit path finally installed in the communication network 10 even if the explicit path involves loose hops. These advantages are provided without involving signaling protocols.
  • the techniques described in this disclosure can be integrated into a link state control protocol, such as IS-IS or ISIS-SPB.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Environmental & Geological Engineering (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Claims (17)

  1. Method in a requesting network node (20) in a communication network (10) implementing a link state protocol, for determining whether a requested network configuration has been implemented, the method comprising:
    sending (150), by the requesting network node (20), a configuration descriptor within a first link state message, by routing the first link state message to one or more peer network nodes (20-1) in the communication network (10), wherein the configuration descriptor describes the requested network configuration;
    receiving (154), by the requesting network node (20), one or more result reports within one or more link state messages generated by the one or more peer network nodes (20-1) and routed through the communication network (10), wherein each of the one or more result reports indicates a result of an attempt by the respective generating peer network node to perform a configuration action in response to the configuration descriptor; and
    determining (156), by the requesting network node (20), whether the requested network configuration has been implemented, based on the one or more result reports.
  2. The method of claim 1, further comprising:
    in response to determining that the requested network configuration has not been implemented, cancelling (158) configuration changes made based on the configuration descriptor; and wherein the cancelling optionally comprises sending a result report message to the one or more network nodes (20-1), the result report message comprising a result code indicating a failure.
  3. The method of any one of claims 1 to 2, further comprising:
    after sending the configuration descriptor within the first link state message, sending (152) a result report within a second link state message, by routing the second link state message to the one or more peer network nodes (20-1) through the communication network (10).
  4. The method of any one of claims 1 to 3, wherein the requesting network node (20) comprises a path computation entity; and/or
    wherein the communication network (10) comprises a switching network (15); and
    the requesting network node (20) comprises a network controller of the switching network (15).
  5. The method of any one of claims 1 to 4, wherein:
    the communication network (10) comprises a switching network (15); and
    the requesting network node (20) and the one or more peer network nodes (20-1) are switching nodes of the switching network (15).
  6. The method of any one of claims 1 to 5, wherein the configuration descriptor identifies an explicit path comprising one or more strict hops that determine an exact path for the explicit path; and/or
    wherein the configuration descriptor identifies an explicit path comprising at least one loose hop and thus not determining an exact path for the explicit path.
  7. The method of claim 1, wherein the configuration descriptor identifies an explicit path comprising at least one loose hop and thus not determining an exact path for the explicit path, the method further comprising:
    determining, by the requesting network node (20), the exact path based on the one or more result reports from the one or more peer network nodes (20-1).
  8. The method of any one of claims 1 to 7, wherein each of the one or more result reports comprises one or more of:
    the configuration descriptor;
    a compact representation of the configuration descriptor; and
    an identifier of the first link state message; and/or
    wherein each of the one or more result reports of the one or more link state messages is contained within a Multi-Topology Capability Type Length Value element of the respective link state message.
  9. Method, implemented by a receiving network node (20-1) in a communication network (10) implementing a link state protocol, for implementing a requested network configuration specified by a requesting network node (20), the method comprising:
    receiving (160) a configuration descriptor describing the requested network configuration, within a first link state message generated by the requesting network node (20) and routed through the communication network (10);
    attempting (162) to perform a configuration action based on the configuration descriptor; and
    sending (164) a result report in a second link state message, by routing the second link state message to one or more peer network nodes of the communication network (10) and to the requesting network node (20), wherein the result report indicates a result of the attempt to perform the configuration action.
  10. The method of claim 9, further comprising:
    receiving (166) one or more result reports generated by the one or more peer network nodes, wherein each of the one or more result reports indicates a result of an attempt by the respective generating network node to perform a configuration action in response to the configuration descriptor.
  11. The method of claim 10, further comprising:
    determining (168) whether the requested network configuration specified by the requesting network node (20) has been implemented, based on the one or more result reports; and
    after determining that the requested network configuration has not been implemented, optionally cancelling (170) configuration changes made based on the configuration descriptor.
  12. The method of claim 9 or 10, wherein the result report indicates that the receiving network node (20-1) failed to perform the configuration action to implement the requested network configuration.
  13. The method of any one of claims 9 to 11, wherein the result report indicates that the receiving network node (20-1) successfully performed the configuration action to implement the requested network configuration.
  14. The method of any one of claims 9 to 13, wherein the configuration descriptor identifies an explicit path in the communication network (10); and wherein optionally the explicit path comprises one or more strict hops that determine an exact path for the explicit path; or
    wherein the explicit path comprises at least one loose hop and thus does not determine an exact path for the explicit path.
  15. The method of claim 9, wherein the configuration descriptor identifies an explicit path in the communication network (10) comprising at least one loose hop and thus not determining an exact path for the explicit path, and wherein the method further comprises:
    determining, by the receiving network node (20-1), the exact path based on the one or more result reports from the one or more peer network nodes (20-1).
  16. The method of any one of claims 9 to 15, wherein the result report comprises one or more of:
    the configuration descriptor;
    a compact representation of the configuration descriptor; and
    an identifier of the first link state message; and/or
    wherein the result report is contained within a Multi-Topology Capability Type Length Value element of the second link state message.
  17. Network node (20) in a communication network (10), comprising:
    an interface circuit (25) configured to communicate with one or more network nodes of the communication network (10), and
    a control circuit (30) coupled with the interface circuit and configured to perform all steps of the method according to any one of claims 1 to 16.
EP13780197.3A 2013-05-13 2013-08-10 Method for assured network state configuration and rollback in link-state packet networks Not-in-force EP2997700B1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361822696P 2013-05-13 2013-05-13
PCT/IB2013/056540 WO2014184625A1 (en) 2013-05-13 2013-08-10 Method for assured network state configuration and rollback in link-state packet networks

Publications (2)

Publication Number Publication Date
EP2997700A1 EP2997700A1 (de) 2016-03-23
EP2997700B1 true EP2997700B1 (de) 2017-03-15

Family

ID=49474648

Family Applications (1)

Application Number Title Priority Date Filing Date
EP13780197.3A Not-in-force EP2997700B1 (de) Method for assured network state configuration and rollback in link-state packet networks

Country Status (3)

Country Link
US (1) US20160127223A1 (de)
EP (1) EP2997700B1 (de)
WO (1) WO2014184625A1 (de)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104753828B (zh) * 2013-12-31 2019-10-25 华为技术有限公司 一种sdn控制器、数据中心系统和路由连接方法
JP6561127B2 (ja) 2015-02-03 2019-08-14 テレフオンアクチーボラゲット エルエム エリクソン(パブル) 時間認識パス計算
US9806997B2 (en) 2015-06-16 2017-10-31 At&T Intellectual Property I, L.P. Service specific route selection in communication networks
US10250444B2 (en) * 2015-07-02 2019-04-02 Perspecta Labs Inc. Hybrid SDN/legacy policy enforcement
US9762495B1 (en) * 2016-09-13 2017-09-12 International Business Machines Corporation Weighted distribution across paths of degraded quality
CN112565084A (zh) * 2019-09-10 2021-03-26 中国电信股份有限公司 基于pcep的关键路径信息转发方法、装置和系统

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110022728A1 (en) * 2009-07-22 2011-01-27 Telefonaktiebolaget Lm Ericsson (Publ) Link state routing protocols for database synchronization in gmpls networks
US20140269737A1 (en) * 2013-03-15 2014-09-18 Pradeep G. Jain System, method and apparatus for lsp setup using inter-domain abr indication

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Also Published As

Publication number Publication date
US20160127223A1 (en) 2016-05-05
WO2014184625A1 (en) 2014-11-20
EP2997700A1 (de) 2016-03-23

Similar Documents

Publication Publication Date Title
EP3429141B1 (de) Segment-routing mit label-vermittelten pfaden für nicht segment-routingfähige router
US9178798B2 (en) Fast reroute using loop free alternate next hops for multipoint label switched paths
EP3264695B1 (de) Bandbreitenverwaltung für ressourcenreservierungsprotokoll-lsps und nichtressourcenreservierungsprotokoll-lsps
US9088485B2 (en) System, method and apparatus for signaling and responding to ERO expansion failure in inter-domain TE LSP
TWI499237B (zh) 廣播網路之標籤分配協定與內部閘道協定同步化
EP2997700B1 (de) Verfahren für sichere netzwerkstatuskonfiguration und rollback in link-state-paketnetzwerken
US8989048B2 (en) Node system ID change in link state protocol network
EP2282459A1 (de) Linkstatus-Routingprotokolle zur Datenbanksynchronisation in GMPLS-Netzwerken
EP2892188B1 (de) Verfahren zur bestimmung eines paketweiterleitungsweges, netzwerkvorrichtung und steuerungsvorrichtung
RU2521092C2 (ru) Синхронизация ldp и igp для широковещательных сетей
EP3055948B1 (de) Leitweglenkung von punkt-zu-mehrpunkt-diensten in einem mehrdomänennetzwerk
US20150326469A1 (en) Oam aided explicit path report via igp
US11425056B1 (en) Dynamic computation of SR-TE policy for SR-enabled devices connected over non-SR-enabled devices
US10554543B1 (en) Migrating data traffic between label switched paths (LSPs) based on per-LSP protocol priority value
EP2997701B1 (de) Netzwerkzustandsübersicht für konvergenzprüfung
US8675670B2 (en) Distribution of routes in a network of routers
CN109150716A (zh) 拓扑变化响应方法、路径计算客户端及路径计算系统
CN101621453A (zh) 保证差分业务流量工程网络配置参数一致的方法和系统
WO2019001487A1 (zh) 一种路径数据的删除方法、一种消息转发方法和装置
JP6377738B2 (ja) Rsvp−teシグナリングを処理するための方法及びシステム

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20151106

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAX Request for extension of the european patent (deleted)
REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Ref document number: 602013018633

Country of ref document: DE

Free format text: PREVIOUS MAIN CLASS: H04L0012751000

Ipc: H04L0012725000

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

RIC1 Information provided on ipc code assigned before grant

Ipc: H04L 12/703 20130101ALI20161129BHEP

Ipc: H04L 12/751 20130101ALI20161129BHEP

Ipc: H04L 12/717 20130101ALI20161129BHEP

Ipc: H04L 12/707 20130101ALI20161129BHEP

Ipc: H04L 12/26 20060101ALI20161129BHEP

Ipc: H04L 12/725 20130101AFI20161129BHEP

INTG Intention to grant announced

Effective date: 20161216

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 876589

Country of ref document: AT

Kind code of ref document: T

Effective date: 20170415

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602013018633

Country of ref document: DE

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20170315

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170315

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170615

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170315

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170616

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170315

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 876589

Country of ref document: AT

Kind code of ref document: T

Effective date: 20170315

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 5

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170615

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170315

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170315

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170315

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170315

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170315

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170315

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170315

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170315

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170315

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170315

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170315

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170717

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170315

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170315

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170715

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602013018633

Country of ref document: DE

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170315

26N No opposition filed

Effective date: 20171218

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170315

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170315

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170831

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170831

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20170831

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170810

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170810

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 6

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170831

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20170810

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: IT

Payment date: 20180911

Year of fee payment: 13

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20130810

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170315

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170315

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170315

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20170315

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190831

REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Ref document number: 602013018633

Country of ref document: DE

Free format text: PREVIOUS MAIN CLASS: H04L0012725000

Ipc: H04L0045300000

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20220829

Year of fee payment: 10

Ref country code: DE

Payment date: 20220629

Year of fee payment: 10

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 602013018633

Country of ref document: DE

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20230810

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230810

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20240301