EP0859491A1 - Method for rerouting in hierarchically structured networks - Google Patents

Method for rerouting in hierarchically structured networks

Info

Publication number
EP0859491A1
Authority
EP
European Patent Office
Prior art keywords
node
peer group
path
failure
upstream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP97400364A
Other languages
German (de)
English (en)
Other versions
EP0859491B1 (fr)
Inventor
Yves T'joens
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alcatel Lucent SAS
Original Assignee
Alcatel SA
Alcatel Alsthom Compagnie Generale d'Electricite
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alcatel SA and Alcatel Alsthom Compagnie Generale d'Electricite
Priority to DE69735084T (DE69735084T2)
Priority to EP97400364A (EP0859491B1)
Priority to AT97400364T (ATE315861T1)
Priority to AU53007/98A (AU740234B2)
Priority to US09/023,370 (US6115753A)
Priority to CA002227111A (CA2227111A1)
Priority to JP3633298A (JPH10243029A)
Publication of EP0859491A1
Application granted
Publication of EP0859491B1
Anticipated expiration
Current legal status: Expired - Lifetime

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/28 Routing or path finding of packets in data switching networks using route fault recovery
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/02 Topology update or discovery
    • H04L45/04 Interdomain routing, e.g. hierarchical routing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00 Routing or path finding of packets in data switching networks
    • H04L45/22 Alternate routing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/40 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/54 Store-and-forward switching systems
    • H04L12/56 Packet switching systems
    • H04L12/5601 Transfer mode dependent, e.g. ATM
    • H04L2012/5619 Network Node Interface, e.g. tandem connections, transit switching
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/54 Store-and-forward switching systems
    • H04L12/56 Packet switching systems
    • H04L12/5601 Transfer mode dependent, e.g. ATM
    • H04L2012/5625 Operations, administration and maintenance [OAM]
    • H04L2012/5627 Fault tolerance and recovery

Definitions

  • An additional characteristic feature of the present invention is that said local alternative path within said peer group is calculated based on path information stored in the entering node of said peer group during the set-up of the connection, as defined in claim 2.
  • A signalling procedure which may be used for implementing this rerouting method is defined in claim 7.
  • The distinct local part of the path within each distinct peer group is secured by the distinct path information, including the identifier of the distinct outgoing node, stored in the distinct entering node of each distinct peer group along the path.
  • This allows the local alternative path to always be established at the lowest possible level peer group. Since a path within a lower level peer group is smaller than a path within a higher level peer group, the smallest local alternative path is calculated, thereby again reducing the computation time, since fewer nodes are involved in this calculation.
  • The calculation of the local alternative path can be sped up, since this information allows parts of the existing path within the peer group including the failure to be re-used, so that a complete rerouting within the peer group itself may be replaced by only a partial rerouting within this peer group, thereby accelerating the rerouting procedure.
  • The present invention further relates to an ingress node, an egress node and a fault determination node, adapted to be used in a network wherein the rerouting method and signalling procedure according to the present invention are applied, as defined in claims 13, 20 and 28 respectively.
  • Peer groups B.1 and B.2 are grouped together into peer group B; at this level, logical node B.1 refers to peer group leader B.1.1 and logical node B.2 refers to peer group leader B.2.3.
  • The logical nodes will among themselves again elect a peer group leader, which will once again represent the entire peer group at an even higher level peer group.
  • A.2 is peer group leader in peer group A.
  • B.1 is peer group leader in peer group B.
  • Logical nodes A, B, and C then respectively refer to peer group leaders A.2, B.1 and C.1.
  • At the highest level peer group, no peer group leader is elected.
  • Every peer group leader aggregates topology and reachability information concerning its peer group, and floods this information at the next hierarchical level peer group.
  • The information obtained from other peer groups at the next hierarchical level is injected into the lower level peer group by the peer group leader.
  • Every node thus has a view of the network configuration, not in full detail, but always comprising the aggregated information injected by the peer group leaders of the node's higher level peer groups (a small sketch of such a per-node view follows this list).
  • The full configuration available to node A.1.1 is the information concerning peer group A.1, peer group A and the highest level peer group. This provides every lowest level node with the capability to calculate a route to any other part of the network. This capability is used at call set-up time, for calculating a full path between a source or first node and a second or destination node, starting from the source node.
  • Outgoing border nodes of a peer group along a path, hereafter denoted egress nodes, recognising themselves as the outgoing border nodes from the incoming DTL stack, will drop part of the DTL stack, whereas incoming border nodes of a peer group along the path, hereafter denoted ingress nodes, complete the DTL stack by building their view of the local peer group (a sketch of this DTL-stack handling follows this list).
  • This DTL, stored in each distinct ingress node, in fact represents the distinct path information that is, as standard, stored in each distinct incoming border node of each distinct peer group of any level along the calculated path.
  • Intermediate nodes of a peer group only follow directions indicated in the DTL.
  • This standardised PNNI set-up message is not changed by the present invention.
  • This set-up message is followed by a connect message sent back from the destination node towards the source node.
  • Some extensions are however added to this connect message, for the first variant of the method according to the invention. These extensions consist, on one hand, of identifying the connection uniquely in each peer group along the path, by identifying the participating egress node, and passing this identifier from this participating egress node to the corresponding ingress node of the same peer group. This is performed by means of information added to the "connect" message of the PNNI signalling protocol (a sketch of this extended connect phase follows this list).
  • Fig. 7 represents such a CPL table or stack, including the lists individually built by each node, for the path shown in Fig. 1.
  • The upper row of the table should be interpreted as follows: node C.2 will identify itself as the egress node both for peer group C and for the highest level peer group. Node C.2 will therefore create a CPL with the peer group identifier of its highest level peer group, denoted PGI(C) and to be interpreted as the peer group identifier, denoted PGI, of the peer group including logical node C, in Fig. 1 being the highest level peer group.
  • Node B.2.2 however, knowing it is the ingress node of the lowest level peer group B.2, pops the line concerning this peer group B.2, denoted [PGI(B.2.5),I(B.2.5)], off the stack, and stores it in its own memory.
  • B.2.2 is not the ingress node at this level, so it will not be involved in path restoration here. In fact, it is node B.1.2 which will perform the restoration at the peer group B level.
  • The post-failure phase starts with the detection of the failure.
  • Both the upstream and the downstream neighbouring node of this failure location will detect it, and release the call with an error indication that this connection is network-released due to a failure in the path.
  • These release messages inform all nodes they pass along the path that this connection can no longer be sustained.
  • The upstream release message is sent from the upstream neighbouring node towards the first node, and the downstream release message is sent from the downstream neighbouring node towards the second node.
  • Both upstream and downstream neighbouring nodes will determine this common lowest level peer group to be the upstream, respectively downstream, failure peer group (the first sketch after this list illustrates this determination).
  • The local path between node B.2.3 and node B.2.4 fails, whereby upstream neighbouring node B.2.3 and downstream neighbouring node B.2.4 both identify this failure.
  • Node B.2.3, respectively B.2.4, determines peer group B.2 to be the upstream, respectively downstream, failure peer group.
  • The identifier, inherently specifying the level, of the upstream failure peer group is written in an upstream release message generated by node B.2.3, whereas the identifier, inherently specifying the level, of the downstream failure peer group is written in a downstream release message generated by node B.2.4. Both neighbouring nodes will then release the connection, by sending the upstream, respectively downstream, release message along the previously calculated path in the upstream, respectively downstream, direction.
  • Both upstream and downstream neighbouring nodes may also belong to different lowest level peer groups, but will then determine, based on their local view of the network and on the failure location, the upstream and downstream failure peer group to be the peer group at the lowest possible level of which they both form part. In this case too, both upstream and downstream failure peer groups are initially determined to be the same. In case of failure of a border node of a peer group, however, this will be different. In the example shown in the figure, node A.4.6 becomes inoperative, nodes A.4.4 and A.3.4 being respectively the upstream neighbouring node and the downstream neighbouring node of the failure location.
  • Node A.4.4, detecting that the link between itself and node A.4.6 is no longer functional, may interpret this failure as an inside link failure within its lowest level peer group A.4, and therefore decide this to be the upstream failure peer group.
  • Downstream neighbouring node A.3.4 detects a failure on its link to node A.4.6, being a link outside its own peer group, and therefore decides the parent peer group, being peer group A, to be the downstream failure peer group.
  • The upstream release message is passed from node to node in the upstream direction, whereby each node checks whether it is the ingress node of the identified upstream failure peer group.
  • Ingress nodes are adapted to perform this check by, for instance, comparing part of their path information stored during set-up, indicating of which peer group they form the ingress node, with the upstream failure peer group identifier that they have extracted from the upstream release message. If the node is not the ingress node of the identified upstream failure peer group, it passes the upstream release message further in the upstream direction towards the next node, until the ingress node of the upstream failure peer group is reached.
  • Egress nodes are adapted to perform this check by, for instance, comparing the identifier of the downstream failure peer group, which they extracted from the downstream release message, with the part of their own identifier indicating their own peer group.
  • The egress node of the downstream failure peer group will hold the release message and start a reattachment supervision timer, the duration of which may be software-controlled, for instance by an operator. The reason for this is to block the downstream release message from passing on to the second or destination node and the second user terminal, which would inform the latter that the complete connection has to be restored (the release-handling sketch after this list illustrates this blocking).
  • The ingress node of the upstream failure peer group will start recalculating the route within the identified upstream failure peer group, based on the standardised PNNI information and on the extra information stored during the extended connect phase. From the latter, this ingress node knows the corresponding egress node of the upstream failure peer group to which it has to recalculate a local alternative path (see the path-recalculation sketch after this list). If the calculation was successful, a new local set-up message, carrying the connection identifier, is sent from this ingress node of the upstream failure peer group to the corresponding egress node of this upstream failure peer group.
  • The egress node of the upstream failure peer group will receive the new local set-up with the connection identifier, and will switch the connection over internally, even if the downstream release message has not yet arrived in this particular egress node of the upstream failure peer group, which is for instance the case if the downstream release message has been blocked by another node.
  • If the downstream release message is received in the egress node of the upstream failure peer group before this node receives the new local set-up message, the initiated reattachment timer will be stopped upon receipt of the new local set-up and switchover takes place. This occurs mostly when both upstream and downstream failure peer groups are identical, and when the ingress node of the upstream failure peer group has succeeded in finding a local alternative path within this peer group.
  • Switchover takes place anyhow, whereby this egress node of the upstream failure peer group further generates a new release message, this time sent in the upstream direction, until the node which blocked the original downstream release message is reached.
  • This new upstream release message thereby clears the unused part of the old connection.
  • In another case, where the reattachment timer in the egress node of the downstream failure peer group has expired before a new local set-up has arrived in this node, this node generates a new downstream release message, with a new downstream failure peer group identifier, namely that of the peer group of the next higher level including the formerly identified downstream failure peer group.
  • The first upstream node that the upstream release message finds along the path is node B.2.2.
  • This node recognises that it is the ingress node of the upstream failure peer group B.2 for the released connection. From the information it has stored during the extended connect procedure, namely that, for this connection, its corresponding egress node for this upstream failure peer group B.2 is node B.2.5, node B.2.2 calculates an alternative path from itself to its corresponding egress node B.2.5, based on standardised PNNI routing algorithms. Node B.2.2 will, if the calculation was successful, meaning an alternative path is indeed available, issue a local set-up message with the connection identifier to node B.2.5.
  • Node B.2.5, by receiving the downstream release message, extracting therefrom the downstream failure peer group identifier, and identifying from this and from its own peer group identifier that it is the egress node of this downstream failure peer group, had already started its reattachment timer. If the local set-up message now arrives before the reattachment timer has expired, the new local set-up message takes precedence, and node B.2.5 switches over to the new connection, using local switchover procedures which are commonly known to a person skilled in the art and which will therefore not be described in this document. The reattachment timer is also stopped upon receipt of this local set-up message.
  • The upstream release message is passed back from node A.4.4 in the upstream direction towards node A.4.5.
  • This node has to check whether it is the ingress node of upstream failure peer group A.4, and otherwise pass the upstream release message back to the next node in the upstream direction. Since A.4.5 is the ingress node of peer group A.4, it will try to calculate an alternative path to the egress node of A.4, being A.4.6.
  • Each ingress node has to add to the set-up message a unique identifier for itself within its peer group, and for the connection.
  • The egress node also has to extract this information from the set-up message. All this information will then allow the egress node to recalculate an alternative path in case of failure. In this case, the egress node becomes the master, whereas the ingress node becomes the slave.
  • The release message may as well include an identification of the failure location. This information can tell the node charged with the recalculation which part of the previously calculated route is still intact. It can then be further used during the recalculation, thereby possibly shortening the time needed to find a valid alternative route.
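The following is a minimal Python sketch, not part of the patent, of the hierarchical addressing used in the examples above. It assumes that the dotted node names of Fig. 1 (e.g. B.2.3) directly encode the nesting of peer groups; all node names, helper names and the "<top>" label are illustrative. It shows both the aggregated view available to a lowest level node and the determination of the lowest level peer group common to the two neighbours of a failure.

```python
def peer_groups(node_id: str) -> list[str]:
    """Nested peer groups of a node, innermost first: "B.2.3" ->
    ["B.2", "B", ""] ("" stands for the unnamed highest level group)."""
    parts = node_id.split(".")
    return [".".join(parts[:i]) for i in range(len(parts) - 1, -1, -1)]

def visible_topology(node_id: str, all_nodes: list[str]) -> dict:
    """The aggregated view available to a lowest level node: full detail
    in its own peer group, one logical node per sibling group above."""
    view = {}
    for group in peer_groups(node_id):
        prefix = group + "." if group else ""
        depth = prefix.count(".") + 1
        view[group or "<top>"] = {".".join(n.split(".")[:depth])
                                  for n in all_nodes if n.startswith(prefix)}
    return view

def failure_peer_group(upstream: str, downstream: str) -> str:
    """Lowest level peer group containing both neighbours of a failure."""
    common = set(peer_groups(downstream))
    return next(g for g in peer_groups(upstream) if g in common)

nodes = ["A.1.1", "A.1.2", "A.2.1", "B.1.1", "B.2.3", "C.1", "C.2"]
for group, members in visible_topology("A.1.1", nodes).items():
    print(group, sorted(members))
# A.1 ['A.1.1', 'A.1.2']
# A ['A.1', 'A.2']
# <top> ['A', 'B', 'C']

print(failure_peer_group("B.2.3", "B.2.4"))   # 'B.2' (inside link failure)
print(failure_peer_group("A.4.4", "A.3.4"))   # 'A'   (failure at a border)
```

Note that, as in the border node example above, each neighbour actually decides its failure peer group from its own local view, so the upstream and downstream failure peer groups may initially differ; the helper only computes the common group.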
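Next, a minimal sketch of how the DTL stack referred to above is handled at peer group borders during set-up. It models a DTL as a plain list of node identifiers and ignores the port identifiers and transit pointers a real PNNI DTL also carries; the function names and the example path are illustrative.

```python
def on_ingress(dtl_stack: list, local_dtl: list) -> None:
    """An ingress border node completes the stack by pushing its view
    of the path through its own peer group."""
    dtl_stack.append(list(local_dtl))

def on_egress(dtl_stack: list) -> None:
    """An egress border node, recognising itself from the incoming
    stack, drops the finished lowest level DTL."""
    dtl_stack.pop()

# The source computes a coarse highest level path ...
stack = [["B", "C"]]
# ... the ingress of peer group B.2 refines it on entry ...
on_ingress(stack, ["B.2.2", "B.2.3", "B.2.4", "B.2.5"])
print(stack)   # [['B', 'C'], ['B.2.2', 'B.2.3', 'B.2.4', 'B.2.5']]
# ... and the egress of B.2 pops that DTL again on the way out.
on_egress(stack)
print(stack)   # [['B', 'C']]
```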
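The extended connect phase can be sketched under the assumption that the CPL stack carried by the "connect" message is a list of (peer group identifier, egress node identifier) pairs; the variable and function names below are illustrative, not from the patent.

```python
connect_cpl = []   # CPL stack travelling with the connect message
stored = {}        # per-node memory: ingress node -> (PGI, egress node)

def at_egress(node: str, pgi: str) -> None:
    """An egress node pushes the identifier of the peer group it is
    leaving together with its own identifier."""
    connect_cpl.append((pgi, node))

def at_ingress(node: str, pgi: str) -> None:
    """The matching ingress node pops the entry for its own peer group
    and stores it: it now knows the egress node to which it must
    recalculate a local alternative path after a failure."""
    for i in range(len(connect_cpl) - 1, -1, -1):
        if connect_cpl[i][0] == pgi:
            stored[node] = connect_cpl.pop(i)
            break

at_egress("B.2.5", "B.2")    # connect message leaves peer group B.2
at_ingress("B.2.2", "B.2")   # ... and reaches the ingress of B.2
print(stored)                # {'B.2.2': ('B.2', 'B.2.5')}
```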
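The post-failure signalling described above can be sketched as follows: the upstream release message travels towards the source until the ingress of the upstream failure peer group recognises itself, while the egress of the downstream failure peer group holds the downstream release message and supervises a reattachment timer. The Node class and the prefix test used for the egress check are illustrative simplifications.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    ingress_of: dict = field(default_factory=dict)  # PGI -> stored egress
    timer_running: bool = False

    def is_ingress_of(self, failure_pgi: str) -> bool:
        # Compare the PGI extracted from the upstream release message
        # with the path information stored at set-up time.
        return failure_pgi in self.ingress_of

    def on_downstream_release(self, failure_pgi: str) -> None:
        # The egress of the downstream failure peer group holds the
        # message and starts the reattachment supervision timer, so the
        # destination is not told to restore the complete connection.
        if self.name.startswith(failure_pgi + "."):
            self.timer_running = True

    def on_local_setup(self) -> None:
        # A new local set-up carrying the connection identifier stops
        # the timer and triggers the local switchover.
        self.timer_running = False

# The upstream release generated by B.2.3 walks towards the source:
upstream_path = [Node("B.2.3"), Node("B.2.2", ingress_of={"B.2": "B.2.5"})]
ingress = next(n for n in upstream_path if n.is_ingress_of("B.2"))
print(ingress.name, "re-routes to", ingress.ingress_of["B.2"])

# The downstream release generated by B.2.4 is held at egress B.2.5:
egress = Node("B.2.5")
egress.on_downstream_release("B.2")   # timer starts, message is held
egress.on_local_setup()               # set-up arrives in time: switchover
print(egress.timer_running)           # False
```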
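Finally, a sketch of the local recalculation performed by the ingress node: a breadth-first search towards the egress node stored during the extended connect phase, excluding the failed link. The link topology of peer group B.2 below is invented for the example, and a real implementation would instead run the standardised PNNI path selection on the node's topology database.

```python
from collections import deque

# Hypothetical topology of peer group B.2 (bidirectional links):
links = {
    "B.2.2": ["B.2.1", "B.2.3"],
    "B.2.1": ["B.2.2", "B.2.5"],
    "B.2.3": ["B.2.2", "B.2.4"],
    "B.2.4": ["B.2.3", "B.2.5"],
    "B.2.5": ["B.2.1", "B.2.4"],
}

def local_alternative_path(ingress: str, egress: str, failed_link: tuple):
    """Breadth-first search for a path that avoids the failed link."""
    bad = {failed_link, tuple(reversed(failed_link))}
    queue, seen = deque([[ingress]]), {ingress}
    while queue:
        path = queue.popleft()
        if path[-1] == egress:
            return path
        for nxt in links[path[-1]]:
            if (path[-1], nxt) not in bad and nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no local path: escalate to the next higher level peer group

# The link B.2.3 - B.2.4 fails; ingress B.2.2 re-routes to its stored
# egress B.2.5:
print(local_alternative_path("B.2.2", "B.2.5", ("B.2.3", "B.2.4")))
# ['B.2.2', 'B.2.1', 'B.2.5']
```

Returning None corresponds to the case, such as the failure of border node A.4.6 above, where no local alternative exists and restoration must escalate to a higher level peer group.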
EP97400364A 1997-02-18 1997-02-18 Procédé de réacheminement dans des réseaux à structure hiérarchique Expired - Lifetime EP0859491B1 (fr)

Priority Applications (7)

Application Number Priority Date Filing Date Title
DE69735084T DE69735084T2 (de) 1997-02-18 1997-02-18 Leitwegumlenkungsverfahren in hierarchischen strukturierten Netzwerken
EP97400364A EP0859491B1 (fr) 1997-02-18 1997-02-18 Procédé de réacheminement dans des réseaux à structure hiérarchique
AT97400364T ATE315861T1 (de) 1997-02-18 1997-02-18 Leitwegumlenkungsverfahren in hierarchischen strukturierten netzwerken
AU53007/98A AU740234B2 (en) 1997-02-18 1998-02-09 Method of re-routing in networks
US09/023,370 US6115753A (en) 1997-02-18 1998-02-13 Method for rerouting in hierarchically structured networks
CA002227111A CA2227111A1 (fr) 1997-02-18 1998-02-17 Methode de reroutage dans des reseaux a structure hierarchique
JP3633298A JPH10243029A (ja) 1997-02-18 1998-02-18 階層構造のネットワークで経路変更を行う方法

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP97400364A EP0859491B1 (fr) 1997-02-18 1997-02-18 Procédé de réacheminement dans des réseaux à structure hiérarchique

Publications (2)

Publication Number Publication Date
EP0859491A1 true EP0859491A1 (fr) 1998-08-19
EP0859491B1 EP0859491B1 (fr) 2006-01-11

Family

ID: 8229715

Family Applications (1)

Application Number Title Priority Date Filing Date
EP97400364A Expired - Lifetime EP0859491B1 (fr) 1997-02-18 1997-02-18 Procédé de réacheminement dans des réseaux à structure hiérarchique

Country Status (7)

Country Link
US (1) US6115753A (fr)
EP (1) EP0859491B1 (fr)
JP (1) JPH10243029A (fr)
AT (1) ATE315861T1 (fr)
AU (1) AU740234B2 (fr)
CA (1) CA2227111A1 (fr)
DE (1) DE69735084T2 (fr)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002032055A2 (fr) * 2000-10-09 2002-04-18 Asta Networks, Incorporated Regulation progressive et repartie de trafic reseau selectionne destine a un noeud de reseau
EP1216540A1 (fr) * 1999-09-06 2002-06-26 Alcatel Modele de reseau a distribution recurrente du trafic ip/donnees
WO2003030468A2 (fr) * 2001-09-27 2003-04-10 Siemens Aktiengesellschaft Procede et dispositif d'adaptation de chemins a commutation d'etiquettes dans des reseaux paquets
WO2005117360A1 (fr) * 2004-05-31 2005-12-08 Huawei Technologies Co., Ltd. Procede de selection de trajet pour application de voie d'acheminement de restriction de zone de liaison
WO2008117004A1 (fr) * 2007-03-23 2008-10-02 British Telecommunications Public Limited Company Localisation de défauts
EP2037625A1 (fr) * 2007-09-14 2009-03-18 Nokia Siemens Networks Oy Procédé de protection d'un service de réseau
WO2009138133A1 (fr) 2008-05-12 2009-11-19 Telefonaktiebolaget Lm Ericsson (Publ) Réacheminement de trafic dans un réseau de communications
CN101350761B (zh) * 2007-07-18 2011-12-28 华为技术有限公司 实现路径建立、计算的方法、装置及系统
US8102176B2 (en) 2005-08-31 2012-01-24 T2 Biosystems, Inc. NMR device for detection of analytes

Families Citing this family (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
SE9604491L (sv) * 1996-12-05 1998-06-06 Ericsson Telefon Ab L M Anordning och förfarande i överföringssystem
US6735176B1 (en) * 1998-03-26 2004-05-11 Nortel Networks Limited Dynamic bandwidth management and rerouting
JP3111986B2 (ja) * 1998-06-05 2000-11-27 日本電気株式会社 Atm通信網のノード装置及び故障警報通知方法
DE69840809D1 (de) * 1998-09-05 2009-06-18 Ibm Verfahren zur Erzeugung der optimalen complexen PNNI Knotendarstellungen in bezug auf begrenzte Kosten
US6487600B1 (en) * 1998-09-12 2002-11-26 Thomas W. Lynch System and method for supporting multimedia communications upon a dynamically configured member network
US6856627B2 (en) * 1999-01-15 2005-02-15 Cisco Technology, Inc. Method for routing information over a network
US7352692B1 (en) 1999-01-15 2008-04-01 Cisco Technology, Inc. Resource reservation scheme for path restoration in an optical network
US6990068B1 (en) 1999-01-15 2006-01-24 Cisco Technology, Inc. Virtual path restoration scheme using fast dynamic mesh restoration in an optical network
US6801496B1 (en) * 1999-01-15 2004-10-05 Cisco Technology, Inc. Network addressing scheme for reducing protocol overhead in an optical network
US7428212B2 (en) * 1999-01-15 2008-09-23 Cisco Technology, Inc. Best effort technique for virtual path restoration
US7764596B2 (en) 2001-05-16 2010-07-27 Cisco Technology, Inc. Method for restoring a virtual path in an optical network using dynamic unicast
US6912221B1 (en) 1999-01-15 2005-06-28 Cisco Technology, Inc. Method of providing network services
US6631134B1 (en) * 1999-01-15 2003-10-07 Cisco Technology, Inc. Method for allocating bandwidth in an optical network
US6532237B1 (en) * 1999-02-16 2003-03-11 3Com Corporation Apparatus for and method of testing a hierarchical PNNI based ATM network
US7050432B1 (en) * 1999-03-30 2006-05-23 International Business Machines Corporation Message logging for reliable multicasting across a routing network
US6456600B1 (en) * 1999-04-28 2002-09-24 3Com Corporation Complex node representation in an asynchronous transfer mode PNNI network
US6487204B1 (en) * 1999-05-12 2002-11-26 International Business Machines Corporation Detectable of intrusions containing overlapping reachabilities
ATE320130T1 (de) * 1999-08-06 2006-03-15 Ibm Adressenverwaltung in hierarchischen pnni-netzen
ATE299642T1 (de) * 1999-10-15 2005-07-15 Cit Alcatel Ein kommunikationsnetz zum austausch von datenpaketen von atm verbindungen sowie verfahrens und netzwerkknoten für dieses kommunikationsnetz
US6535990B1 (en) * 2000-01-10 2003-03-18 Sun Microsystems, Inc. Method and apparatus for providing fault-tolerant addresses for nodes in a clustered system
US6785726B1 (en) 2000-05-08 2004-08-31 Citrix Systems, Inc. Method and apparatus for delivering local and remote server events in a similar fashion
US6789112B1 (en) 2000-05-08 2004-09-07 Citrix Systems, Inc. Method and apparatus for administering a server having a subsystem in communication with an event channel
US6785713B1 (en) 2000-05-08 2004-08-31 Citrix Systems, Inc. Method and apparatus for communicating among a network of servers utilizing a transport mechanism
US7370223B2 (en) * 2000-09-08 2008-05-06 Goahead Software, Inc. System and method for managing clusters containing multiple nodes
US20020144143A1 (en) * 2000-12-15 2002-10-03 International Business Machines Corporation Method and system for network management capable of restricting consumption of resources along endpoint-to-endpoint routes throughout a network
US7518981B1 (en) 2001-02-23 2009-04-14 Cisco Technology, Inc. Method and system for a graceful reroute of connections on a link
US6836392B2 (en) * 2001-04-24 2004-12-28 Hitachi Global Storage Technologies Netherlands, B.V. Stability-enhancing underlayer for exchange-coupled magnetic structures, magnetoresistive sensors, and magnetic disk drive systems
US7477594B2 (en) * 2001-05-16 2009-01-13 Cisco Technology, Inc. Method for restoring a virtual path in an optical network using 1:N protection
US7000006B1 (en) * 2001-05-31 2006-02-14 Cisco Technology, Inc. Implementing network management policies using topology reduction
US8762568B1 (en) 2001-07-06 2014-06-24 Cisco Technology, Inc. Method and apparatus for inter-zone restoration
US20030014516A1 (en) * 2001-07-13 2003-01-16 International Business Machines Corporation Recovery support for reliable messaging
US6766482B1 (en) 2001-10-31 2004-07-20 Extreme Networks Ethernet automatic protection switching
US7082531B1 (en) 2001-11-30 2006-07-25 Cisco Technology, Inc. Method and apparatus for determining enforcement security devices in a network topology
US7636937B1 (en) 2002-01-11 2009-12-22 Cisco Technology, Inc. Method and apparatus for comparing access control lists for configuring a security policy on a network
US7209975B1 (en) * 2002-03-15 2007-04-24 Sprint Communications Company L.P. Area based sub-path protection for communication networks
US7180866B1 (en) * 2002-07-11 2007-02-20 Nortel Networks Limited Rerouting in connection-oriented communication networks and communication systems
US8224987B2 (en) * 2002-07-31 2012-07-17 Hewlett-Packard Development Company, L.P. System and method for a hierarchical interconnect network
US20050188108A1 (en) * 2002-10-31 2005-08-25 Volera, Inc. Enriched tree for a content distribution network
US7603481B2 (en) 2002-10-31 2009-10-13 Novell, Inc. Dynamic routing through a content distribution network
US7539771B2 (en) * 2003-06-06 2009-05-26 Microsoft Corporation Organizational locality in prefix-based structured peer-to-peer overlays
US7403485B2 (en) * 2004-08-16 2008-07-22 At&T Corp. Optimum construction of a private network-to-network interface
US20060067337A1 (en) * 2004-09-30 2006-03-30 Netravali Arun N Methods and devices for generating a hierarchical structure for the internet
JP4506387B2 (ja) * 2004-09-30 2010-07-21 ブラザー工業株式会社 情報通信システム、ノード装置、及びオーバーレイネットワーク形成方法等
US20070097883A1 (en) * 2005-08-19 2007-05-03 Yigong Liu Generation of a network topology hierarchy
US20070160069A1 (en) * 2006-01-12 2007-07-12 George David A Method and apparatus for peer-to-peer connection assistance
US7689648B2 (en) * 2007-06-27 2010-03-30 Microsoft Corporation Dynamic peer network extension bridge
JP4893533B2 (ja) * 2007-08-24 2012-03-07 コニカミノルタホールディングス株式会社 ネットワーク接続管理方法、および情報処理装置
US9294563B2 (en) * 2013-02-27 2016-03-22 Omnivision Technologies, Inc. Apparatus and method for level-based self-adjusting peer-to-peer media streaming
EP2804343B1 (fr) * 2013-05-16 2016-05-18 NTT DoCoMo, Inc. Procédé de mappage d'une demande de topologie de réseau sur un réseau physique, produit de programme informatique, système de communication mobile et plate-forme de configuration de réseau
KR102409158B1 (ko) 2016-05-10 2022-06-14 엘에스일렉트릭(주) 슬레이브 디바이스 제어 방법
WO2018127943A1 (fr) * 2017-01-04 2018-07-12 三菱電機株式会社 Dispositif de transmission et procédé d'ajout de trajet
CN111488088B (zh) * 2020-04-07 2022-05-06 Oppo广东移动通信有限公司 设备状态标识方法、装置及智能终端

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0639911A1 (fr) * 1993-08-18 1995-02-22 Koninklijke KPN N.V. Routage dans un réseau de communication hiérarchique

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08242240A (ja) * 1995-03-06 1996-09-17 Hitachi Ltd Atm交換機およびパス切替方法
EP0781007B1 (fr) * 1995-12-21 2003-03-12 Siemens Aktiengesellschaft Procédé pour produire des informations de routage dans un réseau de communication ATM
US5831975A (en) * 1996-04-04 1998-11-03 Lucent Technologies Inc. System and method for hierarchical multicast routing in ATM networks
US5940396A (en) * 1996-08-21 1999-08-17 3Com Ltd. Method of routing in an asynchronous transfer mode network
US5903559A (en) * 1996-12-20 1999-05-11 Nec Usa, Inc. Method for internet protocol switching over fast ATM cell transport
US5946316A (en) * 1997-01-17 1999-08-31 Lucent Technologies, Inc. Dynamic distributed multicast routing protocol

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0639911A1 (fr) * 1993-08-18 1995-02-22 Koninklijke KPN N.V. Routage dans un réseau de communication hiérarchique

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
HADAMA H ET AL: "VIRTUAL PATH RESTORATION TECHNIQUES BASED ON CENTRALIZED CONTROL FUNCTIONS", ELECTRONICS & COMMUNICATIONS IN JAPAN, PART I - COMMUNICATIONS, vol. 78, no. 3, 1 March 1995 (1995-03-01), pages 13 - 26, XP000527391 *
IWATA A ET AL: "ATM ROUTING ALGORITHMS WITH MULTIPLE QOS REQUIREMENTS FOR MULTIMEDIA INTERNETWORKING", IEICE TRANSACTIONS ON COMMUNICATIONS, vol. E79-B, no. 8, August 1996 (1996-08-01), pages 999 - 1007, XP000628636 *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1216540A1 (fr) * 1999-09-06 2002-06-26 Alcatel Modele de reseau a distribution recurrente du trafic ip/donnees
EP1216540A4 (fr) * 1999-09-06 2005-01-05 Cit Alcatel Modele de reseau a distribution recurrente du trafic ip/donnees
WO2002032055A2 (fr) * 2000-10-09 2002-04-18 Asta Networks, Incorporated Regulation progressive et repartie de trafic reseau selectionne destine a un noeud de reseau
WO2002032055A3 (fr) * 2000-10-09 2002-08-15 Asta Networks Inc Regulation progressive et repartie de trafic reseau selectionne destine a un noeud de reseau
US6801503B1 (en) 2000-10-09 2004-10-05 Arbor Networks, Inc. Progressive and distributed regulation of selected network traffic destined for a network node
WO2003030468A2 (fr) * 2001-09-27 2003-04-10 Siemens Aktiengesellschaft Procede et dispositif d'adaptation de chemins a commutation d'etiquettes dans des reseaux paquets
WO2003030468A3 (fr) * 2001-09-27 2003-08-21 Siemens Ag Procede et dispositif d'adaptation de chemins a commutation d'etiquettes dans des reseaux paquets
US7684420B2 (en) 2004-05-31 2010-03-23 Huawei Technologies Co., Ltd. Method for implementing cross-domain constraint routing
WO2005117360A1 (fr) * 2004-05-31 2005-12-08 Huawei Technologies Co., Ltd. Procede de selection de trajet pour application de voie d'acheminement de restriction de zone de liaison
US8704517B2 (en) 2005-08-31 2014-04-22 T2 Biosystems, Inc. NMR device for detection of analytes
US8624592B2 (en) 2005-08-31 2014-01-07 T2 Biosystems, Inc. NMR device for detection of analytes
US8344731B2 (en) 2005-08-31 2013-01-01 T2 Biosystems, Inc. NMR device for detection of analytes
US8334693B2 (en) 2005-08-31 2012-12-18 T2 Biosystems, Inc. NMR device for detection of analytes
US8310231B2 (en) 2005-08-31 2012-11-13 T2 Biosystems, Inc. NMR device for detection of analytes
US8310232B2 (en) 2005-08-31 2012-11-13 T2 Biosystems, Inc. NMR device for detection of analytes
US8102176B2 (en) 2005-08-31 2012-01-24 T2 Biosystems, Inc. NMR device for detection of analytes
US8213319B2 (en) 2007-03-23 2012-07-03 British Telecommunications Plc Fault location
WO2008117004A1 (fr) * 2007-03-23 2008-10-02 British Telecommunications Public Limited Company Localisation de défauts
CN101350761B (zh) * 2007-07-18 2011-12-28 华为技术有限公司 实现路径建立、计算的方法、装置及系统
WO2009033534A1 (fr) * 2007-09-14 2009-03-19 Nokia Siemens Networks Oy Procédé de protection d'un service de réseau
EP2037625A1 (fr) * 2007-09-14 2009-03-18 Nokia Siemens Networks Oy Procédé de protection d'un service de réseau
WO2009138133A1 (fr) 2008-05-12 2009-11-19 Telefonaktiebolaget Lm Ericsson (Publ) Réacheminement de trafic dans un réseau de communications
US9391874B2 (en) 2008-05-12 2016-07-12 Telefonaktiebolaget L M Ericsson (Publ) Re-routing traffic in a communications network

Also Published As

Publication number Publication date
CA2227111A1 (fr) 1998-08-18
EP0859491B1 (fr) 2006-01-11
US6115753A (en) 2000-09-05
JPH10243029A (ja) 1998-09-11
ATE315861T1 (de) 2006-02-15
DE69735084T2 (de) 2006-08-31
AU5300798A (en) 1998-08-20
AU740234B2 (en) 2001-11-01
DE69735084D1 (de) 2006-04-06

Similar Documents

Publication Publication Date Title
US6115753A (en) Method for rerouting in hierarchically structured networks
US6272139B1 (en) Signaling protocol for rerouting ATM connections in PNNI environments
US7180866B1 (en) Rerouting in connection-oriented communication networks and communication systems
US6122753A (en) Fault recovery system and transmission path autonomic switching system
US5805593A (en) Routing method for setting up a service between an origination node and a destination node in a connection-communications network
US9013984B1 (en) Method and apparatus for providing alternative link weights for failed network paths
EP1201100B1 (fr) Procede et dispositif de reacheminement rapide dans un reseau oriente connexion
US6026077A (en) Failure restoration system suitable for a large-scale network
US6549513B1 (en) Method and apparatus for fast distributed restoration of a communication network
US8059528B2 (en) Method of estimating restoration capacity in a network
US20020093954A1 (en) Failure protection in a communications network
US7058845B2 (en) Communication connection bypass method capable of minimizing traffic loss when failure occurs
US7012887B2 (en) Method for restoring diversely routed circuits
US7126921B2 (en) Packet network providing fast distribution of node related information and a method therefor
US8203931B2 (en) Communication network protection systems
NZ315056A (en) Determining an additional route in a fully or partly meshed communications network of nodes, comprising sending a route-finder signature from a node to a neighbouring node
US6421316B1 (en) Point-to-multipoint connection restoration
CN107547369B (zh) 流量切换方法及装置
Jajszczyk et al. Recovery of the control plane after failures in ASON/GMPLS networks
WO2012071909A1 (fr) Procédé et dispositif de récupération de services
US20030043427A1 (en) Method of fast circuit recovery using local restoration
US7590051B1 (en) Method and apparatus for redialing a connection on a communication network
CA2494875A1 (fr) Methode et dispositif de declenchement de l'acheminement des messages dans un reseau de communication
US7023793B2 (en) Resiliency of control channels in a communications network
US7855949B1 (en) Method and apparatus for bundling signaling messages for scaling communication networks

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE DE ES FR GB IT SE

AX Request for extension of the european patent

Free format text: AL;LT;LV;RO;SI

RAP3 Party data changed (applicant data changed or rights of an application transferred)

Owner name: ALCATEL

RAP3 Party data changed (applicant data changed or rights of an application transferred)

Owner name: ALCATEL

17P Request for examination filed

Effective date: 19990219

AKX Designation fees paid

Free format text: AT BE DE ES FR GB IT SE

RBV Designated contracting states (corrected)

Designated state(s): AT BE DE ES FR GB IT SE

17Q First examination report despatched

Effective date: 20010301

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE DE ES FR GB IT SE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060111

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060111

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: IT

Payment date: 20060228

Year of fee payment: 10

REF Corresponds to:

Ref document number: 69735084

Country of ref document: DE

Date of ref document: 20060406

Kind code of ref document: P

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060411

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20060422

ET Fr: translation filed
PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20061012

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20070216

Year of fee payment: 11

Ref country code: DE

Payment date: 20070216

Year of fee payment: 11

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20070212

Year of fee payment: 11

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20080218

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20081031

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080902

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080229

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20080218

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20070218