EP2030378A1 - Uninterrupted network control message generation during local node outages - Google Patents
- Publication number
- EP2030378A1 (application EP06771449A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- state machine
- messages
- cache
- network
- nodes
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/06—Management of faults, events, alarms or notifications
- H04L41/0677—Localisation of faults
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/28—Routing or path finding of packets in data switching networks using route fault recovery
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/58—Association of routers
Definitions
- the present invention generally relates to computer networks.
- the present invention relates to packet switching and control plane protocols.
- Packet switching networks include control plane protocols, such as the spanning tree protocol (STP), the generic attribute registration protocol (GARP) and its application to virtual local area networks, the GARP VLAN registration protocol (GVRP), the link aggregation control protocol (LACP), Y.1711 fast failure detection (FFD), and resource reservation protocol (RSVP) refresh.
- Control protocols are responsible, for example, for controlling the topology and the distribution of layer 2 (L2) traffic flows through the network. These protocols are realized as state machines running on each participating network element. Once a stable network configuration has been reached, the protocols tend to repeat the same messages they send to the network; different messages usually result from an operator- or defect-driven change in the network.
- a network element's failure to participate in the protocol leads to traffic rearrangements once a timeout period, ranging from a few milliseconds to a few seconds, is exceeded.
- traffic rearrangements involve the entire network.
- the packet control protocols fall into one of three categories: (1) unprotected; (2) protected via proprietary communication with the neighbor network elements prior to control plane outages; or (3) protected by standardized graceful restart technology, which requires interaction with neighbor network elements shortly before or after a protocol outage.
- the result will, in general, be that the traffic flow through the network is reconfigured. During the time of reconfiguration, traffic loss will occur in parts of the network that can be as large as the entire network domain.
- Exemplary embodiments of the present invention prevent packet network reconfiguration and associated traffic loss by providing uninterrupted network control message generation during local node outages.
- a message cache receives a number of sent messages from a protocol state machine for a local node and forwards them to other nodes in the network.
- the message cache also receives messages from the nodes.
- the message cache stores both the sent and received messages in a buffer.
- Upon failure of the protocol state machine, the message cache sends messages to and receives messages from the nodes, so long as the buffer remains valid.
- the messages may be sent periodically to the nodes.
- the message cache may determine whether the buffer is valid based on the messages in the buffer and messages received from the nodes after the failure.
- the method may also include switching to a standby protocol state machine, upon failure of the active protocol state machine, where the standby protocol state machine includes another buffer replicating the first buffer.
- Another embodiment is a computer readable medium storing instructions for performing this method for providing uninterrupted network control message generation during local node outages.
- Yet another embodiment is a system for providing uninterrupted network control message generation during local node outages, including a protocol state machine and a message cache.
- the protocol state machine generates messages.
- the message cache receives the messages from the protocol state machine and forwards them to nodes in the network.
- the message cache stores both the sent and received messages in one or more buffers.
- Upon failure of the protocol state machine, the message cache sends messages to and receives messages from the nodes, so long as the message cache remains valid.
- the message cache may include a timer for sending periodic messages to the nodes and a status control determining whether the message cache is valid.
- the system may include a worker node and a protection node, each having protocol state machines and message caches so that the protection node is able to become active when the worker node fails.
- the protection message cache may replicate the worker message cache, while the worker protocol state machine is active.
- Figure 1 is a block diagram illustrating an exemplary embodiment of a cache concept for a default case, when a state machine for a control plane protocol is active
- Figure 2 is a block diagram illustrating the exemplary embodiment of the cache concept of Figure 1 for a control plane failure case, when the protocol state machine is unavailable and the network state is stable;
- Figure 3 is a block diagram illustrating the exemplary embodiment of the cache concept of Figure 1 for a control plane failure case, when the protocol state machine is unavailable and the network state is unstable;
- Figure 4 is a block diagram illustrating an exemplary embodiment of a cache concept for a default case, when two instances of a state machine exist (worker and protection), the worker state machine being active, the protection state machine being standby, and each being associated with a cache;
- Figure 5 is a block diagram illustrating the exemplary embodiment of the cache concept of Figure 4 for an intermediate state when the worker state machine was active and failed, the protection state machine in standby state is recovering (from standby to full operation), but the network state is stable;
- Figure 6 is a block diagram illustrating the exemplary embodiment of the cache concept of Figure 4 when the protection state machine is active and the worker state machine is standby (after a switch over from worker to protection);
- Figure 7 is a chart showing selected state transitions and events on a time line for the exemplary embodiment of the cache concept of Figure 4;
- Figure 8 is a block diagram illustrating an exemplary embodiment of a distributed cache.
- the network element should maintain a stable network if the only cause of instability is the equipment protection switch, not only for the case of a single failure (e.g., a circuit pack defect) but also for operator-driven events such as manual switches.
- the network element should minimize network impact in case a network is already undergoing a reconfiguration, e.g., due to a remote network element failure, while simultaneously the protection switch is required due to local defect (double failure) or operator commands.
- Exemplary embodiments of the present invention achieve these goals not only for this L2 Ethernet example, but more broadly for any failure (e.g., hardware defect) causing a temporary unavailability of the local control plane of any network for many protocols.
- the network element behavior may be described by three states.
- In the first state, the state machine is fully operable and reacts to all requests.
- In the second state, the state machine is not available, but the cache maintains PDU sending until a change in the network happens, which invalidates the cache, or the state machine becomes operable.
- In the third state, both the state machine and the cache are unavailable, e.g., due to an ongoing reconfiguration in the network while the state machine is inoperable, or due to the protocol state machine and cache not being synchronized.
- Exemplary embodiments of the caching concept are derived from the observation that in a stable network, the spanning tree protocol nodes distribute identical PDUs to their neighbors repeatedly. A network defect or network change is detected if no PDUs have been received by a spanning tree node during three consecutive sending periods or if the content of a PDU differs from the preceding PDU. Thus, in an otherwise stable network topology, the activity of a spanning tree protocol machine can be suspended for an indefinite amount of time, as long as the periodic sending of PDUs is maintained. The caching concept exploits this fact: the network's demand for PDUs is satisfied from the cache, without all of the configuration, protocol state machines, and the like having to be started and synchronized.
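The detection rule just described (a change is assumed after three missed sending periods, or when a PDU's content differs from its predecessor) can be sketched as follows. This is an illustrative model only; `NeighborMonitor`, `HELLO_PERIOD`, and the 2-second period are assumptions, not taken from the text.

```python
# Illustrative sketch of the PDU change-detection rule described above.
HELLO_PERIOD = 2.0            # assumed STP hello time in seconds
TIMEOUT = 3 * HELLO_PERIOD    # three missed sending periods imply a change


class NeighborMonitor:
    """Tracks the last PDU seen from one neighbor port."""

    def __init__(self, now: float = 0.0):
        self.last_pdu = None      # content of the most recent PDU
        self.last_seen = now      # timestamp of the most recent PDU

    def on_pdu(self, pdu: bytes, now: float) -> bool:
        """Record a received PDU; return True if its content signals a change."""
        changed = self.last_pdu is not None and pdu != self.last_pdu
        self.last_pdu = pdu
        self.last_seen = now
        return changed

    def timed_out(self, now: float) -> bool:
        """True if no PDU arrived within three consecutive sending periods."""
        return (now - self.last_seen) > TIMEOUT
```

A monitor of this kind would run per neighbor port; either `on_pdu` returning True or `timed_out` returning True would be grounds for invalidating the cache.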
- the caching concept relaxes the recovery-speed requirements on all software components, except the one operating the cache (which is in hot standby). There are certain times when the cache can be considered valid for PDU sending and other times when the cache must be invalidated. Note that within a stable network topology, to some extent, even new services can be established (e.g., forwarding traffic can be modified in terms of new quality of service (QoS) parameters, new customers (distinguished by C-VLANs) can be added to a service provider (802.1ad) network, etc.).
- QoS quality of service
- a packet switched network is a network in which messages or fragments of messages (packets) are sent to their destination through the most expedient route, as determined by a routing algorithm.
- a control plane is a virtual network function used to set up, maintain, and terminate data plane connections. It is virtual in the sense that it is distributed over network nodes that need to interoperate to realize the function.
- a data plane is a virtual network path used to distribute data between nodes. Some networks may disaggregate control and forwarding planes as well.
- the term cache refers to any storage managed to take advantage of locality of access.
- a message cache stores messages. The message cache is instantiated, and its messages are kept in a synchronous state with the messages that the control plane sends to and receives from the network.
- the cache satisfies the demands of the network by sending the cached messages. Once the control plane recovers, the cache again follows the control operation and keeps in sync.
- Unstable networks are those where the traffic flow distribution has not reached a stable state, such as power on scenarios of a network element. Double failures are those scenarios where, in addition to a control plane outage in one network element, other network elements experience defects or operator driven reconfigurations.
- FIG. 1 illustrates an exemplary embodiment of a cache concept 100 for a default case, when a state machine 102 for a control plane protocol is active.
- the control plane protocol may be any kind of protocol, e.g., STP, VLAN registration protocol, LACP, Y.1711 FFD, or RSVP refresh.
- the protocol state machine 102 communicates (via intermediate hardware layers) with the neighboring nodes 106 and the rest of the network 108.
- this embodiment includes a message cache 104 interposed between the protocol state machine 102 and the network 108.
- the protocol state machine 102 sends messages to the message cache 104, which then forwards those messages to the network 108.
- the message cache 104 captures communication between the protocol state machine 102 and the network by storing both sent messages 110 and received messages 112 in buffers.
- the message cache 104 also includes a timer 114 and a status control 116.
- the state machine 102 may convey additional state information to the message cache 104.
- the contents of the message cache 104 vary depending on the control plane protocol implemented.
- the message cache 104 stores what is needed to temporarily serve the needs of the network 108 in the case of a failure of the state machine 102.
- Figure 2 illustrates the exemplary embodiment of the cache concept 100 of Figure 1 for a control plane failure case, when the protocol state machine 102 is unavailable and the network state is stable.
- the message cache 104 protects against situations where the protocol state machine is unavailable for any reason, by temporarily continuing to serve the network. For example, the processor holding the protocol state machine 102 may be rebooting.
- the message cache 104 generally continues to send messages from the buffers so that neighboring nodes 106 in the network 108 do not become aware that the protocol state machine 102 is unavailable. Communication to the neighboring nodes 106 is mimicked based on information stored in the message cache 104.
- the message cache 104 bridges at least a portion of the time that the protocol state machine 102 is unavailable.
- Protocols that periodically send the same message (e.g., hello message, update message) to the neighboring nodes 106 can easily be mimicked.
- the message cache 104 uses the timer 114 to send messages stored in the sent messages buffer 110 periodically in the same manner as the protocol state machine 102. As a result, the neighboring nodes 106 do not detect any change in the protocol state machine 102.
- the message cache 104 receives messages from neighboring nodes 106 and stores them in the received message buffer 112.
- the message cache 104 is able to detect any event or change (e.g., state change) in the network 108 that would make the message cache 104 invalid by examining the status control 116 and the received messages.
- the status control 116 determines whether the message cache 104 is valid or invalid. When the message cache 104 becomes invalid, it ceases sending messages because it cannot properly react to the event or change in the network 108.
- the message cache 104 is a simplified component that simulates at least a portion of the protocol state machine 102. An efficient implementation of the message cache 104 probably does not simulate the complete behavior of the protocol state machine 102.
- the degree of simplicity or complexity of the message cache 104 may vary depending on the control plane protocol implemented.
- the message cache may simulate transition between two or more states of the protocol state machine 102 with logic in the status control 116.
- the message cache may be implemented in hardware, firmware, or software (e.g., field-programmable gate array (FPGA) or application-specific integrated circuit (ASIC)).
- FPGA field-programmable gate array
- ASIC application-specific integrated circuit
- the message cache 104 continues to mimic the protocol state machine so long as it remains valid, which may be a short time or the entire time the protocol state machine is unavailable, depending on circumstances. Some protocols require updates in the milliseconds range, while others require updates in the seconds range. This embodiment is not limited to any particular protocol or degree of complexity of the status control logic 116.
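The interplay of the sent/received buffers, the timer 114, and the status control 116 across Figures 1-3 might be modeled as below. This is a minimal sketch under stated assumptions: all names, the dictionary-per-port layout, and the invalidate-on-changed-message rule are illustrative simplifications, not the patent's prescribed implementation.

```python
class MessageCache:
    """Minimal model of a message cache: sent/received buffers, a periodic
    resend hook, and a status-control validity flag (illustrative only)."""

    def __init__(self, send_period: float = 2.0):
        self.sent_buffer = {}       # port -> last message sent to the network
        self.recv_buffer = {}       # port -> last message from a neighbor
        self.send_period = send_period  # interval at which tick() would fire
        self.valid = True           # status control: may the cache serve PDUs?
        self.sm_available = True    # is the protocol state machine up?

    def forward(self, port: int, msg: bytes) -> bytes:
        """Active path (Figure 1): store a state-machine message, pass it on."""
        self.sent_buffer[port] = msg
        return msg

    def receive(self, port: int, msg: bytes) -> None:
        """Store a neighbor message; while the state machine is down, a
        changed message invalidates the cache (Figure 3), because the cache
        cannot properly react to the network event."""
        previous = self.recv_buffer.get(port)
        self.recv_buffer[port] = msg
        if not self.sm_available and previous is not None and msg != previous:
            self.valid = False

    def tick(self) -> list:
        """Periodic timer: while the state machine is down and the cache is
        valid (Figure 2), mimic it by resending the cached messages."""
        if self.sm_available or not self.valid:
            return []
        return list(self.sent_buffer.items())
```

In the Figure 2 case, `tick()` keeps neighbors unaware of the outage; in the Figure 3 case, `receive()` flips `valid` to False and `tick()` falls silent.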
- Figure 3 illustrates the exemplary embodiment of the cache concept 100 of Figure 1 for a control plane failure case, when the protocol state machine 102 is unavailable and the network state is unstable.
- the message cache 104 transitions into an invalid state.
- the status control 116 determines that some event occurred, making the network state unstable so that simulation of the protocol state machine 102 by the message cache 104 must stop according to the particular protocol implemented.
- the neighboring nodes 106 may become aware that the protocol state machine 102 has failed or is otherwise unavailable, as if no message cache 104 were present.
- Figure 4 illustrates an exemplary embodiment of a cache concept 400 for a default case, when two instances of a state machine exist (worker and protection), the worker state machine being active, the protection state machine being standby, and each being associated with a cache.
- This embodiment is a particular realization of a control plane protocol in a particular context; however, the invention is not limited to any particular implementation. In this embodiment, network availability is improved by caching messages.
- a blade server is a server chassis housing multiple thin, modular electronic circuit boards. Each blade is a server on a card, containing processors, memory, integrated network controllers, and input/output (I/O) ports. Blade servers increasingly allow the inclusion of functions, such as network switches and routers, as individual blades.
- the state machines (SMs) for two such blades are shown in Figure 4: a worker state machine 406 for a worker packet switch (PS) 402 and a protection state machine 408 for a protection PS 404.
- the worker state machine 406 is initially active, and the protection state machine 408 is initially standby and soon to become active.
- the two instances (active/standby) of the protocol state machine are located on different hardware (e.g., CPUs) but still within the same network node.
- This embodiment illustrates the worker state machine 406 and the protection state machine 408 for a spanning tree protocol (STP); however, the invention is not limited to any particular protocol.
- a spanning tree protocol provides a loop free topology for any bridged network.
- the IEEE standard 802.1D defines STP.
- the worker PS 402 and the protection PS 404 each include an STP state machine 406, 408 for a specific independent bridge partition (IBP) (e.g., one Ethernet switch instance) and associated timers.
- IBP independent bridge partition
- a network bridge (a/k/a network switch) connects multiple network segments (e.g., partitions, domains) and forwards traffic from one segment to another.
- These state machines 406, 408 are in a control plane and create messages for sending to neighboring nodes 106 in the rest of the network 108.
- a worker cache 410 is interposed between the worker state machine 406 and the network 108.
- Figure 4 illustrates an initial state where the worker state machine 406 is active, sending/receiving messages to/from the network 108 and storing messages in the worker cache 410.
- the worker cache 410 stores both the messages sent out 412 and the messages received 414.
- Bridge protocol data units (BPDUs) are the frames that carry the STP information.
- a switch sends a BPDU frame using a unique MAC address of a port itself as a source address and a destination address of the STP multicast address.
- a protection cache 418 is synchronized with the worker cache 410 by cache replication for the protection state machine 408, which is in a warm standby state, waiting to be started.
- Figure 5 illustrates the exemplary embodiment of the cache concept 400 of Figure 4 for an intermediate state when the worker state machine 406 was active and failed (e.g., software crash), the protection state machine 408 in standby state is recovering (from standby to full operation), but the network state is stable.
- This intermediate state occurs because there is a delay between the time when the worker state machine 406 fails and the time when the protection state machine 408 is ready (i.e., started after boot-up) to serve the network 108.
- the protection cache 418 is now the active cache and operates as described for Figure 2.
- Figure 6 illustrates the exemplary embodiment of the cache concept of Figure 4 when the protection state machine 408 is active and the worker state machine 406 is standby (after a switch-over from worker to protection). Comparing Figures 4 and 6, the protection state machine 408 in the scenario illustrated by Figure 6 behaves like the worker state machine 406 in the scenario illustrated by Figure 4, i.e., as the active state machine.
- the protection cache 418 stores both the messages sent out 420 and the messages received 422 and, thus, operates in the same way as in Figure 4. While the protection state machine 408 is active, messages in the protection cache 418 are replicated to the worker cache 410.
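The replication between the active and standby caches described in Figures 4 and 6 amounts to mirroring the active cache's state. A plain-copy sketch follows; `CacheState` and `replicate` are hypothetical names introduced for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class CacheState:
    """Illustrative container for one cache's buffers and validity flag."""
    sent: dict = field(default_factory=dict)      # messages sent out
    received: dict = field(default_factory=dict)  # messages received
    valid: bool = False


def replicate(active: CacheState, standby: CacheState) -> None:
    """Mirror the active cache into the standby cache (warm standby), so the
    standby cache can take over if the active state machine fails."""
    standby.sent = dict(active.sent)          # copy, don't alias, the buffers
    standby.received = dict(active.received)
    standby.valid = active.valid
```

Copying rather than aliasing matters here: after a failure the standby cache must hold a consistent snapshot, not a live reference into a possibly corrupted worker.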
- Figure 7 is a chart showing selected state transitions and events on a time line for the worker state machine 406, protection state machine 408, and protection cache 418 of Figure 4.
- Figure 7 illustrates various combinations of states when the protection cache 418 is valid and can be used temporarily to serve the needs of the network 108 and when the protection cache 418 is invalid and cannot be used.
- Figure 7 illustrates several scenarios. The first scenario is from T1 to T5, the second is from T5 to T9, and the third is from T9 to T12.
- the first scenario starts at T1.
- at T1, when the worker state machine 406 is in an active state and the protection state machine 408 is in a synchronizing state, the protection cache 418 is invalid and replicates the worker cache 410.
- the protection state machine 408 is initially in the synchronizing state, because the protection PS 404 blade has been added to the network element.
- when synchronization is completed at T2, the protection state machine 408 transitions to standby; upon the subsequent switch-over, it transitions from starting-up to active, and the protection cache 418 is updating (i.e., taking a passive role by continuing to synchronize with the now-active protocol state machine 408).
- the worker state machine 406 transitions from synchronizing to standby. After this is done, at T5, the protection state machine 408 is active and the worker state machine 406 is standby.
- the second scenario starts at T 5 .
- the worker state machine 406 is active, the protection state machine 408 is synchronizing, and the protection cache 418 is invalid.
- at T6, the protection state machine 408 transitions from synchronizing to standby, and the protection cache 418 becomes ready and inactive.
- a network reconfiguration occurs at T7 (e.g., a network element fails)
- the worker state machine 406 transitions from active to reconfiguring, and the protection cache 418 becomes invalid at T7.
- the worker state machine 406 handles the changing state in the network. After the network has stabilized at T8, the worker state machine 406 transitions from reconfiguring to active and the protection cache 418 becomes ready and inactive again.
- the third scenario starts at T 9 and differs from the second scenario in the ordering of the events.
- the worker state machine 406 is active, the protection state machine 408 is synchronizing, and the protection cache 418 is invalid.
- a network reconfiguration occurs during the interval from T9 to T11.
- the worker state machine 406 transitions from active to reconfiguring.
- the protection state machine 408 transitions from synchronizing to standby.
- the protection cache 418 does not transition from invalid to ready, inactive, until T12, when the worker state machine 406 transitions from reconfiguring to active.
- each independent bridge partition has its own cache implementation to guarantee independent operations and reconfigurations.
- each port has a certain port state. Depending on the state of the bridge, PDUs are sent, received, or both.
- the cache not only remembers the PDUs that are sent or received, but also that no PDUs have to be sent or received. Note that on some ports PDU sending/receiving will stop at some point during the network convergence process, i.e., the cache is filled only after the network converges.
- caches are kept in hot-standby mode.
- caches carry a flag indicating whether they are valid for PDU generation.
- Various situations may lead to invalidating the cache, e.g., ongoing reconfigurations in the network, provisioning which demands calculation of the spanning tree and changes in BPDUs, etc.
- the cache on the active PS is updated by incoming and outgoing PDUs.
- the cache on the standby PS is immediately invalidated when PDUs received from the network differ from the cache content or when PDUs generated by the state machine differ from the cache content. Note that both differences indicate a change in the network, which can only be handled by a working spanning tree state machine. Any replication of outdated PDUs may have a serious impact on customer traffic and on convergence of the spanning tree; for example, loops could be created. Note that it is the cache on the protection (standby) PS that is invalidated while the worker PS is active. In the case where the worker PS is failing and the protection PS is in transition from standby to active, the protection PS's cache is invalidated. Note that it may be necessary to change all port states to discarding when the cache is invalidated on a just-recovering PS.
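The invalidation check above reduces to a per-port comparison against the cached copy. The helper below is a hypothetical sketch of that check; the name and dict-based cache are assumptions.

```python
def standby_cache_valid(cached_pdus: dict, port: int, pdu: bytes) -> bool:
    """Return False (invalidate the standby cache) when an observed PDU
    differs from the cached copy for that port: only a working spanning tree
    state machine can react to such a change. A port with no cached PDU yet
    does not by itself invalidate the cache."""
    stored = cached_pdus.get(port)
    return stored is None or pdu == stored
```

The same comparison applies to both directions noted above: PDUs arriving from the network and PDUs produced by the state machine are each checked against the cache content.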
- the cache may be declared valid only when the topology has converged.
- an active state machine is required. Note that the end of the network convergence period can either be signaled by the protocol state machine or derived from a sufficiently long stable network state. This may require tracking changes in PDUs over several seconds, which adds to the time the system (network) is vulnerable to equipment protection switches, but only after a possibly traffic-affecting network reconfiguration has already happened. Note that after a switch-over in a stable network, the PDUs generated by the state machine after its recovery will be identical to those in the cache, i.e., in this situation, the topology can be considered converged when both of the following hold: the cache was active and was set to inactive by the first PDU sent from the state machine; and all PDUs in the cache have at least once been updated by PDUs from the state machine since the time the cache was deactivated.
- the cache may be declared valid only when the standby PS is fully synchronized.
- in the event that the protection PS status changes to active, PDUs are sent from the cache, provided it is flagged valid; PDU generation from the cache is then timer-triggered. To this end, an appropriate repetition timer (and distribution over the allowed period) is started.
- the state in which PDUs are created from the cache starts with the activation status, provided the cache is flagged valid. It ends when either different PDUs are received from the network or when the state machine has fully recovered. This can be recognized by the fact that the state machine starts sending PDUs to the network.
- the first PDU can be used as a trigger to stop the cache activity, because the state machine is capable of sending out all remaining PDUs in the required time interval.
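The hand-back trigger described above, where the first PDU emitted by the recovered state machine stops cache activity, can be sketched as follows; `CacheHandover` is an illustrative name, not from the source.

```python
class CacheHandover:
    """Sketch of the hand-back: the first PDU emitted by the recovered state
    machine deactivates PDU generation from the cache."""

    def __init__(self):
        self.cache_active = True   # cache is currently serving the network

    def on_state_machine_pdu(self, pdu: bytes) -> bytes:
        """First PDU from the recovered state machine stops cache activity;
        the state machine then sends all remaining PDUs itself within the
        required interval."""
        self.cache_active = False
        return pdu
```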
- Figure 8 illustrates an exemplary embodiment of a distributed cache. This example shows how the message cache may be distributed within a system, as opposed to a single message cache for the system.
- the periodic message cache 810 is distributed over two input/output (I/O) packs 802. The number of I/O packs is, of course, not limited to two.
- Each I/O pack 802 includes packet forwarding hardware and a board controller 808.
- a local node 804 includes packet forwarding hardware 812 and one or more central packet control plane processors 814.
- the central packet control plane processor 814 sends updates to the periodic message caches 810 on the board controllers 808 of the I/O packs 802.
- the periodic message cache 810 sends outgoing periodic messages via the packet forwarding hardware in the I/O pack 802.
- the periodic message caches 810 simulate a control plane protocol, when the control plane state machine is unavailable or fails.
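The update path from the central processor to the per-pack caches amounts to a fan-out of the current message set. The sketch below assumes dict-based caches and a hypothetical `push_updates` helper; neither name is from the source.

```python
def push_updates(central_messages: dict, io_pack_caches: list) -> None:
    """Sketch of the Figure 8 update path: the central packet control plane
    processor pushes the current periodic messages to the message cache on
    each I/O pack's board controller, keeping all caches in sync."""
    for cache in io_pack_caches:
        cache.update(central_messages)
```

With the caches kept current this way, each I/O pack can keep emitting periodic messages on its own ports even while the central control plane is unavailable.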
- Applicable protocols include any protocols that have periodic outgoing messages with constant contents, such as (R)STP, GVRP, RSVP, open shortest path first (OSPF), intermediate system-to-intermediate system (IS-IS), Y.1711 FFD, etc.
- message caches may be implemented broadly in many other ways for many different system architectures. For example, message caches may reside on several hardware blades, on several central processing units (CPUs), on several threads within one CPU, in FPGAs, ASICs, and the like.
- Embodiments of the present invention may be implemented in one or more computers in a network system.
- Each computer comprises a processor as well as memory for storing various programs and data.
- the memory may also store an operating system supporting the programs.
- the processor cooperates with conventional support circuitry such as power supplies, clock circuits, cache memory, and the like as well as circuits that assist in executing the software routines stored in the memory.
- the computer also contains input/output (I/O) circuitry that forms an interface between the various functional elements communicating with the computer.
- Embodiments of the present invention may also be implemented in hardware or firmware, e.g., in FPGAs or ASICs.
- the present invention may be implemented as a computer program product wherein computer instructions, when processed by a computer, adapt the operation of the computer such that the methods and/or techniques of the present invention are invoked or otherwise provided.
- Instructions for invoking the inventive methods may be stored in fixed or removable media, transmitted via a data stream in a broadcast media or other signal-bearing medium, and/or stored within a working memory within a computing device operating according to the instructions.
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
Description
Claims
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2006/020681 WO2007139542A1 (en) | 2006-05-30 | 2006-05-30 | Uninterrupted network control message generation during local node outages |
Publications (2)
Publication Number | Publication Date |
---|---|
EP2030378A1 true EP2030378A1 (en) | 2009-03-04 |
EP2030378A4 EP2030378A4 (en) | 2010-01-27 |
Family
ID=38778944
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP06771449A Withdrawn EP2030378A4 (en) | 2006-05-30 | 2006-05-30 | Uninterrupted network control message generation during local node outages |
Country Status (5)
Country | Link |
---|---|
EP (1) | EP2030378A4 (en) |
JP (1) | JP2009539305A (en) |
KR (1) | KR101017540B1 (en) |
CN (1) | CN101461196A (en) |
WO (1) | WO2007139542A1 (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101616028B (en) * | 2009-06-25 | 2012-02-29 | 中兴通讯股份有限公司 | Method and system for uninterrupted upgrade of communication program service |
JP5728783B2 (en) * | 2011-04-25 | 2015-06-03 | 株式会社オー・エフ・ネットワークス | Transmission apparatus and transmission system |
CN102571425B (en) | 2011-12-28 | 2014-09-17 | 杭州华三通信技术有限公司 | Method and device for smoothly restarting border gateway protocol |
KR101954310B1 (en) | 2014-01-17 | 2019-03-05 | 노키아 솔루션스 앤드 네트웍스 게엠베하 운트 코. 카게 | Controlling of communication network comprising virtualized network functions |
US9860336B2 (en) | 2015-10-29 | 2018-01-02 | International Business Machines Corporation | Mitigating service disruptions using mobile prefetching based on predicted dead spots |
US10534598B2 (en) | 2017-01-04 | 2020-01-14 | International Business Machines Corporation | Rolling upgrades in disaggregated systems |
US11153164B2 (en) | 2017-01-04 | 2021-10-19 | International Business Machines Corporation | Live, in-line hardware component upgrades in disaggregated systems |
CN109889367B (en) * | 2019-01-04 | 2021-08-03 | 烽火通信科技股份有限公司 | Method and system for realizing LACP NSR in distributed equipment not supporting NSR |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020089990A1 (en) * | 2001-01-11 | 2002-07-11 | Alcatel | Routing system providing continuity of service for the interfaces associated with neighboring networks |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH11136258A (en) * | 1997-11-04 | 1999-05-21 | Fujitsu Ltd | Cell read synchronization control method |
US7050187B1 (en) * | 2000-04-28 | 2006-05-23 | Texas Instruments Incorporated | Real time fax-over-packet packet loss compensation |
US6757248B1 (en) * | 2000-06-14 | 2004-06-29 | Nokia Internet Communications Inc. | Performance enhancement of transmission control protocol (TCP) for wireless network applications |
JP4021841B2 (en) * | 2003-10-29 | 2007-12-12 | 富士通株式会社 | Control packet processing apparatus and method in spanning tree protocol |
JP3932994B2 (en) * | 2002-06-25 | 2007-06-20 | 株式会社日立製作所 | Server handover system and method |
US7499401B2 (en) * | 2002-10-21 | 2009-03-03 | Alcatel-Lucent Usa Inc. | Integrated web cache |
US20050201375A1 (en) | 2003-01-14 | 2005-09-15 | Yoshihide Komatsu | Uninterrupted transfer method in IP network in the event of line failure |
US7355975B2 (en) * | 2004-04-30 | 2008-04-08 | International Business Machines Corporation | Method and apparatus for group communication with end-to-end reliability |
JP2005341282A (en) * | 2004-05-27 | 2005-12-08 | Nec Corp | System changeover system |
2006
- 2006-05-30 CN CNA2006800547591A patent/CN101461196A/en active Pending
- 2006-05-30 EP EP06771449A patent/EP2030378A4/en not_active Withdrawn
- 2006-05-30 JP JP2009513106A patent/JP2009539305A/en active Pending
- 2006-05-30 WO PCT/US2006/020681 patent/WO2007139542A1/en active Application Filing
- 2006-05-30 KR KR1020087029207A patent/KR101017540B1/en not_active IP Right Cessation
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020089990A1 (en) * | 2001-01-11 | 2002-07-11 | Alcatel | Routing system providing continuity of service for the interfaces associated with neighboring networks |
Non-Patent Citations (1)
Title |
---|
See also references of WO2007139542A1 * |
Also Published As
Publication number | Publication date |
---|---|
EP2030378A4 (en) | 2010-01-27 |
CN101461196A (en) | 2009-06-17 |
JP2009539305A (en) | 2009-11-12 |
WO2007139542A1 (en) | 2007-12-06 |
KR101017540B1 (en) | 2011-02-28 |
KR20090016676A (en) | 2009-02-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101099822B1 (en) | Redundant routing capabilities for a network node cluster | |
US7304940B2 (en) | Network switch assembly, network switching device, and method | |
US7453797B2 (en) | Method to provide high availability in network elements using distributed architectures | |
US8873377B2 (en) | Method and apparatus for hitless failover in networking systems using single database | |
CN102439903B (en) | Method, device and system for realizing disaster-tolerant backup | |
US6941487B1 (en) | Method, system, and computer program product for providing failure protection in a network node | |
US7269133B2 (en) | IS-IS high availability design | |
US7787365B1 (en) | Routing protocol failover between control units within a network router | |
WO2007139542A1 (en) | Uninterrupted network control message generation during local node outages | |
US20080225699A1 (en) | Router and method of supporting nonstop packet forwarding on system redundant network | |
US20110134931A1 (en) | Virtual router migration | |
US20050050136A1 (en) | Distributed and disjoint forwarding and routing system and method | |
JPH11154979A (en) | Multiplexed router | |
JP2005503055A (en) | Method and system for implementing OSPF redundancy | |
JP5941404B2 (en) | Communication system, path switching method, and communication apparatus | |
JP2005160000A (en) | Apparatus and method for processing control packet in spanning tree protocol | |
WO2011120423A1 (en) | System and method for communications system routing component level high availability | |
JP2006246152A (en) | Packet transfer apparatus, packet transfer network, and method for transferring packet | |
US7184394B2 (en) | Routing system providing continuity of service for the interfaces associated with neighboring networks | |
CN113992571B (en) | Multipath service convergence method, device and storage medium in SDN network | |
US11979286B1 (en) | In-service software upgrade in a virtual switching stack | |
KR100917603B1 (en) | Routing system with distributed structure and control method for non-stop forwarding thereof | |
JP2015138987A (en) | Communication system and service restoration method in communication system |
Legal Events
Code | Title | Description |
---|---|---|
PUAI | Public reference made under Article 153(3) EPC to a published international application that has entered the European phase | Free format text: ORIGINAL CODE: 0009012 |
17P | Request for examination filed | Effective date: 20081230 |
AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU LV MC NL PL PT RO SE SI SK TR |
AX | Request for extension of the European patent | Extension state: AL BA HR MK YU |
RAP3 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: LUCENT TECHNOLOGIES NETWORK SYSTEMS GMBH; Owner name: LUCENT TECHNOLOGIES INC. |
A4 | Supplementary search report drawn up and despatched | Effective date: 20100104 |
RIC1 | Information provided on IPC code assigned before grant | Ipc: H04L 12/24 20060101 AFI20091223BHEP |
17Q | First examination report despatched | Effective date: 20110418 |
RAP1 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: LUCENT TECHNOLOGIES NETWORK SYSTEMS GMBH; Owner name: ALCATEL-LUCENT DEUTSCHLAND AG |
RAP1 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: ALCATEL-LUCENT DEUTSCHLAND AG |
DAX | Request for extension of the European patent (deleted) | |
STAA | Information on the status of an EP patent application or granted EP patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
18D | Application deemed to be withdrawn | Effective date: 20121204 |