US20060010441A1 - Network management system - Google Patents

Network management system

Info

Publication number
US20060010441A1
US20060010441A1 (application US10/509,133)
Authority
US
United States
Prior art keywords
node
nodes
master module
master
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/509,133
Inventor
Elke Jahn
Niraj Agrawal
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BENGURION UNIVERSITY OF NEGEV
Original Assignee
BENGURION UNIVERSITY OF NEGEV
LIGHTMAZE SOLUTIONS AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BENGURION UNIVERSITY OF NEGEV, LIGHTMAZE SOLUTIONS AG filed Critical BENGURION UNIVERSITY OF NEGEV
Assigned to BENGURION UNIVERSITY OF THE NEGEV reassignment BENGURION UNIVERSITY OF THE NEGEV ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: AGRAWAL, NIRAL, JAHN, ELKE
Publication of US20060010441A1 publication Critical patent/US20060010441A1/en
Assigned to LIGHTMAZE SOLUTIONS AG reassignment LIGHTMAZE SOLUTIONS AG CORRECTIVE ASSIGNMENT TO CORRECT THE CONVEYING PARTY NAME AND THE RECEIVING PARTY NAME, PREVIOUSLY RECORDED AT REEL 017016, FRAME 0051. Assignors: JAHN, ELKE, AGRAWAL, NIRAJ
Abandoned legal-status Critical Current

Classifications

    • H04J 14/02: Wavelength-division multiplex systems
    • H04J 14/0283: WDM ring architectures
    • H04J 14/0284: WDM mesh architectures
    • H04J 14/0297: Optical equipment protection (under H04J 14/0287, Protection in WDM systems)
    • H04L 41/042: Network management architectures comprising distributed management centres cooperatively managing the network
    • H04L 41/044: Network management architectures comprising hierarchical management structures
    • H04Q 11/0062: Network aspects of selecting arrangements for multiplex systems using optical switching
    • H04Q 2011/0077: Labelling aspects, e.g. multiprotocol label switching [MPLS], G-MPLS, MPAS
    • H04Q 2011/0079: Operation or maintenance aspects
    • H04Q 2011/0081: Fault tolerance; Redundancy; Recovery; Reconfigurability
    • H04Q 2011/0088: Signalling aspects

Definitions

  • a node preferably is always initialised to be a Deputy Master node. The following protocol is used to determine which node acts as the Master at a given time: 1) All nodes periodically exchange Heartbeat messages with each other; the contents of these messages are used to determine which node is the master node and also allow the various Deputy Master nodes to monitor the status of the master node. 2) A Heartbeat message contains the node ID of the sender node as well as its status, either Master or Deputy Master. 3) The receiving node first examines the status of all the Heartbeat messages received within a certain time interval. If it receives a Master status in any of the received Heartbeat messages, it remains in the same state as before without altering its status. If it does not receive a Master status in any of its Heartbeat messages, it compares its ID with the other received IDs. If its ID is smaller than the received IDs, it assumes the role of Master; otherwise it remains in the same state as before. As an alternative, if on start-up a node does not receive a Heartbeat message from other nodes after sending a configurable number of Heartbeat messages, it assumes the role of the Master. 4) If and only if the existing Master fails does a new Master election take place. Master election is done by processing Heartbeat messages as discussed above.
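The election protocol above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's implementation; the class name and message format are hypothetical.

```python
MASTER, DEPUTY = "Master", "Deputy-Master"

class ElectionNode:
    """Sketch of one node taking part in the Heartbeat-based election."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.status = DEPUTY  # every node is initialised as a Deputy Master

    def heartbeat(self):
        # A Heartbeat message carries the sender's node ID and its status.
        return {"id": self.node_id, "status": self.status}

    def process_interval(self, received):
        """Process all Heartbeat messages received within one time interval."""
        if any(hb["status"] == MASTER for hb in received):
            return self.status  # a Master already exists: keep current state
        other_ids = [hb["id"] for hb in received]
        # No Master seen: the node with the smallest ID assumes the role.
        # (An empty `received` models the start-up case in which no
        # Heartbeat arrives after a configurable number of attempts.)
        if not other_ids or self.node_id < min(other_ids):
            self.status = MASTER
        return self.status
```

For example, in a two-node network where no Master exists yet, the node with the smaller ID assumes the Master role, and the other node, on seeing a Master status in a subsequent Heartbeat, remains a Deputy Master.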
  • the master module ( 3 ) in master node takes over the operations of the network and performs the network management functions.
  • the change of role of a particular node from a Deputy Master node to a master node should be performed as quickly and as seamlessly as possible to have minimum disruption in network operation.
  • the master node and Deputy Master nodes perform additional functions for fault-tolerance. These include among other functions database synchronization between master node and Deputy Master nodes.
  • the node manager corresponding to a first architecture is shown in FIG. 1 .
  • the master node and all the Deputy Master nodes are connected through the supervisory channel configured by NetProc or an equivalent software module.
  • using such supervisory connections ( 10 ) between each pair of nodes, each node module in each node sends all node-related information to the master node and to all the Deputy Master nodes, as shown in FIG. 2 for a four-node network. Exchange of heartbeat messages and related processing is done as discussed previously in this document.
  • the databases in the master node and the Deputy Master nodes need to be synchronized at all times. This ensures correct operation when the master node fails and a new master node is elected. After a new master node is elected, it sends the current dump (state) of the database to all other Deputy Master nodes before resuming its duty as a master node. This makes sure that the databases in all nodes are synchronized before the nodes begin their management function.
  • both the master node and all Deputy Master nodes receive messages from node modules in all nodes. Thus, the master module in each node updates the database located in that particular node.
  • the difference in functionality between the Master and a Deputy Master is that a node acting as Deputy Master does not send any messages to other nodes but only receives all node-related messages.
  • the primary function of a Deputy Master node in this architecture is to perform the database synchronization. When a node comes up again after a failure and a master node already exists, the restored node requests the current dump of the database from the master node.
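The full-dump synchronization step can be sketched as follows. This is a simplified illustration; the class and function names are assumptions, and the database is modelled as a plain dictionary.

```python
class MasterDatabase:
    """Sketch of the network database held in each master/Deputy Master node."""

    def __init__(self):
        self.records = {}

    def dump(self):
        # The newly elected master produces a full snapshot of its state.
        return dict(self.records)

    def load(self, snapshot):
        # A Deputy Master (or a node restored after failure) replaces its
        # local copy with the dump received from the current master node.
        self.records = dict(snapshot)

def synchronize(master_db, deputy_dbs):
    # After election, the master pushes its dump to all Deputy Master
    # nodes before resuming its duty as a master node.
    snapshot = master_db.dump()
    for db in deputy_dbs:
        db.load(snapshot)
```

The same `load` path covers the recovery case: a restored node simply requests the current dump from the existing master and loads it.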
  • In the second architecture, there is an additional software module running at a node, namely the master controller, as shown in FIG. 3.
  • the so-called master controller ( 7 ) is a module which could be implemented in software and/or hardware.
  • the Node Module ( 2 ) and master controller ( 7 ) are active in all nodes of the network. However, the master module ( 3 ) is active only in the master node. In this architecture, it is the master controller ( 7 ) which takes part in master-election and role-change related steps, e.g., database synchronization.
  • the Node Module ( 2 ) and master controller ( 7 ) are started in each node.
  • the master module ( 3 ) is not started initially.
  • the master controllers ( 7 ) in the various nodes elect a particular node as the master node by exchanging and processing heartbeat messages among each other. Thereafter, the master controller starts the master module ( 3 ) only in the master node (cf. FIG. 3 ).
  • the master controller ( 7 ) in each node is connected to all other master controllers ( 7 ) in other nodes through the supervisory channel.
  • the Node Module in different nodes is connected only to the master module ( 3 ) as shown in FIG. 4 through a reduced set of supervisory connections ( 10 ).
  • when a new master node is elected, the master controller ( 7 ) in that node dynamically and automatically re-configures the connections between the node modules ( 2 ) and the new master module ( 3 ), as shown in FIG. 5.
  • This dynamic reconfiguration is done using NetProc or other similar software modules and the master controller present in each node.
  • the master controller sends a re-configure message to NetProc in each node, with the node ID of the new master node.
  • the NetProc in each node on receiving the message re-configures the connections so that all the nodes have a logical supervisory connection to the new master node.
  • the nodes can also be statically connected as in architecture 1 and the dynamic reconfiguration step can be avoided.
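The re-configure message flow described above can be sketched as follows. The `NetProc` stub and the `reconfigure` function are assumptions for illustration, not the patent's actual API; the essential point is that after election every node holds a logical supervisory connection to the new master node.

```python
class NetProc:
    """Stub of a NetProc-like module holding one node's logical
    supervisory connections (an assumed representation)."""

    def __init__(self):
        self.connections = set()

def reconfigure(netprocs, new_master_id):
    # The master controller sends a re-configure message carrying the
    # node ID of the new master node; the NetProc in each node then
    # rebuilds its connections so that all nodes reach the new master.
    for node_id, np in netprocs.items():
        if node_id == new_master_id:
            np.connections = {n for n in netprocs if n != new_master_id}
        else:
            np.connections = {new_master_id}
```

This yields the reduced set of supervisory connections of FIG. 5: every non-master node is connected only to the master node.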
  • the master controller ( 7 ) does the database synchronization between a pair of nodes. After a new master node is elected, the master controller ( 7 ) sends the current dump (state) of the database to all the master controllers ( 7 ) in the Deputy Master nodes before starting the master module processes in the master node. This makes sure that the databases in all nodes are synchronized before the nodes begin the management function.
  • the master module ( 3 ) informs the master controller ( 7 ) of any changes in database and these changes are sent to all other master controllers ( 7 ) in other nodes in the network.
  • the master controllers ( 7 ) in the other Deputy Master nodes, on receiving the changes from the master node, update their local databases. This keeps the databases synchronized with the master node. When a node comes up again after a failure and a master node already exists, the restored node requests the current dump of the database from the master node.
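The incremental change propagation between master controllers can be sketched as follows. Names and the key/value database model are assumptions; the sketch only illustrates the reporting path (master module, to master controller, to the controllers in the Deputy Master nodes).

```python
class MasterController:
    """Sketch of the master controller's incremental database sync."""

    def __init__(self, db):
        self.db = db      # this node's local copy of the network database
        self.peers = []   # master controllers in the Deputy Master nodes

    def on_local_change(self, key, value):
        # The master module informs the controller of a database change;
        # the controller applies it locally and forwards it to every peer.
        self.db[key] = value
        for peer in self.peers:
            peer.on_remote_change(key, value)

    def on_remote_change(self, key, value):
        # A Deputy Master node applies a change received from the master.
        self.db[key] = value
```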

Abstract

There is provided a network management system and a method of managing a network, especially an optical network, that includes a plurality of nodes interconnected in an arbitrary topology so as to be capable of carrying traffic between selected nodes. The method includes the steps of providing a supervisory network by means of supervisory channels between the nodes, providing a node manager including one or more software modules in each node, establishing supervisory connections over one or more of the supervisory channels between selected nodes through which the node manager communicates with other node managers in other nodes, providing a node module in each node manager that provides an interface to the hardware settings of the node, providing a master module in at least one node manager, establishing supervisory connections over one or more supervisory channels between selected nodes through which the master module communicates with the node modules, and amending and/or monitoring hardware settings in selected nodes with the respective node module of the node. Controlling the amendments carried out by the node modules and/or processing of the monitored hardware settings is carried out by the master module.

Description

  • The present invention belongs to the field of communication systems, especially of optical communication networks, more particularly, to dense wavelength division multiplexed optical networks with arbitrary topology, e.g., point-to-point, ring, mesh, etc.
  • The soaring demand for virtual private networks, storage area networking, and other new high-speed services is driving bandwidth requirements that test the limits of today's optical communications systems. In an optical network, a node is physically linked to another using one or more optical fibres (cf. FIG. 1). Each of the fibres can carry as many as one hundred or more communication channels, i.e., wavelengths in WDM (Wavelength Division Multiplex) or Dense WDM (DWDM) systems. Thus, for example, for a node with three neighbours, as many as three hundred or more wavelength signals originate, terminate, or pass through a given node. Each of the wavelengths may carry signals with data rates up to 10 Gbit/s or even higher. Thus each fibre carries several terabits of information. This is a tremendous amount of bandwidth and information that must be managed automatically, reliably, rapidly, and efficiently. It is evident that a large amount of bandwidth needs to be provisioned. Fast and automatic provisioning enables network bandwidth to be managed on demand in a flexible, dynamic, and efficient manner. Another very important feature of such DWDM networks is reliability or survivability in the presence of a failure such as an inadvertent fibre-cut, various types of hardware and software faults, etc. In such networks, in case of a failure, the user data is automatically rerouted to its destination via an alternate or restoration path.
  • In general, such networks are managed by a network management system which is adapted specifically to a single existing network. However, when the existing network, especially its topology, is changed, the network management system must be reconfigured by manually adapting the hardware and software of several nodes. This is expensive and time-consuming work, especially in the case of meshed networks. Furthermore, the known network management systems cannot be implemented in networks with an arbitrary topology without manual adaptation of the network management system.
  • It is an object of the present invention to overcome the disadvantages of the state of the art and especially to provide a network management system that could be implemented in a network with an arbitrary topology, and which provides a highly flexible and reliable managing of the network.
  • The object of the invention is realized by a method according to claim 1 and a network management system according to claim 11. The sub-claims provide preferable embodiments of the present invention.
  • In the network, especially the optical network, which is managed by the method according to the present invention, multiple nodes are interconnected in an arbitrary topology. The management system is able to manage the whole network and provides intelligence for efficient and optimal use of network resources. The management system preferably comprises various software modules in each node. One software module is a node manager, for example, which takes care of the network management activities. A node manager in each node communicates with other node managers in other nodes through the supervisory network. The supervisory network is formed with the help of supervisory channels between the various nodes of the network. A physical supervisory channel between two nodes in the network might be carried over optical fibre or other types of transport media. Node managers in different nodes might communicate over logical supervisory connections established over one or more physical supervisory channels between various nodes. These logical supervisory connections might be configured manually or with the help of software modules in one or more network nodes. In a preferred embodiment this is done by using a software module called NetProc, which is described in the application PCT/EP03/102704, which has been filed on 14 Mar. 2003 by the same applicant, and which is incorporated by reference into the present application. The NetProc provides the following supervisory network features:
      • 1) Supervisory connection establishment between two network nodes. Each node can have one or more NetProcs. This architecture allows establishment of a direct logical supervisory connection between any arbitrary pair of nodes interconnected by the supervisory channel. It also provides fault-tolerant or redundant connections through two or more paths. In a preferred embodiment these paths are node and link disjoint, as will be described in more detail. The management system uses NetProc's services to exchange messages with other nodes. Any supervisory data is sent through one or several or all of the available redundant connections. Each message is given a sequence number. On the receiving end the duplicate messages are discarded and only one, for example the first, of the arriving messages is passed on to the supervisory management layer.
      • 2) Hardware fault and software error detection on all paths of the supervisory channel and the associated auto-recovery to re-establish the supervisory channel. Error checking in the data transmission is done by using sequence numbers on the messages. The status of each connection is monitored by sending keep-alive messages at regular intervals. In the event that a reply to a keep-alive message is not received within a specified time, the connection is explicitly closed and the two nodes try to re-establish the connection between themselves. The closing of connection(s) and attempts to re-establish them are done automatically.
      • 3) Relaying information reliably to one or more network managers running on one or more network nodes or other work stations.
      • 4) The management of the network is carried out by a node manager present in each node or at one or more nodes or other centralized locations. The various node managers communicate using the NetProc.
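The duplicate suppression and keep-alive monitoring of features 1) and 2) can be sketched as follows. This is a simplified illustration; the data structures and names are assumptions, and real NetProc state (per-connection windows, timers) is omitted.

```python
class RedundantReceiver:
    """Sketch of sequence-number based duplicate suppression on the
    receiving end of redundant supervisory connections."""

    def __init__(self):
        self.seen = set()

    def receive(self, message):
        # Each message carries a sequence number; the copy arriving first
        # is passed on to the supervisory management layer and later
        # copies arriving over the other redundant paths are discarded.
        seq = message["seq"]
        if seq in self.seen:
            return None  # duplicate: drop it
        self.seen.add(seq)
        return message["payload"]

def connection_alive(last_reply_time, now, timeout):
    # A connection is explicitly closed when no reply to a keep-alive
    # message arrives within the specified time; the two nodes then try
    # to re-establish the connection automatically.
    return (now - last_reply_time) <= timeout
```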
  • A preferred supervisory network has the flexibility to be configured by standard protocols like OSPF, MPLS or by using NetProc. Following features apply:
      • The supervisory network topology is automatically discovered with the help of OSPF. Each node manager executes a single OSPF and the OSPF in each node is configured to talk with neighbouring nodes.
      • The nodes discover their neighbours and exchange Link State Advertisements. Once the Link State adjacencies are formed and the OSPF converges on the topology, each node possesses the routing table and is able to reach other nodes over the supervisory channel.
      • The status of the supervisory channel is monitored by OSPF and in the event of link failure the alternate routes are configured. Fault-tolerant connections are set up using two or more Label Switched Paths over two or more disjoint paths to each destination. Thus a signalling message sent to a node travels through multiple Label Switched Paths and reaches its appropriate destination.
  • According to the present invention a node module is provided in each node manager. The module could be implemented in the form of software, hardware, or both. The node module in each node provides an interface to the hardware of the corresponding node. Through each node module the hardware settings of the respective node can be amended and/or monitored.
  • At least one node manager is provided with a master module. The master module could also be implemented in the form of software and/or hardware. The master module communicates through supervisory connections with the various node modules and controls the various amendments carried out by the different node modules and/or processes the hardware settings of the different nodes monitored by the corresponding node modules.
  • Preferably, not only one but several or all of the node managers in the different nodes comprise a master module. Preferably, in this case the master module has an active state and a passive state which the master module might be set to. Further preferably, at a given time only one master module is allowed to be set to the active state. Such a master module might be called the Master and all the other master modules, which are in a passive state, might be called Deputy Master (DM). Only the master module that is in the active state (Master) controls the different amendments of hardware settings carried out by the node modules and processes the hardware settings monitored by the node modules.
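The active/passive rule can be illustrated with a small sketch (hypothetical names; the patent does not prescribe an API): only the master module in the active state, the Master, carries out control of hardware-setting amendments, while a passive module, a Deputy Master, ignores such requests.

```python
class MasterModule:
    """Sketch of a master module with an active and a passive state."""

    def __init__(self):
        self.active = False  # passive (Deputy Master) by default
        self.applied = []    # amendments this module has controlled

    def request_amendment(self, node_id, setting, value):
        # Only the active master module (the Master) controls amendments
        # of hardware settings carried out by the node modules.
        if not self.active:
            return False  # a Deputy Master takes no control action
        self.applied.append((node_id, setting, value))
        return True
```

At any given time only one module in the network would have `active` set to True, which is what the election protocol enforces.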
  • Preferable embodiments of the present invention will be described in the following with reference to the accompanying drawings, in which
  • FIG. 1 shows a preferable first architecture of a node manager;
  • FIG. 2 shows the established supervisory connections between corresponding different nodes;
  • FIG. 3 shows a second preferable architecture of a node manager with an attached master controller;
  • FIGS. 4 and 5 show reduced supervisory connections used in the shown second architecture.
  • The functions of a node manager (1) according to the embodiment shown in FIG. 1 are separated into two main modules. The node module (2) takes care of the activities local to a node. Every node has a node module (2), which connects to one or more master modules (3) located at the same node or other nodes using the supervisory channel. Among other things, the node module (2) provides an interface to the hardware and allows the master module (3) to make any changes or informs the master module (3) of any changes in the hardware properties. The second module, called the master module (3), is present in one or several or all nodes. The master module (3) includes the MasterProc (5) for global and local network management, the DBProc for database-related tasks and features, and an interface to the GUI (4) to support the hardware element management and local and global network management. This is shown in FIG. 1. Here, the term “Proc” denotes one or more software modules with predetermined functionality.
  • In addition to the node manager (1), there is a Graphical User Interface (GUI), which is used to input (or enter), output (or view), and modify various parameters and/or properties related to the node hardware. The GUI is also used to input (or enter), output (or view), and modify various parameters and/or properties related to the local and/or global network management. The GUI is connected to the master module (3) (cf. FIG. 1).
  • The functions of a master module (3) include
      • Receiving/sending node information from/to one or more nodes, reading, writing, and updating the database (DB) and providing an interface to the GUI.
      • Accepting user and/or hardware commands for modifying and/or updating node properties and sending them to the relevant nodes. Such commands may also be received from other nodes.
      • Processing network management related commands and messages, e.g., demand information from the user, which includes creation of demand, selection of one or more demand-paths, starting and stopping traffic for a demand, etc.
      • Monitoring the status of demands and providing protection or restoration actions in the event of one or more faults and/or errors in a demand.
      • Exchange of heartbeat messages and related processing
      • Database synchronization
  • The master module (3) according to the shown embodiment provides the following interfaces
      • Interface to the node module (2) in one or several or all nodes
      • Interface to the database
      • Interface to the GUI (4) in one or several or all nodes
  • Although there may be several master modules (3) located in several network nodes, at a given time only one master module (3) may be active. This master module (3) is designated as the Master and all the other master modules (3) as Deputy Masters (DM). A master module (3) thus performs the tasks of either the Master or a Deputy Master depending on its configuration. Such a configuration can be done statically or dynamically, and either manually or automatically.
  • The node module (2) in each node needs a connection to the master module (3) and vice-versa. This connection is set up over the supervisory channel using NetProc or equivalent software modules.
  • The Master, located in a particular node, coordinates all network management activities. The Master is an essential part of the network management and needs to be functional at all times. It is therefore important to ensure that there is a backup or standby module which takes over when the Master fails for some reason. For this purpose one or more Deputy Masters are designated as backup or standby to the Master; these Deputy Masters take over the functions of the master module (3) when the Master fails. The master module (3) has different functionality depending on whether it is the Master or a Deputy Master. The nodes where the Master and a Deputy Master are located are termed the master node and a DM node, respectively. Finally, a full set of supervisory connections between all pairs of nodes that contain a master module (3) is required in order to manage the redundancy and fault-tolerance with respect to the Master functionality. A full set of supervisory connections implies a supervisory connection between every pair of nodes. A reduced set of supervisory connections is defined as the set of those connections between pairs of nodes in which one of the nodes is the master node.
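The distinction between the full and the reduced set of supervisory connections can be illustrated with a small helper. The function names and integer node IDs below are illustrative assumptions, not part of the embodiment.

```python
# Illustrative helpers (assumed names, integer node IDs) distinguishing the
# two sets of supervisory connections defined above.
from itertools import combinations

def full_set(nodes):
    # a supervisory connection between every pair of nodes
    return set(combinations(sorted(nodes), 2))

def reduced_set(nodes, master_id):
    # only those pairs in which one endpoint is the master node
    return {tuple(sorted((master_id, n))) for n in nodes if n != master_id}
```

For a four-node network the full set contains six connections, while the reduced set contains only the three connections that terminate at the master node.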
  • When the node manager software first comes up, a node preferably is always initialised as a Deputy Master node. The following protocol is used to determine which node acts as the Master at a given time: 1) All nodes periodically exchange Heartbeat messages among each other; their contents are used to determine which node is the master node and also allow the various Deputy Master nodes to monitor the status of the master node. 2) A Heartbeat message contains the node ID of the sender node as well as its status, either Master or Deputy Master. 3) The receiving node first examines the status in all Heartbeat messages received within a certain time interval. If it receives a Master status in any of the received Heartbeat messages, it remains in the same state as before without altering its status. If it does not receive a Master status in any of the Heartbeat messages, it compares its ID with the other received IDs. If its ID is smaller than the received IDs, it assumes the role of Master; otherwise it remains in the same state as before without altering its status. As an alternative, if on start-up a node does not receive a Heartbeat message from other nodes after sending a configurable number of Heartbeat messages, it assumes the role of the Master. 4) A new Master election takes place if and only if the existing Master fails. Master election is done by processing Heartbeat messages as discussed above. 5) In case two nodes for any reason assume the unintended role of a master node, the conflict is resolved as follows: among the different master nodes, the node with the lowest ID number retains the role of Master; all other master nodes revert to being Deputy Master nodes.
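The election rules above can be sketched as a single decision function. The names below (`next_state`, `MASTER`, `DEPUTY`) and the tuple-based message format are hypothetical, and the timing aspects (heartbeat intervals, start-up counting) are omitted.

```python
# Hypothetical sketch of the heartbeat-based election rules; names and the
# message format are assumptions, and timing is omitted.

MASTER, DEPUTY = "Master", "Deputy-Master"

def next_state(own_id, own_state, heartbeats):
    """Return the node's new role after one heartbeat interval.

    `heartbeats` is a list of (sender_id, sender_state) pairs taken from
    the Heartbeat messages received within the interval.
    """
    master_ids = [nid for nid, state in heartbeats if state == MASTER]
    if own_state == DEPUTY:
        if master_ids:
            return DEPUTY              # rule 3: a Master already exists
        other_ids = [nid for nid, _ in heartbeats]
        if other_ids and own_id < min(other_ids):
            return MASTER              # rule 3: lowest ID wins the election
        return DEPUTY
    # own_state == MASTER
    if master_ids and min(master_ids) < own_id:
        return DEPUTY                  # rule 5: lowest-ID Master survives
    return MASTER
```

The last branch implements the duplicate-Master resolution of rule 5: a Master that hears from another Master with a lower ID reverts to Deputy Master.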
  • Based on the contents of heartbeat messages, there may be other procedures for selecting which master module acts as the Master, for example the master module in the node with the largest ID.
  • After the election is over, the master module (3) in the master node takes over the operation of the network and performs the network management functions. The change of role of a particular node from a Deputy Master node to a master node should be performed as quickly and as seamlessly as possible to have minimum disruption in network operation. The master node and the Deputy Master nodes perform additional functions for fault-tolerance. These include, among other functions, database synchronization between the master node and the Deputy Master nodes.
  • In the following sections two architectures for handling redundancy and fault-tolerance are presented.
  • The node manager corresponding to a first architecture is shown in FIG. 1. The master node and all the Deputy Master nodes are connected through the supervisory channel configured by NetProc or an equivalent software module. Using such supervisory connections (10) between each pair of nodes, the node module in each node sends all node-related information to the master node and to all the Deputy Master nodes, as shown in FIG. 2 for an exemplary four-node network. Exchange of heartbeat messages and related processing is done as discussed previously in this document.
  • The databases in the master node and the Deputy Master nodes need to be synchronized at all times. This ensures correct operation when the master node fails and a new master node is elected. After a new master node is elected, it sends the current dump (state) of the database to all other Deputy Master nodes before resuming its duty as a master node. This makes sure that the databases in all nodes are synchronized before the nodes begin their management function. During normal operation, both the master node and all Deputy Master nodes receive messages from the node modules in all nodes. Thus, the master module in each node updates the database located in that particular node. The difference in the functionality of the Master versus a Deputy Master is that a node acting as Deputy Master does not send any messages to other nodes but only receives all node-related messages. The primary function of a Deputy Master node in this architecture is to perform the database synchronization. When a node comes up again after a failure and a master node already exists, the restored node requests the current dump of the database from the master node.
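The database hand-over on election described above can be sketched as follows. `Node`, `apply_update`, and `synchronize` are assumed names, and the database is modelled as a simple dictionary rather than the embodiment's actual data model.

```python
# Sketch of the first architecture's database hand-over; `Node` and its
# dictionary database are modelling assumptions, not the embodiment.

class Node:
    def __init__(self, node_id):
        self.node_id = node_id
        self.db = {}

    def apply_update(self, key, value):
        # both the Master and the Deputy Masters receive node-module
        # messages and update their local database copy
        self.db[key] = value

def synchronize(new_master, deputy_masters):
    # the newly elected master sends its current dump to every Deputy
    # Master before resuming its duty, so all databases match
    for dm in deputy_masters:
        dm.db = dict(new_master.db)
```

A restored node would invoke the same dump transfer in the opposite direction, requesting the current state from the existing master node.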
  • In the second architecture, there is an additional software module running at a node, namely, the master controller, as shown in FIG. 3. The so-called master controller (7) is a module which could be implemented in software and/or hardware.
  • The node module (2) and the master controller (7) are active in all nodes of the network. However, the master module (3) is active only in the master node. In this architecture, it is the master controller (7) which takes part in master-election and role-change related steps, e.g., database synchronization. When the nodes come up for the first time, the node module (2) and the master controller (7) are started in each node. The master module (3) is not started initially. The master controllers (7) in the various nodes elect a particular node as the master node by exchanging and processing heartbeat messages among each other. Thereafter, the master controller (7) starts the master module (3) only in the master node (cf. FIG. 3).
  • The master controller (7) in each node is connected through the supervisory channel to all other master controllers (7) in other nodes. The node modules (2) in the different nodes are connected only to the master module (3), as shown in FIG. 4, through a reduced set of supervisory connections (10).
  • When the master node changes, e.g., from node 1 to node 2, the master controller (7) in that node, dynamically and automatically re-configures the connection between the node modules (2) and the new master module (3) as shown in FIG. 5.
  • This dynamic reconfiguration is done using NetProc or other similar software modules and the master controller present in each node. The master controller sends a re-configure message, containing the node ID of the new master node, to the NetProc in each node. On receiving the message, the NetProc in each node re-configures the connections so that all nodes have a logical supervisory connection to the new master node. Alternatively, the nodes can be statically connected as in architecture 1, in which case the dynamic reconfiguration step is avoided.
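A minimal sketch of the re-configure step, assuming a `NetProc` object per node and an integer node ID. Real supervisory connections would of course involve tearing down and re-establishing channels rather than merely storing an ID.

```python
# Minimal sketch of the re-configure message flow; `NetProc` here merely
# records the new master's node ID instead of rebuilding real connections.

class NetProc:
    def __init__(self):
        self.master_node_id = None   # target of the logical connection

    def on_reconfigure(self, new_master_id):
        # re-point the logical supervisory connection to the new master
        self.master_node_id = new_master_id

def broadcast_reconfigure(netprocs, new_master_id):
    # the master controller sends the re-configure message to every node
    for proc in netprocs:
        proc.on_reconfigure(new_master_id)
```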
  • Exchange of heartbeat messages and related processing is done as discussed previously in this document.
  • The master controller (7) does the database synchronization between a pair of nodes. After a new master node is elected, its master controller (7) sends the current dump (state) of the database to all the master controllers (7) in the Deputy Master nodes before starting the master module processes in the master node. This makes sure that the databases in all nodes are synchronized before the nodes begin the management function. The master module (3) informs the master controller (7) of any changes in the database, and these changes are sent to all other master controllers (7) in the other nodes in the network. The master controllers (7) in the Deputy Master nodes, on receiving the changes from the master node, update the local database. This keeps the databases synchronized with the master node. When a node comes up again after a failure and a master node already exists, the restored node requests the current dump of the database from the master node.
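The incremental synchronization of this second architecture can be sketched as follows. `MasterController`, `on_local_change`, and `on_remote_change` are illustrative names for the message flow described above, not identifiers from the embodiment.

```python
# Illustrative sketch of incremental synchronization between controllers;
# the class and method names are assumptions about the described flow.

class MasterController:
    def __init__(self):
        self.local_db = {}
        self.peers = []              # controllers in the other nodes

    def on_local_change(self, key, value):
        # a change reported by the master module in the master node is
        # applied locally and forwarded to every Deputy Master controller
        self.local_db[key] = value
        for peer in self.peers:
            peer.on_remote_change(key, value)

    def on_remote_change(self, key, value):
        # a change received from the master node's controller
        self.local_db[key] = value
```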

Claims (20)

1. A method of managing a network, that includes a plurality of nodes that are interconnected in an arbitrary topology so as to be capable of carrying traffic between said plurality of nodes, the method comprising the steps of:
providing a supervisory network by means of supervisory channels between the nodes of said plurality of nodes;
providing a node manager which is one or more software modules in each one of said plurality of nodes;
establishing supervisory connections over one or more of the supervisory channels between selected nodes of said plurality of nodes through which the node manager communicates with other node managers in other nodes of said plurality of nodes;
providing a node module in each node manager that provides an interface to hardware settings of each of said plurality of nodes that is associated with the node module;
providing a master module in at least one node manager;
establishing supervisory connections over one or more supervisory channels between the selected nodes of said plurality of nodes, said supervisory connections providing communication between the master module and the node modules; and
performing a function selected from the group consisting of amending hardware settings in the selected nodes, monitoring hardware settings in the selected nodes, and a combination thereof, with the node module of each of the selected nodes,
wherein controlling the amendments carried out by the node modules and processing the monitored hardware settings is carried out by the master module.
2. The method of managing a network according to claim 1, comprising the further steps of:
providing a master module in each of at least two node managers, wherein each master module is in a state selected from the group consisting of an active state and a passive state; and
setting a first of the at least two master modules to the active state and maintaining or setting the other of the at least two master modules to the passive state,
wherein controlling the amendments carried out by the node modules and processing the monitored hardware settings is carried out only by the first master module.
3. The method of managing a network according to claim 2, wherein the setting of the state of the at least two master modules is done automatically.
4. The method of managing a network according to claim 3, further comprising the steps of:
periodically generating heartbeat messages in each node of said plurality of nodes and exchanging these messages among all of said plurality of nodes, wherein each heartbeat message contains information about the state of the master module of a respective node of said plurality of nodes; and
processing the received heartbeat message in each node of said plurality of nodes and setting the state of the master module in the respective node depending on information in the received messages, so that a single master module of all of said plurality of nodes is always in the active state.
5. The method of managing a network according to claim 4, further comprising the step of providing each master module with an initial passive state when the node manager of the respective node of said plurality of nodes is initialized, and wherein changing the state of the master module in the respective node of said plurality of nodes is made according to a decision selected from the group consisting of:
if the master module of the respective node of said plurality of nodes is in the passive state and the respective node of said plurality of nodes receives at least one heartbeat message that contains information about a master module of another node of said plurality of nodes being in the active state, the master module of the respective node of said plurality of nodes remains in the passive state; and
if the master module of the respective node of said plurality of nodes is in the passive state and the respective node of said plurality of nodes receives no heartbeat message that contains information about a master module of another node of said plurality of nodes being in the active state within a predetermined time interval, the master module of the respective node of said plurality of nodes changes into the active state.
6. The method of managing a network according to claim 4, wherein each heartbeat message generated in each node of said plurality of nodes further contains a node ID of the respective node of said plurality of nodes in which the message is generated, and wherein changing of the state of the master module in the respective node of said plurality of nodes is made according to a decision selected from the group consisting of:
if the master module of the respective node of said plurality of nodes is in the passive state and the respective node of said plurality of nodes receives at least one heartbeat message that contains information about a master module of another node of said plurality of nodes being in the active state, the master module of the respective node of said plurality of nodes remains in the passive state;
if the master module of the respective node of said plurality of nodes is in the passive state and the respective node of said plurality of nodes receives no heartbeat message that contains information about a master module of another of said plurality of nodes being in the active state within a predetermined time, the respective node of said plurality of nodes compares the node ID with other received node IDs using a predetermined procedure, and depending on the result of this procedure, especially if the node ID is smaller than the other received node IDs, the master module of the respective node of said plurality of nodes changes into the active state;
if the master module of the respective node of said plurality of nodes is in the active state and the node receives no heartbeat message that contains information about a master module of another of said plurality of nodes being in the active state within a predetermined time, the master module of the respective node of said plurality of nodes remains in the active state;
if the master module of the respective node of said plurality of nodes is in the active state and the respective node of said plurality of nodes receives at least one heartbeat message that contains information about a master module of another of said plurality of nodes being in the active state, the respective node of said plurality of nodes compares the node ID of the node of said plurality of nodes with other received node IDs using a predetermined procedure and depending on the result of this procedure, especially if the node ID is not smaller than the other received node IDs, the master module of the respective node of said plurality of nodes changes into the passive state.
7. The method of managing a network according to claim 1, comprising the further steps of:
communicating between the node module in each node of said plurality of nodes and the master module through a set of supervisory connections selected from the group consisting of a full set of supervisory connections and a reduced set of supervisory connections,
wherein in the full set of supervisory connections, each node module communicates with all of the master modules present in one or more nodes of said plurality of nodes, especially whether in the active state or passive state, and
wherein in the reduced set of supervisory connections, each node module communicates only with a single master module present in one of said plurality of nodes.
8. The method of managing a network according to claim 4, comprising the further step of:
providing a master controller module in each node of said plurality of nodes which is connected to the master module of the respective node,
wherein master controller modules of different nodes of said plurality of nodes generate, exchange and process the heartbeat messages and control the state of the master module of the respective node.
9. The method of managing a network according to claim 8, wherein the node module in each node of said plurality of nodes communicates only with the master module in the active state, and in the case of changing the state of the master module to the active state and a further master module to the passive state, the supervisory connections through which the communication takes place are reconfigured.
10. The method of managing a network according to claim 9, wherein the master controller module of the node of said plurality of nodes having the further master module that has been changed to the active state sends a reconfigure message to each node of the plurality of nodes that contains the node ID of the node of said plurality of nodes having the further master module.
11. The method of managing a network according to claim 2, comprising the further steps of:
providing a database containing information relating to a hardware state of each node of said plurality of nodes and local and global network management activities in each node of said plurality of nodes;
synchronizing the database in each node of said plurality of nodes according to the following steps:
before the first master module is set to the active state, a first node of said plurality of nodes, that is associated with the first master module and includes a current state of the database, sends the current state of the database to all other nodes of said plurality of nodes,
the receiving nodes of said plurality of nodes that receive the current state of the database, synchronize the database in each receiving node with the current state of the database.
12. The method of managing a network according to claim 11, comprising the further steps of:
the master module in each receiving node of said plurality of nodes informs a master controller in each receiving node of said plurality of nodes of any changes in the database of the receiving node of said plurality of nodes;
the master controller sends the changes in the database of the receiving node of the plurality of nodes to all other master controllers in all other nodes of the plurality of nodes;
when one of the plurality of nodes comes up after a failure the master controller in the one of the plurality of nodes that comes up after a failure requests the current state of the database from the master controller of the first node of said plurality of nodes to synchronize the database of the one node that comes up after a failure with the database of the first node of said plurality of nodes.
13. A network management system of a network including a plurality of nodes which are interconnected in an arbitrary topology so as to be capable of carrying traffic between said plurality of nodes, comprising:
a supervisory network interconnecting the plurality of nodes, that is provided by supervisory channels between the plurality of nodes;
a node manager associated with each one of said plurality of nodes that communicates with other node managers through a supervisory connection established over one or more supervisory channels between selected nodes of said plurality of nodes;
a node module associated with each node manager that provides an interface to the hardware of the node of said plurality of nodes that is associated with the node module and allows for amending and monitoring of amendments of hardware settings of the node of said plurality of nodes that is associated with the node module; and
a master module associated with at least one node manager that is connected to the various node modules through the supervisory connections established over the one or more supervisory channels between selected nodes,
wherein the master module provides functionality for controlling the node modules and amending the hardware settings and for processing the hardware settings monitored by the node modules.
14. The network management system according to claim 13, further comprising an interface associated with the master module to support one or more Graphical User Interfaces located in one or more nodes of the plurality of nodes.
15. The network management system according to claim 13, further comprising one or more software modules included in the master module for global and local network management.
16. The network management system according to claim 13, wherein at least one node manager has the master module, and
wherein each master module can be set to a passive state or to an active state, wherein only in the active state the master module has the functionality for controlling the node modules and amending the hardware settings and for processing the hardware settings monitored by the node modules, and wherein in the passive state the master module has functionality for performing database synchronization.
17. The network management system according to claim 16, further comprising a master controller module associated with each node of said plurality of nodes for setting the state of the master module.
18. A network management system of a network including a plurality of nodes which are interconnected in an arbitrary topology so as to be capable of carrying traffic between selected nodes, comprising:
a supervisory network interconnecting the plurality of nodes, that is provided by supervisory channels between the plurality of nodes;
a node manager associated with each one of said plurality of nodes that communicates with other node managers through a supervisory connection established over one or more supervisory channels between the selected nodes of said plurality of nodes;
a node module associated with each node manager that provides an interface to the hardware of the node of said plurality of nodes that is associated with the node module and allows for amending and monitoring of amendments of hardware settings of the node of said plurality of nodes that is associated with the node module; and
a master module associated with at least one node manager that is connected to the various node modules through the supervisory connections established over the one or more supervisory channels between selected nodes,
wherein the master module provides functionality for controlling the node modules and amending the hardware settings and for processing the hardware settings monitored by the node modules, and according to one of claims 13 to 17,
wherein the network management system is managed by a method according to claim 1.
19. The method of managing a network according to claim 7, wherein each node module communicates only with a single master module in an active state present in one node in the reduced set of supervisory connections.
20. The network management system according to claim 15, further comprising one or more software modules in the master module for database related tasks and features for a database containing information relating to a hardware state of each node and local and global network management activities in each node.
US10/509,133 2002-03-27 2003-03-27 Network management system Abandoned US20060010441A1 (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
EP020070082 2002-03-27
EP02007008A EP1357690B1 (en) 2002-03-27 2002-03-27 Intelligent optical network element
PCT/EP2003/002704 WO2003081826A2 (en) 2002-03-27 2003-03-14 Supervisory channel in an optical network system
WOPCT/EP03/02704 2003-03-14
PCT/EP2003/003201 WO2003081845A2 (en) 2002-03-27 2003-03-27 Network management system

Publications (1)

Publication Number Publication Date
US20060010441A1 true US20060010441A1 (en) 2006-01-12

Family

ID=28051740

Family Applications (3)

Application Number Title Priority Date Filing Date
US10/509,276 Abandoned US20060013149A1 (en) 2002-03-27 2003-03-14 Supervisory channel in an optical network system
US10/509,133 Abandoned US20060010441A1 (en) 2002-03-27 2003-03-27 Network management system
US10/401,176 Expired - Fee Related US7123806B2 (en) 2002-03-27 2003-03-27 Intelligent optical network element

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/509,276 Abandoned US20060013149A1 (en) 2002-03-27 2003-03-14 Supervisory channel in an optical network system

Family Applications After (1)

Application Number Title Priority Date Filing Date
US10/401,176 Expired - Fee Related US7123806B2 (en) 2002-03-27 2003-03-27 Intelligent optical network element

Country Status (7)

Country Link
US (3) US20060013149A1 (en)
EP (3) EP1357690B1 (en)
JP (2) JP2005521330A (en)
AT (2) ATE299319T1 (en)
AU (2) AU2003227068A1 (en)
DE (2) DE60204940T2 (en)
WO (2) WO2003081826A2 (en)


US10476816B2 (en) 2017-09-15 2019-11-12 Facebook, Inc. Lite network switch architecture
CN109560864B (en) * 2017-09-26 2021-10-19 中兴通讯股份有限公司 Data transmission method and device
US10694271B2 (en) * 2018-09-20 2020-06-23 Infinera Corporation Systems and methods for decoupled optical network link traversal

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6363416B1 (en) * 1998-08-28 2002-03-26 3Com Corporation System and method for automatic election of a representative node within a communications network with built-in redundancy

Family Cites Families (22)

Publication number Priority date Publication date Assignee Title
US4763317A (en) * 1985-12-13 1988-08-09 American Telephone And Telegraph Company, At&T Bell Laboratories Digital communication network architecture for providing universal information services
EP0594198B1 (en) * 1992-10-22 1999-01-27 Cabletron Systems, Inc. Crossbar switch for synthesizing multiple backplane interconnect topologies in communications system
US5436750A (en) * 1993-05-07 1995-07-25 Nec Corporation Optical repeatered transmission with fault locating capability
US5539564A (en) * 1993-09-22 1996-07-23 Nippon Telegraph And Telephone Corporation Point-to-multipoint optical transmission system
US5532864A (en) * 1995-06-01 1996-07-02 Ciena Corporation Optical monitoring channel for wavelength division multiplexed optical communication system
JP3455345B2 (en) * 1995-10-23 2003-10-14 富士通株式会社 REMOTE OPTICAL SIGNAL CONTROL DEVICE, OPTICAL SIGNAL LEVEL CONTROL METHOD FOR REMOTE OPTICAL SIGNAL CONTROL DEVICE, OPTICAL SIGNAL TRANSMITTER, AND MONITORING SIGNAL LIGHT LEVEL CONTROL METHOD IN OPTICAL SIGNAL TRANSMITTER
US6005694A (en) * 1995-12-28 1999-12-21 Mci Worldcom, Inc. Method and system for detecting optical faults within the optical domain of a fiber communication network
JP3639383B2 (en) * 1996-04-15 2005-04-20 富士通株式会社 Optical transmission system
US5970193A (en) * 1996-10-24 1999-10-19 Nortel Networks Corporation Data communications structures relating to data shelf configurations
DE69734346T2 (en) * 1996-11-13 2006-05-18 Nippon Telegraph And Telephone Corp. Device for terminating an optical path
US5914794A (en) * 1996-12-31 1999-06-22 Mci Communications Corporation Method of and apparatus for detecting and reporting faults in an all-optical communications system
US5986783A (en) * 1997-02-10 1999-11-16 Optical Networks, Inc. Method and apparatus for operation, protection, and restoration of heterogeneous optical communication networks
WO1999044317A2 (en) * 1998-02-24 1999-09-02 Telefonaktiebolaget Lm Ericsson (Publ) Protection of wdm-channels
US6272154B1 (en) * 1998-10-30 2001-08-07 Tellium Inc. Reconfigurable multiwavelength network elements
JP3674357B2 (en) * 1999-02-08 2005-07-20 富士通株式会社 Transmission line monitoring and control device
US6310690B1 (en) * 1999-02-10 2001-10-30 Avanex Corporation Dense wavelength division multiplexer utilizing an asymmetric pass band interferometer
US6163595A (en) * 1999-04-29 2000-12-19 Nortel Networks Limited Way finding with an interactive faceplate
WO2001055854A1 (en) * 2000-01-28 2001-08-02 Telcordia Technologies, Inc. Physical layer auto-discovery for management of network elements
US7190896B1 (en) * 2000-05-04 2007-03-13 Nortel Networks Limited. Supervisory control plane over wavelength routed networks
US7257120B2 (en) * 2000-11-17 2007-08-14 Altera Corporation Quality of service (QoS) based supervisory network for optical transport systems
JP3813063B2 (en) * 2001-02-01 2006-08-23 富士通株式会社 Communication system and wavelength division multiplexing apparatus
US6795316B2 (en) * 2001-12-21 2004-09-21 Redfern Broadband Networks, Inc. WDM add/drop multiplexer module

Cited By (11)

Publication number Priority date Publication date Assignee Title
US20050210081A1 (en) * 2004-03-18 2005-09-22 Alcatel Data synchronization method
US7912858B2 (en) * 2004-03-18 2011-03-22 Alcatel Data synchronization method
US20080177741A1 (en) * 2007-01-24 2008-07-24 Oracle International Corporation Maintaining item-to-node mapping information in a distributed system
US8671151B2 (en) * 2007-01-24 2014-03-11 Oracle International Corporation Maintaining item-to-node mapping information in a distributed system
US20090125578A1 (en) * 2007-10-22 2009-05-14 Phoenix Contact Gmbh & Co. Kg System for operating at least one non-safety-critical and at least one safety-critical process
US8549136B2 (en) * 2007-10-22 2013-10-01 Phoenix Contact Gmbh & Co. Kg System for operating at least one non-safety-critical and at least one safety-critical process
US20150172094A1 (en) * 2013-12-17 2015-06-18 Tsinghua University Component-based task allocation method for extensible router
US9710312B2 (en) * 2013-12-17 2017-07-18 Tsinghua University Component-based task allocation method for extensible router
US10379966B2 (en) * 2017-11-15 2019-08-13 Zscaler, Inc. Systems and methods for service replication, validation, and recovery in cloud-based systems
US20200019335A1 (en) * 2018-07-16 2020-01-16 International Business Machines Corporation Site-centric alerting in a distributed storage system
US10649685B2 (en) * 2018-07-16 2020-05-12 International Business Machines Corporation Site-centric alerting in a distributed storage system

Also Published As

Publication number Publication date
WO2003081845A3 (en) 2004-06-03
EP1357690B1 (en) 2005-07-06
EP1488573A2 (en) 2004-12-22
DE60316131D1 (en) 2007-10-18
WO2003081845A2 (en) 2003-10-02
DE60204940D1 (en) 2005-08-11
ATE372621T1 (en) 2007-09-15
US20030215232A1 (en) 2003-11-20
JP2005521334A (en) 2005-07-14
AU2003227068A1 (en) 2003-10-08
JP2005521330A (en) 2005-07-14
ATE299319T1 (en) 2005-07-15
EP1491000A2 (en) 2004-12-29
WO2003081826A2 (en) 2003-10-02
DE60204940T2 (en) 2006-04-20
US20060013149A1 (en) 2006-01-19
WO2003081826A3 (en) 2004-07-29
AU2003226727A1 (en) 2003-10-08
US7123806B2 (en) 2006-10-17
EP1357690A1 (en) 2003-10-29
EP1491000B1 (en) 2007-09-05
AU2003227068A8 (en) 2003-10-08

Similar Documents

Publication Publication Date Title
EP1491000B1 (en) Network management system
US8165466B2 (en) Network operating system with topology autodiscovery
Sengupta et al. From network design to dynamic provisioning and restoration in optical cross-connect mesh networks: An architectural and algorithmic overview
US7881183B2 (en) Recovery from control plane failures in the LDP signalling protocol
US7471625B2 (en) Fault recovery system and method for a communications network
JP3662901B2 (en) Fault repair method by quasi-central processing of optical layer
US7372806B2 (en) Fault recovery system and method for a communications network
US7293090B1 (en) Resource management protocol for a configurable network router
CN100373848C (en) Transport network restoration method supporting extra traffic
EP1829256B1 (en) Synchronisation in a communications network
WO2004075494A1 (en) Device and method for correcting a path trouble in a communication network
US7414985B1 (en) Link aggregation
EP1302035A2 (en) Joint ip/optical layer restoration after a router failure
US7035209B2 (en) Control communications in communications networks
EP1146682A2 (en) Two stage, hybrid logical ring protection with rapid path restoration over mesh networks
US20140040476A1 (en) Method and system for network restructuring in multilayer network
EP2285046B1 (en) Method and apparatus for realizing interaction of optical channel data unit protection tangency rings
EP1915024B1 (en) Method and apparatus for optimization of redundant link usage using time sensitive paths
CN116054929A (en) Service protection system
Xin et al. On design and architecture of an IP over WDM optical network control plane
Interface Standards Standar
Li et al. Reliable optical network design
Kim Efficient Design and Management of Reliable Optical Networks
Li et al. A GMPLS based control plane testbed for end-to-end services
CA2390586A1 (en) Network operating system with topology autodiscovery

Legal Events

Date Code Title Description
AS Assignment

Owner name: BENGURION UNIVERSITY OF THE NEGEV, ISRAEL

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JAHN, ELKE;AGRAWAL, NIRAL;REEL/FRAME:017016/0051

Effective date: 20050531

AS Assignment

Owner name: LIGHTMAZE SOLUTIONS AG, GERMANY

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE CONVEYING PARTY NAME AND THE RECEIVING PARTY NAME, PREVIOUSLY RECORDED AT REEL 017016, FRAME 0051;ASSIGNORS:JAHN, ELKE;AGRAWAL, NIRAJ;REEL/FRAME:017313/0911;SIGNING DATES FROM 20050804 TO 20050805

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION