US20080065764A1 - Method and system for reduced distributed event handling in a network environment - Google Patents

Method and system for reduced distributed event handling in a network environment

Info

Publication number
US20080065764A1
Authority
US
United States
Prior art keywords
network
proxy node
node
proxy
nodes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/923,317
Inventor
Ruotao Huang
Ram Iyer
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CipherMax Inc
Original Assignee
MaXXan Systems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by MaXXan Systems Inc
Priority to US11/923,317
Assigned to CIPHERMAX, INC. (change of name; see document for details). Assignors: MAXXAN SYSTEMS, INC.
Assigned to CIPHERMAX, INC. (assignment of assignors interest; see document for details). Assignors: IYER, RAM GANESAN; HUANG, RUOTAO
Publication of US20080065764A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/12: Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
    • H04L67/125: Protocols specially adapted for proprietary or special-purpose networking environments involving control of end-device applications over a network
    • H04L67/50: Network services
    • H04L67/56: Provisioning of proxy services

Definitions

  • In operation, the network nodes coupled to the communication network 115 generally report node-specific events, i.e., events generally affecting only the reporting node, and distributed events, i.e., events generally affecting the whole of the computer network, as they are detected, observed or otherwise become known to a network node.
  • Arrows 124, 127, and 130 indicate generally the reporting of all events detected by the network nodes 106, 109, and 112, i.e., the reporting of both node-specific events and distributed events.
  • Because every reporting-enabled node may observe the same network-wide change, repeated messages regarding the same distributed event are often reported to the network management station by a plurality, if not all, of the reporting-enabled network nodes.
  • The methods of the present invention reduce or eliminate the potential redundant handling of repeated distributed event messages by recognizing, from the plurality of network nodes, a proxy node that is responsible for reporting distributed events.
  • For example, the network node 106 may be designated as the proxy node for the computer network 100.
  • When the network management station 103 receives a distributed event message from the communication network 115, it preferably interrogates or otherwise identifies the source of the distributed event message to determine whether the distributed event message was originated or sent by the proxy node, i.e., network node 106 as illustrated in FIG. 1. If the network management station 103 determines that the distributed event message received was generated by or originated from the proxy node 106, then the network management station 103 preferably handles, processes or otherwise addresses the substance of the event, e.g., removal of a device port's entry from an associated Distributed Name Server.
  • If the network management station 103 determines that the distributed event message was sent by a non-proxy node, such as network node 109 and/or 112, then the network management station 103 preferably further interrogates the distributed event message to determine whether the distributed event need be addressed by the network management station 103 or whether the distributed event message can be discarded, delegated or otherwise left unprocessed. All node-specific event messages from all of the reporting network nodes 106, 109, and 112 are preferably handled, processed or otherwise addressed by the network management station 103. Additional detail regarding the operational aspects of the present invention is discussed below with reference to FIG. 2.
  • Prior to the initiation of method 200, the network management station 103 preferably selects or designates one of its associated network nodes 106, 109, and 112 to initially serve as the proxy node. The initial selection of a proxy node may be made at random, according to a network address or according to a wide variety of other proxy node selection methods. Once a proxy node has been selected, the network management station 103 notes or stores the proxy node's identity, e.g., its network address, network interface card identifier, etc., for later comparison, as sketched below.
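  • The following is a minimal sketch of this initial designation. The Node class, the select_initial_proxy helper and the two rules shown are hypothetical illustrations; the patent leaves the selection method open:

```python
import ipaddress
import random

class Node:
    """Minimal stand-in for a managed network node (hypothetical)."""
    def __init__(self, address: str):
        self.address = address

def select_initial_proxy(nodes, rule: str = "lowest-address") -> str:
    """Designate an initial proxy node and return its identity.

    The patent allows selection at random, by network address, or by
    a wide variety of other methods; two options are shown here.
    """
    if rule == "random":
        proxy = random.choice(nodes)
    else:  # illustrative default: numerically lowest IP address wins
        proxy = min(nodes, key=lambda n: ipaddress.ip_address(n.address))
    return proxy.address

# The NMS stores the result, e.g. the proxy's address, for the later
# source comparison performed at step 218 of method 200.
nodes = [Node("10.0.0.12"), Node("10.0.0.6"), Node("10.0.0.9")]
stored_proxy_identity = select_initial_proxy(nodes)
assert stored_proxy_identity == "10.0.0.6"
```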
  • Method 200 preferably begins at step 203 where the network management station 103 is in receipt of an event message from the communication network 115. Upon receipt of the event message, method 200 preferably proceeds to step 206 where the event message may be evaluated to determine whether it contains a node-specific event or a distributed event.
  • If at step 206 the network management station 103 determines that the received event message contains a node-specific event, method 200 preferably proceeds to step 209.
  • At step 209, the network management station 103 preferably addresses the node-specific event according to one or more network management settings. For example, if the node-specific event indicates that a cooling fan has failed at the network node reporting the node-specific event, the network management station 103 may generate an electronic message notifying a technician that the fan at the reporting or source network node needs maintenance. Alternatively, if the node-specific event indicates that an application on the reporting node is corrupt, or otherwise in need of repair, the network management station 103 may initiate a reinstall or software download and update routine to repair the corrupt application. Other methods of addressing, processing or otherwise handling node-specific events are contemplated within the spirit and scope of the present invention.
  • At step 212, the network management station 103 may await receipt of the next event message before returning to step 203.
  • Additionally, the network management station 103 may verify that addressed or processed events were actually corrected and reinitiate processing in the instance the event persists.
  • If at step 206 the review of the received event message indicates that the event message pertains to a distributed event, method 200 preferably proceeds to step 215.
  • At step 215, the network management station 103 may identify the address of origination for the distributed event message or otherwise identify the network node from which the distributed event message was received or originated.
  • A variety of methods may be employed to identify the network node from which the distributed event message originated. Such methods include, but are not limited to, parsing a header included with the event message to obtain the network, Internet Protocol or Fibre Channel address, or other unique identifier, of the sending network node. Another method of originating node identification may include parsing the distributed event message or a header associated with the distributed event message to locate and/or identify a unique identifier associated with the sending or originating node's network communication device, such as a network interface card. Additional methods of identifying the source or origination of a received distributed event message are contemplated within the spirit and scope of the present invention. A minimal illustration of the header-parsing approach follows.
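  • The sketch below assumes a hypothetical "<source-id>|<event-type>|<payload>" message layout; the patent does not fix a wire format, and a real implementation would parse an IP address, Fibre Channel address or network interface card identifier from the actual header:

```python
def source_of(event_message: bytes) -> str:
    """Return the originating node's identifier from an event message.

    Assumes the illustrative "<source-id>|<event-type>|<payload>"
    layout described above; purely a sketch.
    """
    source, _, _ = event_message.decode().partition("|")
    return source

assert source_of(b"10.0.0.6|DISTRIBUTED|dns entry removed") == "10.0.0.6"
```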
  • At step 218, the network management station 103 preferably determines whether the network node which sent the distributed event message is the proxy node 106 for the computer network 100 or whether the distributed event message originated from a non-proxy node 109 and/or 112. To determine whether the proxy node 106 or a non-proxy node 109 and/or 112 originated or sent the distributed event message, the network management station 103 may compare the sending address, the address of origination, or other unique identifier obtained from the distributed event message with stored information identifying the computer network's 100 proxy node selection. Alternative methods, such as eliminating the non-proxy nodes 109 and/or 112 as being the sender of the distributed event message, may also be employed.
  • If the distributed event message originated from the proxy node 106, method 200 preferably proceeds to step 221 where the network management station 103 preferably initiates one or more routines to resolve the issue reported in the distributed event.
  • A variety of routines may be configured and used to address or process the various sorts of distributed events that may occur in the computer network 100. For example, a technician may be notified of repairs needed via an electronic communication generated by the network management station 103, or the network management station 103 may initiate a software routine directed at resolving the issue reported in the distributed event.
  • Alternative network management station 103 settings aimed at resolving distributed events are contemplated within the spirit and scope of the present invention.
  • Following step 221, method 200 preferably proceeds to step 212 where the network management station 103 may await receipt of the next event message before returning to step 203.
  • If at step 218 the network management station 103 determines that the distributed event message received originated or was sent by a non-proxy node 109 and/or 112, method 200 preferably proceeds to step 224.
  • At step 224, the network management station 103 may access or otherwise evaluate the contents of the distributed event message to determine the issue being reported by the distributed event message.
  • In general, the network management station 103 preferably interrogates the distributed event messages received from a non-proxy node 109 and/or 112 to determine whether the distributed event indicates a problem or a change associated with the proxy node 106.
  • For example, the network management station 103 may wish to determine whether the distributed event message received from a non-proxy node 109 and/or 112 indicates that the proxy node 106 has been removed from the communication network 115, that the proxy node's 106 identifier, e.g., network address, has changed, or that the proxy node 106 is otherwise unavailable.
  • At step 227, the network management station 103 may determine whether the distributed event message originated by a non-proxy node 109 and/or 112 indicates that the proxy node 106 has been removed or hidden from the network. If so, method 200 preferably proceeds to step 230.
  • At step 230, the network management station 103 preferably reassigns proxy status from the unavailable proxy node to the non-proxy node 109 and/or 112 that sent or originated the distributed event message being processed. For example, if the distributed event indicating that the proxy node 106 has been removed from the communication network 115 originated or was sent by non-proxy node 109, the network management station 103 may select or designate non-proxy node 109 as the new proxy node for the other non-proxy nodes in the computer network 100. Before reassigning proxy status, the network management station 103 may be configured to execute one or more attempts to bring an unavailable proxy node back online or to otherwise make an unavailable proxy node available again.
  • Alternatively, the network management station 103 may initiate a routine to designate a non-proxy node 109 and/or 112 that will replace an unavailable proxy node.
  • Additional methods of designating a replacement proxy node are contemplated within the spirit and scope of the present invention.
  • Following step 230, method 200 preferably proceeds to step 212 where the network management station 103 may await receipt of the next event message before returning to step 203.
  • If at step 227 the network management station 103 determines that the distributed event message originated or sent by a non-proxy node 109 and/or 112 indicates a problem other than the removal of the proxy node 106 from the communication network 115, method 200 preferably proceeds to step 233.
  • At step 233, the network management station 103 preferably further evaluates the contents of the distributed event message to determine whether it indicates that the address of the proxy node 106 has been altered or otherwise changed.
  • The address of the proxy node 106 may be defined as a unique identifier for the proxy node 106 used by the network management station 103.
  • Examples of such unique identifiers include, but are not limited to, the host/Internet Protocol (“IP”) address of the proxy node 106 in an IP network or the Fibre Channel address of the proxy node 106 in a Fibre Channel network.
  • If the address of the proxy node 106 has changed, the network management station 103 may update its stored proxy node 106 address with the new value.
  • Accordingly, at step 236 the network management station 103 preferably updates the stored address for the proxy node 106 with the address reported in the distributed event message originated by a non-proxy node 109 and/or 112.
  • Alternative implementations of updating the network address of the proxy node 106, including, but not limited to, the network management station 103, on its own, verifying or otherwise obtaining the new network address for the proxy node 106, are contemplated within the spirit and scope of the present invention.
  • Following step 236, method 200 preferably proceeds to step 212 where the network management station 103 may await receipt of the next event message before returning to step 203.
  • If at step 233 the network management station 103 determines that the distributed event message originated or sent by a non-proxy node 109 and/or 112 does not indicate that the address of the proxy node 106 has been changed, method 200 preferably proceeds to step 239.
  • At step 239, the distributed event message originated or sent by a non-proxy node 109 and/or 112 may be discarded by the network management station 103.
  • Method 200 may be modified such that distributed event messages originated or sent by a non-proxy node 109 and/or 112 are discarded only after the network management station 103 determines that the distributed event messages have or will have no effect on the proxy node 106 if not addressed.
  • The network management station 103 may also be configured to delegate distributed event handling where the distributed event message does not affect the proxy node 106.
  • Following step 239, method 200 preferably proceeds to step 212 where the network management station 103 may await receipt of the next event message before returning to step 203. A condensed sketch of this event-handling loop is given below.
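  • The sketch condenses steps 203 through 239 of method 200 into one handler. The message shape and helper names are hypothetical; the step comments map each branch back to FIG. 2:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EventMessage:
    """Hypothetical message shape; the patent does not fix a format."""
    kind: str     # "node-specific" or "distributed"
    source: str   # identifier of the originating node (step 215)
    detail: str   # e.g. "proxy-removed" or "proxy-address-changed:<new>"

class NetworkManagementStation:
    """Condensed sketch of method 200; names are hypothetical."""

    def __init__(self, proxy_identity: str):
        self.proxy_identity = proxy_identity
        self.processed: List[EventMessage] = []

    def handle(self, msg: EventMessage) -> None:
        if msg.kind == "node-specific":                        # step 206
            self.processed.append(msg)                         # step 209
        elif msg.source == self.proxy_identity:                # step 218
            self.processed.append(msg)                         # step 221
        elif msg.detail == "proxy-removed":                    # step 227
            self.proxy_identity = msg.source                   # step 230
        elif msg.detail.startswith("proxy-address-changed:"):  # step 233
            self.proxy_identity = msg.detail.split(":", 1)[1]  # step 236
        # else: step 239, a distributed event from a non-proxy node
        # that does not affect the proxy node is discarded

nms = NetworkManagementStation(proxy_identity="10.0.0.6")
nms.handle(EventMessage("distributed", "10.0.0.9", "dns-entry-removed"))
nms.handle(EventMessage("distributed", "10.0.0.6", "dns-entry-removed"))
assert len(nms.processed) == 1  # non-proxy copy discarded, proxy copy handled
```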
  • Method 200 provides numerous advantages over existing distributed event handling methods.
  • One such advantage is that method 200 does not require the network nodes 106, 109 and 112 to take part in the proxy node selection process; for computer networks composed of heterogeneous network nodes, such network node participation may be impractical to implement.
  • An additional advantage of method 200 is that, compared to those network management systems that only monitor and handle events from an individual node in the network, method 200 will not miss distributed events even if the network node being monitored is not the proxy node.
  • Illustrated in FIG. 3 is an exemplary embodiment of a computer network 300, similar to computer network 100, incorporating teachings of the present invention.
  • Similar to computer network 100 illustrated in FIG. 1, computer network 300 includes network management station 103, network nodes 106, 109 and 112, and communication network 115.
  • Other similarities, such as the monitor 118 and the central processing unit 121 of the network management station 103, are also present.
  • Computer network 300 differs, however, from the computer network 100 illustrated in FIG. 1 in its implementation of distributed event handling reduction. Specifically, the computer network 300 illustrated in FIG. 3 preferably implements method 400 , illustrated in FIG. 4 , to reduce distributed event reporting to the network management station 103 .
  • In general, method 400 preferably enables the network nodes 106, 109 and 112 to select by and among themselves a proxy node, e.g., network node 106, as indicated generally at arrows 303, 306 and 312.
  • Once selected, the proxy node 106 is preferably enabled to report both distributed events and node-specific events to the network management station 103, as indicated generally by arrow 124.
  • Network or non-proxy nodes 109 and/or 112 are preferably configured to report only node-specific events, as indicated generally by arrows 303 and 306, so long as the proxy node 106 remains online or available.
  • In this manner, the proxy node 106 is the primary network node responsible for reporting distributed events to the network management station 103 while it is available. Such a configuration reduces network traffic and the double-handling of distributed events reported by other methods. As will be discussed in greater detail below, should the proxy node 106 become unavailable, the non-proxy nodes 109 and/or 112 will preferably select a new proxy node and continue operation of the computer network 300 according to method 400.
  • The network distributed event handling method of FIG. 4 generally functions by placing the intelligence for avoiding double-handling into the network nodes 106, 109 and 112.
  • One of the network nodes 106, 109 and 112 is agreed upon as the proxy node by all the network nodes participating in proxy node selection or available on the communication network 115.
  • Preferably, all the network nodes 106, 109 and 112 are participants in the proxy node selection process.
  • Both the proxy node and the non-proxy nodes are preferably configured to report, message or send out node-specific events to the network management station 103.
  • Depicted in FIG. 4 is a method for selecting a proxy node from a plurality of network nodes, where the proxy node selection is made entirely by the network nodes themselves, according to teachings of the present invention.
  • As illustrated, method 400 begins generally at step 403.
  • Method 400 may be modified to operate on a newly configured computer network or an existing computer network such that method 400 maintains operation of the computer network according to teachings of the present invention.
  • Initially, a proxy node selection message is preferably sent by and between all of the network nodes 106, 109 and 112 in the computer network 300.
  • The proxy node selection message transmission may be initiated and sent by a selected one of the network nodes 106, 109 and 112 by design, e.g., a network administrator may designate a node to initiate proxy node selection, or the first node to detect an event to be reported may be the node to initiate proxy node selection.
  • Preferably included in each proxy node selection message are both information pertaining to the source node of the proxy node selection message and the existing proxy node selection known to the source node, if any is so known. An illustrative message shape is sketched below.
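  • The following dataclass illustrates one possible shape for such a message; the field names are hypothetical, and the timestamp field anticipates the time-stamp based selection rule mentioned later:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProxySelectionMessage:
    """Illustrative proxy node selection message (hypothetical shape).

    The patent requires the two pieces of information below: the
    identity of the source node and that node's existing proxy node
    selection, if any.
    """
    source_node: str               # identity of the sending node
    existing_proxy: Optional[str]  # current proxy selection, or None
    timestamp: float = 0.0         # when the message was generated

# A node that already recognizes node 106's address as the proxy:
msg = ProxySelectionMessage(source_node="10.0.0.9",
                            existing_proxy="10.0.0.6")
```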
  • Upon exchange of the proxy node selection messages, the network nodes 106, 109 and 112 may begin the proxy node selection process at step 409.
  • The embodiment of method 400 illustrated in FIG. 4 assumes that new network nodes do not have an existing proxy node selection available to them. However, using the teachings below regarding network nodes having an existing proxy node selection available to them, method 400 may be altered or modified such that a newly established network utilizes the existing proxy node selections available.
  • At step 409, a proxy node selection is made by the new network nodes using an agreed-upon selection rule.
  • For example, the agreed-upon selection rule may be derived from the physical layout of the network nodes and their associated communication network.
  • Alternatively, the agreed-upon selection rule may select a proxy node based on the IP addresses of the nodes, a Fibre Channel network address or on time stamps associated with the exchanged proxy node selection messages.
  • The proxy node selection rule may be established at the deployment of method 400 by a network administrator, for example. Additional proxy node selection rules are contemplated within the spirit and scope of the present invention. A sketch of two such rules follows.
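  • Both rules below are illustrations of rules the patent names (address based and time-stamp based); any deterministic rule shared by all nodes would serve, and the helper name is hypothetical:

```python
import ipaddress
import time

def agreed_selection_rule(messages: dict, rule: str = "lowest-ip") -> str:
    """Pick a proxy from exchanged selection messages by a shared rule.

    `messages` maps node identity (an IP address here) to the time
    stamp carried by that node's proxy node selection message.
    """
    if rule == "earliest-timestamp":
        return min(messages, key=messages.get)
    # default: numerically lowest IP address wins
    return min(messages, key=lambda node: ipaddress.ip_address(node))

now = time.time()
msgs = {"10.0.0.12": now, "10.0.0.6": now + 1, "10.0.0.9": now + 2}
assert agreed_selection_rule(msgs) == "10.0.0.6"
assert agreed_selection_rule(msgs, "earliest-timestamp") == "10.0.0.12"
```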
  • Following proxy node selection at step 409, event detection, generation and monitoring may be initiated in the current proxy node 106 and the non-proxy nodes 109 and/or 112, i.e., those network nodes not currently designated as the proxy node.
  • The current proxy node 106 and non-proxy nodes 109 and/or 112 are preferably configured for event detection, generation and monitoring differently.
  • Specifically, the current proxy node 106 is preferably configured to detect and report both distributed events and node-specific events.
  • The current or initial non-proxy nodes 109 and/or 112, by contrast, are preferably configured only to detect and report node-specific events, so long as the current or initial proxy node 106 remains available.
  • At this point, the computer network 300 is preferably available for use; each non-proxy node 109 and/or 112 is preferably monitoring itself for node-specific events, and the proxy node 106 is preferably both monitoring itself for node-specific events and monitoring the computer network 300 for distributed events.
  • Step 415 is preferably a wait state for the network management station 103.
  • During this wait state, the computer network 300 is preferably operating as desired, transferring communications as requested, etc.
  • At step 415, the computer network 300 is preferably also being monitored for the addition of new nodes. Monitoring and notification of the presence of new nodes may be accomplished using a variety of methods. For example, as a new node is added to the computer network 300, the new node may be configured to transmit a signal to the existing nodes on the network indicating that it has been added.
  • Alternatively, the network management station 103 may be configured to periodically poll the computer network 300 to detect the presence of new nodes, detect missing nodes as well as to accomplish other network management goals.
  • Further, the current proxy node 106 or one of the current non-proxy nodes 109 and/or 112 may be configured to monitor the computer network 300 for the addition of new network nodes.
  • At step 415, method 400 is preferably also monitoring the availability of the current proxy node 106. According to teachings of the present invention, in the event the current proxy node 106 becomes unavailable, method 400 preferably initiates a new proxy node selection process generally as described below.
  • Monitoring the availability of the current proxy node 106 may be accomplished using a variety of processes. For example, once the current proxy node 106 has been selected, in addition to configuring the current proxy node 106 to report both distributed events and node-specific events, the current proxy node 106 may be configured to provide a heartbeat signal to the non-proxy nodes 109 and/or 112. In such an implementation, when one of the non-proxy nodes 109 and/or 112 ceases to receive the heartbeat signal from the current proxy node 106, that non-proxy node may verify the unavailability of the proxy node 106 and/or initiate the selection process for a replacement proxy node.
  • Alternatively, one or more of the non-proxy nodes 109 and/or 112 may be configured to periodically verify that the current proxy node 106 is available. In the event a non-proxy node 109 and/or 112 is unable to communicate with the current proxy node 106 or otherwise determines that the current proxy node 106 is unavailable, the process of selecting a replacement or new proxy node may be initiated by the discovering non-proxy node 109 and/or 112. A sketch of heartbeat-based monitoring follows.
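  • The watchdog below is a sketch of the heartbeat scheme only, with hypothetical names and no transport; in practice the proxy would push heartbeats over the communication network 115 and each non-proxy node would run a check such as this on a timer:

```python
import time

class ProxyWatchdog:
    """Non-proxy-side monitor for the proxy node's heartbeat signal."""

    def __init__(self, timeout_s: float, on_proxy_lost):
        self.timeout_s = timeout_s
        self.on_proxy_lost = on_proxy_lost  # e.g. start proxy selection
        self.last_beat = time.monotonic()

    def heartbeat_received(self) -> None:
        """Called whenever a heartbeat arrives from the proxy node."""
        self.last_beat = time.monotonic()

    def check(self) -> None:
        """Called periodically; fires the callback once the proxy is silent."""
        if time.monotonic() - self.last_beat > self.timeout_s:
            self.on_proxy_lost()

watchdog = ProxyWatchdog(timeout_s=5.0,
                         on_proxy_lost=lambda: print("initiate selection"))
watchdog.heartbeat_received()
watchdog.check()  # prints nothing: the heartbeat is still fresh
```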
  • Upon detection of a new network node or of the unavailability of the current proxy node 106, method 400 preferably proceeds to step 418.
  • At step 418, event generation in the computer network 300, i.e., in the network nodes 106, 109 and 112, is preferably stopped or paused.
  • Once event generation has been stopped or paused, method 400 preferably proceeds to step 421.
  • At step 421, the process of selecting a new or replacement proxy node may be initiated. The proxy node selection process may vary slightly at step 421 depending on whether the proxy node selection process was initiated in response to the addition of a new node to the computer network 300 or in response to the unavailability of the current proxy node 106.
  • In the case of node addition, a proxy node selection message is preferably sent to each new node added to the network at step 421.
  • For example, the current proxy node 106 may be responsible for initiating the exchange of proxy node selection messages with the new network nodes.
  • Alternatively, one of the non-proxy nodes 109 and/or 112 may be responsible for sending out the proxy node selection message to the new nodes and the remaining available nodes.
  • In a further embodiment, the network management station 103 may be responsible for initiating the proxy selection process, with the remaining steps of the proxy selection process preferably being performed by the available network nodes without additional input or support from the network management station 103.
  • In the case of proxy node unavailability, the proxy node selection messages are preferably exchanged by and between all of the non-proxy nodes 109 and/or 112 and/or all of the network nodes available on the communication network 115 at step 421.
  • For example, the non-proxy node 109 and/or 112 detecting or determining the unavailability of the current proxy node 106 may be responsible for initiating the exchange of proxy node selection messages between the appropriate non-proxy and network nodes.
  • In addition, the non-proxy node 109 and/or 112 initiating a new proxy node selection process may indicate the unavailability of the current proxy node 106 to the remaining non-proxy nodes 109 and/or 112 so that each may release its current proxy node selection setting.
  • Alternative methods of proxy node selection message initiation and generation are contemplated within the spirit and scope of the present invention.
  • Once the proxy node selection messages have been sent to the appropriate network nodes, both new and non-proxy, method 400 preferably proceeds to step 424.
  • At step 424, the available network nodes preferably wait for the proxy node selection messages from each of the other nodes participating in the proxy node selection process. For example, in the newly added network node scenario described above, the current proxy node 106 and the existing non-proxy nodes 109 and/or 112 will preferably wait for return proxy node selection messages from each of the newly added network nodes. Alternatively, if the current proxy node 106 is managing the proxy node selection process with the new network nodes, the current proxy node 106 may remain in wait at step 424.
  • Upon receipt of each return proxy node selection message, method 400 preferably proceeds to step 427 where a check is made to determine whether return proxy node selection messages have been received from all of the nodes participating in the proxy node selection process, for example, from all of the new network nodes. If it is determined that there are nodes from which a return proxy node selection message has not been received, method 400 preferably returns to step 424 where the remaining return proxy node selection messages may be awaited. If it is determined that all of the nodes participating in the proxy node selection process have returned a proxy node selection message, method 400 preferably proceeds to step 430. Alternatively, if method 400 has returned to step 424 to await additional return proxy node selection messages but no additional return proxy node selection messages are received within some defined time window, method 400 may proceed to step 430.
  • As mentioned above, the proxy node selection messages preferably include both information as to the source of the proxy node selection message and information as to the existing proxy node selection known to the source node, if any. For example, if proxy node selection was initiated in response to the addition of nodes to the computer network 300, each of the existing non-proxy nodes 109 and/or 112 and the proxy node 106 already on the computer network 300 should indicate an existing proxy node selection, i.e., the current proxy node 106.
  • Alternatively, if proxy node selection was initiated in response to the unavailability of the current proxy node 106, the return proxy node selection messages from the non-proxy nodes 109 and/or 112 participating in the new proxy node selection process may not indicate an existing proxy selection, e.g., each non-proxy node 109 and/or 112 may have released its existing proxy node selection setting in response to the knowledge that the current proxy node 106 has become unavailable.
  • If at step 430 no existing proxy node selection is indicated in the return proxy node selection messages, method 400 preferably proceeds to step 433 where a new proxy node may be selected from the nodes available on the computer network 300 according to a selection rule agreed upon by the nodes. Examples of such a rule include, but are not limited to, an Internet Protocol address based rule, a Fibre Channel node World Wide Name based rule and an earliest-time-stamp based rule using the time stamps preferably included in the proxy node selection messages, as mentioned above.
  • Upon selection of a new or replacement proxy node by agreed-upon rule at step 433, method 400 preferably proceeds to step 436 where event generation may be restarted according to the new arrangement of non-proxy nodes and the newly selected proxy node.
  • For example, the new proxy node may be configured to monitor and report both distributed and node-specific events and to monitor the network for new nodes, while the non-proxy nodes may be configured to report only node-specific events and to monitor the availability of the new proxy node.
  • Following step 436, method 400 preferably returns to step 415 where the addition of nodes to the network and the unavailability of the new proxy node are preferably monitored.
  • If at step 430 it is determined that one of the return proxy node selection messages contains an existing proxy selection, method 400 preferably proceeds to step 439.
  • At step 439, each of the nodes, or a managing node, e.g., the current proxy node 106 or the non-proxy node 109 and/or 112 detecting the unavailability of the current proxy node 106, in receipt of return proxy selection messages preferably determines whether the existing proxy node selections indicated in the return proxy node selection messages received from the other nodes are in conflict with or match one another.
  • If a conflict exists among the existing proxy node selections, method 400 preferably proceeds to step 433 where the participating network nodes use an agreed-upon rule for selecting a new proxy node generally as described above. Alternatively, if a single network node is evaluating whether there is a conflict among the existing proxy node selections, that network node may generate a message indicating such a conflict to the remaining participating network nodes and the need to proceed to step 433 for selection of a new proxy node by agreed-upon rule. If it is determined that there are no conflicts or that the existing proxy node selections indicated in the return proxy selection messages match one another, method 400 preferably proceeds to step 442.
  • If at step 442 a network node determines that it either does not have an existing proxy node selection or that its existing proxy node selection does not match or conflicts with the existing proxy node selection submitted by the remaining network nodes, method 400 preferably proceeds to step 445.
  • At step 445, the current network node adopts the existing proxy node selection submitted by the remaining network nodes such that all participating network nodes now recognize the same proxy node for the computer network 300.
  • Method 400 then preferably proceeds to step 436 where event generation, as described above, may be initiated. The selection-resolution logic of steps 430 through 445 is sketched below.
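  • The helper below condenses steps 430 through 445 into one resolution function; its name, the tuple representation of the return messages and the plain-string node identities are hypothetical:

```python
from typing import Callable, List, Optional, Tuple

def resolve_proxy_selection(
    returns: List[Tuple[str, Optional[str]]],
    rule: Callable[[List[str]], str],
) -> str:
    """Resolve return proxy node selection messages into one proxy.

    `returns` holds (source_node, existing_proxy_or_None) pairs;
    `rule` is the agreed-upon fallback rule applied at step 433.
    """
    existing = {proxy for _, proxy in returns if proxy is not None}
    if len(existing) == 1:      # step 439: existing selections match
        return existing.pop()   # step 445: every node adopts it
    # step 433: no selection known, or conflicting selections
    return rule([node for node, _ in returns])

# New-node scenario: existing nodes all report the current proxy.
assert resolve_proxy_selection(
    [("node-b", "node-a"), ("node-c", "node-a"), ("node-d", None)],
    rule=min) == "node-a"

# Proxy-loss scenario: selections were released, so the rule decides.
assert resolve_proxy_selection(
    [("node-b", None), ("node-c", None)], rule=min) == "node-b"
```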
  • Method 400 provides numerous advantages over existing distributed event handling methods.
  • One advantage of method 400 is that it does not require the involvement of the network management station 103 for purposes other than processing node-specific and distributed events, i.e., the network management station 103 is not needed for proxy selection or for ensuring proxy availability; thus the resources of the network management station 103 may be reserved for event handling and other significant network management processing.
  • In addition, method 400 reduces network traffic by preferably sending the network management station 103 only one copy of each distributed event.
  • As described above, methods 200 and 400 provide clear advantages over existing distributed event handling solutions.
  • One such advantage is the elimination or reduction of double-handling when the network management station 103 receives multiple copies of the same event.
  • Further, the methods described herein reduce the processing resources associated with double-handling, thereby freeing such resources for other processing or network management applications.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present disclosure details a system, apparatus and method for reducing the redundant handling of distributed network events. In one aspect, a proxy node is selected from a plurality of network nodes and an associated network management station (“NMS”) preferably addresses only the distributed events received from the proxy node. In an alternate embodiment, non-proxy nodes may be limited to reporting node-specific events to the NMS, resulting in a reduction of the number of distributed events received and processed by the NMS to those sent by the proxy node. The proxy node may be selected by the NMS or by the network nodes, in alternate implementations. Availability of the proxy node may be monitored and ensured by the network nodes or by the NMS. The selection of a proxy node is generally repeated upon the addition of nodes to the network or a lapse in proxy node availability.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is related to U.S. patent application Ser. No. 09/738,960 entitled “Caching System and Method for a Network Storage System” by Lin-Sheng Chiou, Mike Witkowski, Hawkins Yao, Cheh-Suei Yang, and Sompong Paul Olarig, which was filed on Dec. 14, 2000 and which is incorporated herein by reference in its entirety for all purposes; U.S. patent application Ser. No. 10/015,047 entitled “System, Apparatus and Method for Address Forwarding for a Computer Network” by Hawkins Yao, Cheh-Suei Yang, Richard Gunlock, Michael L. Witkowski, and Sompong Paul Olarig, which was filed on Oct. 26, 2001 and which is incorporated herein by reference in its entirety for all purposes; U.S. patent application Ser. No. 10/039,190 entitled “Network Processor Interface System” by Sompong Paul Olarig, Mark Lyndon Oelke, and John E. Jenne, which was filed on Dec. 31, 2001, and which is incorporated herein by reference in its entirety for all purposes; U.S. patent application Ser. No. 10/039,189 entitled “Xon/Xoff Flow Control for Computer Network” by Hawkins Yao, John E. Jenne, and Mark Lyndon Oelke, which was filed on Dec. 31, 2001, and which is incorporated herein by reference in its entirety for all purposes; and U.S. patent application Ser. No. 10/039,184 entitled “Buffer to Buffer Flow Control for Computer Network” by John E. Jenne, Mark Lyndon Oelke and Sompong Paul Olarig, which was filed on Dec. 31, 2001, and which is incorporated herein by reference in its entirety for all purposes. This application is also related to the following four U.S. patent applications: U.S. patent application Ser. No. 10/117,418 entitled “System and Method for Linking a Plurality of Network Switches,” by Ram Ganesan Iyer, Hawkins Yao and Michael Witkowski, filed on Apr. 5, 2002 and which is incorporated herein by reference in its entirety for all purposes; U.S. patent application Ser. No. 10/117,040 entitled “System and Method for Expansion of Computer Network Switching System Without Disruption Thereof,” by Mark Lyndon Oelke, John E. Jenne, Sompong Paul Olarig, Gary Benedict Kotzur and Matthew John Schumacher, filed on Apr. 5, 2002 and which is incorporated herein by reference in its entirety for all purposes; U.S. patent application Ser. No. 10/117,266 entitled “System and Method for Guaranteed Link Layer Flow Control,” by Hani Ajus and Chung Dai, filed on Apr. 5, 2002 and which is incorporated herein by reference in its entirety for all purposes; U.S. patent application Ser. No. 10/117,638 entitled “Fibre Channel Implementation Using Network Processors,” by Hawkins Yao, Richard Gunlock and Po-Wei Tan, filed on Apr. 5, 2002 and which is incorporated herein by reference in its entirety for all purposes. This application is a divisional of U.S. patent application Ser. No. 10/117,290 entitled “Method and System For Reduced Distributed Event Handling In A Network Environment,” by Ruotao Huang and Ram Ganesan Iyer, filed on Apr. 5, 2002. The contents of these applications are incorporated herein in their entirety by this reference.
  • BACKGROUND
  • 1. Technical Field of the Invention
  • The present application is related to computer networks. More specifically, the present application is related to a system, apparatus and method for handling multiple instances of events while avoiding duplication of work in a distributed network environment.
  • 2. Background of the Invention
  • The distributed nature of computer networks presents various challenges for their centralized management. One such challenge is event or alarm management and processing. In a typical network environment, distributed network nodes notify the network's central network management server software application of any changes in the state of the network or of individual nodes. In general, the network management application or software may be run on one network server or simultaneously on a plurality of such servers. Such network management applications typically represent a single-point management interface for network administration.
  • Among the events or alarms typically monitored in a distributed network are distributed and node-specific events. In general, distributed events are those events that may affect the network as a whole. One example of a distributed event is the removal of a device port's entry from an associated Distributed Name Server. Such an event is considered a distributed event because it affects the Distributed Name Server on all of the network's Fibre Channel switches, for example.
  • Node-specific events, on the other hand, are typically concerned only with the state of an individual node. One example of a node-specific event is a FAN_FAILURE alarm. A FAN_FAILURE alarm is considered a node-specific event because it does not generally affect any nodes in the network other than the node where it originates.
  • Network management difficulties arise when the same distributed event is sent to the network management application by multiple nodes. If the network management application handles or processes each instance of the reported event without distinguishing whether each event is a different event or multiple copies of the same event, the network management application may suffer performance degradation resulting from double-handling, i.e., the repeated processing or addressing of the same events. Double-handling is typically most dangerous in situations where the network management application handles or processes events based on certain assumptions regarding the current state of the computer network. In other words, when the network management application receives a subsequent copy of the same event, the state of the network may have already been changed as a result of the network management application's handling of the previously reported event. At a minimum, double-handling consumes resources as the network management application attempts to repeatedly handle or process the same event.
  • Attempts to resolve the issue of double-handling include giving the multiple copies of the same event the same identity tag. In such an implementation, when the network management application receives notification of events, the network management application will begin by examining the identity tags. By examining the identity tags, the network management application can group those events with the same identity tags together, thereby enabling the network management application to handle or process the same event only once.
  • In reality, however, identity tags are impractical to implement. In one aspect, the need for the nodes to communicate with each other to agree on the identity tag every time they are going to send a notice of an event results in excessive network overhead. In a further aspect, the network management application generally has to keep a history of all the received tags in order to perform tag association.
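  • As a minimal illustration of this prior-art tag-grouping approach (the data shapes and names below are hypothetical):

```python
def handle_once_by_tag(events):
    """Group event copies by identity tag and handle each tag once.

    Sketches the approach criticized above: it presumes every node
    has already agreed on a shared tag per event, and the manager
    must retain a history of all tags it has seen.
    """
    seen_tags = set()   # the tag history the application must keep
    handled = []
    for tag, payload in events:
        if tag not in seen_tags:
            seen_tags.add(tag)
            handled.append(payload)  # process only the first copy
    return handled

events = [("evt-42", "port entry removed (reported by switch A)"),
          ("evt-42", "port entry removed (reported by switch B)"),
          ("evt-43", "fan failure on switch B")]
assert len(handle_once_by_tag(events)) == 2  # evt-42 handled once
```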
  • SUMMARY OF THE INVENTION
  • The present invention overcomes the above-identified problems as well as other shortcomings and deficiencies by providing a system, apparatus and method for reducing the double-handling of distributed event messages in a computer network environment. In a primary aspect of the present invention, distributed event handling may be reduced by maintaining the availability of a proxy node that is responsible for reporting the distributed events to a network management station (“NMS”).
  • The present invention provides the technical advantage of properly handling multiple instances of the same event received from the network nodes in a distributed network environment without double-handling while at the same time being able to receive and handle events unique to each individual network node.
  • The present invention further provides technical advantages through the reduction of instances of double-handling, which simultaneously reduces usage of the network's processing resources. In one embodiment, network resource usage may be reduced by sending only one copy of each distributed event to the network management station (“NMS”) and its associated applications for processing.
  • A further technical advantage provided by the present invention stems from the distributed event handling that is performed primarily by the network management station, eliminating processing efforts from network nodes. Such elimination is invaluable when using the network management station to monitor and manage the networks of third-party nodes.
  • In another respect, the present invention provides the advantage of reducing distributed event double-handling without consuming network management resources by pushing the elimination of redundant distributed event messages down to the network nodes.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • A more complete understanding of the present embodiments and advantages thereof may be acquired by referring to the following description taken in conjunction with the accompanying drawings, in which like reference numbers indicate like features, and wherein:
  • FIG. 1 is a schematic drawing depicting a computer network formed in accordance with teachings of the present invention;
  • FIG. 2 is a flow diagram depicting a method for reducing the repeated handling of distributed network events according to teachings of the present invention;
  • FIG. 3 is a schematic diagram depicting an alternate embodiment of a computer network formed in accordance with teachings of the present invention; and
  • FIG. 4 is a flow diagram depicting a method for reducing distributed event messaging through the maintenance of a proxy node by and among the network nodes, according to teachings of the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • Preferred embodiments of the present invention and its advantages are best understood by referring to FIGS. 1 through 4 of the drawings, like numerals being used for like and corresponding parts of the various drawings.
  • Illustrated in FIG. 1 is an exemplary embodiment of a computer network incorporating teachings of the present invention. The computer network, indicated generally at 100, preferably includes one or more network management stations 103. The network management station 103 may be any computing device capable of performing the methods described herein as well as capable of communicating with nodes 106, 109 and 112 or the like via communication network 115.
  • In one embodiment, the network management station 103 may include a monitor or display 118, a central processing unit 121 and one or more user input devices (not expressly shown). Examples of user input devices include, but are not limited to, a computer mouse, keyboard, touchscreen, voice recognition hardware and software, as well as other input devices. The central processing unit 121 may take many forms. For example, the central processing unit 121 may be a mainframe computer, a server computer, a desktop computer, a laptop computer, an application blade or any other computer device capable of responding to event messages generated and communicated by the network nodes 106, 109, and 112 as well as monitoring, repairing or otherwise managing computer network 100.
  • The network management station 103 preferably operates one or more network management applications in order to maximize the uptime, effectiveness, utilization and other operational characteristics of the computer network 100. In a network consisting of distributed network nodes, such as computer network 100, such network management applications are typically employed on a central network management station to provide a single point of network management for network administration.
  • Network management applications are typically operable to monitor, configure, test or otherwise manage generally all aspects of their associated computer networks and the computing components coupled to those networks. For example, a network management application may be configured to detect the addition of network nodes to the computer network. Further, the network management application may be able to detect the availability of individual network nodes or other devices coupled to the computer network. Preferably, the network management application is able to address event messages, distributed or node-specific, generated by the network nodes as well as to perform other network management functions and operations. A network management application may include a single software application or a plurality of software applications cooperating to achieve various network management functions.
  • The communication network 115 may include such network configurations as a local area network (“LAN”), wide area network (“WAN”), metropolitan area network (“MAN”), storage area network (“SAN”), their substantial equivalents or any combination of these and/or other network configurations. In addition, the communication network 115 may use physical or wireline communication protocols and media including, but not limited to, metal wires and cables made of copper or aluminum, fiber-optic lines, and cables constructed of other metals or composite materials satisfactory for carrying electromagnetic signals, electrical power lines, electrical power distribution systems, building electrical wiring, conventional telephone lines, coaxial cable, Ethernet, Gigabit Ethernet, Token Ring and Fibre Channel. Further, the communication network 115 may also use wireless communication schemes including, but not limited to, Bluetooth, IEEE 802.11b, infra-red, laser and radio frequency, including the 800 MHz, 900 MHz, 1.9 GHz and 2.4 GHz bands, in addition to or in lieu of one or more wireline communication schemes.
  • As illustrated in FIG. 1, a plurality of network nodes 106, 109 and 112 are preferably communicatively coupled to the communication network 115. The network nodes 106, 109 and 112 may be implemented using a variety of computing components. In general, each of the network nodes 106, 109 and 112 preferably includes at least one processor, and a memory and communication interface operably coupled to the processor (not expressly shown). Examples of network node devices suitable for use with the present invention include, but are not limited to, servers, mainframes, laptops, switches, routers, bridges, hubs, application blades or the like. The network nodes 106, 109 and 112 in a given computer network may include like devices or a variety of different devices.
  • In one embodiment of the present invention, the network nodes 106, 109 and 112 are preferably application blades, where an application blade may be defined as any electronic device that is able to perform one or more functions. For example, an application blade may be a peripheral card that is connected to a server or other device that is coupled to a switch. Other examples of application blades include, but are not limited to: remote computing devices communicatively coupled to the communication network 115 by a network connection; software processes running virtually on a single or multiprocessing system and/or single or multithreading processor; electronic appliances with specific functionality; or the like.
  • In a typical computer network configuration, the network nodes coupled thereto generally report node-specific events, i.e., events generally affecting only the reporting node, and distributed events, i.e., events generally affecting the whole of the computer network, as they are detected, observed or otherwise become known to a network node. Arrows 124, 127, and 130 indicate generally the reporting of all events detected by the network nodes 106, 109, and 112, i.e., the reporting of both node-specific events and distributed events. As a result, repeated messages regarding the same distributed event are often reported to the network management station by a plurality, if not all, of the reporting-enabled network nodes. The methods of the present invention reduce or eliminate the potentially redundant handling of repeated distributed event messages by recognizing, from the plurality of network nodes, a proxy node that is responsible for reporting distributed events.
  • As shown in FIG. 1, the network node 106 may be designated as the proxy node for the computer network 100. In general operation, when the network management station 103 receives a distributed event message from the communication network 115, it preferably interrogates or otherwise identifies the source of the distributed event message to determine whether the distributed event message was originated or sent by the proxy node, illustrated in FIG. 1 as network node 106. If the network management station 103 determines that the distributed event message received was generated by or originated from the proxy node 106, then the network management station 103 preferably handles, processes or otherwise addresses the substance of the event, e.g., removal of a device port's entry from an associated Distributed Name Server. Alternatively, if the network management station 103 determines that the distributed event message was sent by a non-proxy node, such as network node 109 and/or 112, then the network management station 103 preferably further interrogates the distributed event message to determine whether the distributed event needs to be addressed by the network management station 103 or whether the distributed event message can be discarded, delegated or otherwise left unprocessed. All node-specific event messages from all of the reporting network nodes 106, 109, and 112 are preferably handled, processed or otherwise addressed by the network management station 103. Additional detail regarding the operational aspects of the present invention will be discussed below with reference to FIG. 2.
  • Referring now to FIG. 2, a flow diagram illustrating a network management station based method of reducing or eliminating the repeated handling of distributed network event messages is shown, according to teachings of the present invention. Prior to the initiation of method 200, the network management station 103 preferably selects or designates one of its associated network nodes 106, 109, and 112 to initially serve as the proxy node. The initial selection of a proxy node may be made at random, according to a network address or according to a wide variety of other proxy node selection methods. Once a proxy node has been selected, the network management station 103 notes or stores the proxy node's identity, e.g., its network address, network interface card identifier, etc., for later comparison.
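  • By way of illustration only, the initial proxy designation described above might be sketched as follows. This is a minimal sketch, not part of the original disclosure: the NetworkNode record, its fields and the lowest-address rule are assumptions chosen for the example; the disclosure leaves the selection method open (random choice, network address or otherwise).

```python
# Minimal sketch (assumed types and rule): the network management station
# designates an initial proxy node and stores its identity for comparison.
from dataclasses import dataclass

@dataclass(frozen=True)
class NetworkNode:
    address: str   # e.g., an IP or Fibre Channel address
    nic_id: str    # unique identifier of the node's network interface card

def designate_initial_proxy(nodes: list[NetworkNode]) -> NetworkNode:
    # One possible rule: the node with the lowest network address wins.
    return min(nodes, key=lambda n: n.address)

nodes = [NetworkNode("10.0.0.12", "aa:01"), NetworkNode("10.0.0.9", "aa:02")]
proxy_identity = designate_initial_proxy(nodes)  # noted for later comparison
```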
  • Method 200 preferably begins at step 203 where the network management station 103 is in receipt of an event message from the communication network 115. Upon receipt of the event message, method 200 preferably proceeds to step 206 where the event message may be evaluated to determine whether it contains a node-specific event or a distributed event.
  • If at step 206 the network management station 103 determines that the received event message contains a node-specific event, method 200 preferably proceeds to step 209. At step 209, the network management station 103 preferably addresses the node-specific event according to one or more network management settings. For example, if the node-specific event indicates that a cooling fan has failed at the network node reporting the node-specific event, the network management station 103 may generate an electronic message notifying a technician that the fan at the reporting or source network node needs maintenance. Alternatively, if the node-specific event indicates that an application on the reporting node is corrupt, or otherwise in need of repair, the network management station 103 may initiate a reinstall or software download and update routine to repair the corrupt application. Other methods of addressing, processing or otherwise handling node-specific events are contemplated within the spirit and scope of the present invention.
  • Once the network management station 103 has addressed, or initiated a response to, the node-specific event, method 200 preferably proceeds to step 212 where the network management station 103 may await receipt of the next event message before returning to step 203. In a further embodiment, the network management station 103 may verify that addressed or processed events were actually corrected and may reinitiate processing if the event persists.
  • If at step 206 the review of the received event message indicates that the event message pertains to a distributed event, method 200 preferably proceeds to step 215. At step 215 the network management station 103 may identify the address of origination for the distributed event message or otherwise identify the node from which the distributed event message was received or from which network node the distributed event message originated.
  • A variety of methods may be employed to identify the network node from which the distributed event message originated. Such methods include, but are not limited to, parsing a header included with the event message to obtain the network, Internet Protocol or Fibre Channel address or other unique identifier of the sending network node. Another method of originating node identification may include parsing the distributed event message or a header associated with the distributed event message to locate and/or identify a unique identifier associated with the sending or originating node's network communication device, such as a network interface card. Additional methods of identifying the source or origination of a received distributed event message are contemplated within the spirit and scope of the present invention.
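  • A minimal sketch of this source identification step is shown below, assuming a simple dictionary-based message layout purely for illustration; a real implementation would parse the header of the particular transport in use, e.g., an IP packet or Fibre Channel frame.

```python
# Minimal sketch (assumed message layout): recover the originating node's
# unique identifier from an event message header.
def originator_of(event_message: dict) -> str:
    header = event_message.get("header", {})
    # Prefer an explicit source address; fall back to the network interface
    # card identifier if no address is present.
    return header.get("source_address") or header.get("nic_id", "unknown")

msg = {"header": {"source_address": "10.0.0.9"}, "body": "name server change"}
assert originator_of(msg) == "10.0.0.9"
```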
  • Once the network management station 103 has obtained the information required to determine the originator or sender of the distributed event message, method 200 preferably proceeds to step 218. At step 218, the network management station 103 preferably determines whether the network node which sent the distributed event message is the proxy node 106 for the computer network 100 or whether the distributed event message originated from a non-proxy node 109 and/or 112. To determine whether the proxy node 106 or a non-proxy node 109 and/or 112 originated or sent the distributed event message, the network management station 103 may compare the sending address, the address of origination, or other unique identifier obtained from the distributed event message with stored information identifying the computer network's 100 proxy node selection. Alternative methods such as eliminating the non-proxy nodes 109 and/or 112 as being the sender of the distributed event message may also be employed.
  • If the network management station 103 determines that the distributed event was originated or sent by the proxy node 106, method 200 preferably proceeds to step 221. At step 221, the network management station 103 preferably initiates one or more routines to resolve the issue reported in the distributed event. As mentioned above, many network management station 103 settings or network management settings may be configured and used to address or process the various sorts of distributed events that may occur in the computer network 100. For example, a technician may be notified of needed repairs via an electronic communication generated by the network management station 103, or the network management station 103 may initiate a software routine directed at resolving the issue reported in the distributed event. Alternative network management station 103 settings and methods aimed at resolving distributed events are contemplated within the spirit and scope of the present invention.
  • Once the network management station 103 has addressed the content of a distributed event, method 200 preferably proceeds to step 212 where the network management station 103 may await receipt of the next event message before returning to step 203.
  • If at step 218 the network management station 103 determines that the distributed event message received originated or was sent by a non-proxy node 109 and/or 112, method 200 preferably proceeds to step 224. At step 224, the network management station 103 may access or otherwise evaluate the contents of the distributed event message to determine the issue being reported by the distributed event message. In a preferred embodiment, the network management station 103 preferably interrogates the distributed event messages received from a non-proxy node 109 and/or 112 to determine if the distributed event issue indicates a problem or a change associated with the proxy node 106. For example, the network management station 103 may wish to determine if the distributed event message received from a non-proxy node 109 and/or 112 indicates that the proxy node 106 has been removed from the communication network 115, that the proxy node's 106 identifier, e.g., network address, has changed, or that the proxy node 106 is otherwise unavailable.
  • Once the network management station 103 has accessed or interrogated the contents of the distributed event message originated or sent by a non-proxy node 109 and/or 112, method 200 preferably proceeds to step 227. At step 227, the network management station 103 may determine whether the distributed event message originated by a non-proxy node 109 and/or 112 indicates that the proxy node 106 has been removed or hidden from the network. If the distributed event message received from the non-proxy node 109 and/or 112 indicates that the proxy node 106 has been removed from the communication network 115, method 200 preferably proceeds to step 230.
  • At step 230, the network management station 103 preferably reassigns proxy status from the unavailable proxy node to the non-proxy node 109 and/or 112 that sent or originated the distributed event message being processed. For example, if the distributed event indicating that the proxy node 106 has been removed from the communication network 115 originated or was sent by non-proxy node 109, the network management station 103 may select or designate non-proxy node 109 as the new proxy node for the other non-proxy nodes in the computer network 100. Before reassigning proxy status, network management station 103 may be configured to execute one or more attempts to bring an unavailable proxy node back on line or to otherwise make an unavailable proxy node available again.
  • In an alternate implementation of method 200, the network management station 103 may initiate a routine to designate a non-proxy node 109 and/or 112 that will replace an unavailable proxy node. Other methods and implementations of designating a replacement proxy node are contemplated within the spirit and scope of the present invention.
  • Once the network management station 103 has addressed the removed or unavailable proxy node issue at step 230, method 200 preferably proceeds to step 212 where the network management station 103 may await receipt of the next event message before returning to step 203.
  • If at step 227 the network management station 103 determines that the contents of the distributed event message originated or sent by a non-proxy node 109 and/or 112 indicate a problem other than the removal of the proxy node 106 from the communication network 115, method 200 preferably proceeds to step 233. At step 233, the network management station 103 preferably further evaluates the contents of the distributed event message to determine whether the distributed event message indicates that the address of the proxy node 106 has been altered or otherwise changed.
  • In one example, the address of the proxy node 106 may be defined as a unique identifier for the proxy node 106 used by the network management station 103. Examples of such unique identifiers include, but are not limited to, the host/Internet Protocol (“IP”) address in an IP network or the Fibre Channel address of the proxy node 106 in a Fibre Channel network. Thus, if the IP address of the proxy node 106 is used by the network management station 103 and a distributed event message from a non-proxy node 109 and/or 112 informs the network management station 103 that the proxy node's 106 IP address has changed, then the network management station 103 may update its proxy node 106 address with the new value.
  • If the distributed event message originated or sent by a non-proxy node 109 and/or 112 indicates that the address of the proxy node 106 has been changed, method 200 preferably proceeds to step 236. At step 236, the network management station 103 preferably updates a stored address for the proxy node 106 with the address reported in the distributed event message originated by the non-proxy node 109 and/or 112. Alternative implementations of updating the network address of the proxy node 106, including, but not limited to, the network management station 103 verifying or otherwise obtaining the new network address for the proxy node 106 on its own, are contemplated within the spirit and scope of the present invention.
  • Once the network management station 103 has addressed the non-proxy node 109 and/or 112 originated distributed event message at step 236, method 200 preferably proceeds to step 212 where the network management station 103 may await receipt of the next event message before returning to step 203.
  • If at step 233 the network management station 103 determines that the distributed event message originated or sent by a non-proxy node 109 and/or 112 does not indicate that the address of the proxy node 106 has been changed, method 200 preferably proceeds to step 239. At step 239 the distributed event message originated or sent by a non-proxy node 109 and/or 112 may be discarded by the network management station 103. Method 200 may be modified such that distributed event messages originated or sent by a non-proxy node 109 and/or 112 are discarded only after the network management station 103 determines that the distributed event messages have or will have no effect on the proxy node 106 if not addressed. The network management station 103 may also be configured to delegate distributed event handling where the distributed event message does not affect the proxy node 106.
  • Once the network management station 103 has addressed the distributed event message originated or sent by a non-proxy node at step 239, method 200 preferably proceeds to step 212 where the network management station 103 may await receipt of the next event message before returning to step 203.
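  • Pulling steps 206 through 239 together, the dispatch logic of method 200 might be sketched as follows. The sketch is illustrative only; the message fields, report labels and return values are assumptions, not part of the disclosure.

```python
# Minimal sketch (assumed message shape) of the method-200 dispatch loop.
class NetworkManagementStation:
    def __init__(self, proxy_identity: str):
        self.proxy_identity = proxy_identity    # stored at proxy designation

    def dispatch(self, msg: dict) -> str:
        if msg["kind"] == "node-specific":              # step 206
            return "handled: node-specific event"       # step 209
        sender = msg["header"]["source_address"]        # step 215
        if sender == self.proxy_identity:               # step 218
            return "handled: distributed event"         # step 221
        report = msg.get("reports")                     # step 224
        if report == "proxy-removed":                   # step 227
            self.proxy_identity = sender                # step 230: reassign
            return "proxy status reassigned"
        if report == "proxy-address-changed":           # step 233
            self.proxy_identity = msg["new_address"]    # step 236: update
            return "proxy address updated"
        return "discarded"                              # step 239

nms = NetworkManagementStation(proxy_identity="10.0.0.9")
nms.dispatch({"kind": "distributed",
              "header": {"source_address": "10.0.0.12"},
              "reports": "proxy-removed"})   # -> "proxy status reassigned"
```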
  • As described, method 200 provides numerous advantages over existing distributed event handling methods. One such advantage is that method 200 does not require the network nodes 106, 109 and 112 to take part in the proxy node selection process; for computer networks composed of heterogeneous network nodes, such network node participation may be impractical to implement. An additional advantage of method 200 is that, compared to network management systems that only monitor and handle events from an individual node in the network, method 200 will not miss distributed events even when the monitored network node is not the proxy node.
  • Illustrated in FIG. 3 is an exemplary embodiment of a computer network, similar to computer network 100, incorporating teachings of the present invention. Among the similarities between computer network 100, illustrated in FIG. 1, and computer network 300, are network management station 103, network nodes 106, 109 and 112 and communication network 115. Other similarities, such as the monitor 118 and the central processing unit 121 of the network management station 103, are also present.
  • Computer network 300 differs, however, from the computer network 100 illustrated in FIG. 1 in its implementation of distributed event handling reduction. Specifically, the computer network 300 illustrated in FIG. 3 preferably implements method 400, illustrated in FIG. 4, to reduce distributed event reporting to the network management station 103.
  • In general, method 400, as described in greater detail below, preferably enables the network nodes 106, 109 and 112 to select by and among themselves a proxy node, e.g., network node 106, as indicated generally at arrows 303, 306 and 312. Upon doing so, the proxy node 106 is preferably enabled to report both distributed events and node-specific events to the network management station 103, as indicated generally by arrow 124. Network or non-proxy nodes 109 and/or 112, on the other hand, are preferably configured to report only node-specific events, as indicated generally by arrows 303 and 306, so long as the proxy node 106 remains on line or available. As a result, the proxy node 106 is the primary network node responsible for reporting distributed events to the network management station 103 while it is available. Such a configuration reduces network traffic and the double-handling of distributed events relative to other reporting methods. As will be discussed in greater detail below, should the proxy node 106 become unavailable, the non-proxy nodes 109 and/or 112 will preferably select a new proxy node and continue operation of the computer network 300 according to method 400.
  • The network distributed event handling method of FIG. 4 generally functions by placing the intelligence for avoiding double-handling into the network nodes 106, 109 and 112. In method 400 generally, one of the network nodes 106, 109 and 112 is agreed upon as the proxy node by all the network nodes participating in proxy node selection or available on the communication network 115. Instead of the network management station 103 designating a proxy node, all the network nodes 106, 109 and 112 are participants in the proxy node selection process. Once a proxy node has been selected, only the proxy node will report distributed events. Both the proxy node and the non-proxy nodes, on the other hand, are preferably configured to report, message or send out node-specific events to the network management station 103.
  • Depicted in FIG. 4 is a method for selecting a proxy node from a plurality of network nodes wherein the proxy node selection is made entirely by the network nodes themselves, according to teachings of the present invention. For a newly established, newly started or otherwise initial run of a computer network, method 400 begins generally at step 403. Method 400 may be modified to operate on a newly configured computer network or an existing computer network such that method 400 maintains operation of the computer network according to teachings of the present invention.
  • Upon initiation of the network at step 403, method 400 preferably proceeds to step 406. At step 406, assuming for purposes of description that method 400 is being implemented on a newly established computer network, a proxy node selection message is preferably sent by and between all of the network nodes 106, 109 and 112 in the computer network 300. The proxy node selection message transmission may be initiated and sent by a selected one of the network nodes 106, 109 and 112 by design, e.g., a network administrator may designate a node to initiate proxy node selection, or the first node to detect an event to be reported may be the node to initiate proxy node selection. Preferably included in each proxy node selection message are both information pertaining to the source node of the proxy node selection message and the existing proxy node selection known to the source node, if any is so known.
  • Once a proxy node selection message has been transmitted by and between each of the network nodes 106, 109 and 112 participating in the proxy node selection process, the network nodes 106, 109 and 112 may begin the proxy node selection process at step 409. The embodiment of method 400 illustrated in FIG. 4 assumes that new network nodes do not have an existing proxy node selection available to them. However, using the teachings below regarding network nodes having an existing proxy node selection available to them, method 400 may be altered or modified such that a newly established network utilizes the existing proxy node selections available.
  • At step 409, a proxy node selection is made by the new network nodes using an agreed upon selection rule. For example, the agreed upon selection rule may be derived from the physical layout of the network nodes and their associated communication network. Alternatively, the agreed upon selection rule may select a proxy node based on the IP address of the nodes, a Fibre Channel network address or on time stamps associated with the exchanged proxy node selection messages. The proxy node selection rule may be established at the deployment of method 400 by a network administrator, for example. Additional proxy node selection rules are contemplated within the spirit and scope of the present invention.
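  • A minimal sketch of one such agreed upon rule follows, assuming for illustration that each exchanged selection message carries the sender's address and a timestamp; because every node applies the same deterministic comparison, every node arrives at the same proxy selection.

```python
# Minimal sketch (assumed message fields) of an agreed upon selection rule:
# earliest timestamp wins, lowest address breaks ties, so all participating
# nodes deterministically converge on the same proxy node.
def select_proxy(selection_messages: list[dict]) -> dict:
    return min(selection_messages,
               key=lambda m: (m["timestamp"], m["address"]))

messages = [
    {"address": "10.0.0.12", "timestamp": 1712000400},
    {"address": "10.0.0.9",  "timestamp": 1712000400},
]
assert select_proxy(messages)["address"] == "10.0.0.9"
```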
  • Once a current or initial proxy node for the computer network has been selected according to the agreed upon proxy node selection rule at step 409, method 400 preferably proceeds to step 412. At step 412, event detection and generation or monitoring may be initiated in the current proxy node 106 and the non-proxy nodes 109 and/or 112, i.e., those network nodes not currently designated as the proxy node.
  • According to teachings of the present invention and method 400, the current proxy node 106 and non-proxy nodes 109 and/or 112 are preferably configured for event detection, generation and monitoring differently. Specifically, the current proxy node 106 is preferably configured to detect and report both distributed events and node-specific events. The current or initial non-proxy nodes 109 and/or 112 are preferably configured only to detect and report node-specific events, so long as the current or initial proxy node 106 remains available. At this point, in any event, the computer network 300 is preferably available for use and each non-proxy node 109 and/or 112 is preferably monitoring itself for node-specific events and the proxy node 106 is preferably both monitoring itself for node-specific events and monitoring the computer network 300 for distributed events.
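  • The differing event-reporting configurations of proxy and non-proxy nodes can be captured in a single predicate, sketched below with assumed labels for the two event classes.

```python
# Minimal sketch (assumed event labels): gate reporting on proxy status.
def should_report(is_proxy: bool, event_kind: str) -> bool:
    # Every node reports its own node-specific events; only the proxy node
    # additionally reports distributed events while it remains available.
    return event_kind == "node-specific" or is_proxy

assert should_report(True, "distributed") is True
assert should_report(False, "distributed") is False
assert should_report(False, "node-specific") is True
```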
  • From step 412, method 400 preferably proceeds to step 415 which is preferably a wait state for the network management station 103. At step 415, the computer network 300 is preferably operating as desired, transferring communications as requested, etc. In addition, the computer network 300 is preferably being monitored for the addition of new nodes. Monitoring and notification of the presence of new nodes may be accomplished using a variety of methods. For example, as a new node is added to the computer network 300, the new node may be configured to transmit a signal to the existing nodes on the network that it has been added. Alternatively, the network management station 103 may be configured to periodically poll the computer network 300 to detect the presence of new nodes, detect missing nodes as well as to accomplish other network management goals. In yet another example, the current proxy node 106 or one of the current non-proxy nodes 109 and/or 112 may be configured to monitor the computer network 300 for the addition of new network nodes.
  • In addition to monitoring the computer network 300 for new network nodes at step 415, method 400 is preferably also monitoring the availability of the current proxy node 106. According to teachings of the present invention, in the event the current proxy node 106 becomes unavailable, method 400 preferably initiates a new proxy node selection process generally as described below.
  • Monitoring the availability of the current proxy node 106 may be accomplished using a variety of processes. For example, once the current proxy node 106 has been selected, in addition to configuring the current proxy node 106 to report both distributed events and node-specific events, the current proxy node 106 may be configured such that it provides a heartbeat signal to the non-proxy nodes 109 and/or 112. In such an implementation, when one of the non-proxy nodes 109 and/or 112 ceases to receive the heartbeat signal from the current proxy node 106, the non-proxy node 109 and/or 112 may verify the unavailability of the proxy node 106 and/or initiate the selection process for a replacement proxy node. In an alternate implementation, one or more of the non-proxy nodes 109 and/or 112 may be configured to periodically verify that the current proxy node 106 is available. In the event a non-proxy node 109 and/or 112 is unable to communicate with the current proxy node 106 or otherwise determines that the current proxy node 106 is unavailable, the process of selecting a replacement or new proxy node may be initiated by the discovering non-proxy node 109 and/or 112.
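  • The heartbeat-based availability check described above might be sketched as follows; the timeout value and the callback that launches re-selection are assumptions made for illustration.

```python
# Minimal sketch (assumed timeout and callback): a non-proxy node watches
# for the proxy node's heartbeat and triggers re-selection when it lapses.
import time

class HeartbeatWatchdog:
    def __init__(self, timeout_s: float, on_proxy_lost):
        self.timeout_s = timeout_s
        self.on_proxy_lost = on_proxy_lost    # e.g., begin proxy re-selection
        self.last_beat = time.monotonic()

    def beat(self) -> None:
        # Invoked whenever a heartbeat arrives from the current proxy node.
        self.last_beat = time.monotonic()

    def check(self) -> None:
        # Polled periodically by the non-proxy node.
        if time.monotonic() - self.last_beat > self.timeout_s:
            self.on_proxy_lost()
```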
  • In the event a new network node is added to or detected on the computer network 300 or in the event the current proxy node 106 has been determined to be unavailable, method 400 preferably proceeds to step 418. At step 418, event generation in the computer network 300, i.e., in the network nodes 106, 109 and 112, is preferably stopped or paused. Once event generation has been stopped or paused, method 400 preferably proceeds to step 421. At step 421, the process of selecting a new or replacement proxy node may be initiated. The proxy node selection process may vary slightly at step 421 depending on whether the proxy node selection process was initiated in response to the addition of a new node to the computer network 300 or in response to the unavailability of the current proxy node 106.
  • In response to the addition of a new node to the computer network 300, a proxy node selection message is preferably sent to each new node added to the network at step 421. In an exemplary embodiment of the present invention, the current proxy node 106 may be responsible for initiating the exchange of proxy node selection messages with the new network nodes. In the event that new nodes have been added to the computer network 300 while the current proxy node 106 is unavailable, one of the non-proxy nodes 109 and/or 112 may be responsible for sending out the proxy node selection message to the new nodes and the remaining available nodes. Alternatively, in such an event, the network management station 103 may be responsible for initiating the proxy selection process, with the remaining steps of the proxy selection process preferably being performed by the available network nodes, without additional input or support from the network management station 103.
  • Alternatively, if proxy node selection messages are being sent in response to the unavailability of the current proxy node 106, the proxy node selection messages are preferably exchanged by and between all of the non-proxy nodes 109 and/or 112 and/or all of the network nodes available on the communications network 115 at step 421. In an exemplary embodiment, the non-proxy node 109 and/or 112 detecting and/or determining the unavailability of the current proxy node 106 may be responsible for initiating the exchange of proxy node selection messages between the appropriate non-proxy and network nodes. In addition, in such an event, the non-proxy node 109 and/or 112 initiating a new proxy node selection process may indicate the unavailability of the current proxy node 106 to the remaining non-proxy nodes 109 and/or 112 such that each may release their current proxy node selection setting. Alternative implementations of proxy node selection message initiation and generation are contemplated within the spirit and scope of the present invention.
  • Once the proxy node selection messages have been exchanged by and between the appropriate network nodes, both new and non-proxy, method 400 preferably proceeds to step 424. At step 424, the available network nodes preferably wait for the proxy node selection messages from each of the other nodes participating in the proxy node selection process. For example, in the newly added network node scenario described above, the current proxy node 106 and the existing non-proxy nodes 109 and/or 112 will preferably wait for return proxy node selection messages from each of the newly added network nodes. Alternatively, if the current proxy node 106 is managing the proxy node selection process with the new network nodes, the current proxy node 106 may remain in wait at step 424.
  • Upon receipt of each return proxy node selection message, method 400 preferably proceeds to step 427 where a check is made to determine if the returning proxy node selection messages have been received from all of the nodes participating in the proxy node selection process, for example, from all of the new network nodes. If it is determined that there are nodes from which a return proxy node selection message has not been received, method 400 preferably returns to step 424 where the remaining return proxy node selection messages may be awaited. If it is determined that all of the nodes participating in the proxy node selection process have returned a proxy node selection message, method 400 preferably proceeds to step 430. Alternatively, if method 400 has returned to step 424 to await additional return proxy node selection messages but no additional return proxy node selection messages are received within some defined time window, method 400 may proceed to step 430.
  • At step 430, a determination is made as to whether any of the return proxy node selection messages contain an existing proxy node selection. As mentioned above, the proxy node selection messages preferably include both information as to the source of the proxy node selection message and information as to the existing proxy node selection known to the source node, if any. For example, if proxy node selection was initiated in response to the addition of nodes to the computer network 300, each of the existing non-proxy nodes 109 and/or 112 and the proxy node 106 already on the computer network 300 should each indicate an existing proxy node selection, i.e., the current proxy node 106. Alternatively, if the proxy node selection process was initiated in response to the unavailability of the current proxy node 106, each of the return proxy node selection messages from the non-proxy nodes 109 and/or 112 participating in the new proxy node selection process may not have an existing proxy selection, e.g., each non-proxy node 109 and/or 112 may have released its existing proxy node selection setting in response to the knowledge that the current proxy node 106 has become unavailable.
  • If at step 430 it is determined that there are no existing proxy selections in the return proxy node selection messages, method 400 preferably proceeds to step 433. At step 433, a new proxy node may be selected from the nodes available on the computer network 300 according to a selection rule agreed upon by the nodes. Examples of such a rule include, but are not limited to, an Internet Protocol address based rule, a Fibre Channel node World Wide Name based rule and an earliest time stamp based rule using the timestamps preferably included in the proxy node selection messages, as mentioned above.
  • Upon selection of a new or replacement proxy node by agreed upon rule at step 433, method 400 preferably proceeds to step 436 where event generation may be restarted according to the new arrangement of non-proxy nodes and the newly selected proxy node. For example, the new proxy node may be configured to monitor and report both distributed and node-specific events and to monitor the network for new nodes while the non-proxy nodes may be configured to report only node-specific events and to monitor the availability of the new proxy node. From step 436, method 400 preferably returns to step 415 where the addition of nodes to the network and the unavailability of the new proxy node are preferably monitored and awaited by the network management station 103.
  • If at step 430 it is determined that one of the return proxy node selection messages contains an existing proxy selection, method 400 preferably proceeds to step 439. At step 439, each of the nodes or a managing node, e.g., the current proxy node 106 or the non-proxy node 109 and/or 112 detecting the unavailability of the current proxy node 106, in receipt of return proxy selection messages preferably determines whether the existing proxy node selections indicated in the return proxy node selection messages received from the other nodes are in conflict with or match one another. If it is determined that there is a conflict or that the existing proxy node selections do not match, method 400 preferably proceeds to step 433 where the participating network nodes use an agreed upon rule for selecting a new proxy node generally as described above. Alternatively, if a single network node is evaluating whether there is a conflict among the existing proxy node selections, that network node may generate a message indicating such a conflict to the remaining participating network nodes and the need to proceed to step 433 for selection of a new proxy node by agreed upon rule. If it is determined that there are no conflicts or that the existing proxy node selections indicated in the return proxy selection messages match one another, method 400 preferably proceeds to step 442.
  • At step 442, a determination is made whether the proxy node selection submitted by and matching amongst the other participating network nodes matches the evaluating or current network node's own existing proxy node selection, e.g., the managing node or each node in receipt of a return proxy node selection message. If the current node determines that the proxy node selection submitted matches its own proxy node selection, method 400 preferably proceeds to step 436 where event generation and reporting may be re-initiated generally as described above. Alternatively, if at step 442 the evaluating network node determines that it either does not have an existing proxy node selection or that its existing proxy node selection does not match or conflicts with the existing proxy node selection submitted by the remaining network nodes, method 400 preferably proceeds to step 445. At step 445, the current network node adopts the existing proxy node selection submitted by the remaining network nodes such that all participating network nodes now recognize the same new proxy node for the computer network 300. From step 445, method 400 preferably proceeds to step 436 where event generation, as described above, may be initiated.
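  • Steps 430 through 445 amount to a small reconciliation routine, sketched below under the same assumed message fields as the earlier selection-rule sketch; select_proxy (or any other agreed upon rule) serves as the fallback when no single existing selection emerges.

```python
# Minimal sketch (assumed message fields) of steps 430-445: reconcile the
# existing proxy selections carried in the return messages.
def resolve_proxy(return_messages: list[dict], my_selection, agreed_rule):
    # Step 430: gather the existing selections, ignoring nodes with none.
    existing = {m.get("existing_proxy") for m in return_messages}
    existing.discard(None)
    if not existing or len(existing) > 1:
        # Steps 430/439 -> 433: no selection submitted, or the submissions
        # conflict, so fall back to the agreed upon selection rule.
        return agreed_rule(return_messages)
    (submitted,) = existing
    # Steps 442-445: keep our selection if it already matches; otherwise
    # adopt the single selection submitted by the other nodes.
    return my_selection if my_selection == submitted else submitted
```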
  • Method 400 provides numerous advantages over existing distributed event handling methods. One advantage is that method 400 does not require the involvement of the network management station 103 for purposes other than processing node-specific and distributed events, i.e., the network management station 103 is not needed for proxy selection or for ensuring proxy availability; thus, the resources of the network management station 103 may be reserved for event handling and other significant network management processing. In addition, method 400 reduces network traffic by preferably sending the network management station 103 only one copy of each distributed event.
  • As described herein, methods 200 and 400 provide clear advantages over the existing distributed event handling solutions. One such advantage is the elimination or reduction in double-handling when the network management station 103 receives multiple copies of the same event. As such, the methods described herein reduce the processing resources associated with double-handling, thereby freeing such resources for other processing or network management applications.
  • The invention, therefore, is well adapted to carry out the objects and to attain the ends and advantages mentioned, as well as others inherent therein. While the invention has been depicted, described, and is defined by reference to exemplary embodiments of the invention, such references do not imply a limitation on the invention, and no such limitation is to be inferred. The invention is capable of considerable modification, alteration, and equivalents in form and function, as will occur to those ordinarily skilled in the pertinent arts and having the benefit of this disclosure. The depicted and described exemplary embodiments of the invention are exemplary only, and are not exhaustive of the scope of the invention. Consequently, it is intended that the invention be limited only by the spirit and scope of the appended claims, giving full cognizance to equivalents in all respects.

Claims (17)

1-26. (canceled)
27. A computer network comprising:
a plurality of network nodes, including a proxy node and a plurality of non-proxy nodes, the network nodes operably coupled to a communication network;
the plurality of network nodes operable to cooperatively select the proxy node from the plurality of network nodes;
the proxy node operable to detect and report distributed and node-specific events to a network management station via the communication network; and
at least one non-proxy node operable to detect and report only node-specific events to the network management station while the proxy node remains available.
28. The computer network of claim 27 further comprising at least one non-proxy node operable to monitor availability of the proxy node.
29. The computer network of claim 28 further comprising at least one non-proxy node operable to initiate and participate in selection of a new proxy node with the non-proxy nodes in response to a lapse in the availability of the proxy node.
30. The computer network of claim 27 further comprising at least one network node operable to detect a new node added to the communication network.
31. The computer network of claim 30 further comprising at least one network node operable to initiate selection of a proxy node in response to detection of a new node.
32. The computer network of claim 27 further comprising each network node participating in proxy node selection operable to exchange an existing proxy node selection message with one another, each existing proxy node selection message identifying a proxy node known to each exchanging network node.
33. The computer network of claim 32 further comprising at least one network node operable to apply an agreed upon rule for proxy node selection in response to a conflict between the proxy nodes identified in the exchanged proxy node selection messages.
34. The computer network of claim 32 further comprising at least one network node participating in selection of the proxy node operable to detect a conflict between the proxy nodes identified in the existing proxy node selection messages.
35. The computer network of claim 34 further comprising the at least one network node operable to select the proxy node identified in the existing proxy node selection messages if no conflict is detected.
36. A network computing device comprising:
at least one processor;
memory operably coupled to the processor;
a communication interface operably coupled to the processor and the memory, the communication interface operable to communicate with at least one network node and a network management station via a communication network; and
a program of instructions storable in the memory and executable by the processor, the program of instructions operable to cooperate with at least one network node to select a proxy node and further operable to report events to the network management station according to selection of the network computing device as a proxy node or a non-proxy node.
37. The network computing device of claim 36 further comprising the program of instructions operable to report both distributed and node-specific events to the network management station in response to selection of the network computing device as the proxy node.
38. The network computing device of claim 36 further comprising the program of instructions operable to report node-specific events to the network management station while a selected proxy node is available.
39. The network computing device of claim 36 further comprising the program of instructions operable to exchange existing proxy node selections with the at least one network node and to detect a conflict between the existing proxy node selections.
40. The network computing device of claim 39 further comprising the program of instructions operable to select the proxy node according to the existing proxy node selections if there is no conflict detected and to select the proxy node according to one or more rules in response to detection of a conflict between the existing proxy node selections.
41. The network computing device of claim 36 further comprising the program of instructions operable to monitor availability of the proxy node and to initiate selection of a new proxy node in response to a lapse in proxy node availability.
42. The network computing device of claim 36 further comprising the program of instructions operable to initiate proxy node selection in response to detection of a new network node.
US11/923,317 2002-04-05 2007-10-24 Method and system for reduced distributed event handling in a network environment Abandoned US20080065764A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/923,317 US20080065764A1 (en) 2002-04-05 2007-10-24 Method and system for reduced distributed event handling in a network environment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/117,290 US7379970B1 (en) 2002-04-05 2002-04-05 Method and system for reduced distributed event handling in a network environment
US11/923,317 US20080065764A1 (en) 2002-04-05 2007-10-24 Method and system for reduced distributed event handling in a network environment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/117,290 Division US7379970B1 (en) 2002-04-05 2002-04-05 Method and system for reduced distributed event handling in a network environment

Publications (1)

Publication Number Publication Date
US20080065764A1 true US20080065764A1 (en) 2008-03-13

Family

ID=39171102

Family Applications (3)

Application Number Title Priority Date Filing Date
US10/117,290 Expired - Fee Related US7379970B1 (en) 2002-04-05 2002-04-05 Method and system for reduced distributed event handling in a network environment
US11/923,265 Abandoned US20080086547A1 (en) 2002-04-05 2007-10-24 Method and system for reduced distributed event handling in a network environment
US11/923,317 Abandoned US20080065764A1 (en) 2002-04-05 2007-10-24 Method and system for reduced distributed event handling in a network environment

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US10/117,290 Expired - Fee Related US7379970B1 (en) 2002-04-05 2002-04-05 Method and system for reduced distributed event handling in a network environment
US11/923,265 Abandoned US20080086547A1 (en) 2002-04-05 2007-10-24 Method and system for reduced distributed event handling in a network environment

Country Status (1)

Country Link
US (3) US7379970B1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7853683B1 (en) * 2002-07-31 2010-12-14 Cisco Technology, Inc. Approach for canceling events
US20140047108A1 (en) * 2012-08-10 2014-02-13 Telefonaktiebolaget L M Ericsson (Publ) Self organizing network event reporting
CN106790626A (en) * 2016-12-31 2017-05-31 广州佳都信息技术研发有限公司 A kind of implementation method of reliable distributed warning
US10148503B1 (en) * 2015-12-29 2018-12-04 EMC IP Holding Company LLC Mechanism for dynamic delivery of network configuration states to protocol heads

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7581010B2 (en) * 2003-07-14 2009-08-25 Microsoft Corporation Virtual connectivity with local connection translation
US7769866B2 (en) * 2003-07-14 2010-08-03 Microsoft Corporation Virtual connectivity with subscribe-notify service
US8041799B1 (en) * 2004-04-30 2011-10-18 Sprint Communications Company L.P. Method and system for managing alarms in a communications network

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5805785A (en) * 1996-02-27 1998-09-08 International Business Machines Corporation Method for monitoring and recovery of subsystems in a distributed/clustered system
US5999712A (en) * 1997-10-21 1999-12-07 Sun Microsystems, Inc. Determining cluster membership in a distributed computer system
US6163855A (en) * 1998-04-17 2000-12-19 Microsoft Corporation Method and system for replicated and consistent modifications in a server cluster
US6199169B1 (en) * 1998-03-31 2001-03-06 Compaq Computer Corporation System and method for synchronizing time across a computer cluster
US6446136B1 (en) * 1998-12-31 2002-09-03 Computer Associates Think, Inc. System and method for dynamic correlation of events
US6618805B1 (en) * 2000-06-30 2003-09-09 Sun Microsystems, Inc. System and method for simplifying and managing complex transactions in a distributed high-availability computer system
US6980537B1 (en) * 1999-11-12 2005-12-27 Itt Manufacturing Enterprises, Inc. Method and apparatus for communication network cluster formation and transmission of node link status messages with reduced protocol overhead traffic
US7010617B2 (en) * 2000-05-02 2006-03-07 Sun Microsystems, Inc. Cluster configuration repository
US7428723B2 (en) * 2000-05-22 2008-09-23 Verizon Business Global Llc Aggregrating related events into a single bundle of events with incorporation of bundle into work protocol based on rules
US7464147B1 (en) * 1999-11-10 2008-12-09 International Business Machines Corporation Managing a cluster of networked resources and resource groups using rule - base constraints in a scalable clustering environment

Family Cites Families (105)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4692073A (en) 1985-02-25 1987-09-08 Martindell J Richard Handle adapter and chuck apparatus for power bits
US4755930A (en) 1985-06-27 1988-07-05 Encore Computer Corporation Hierarchical cache memory system and method
US5247649A (en) 1988-05-06 1993-09-21 Hitachi, Ltd. Multi-processor system having a multi-port cache memory
JP2761506B2 (en) 1988-07-08 1998-06-04 株式会社日立製作所 Main memory controller
US5611049A (en) 1992-06-03 1997-03-11 Pitts; William M. System for accessing distributed data cache channel at each network node to pass requests and data
US5289460A (en) * 1992-07-31 1994-02-22 International Business Machines Corp. Maintenance of message distribution trees in a communications network
US5394556A (en) 1992-12-21 1995-02-28 Apple Computer, Inc. Method and apparatus for unique address assignment, node self-identification and topology mapping for a directed acyclic graph
US5664106A (en) 1993-06-04 1997-09-02 Digital Equipment Corporation Phase-space surface representation of server computer performance in a computer network
US5515376A (en) 1993-07-19 1996-05-07 Alantec, Inc. Communication apparatus and methods
US5530832A (en) 1993-10-14 1996-06-25 International Business Machines Corporation System and method for practicing essential inclusion in a multiprocessor and cache hierarchy
EP0676878A1 (en) 1994-04-07 1995-10-11 International Business Machines Corporation Efficient point to point and multi point routing mechanism for programmable packet switching nodes in high speed data transmission networks
DE69429983T2 (en) 1994-05-25 2002-10-17 International Business Machines Corp., Armonk Data transmission network and method for operating the network
US5734825A (en) 1994-07-18 1998-03-31 Digital Equipment Corporation Traffic control system having distributed rate calculation and link by link flow control
US6085234A (en) 1994-11-28 2000-07-04 Inca Technology, Inc. Remote file services network-infrastructure cache
GB2297881B (en) 1995-02-09 1999-02-17 Northern Telecom Ltd Communications system
US5805809A (en) 1995-04-26 1998-09-08 Shiva Corporation Installable performance accelerator for maintaining a local cache storing data residing on a server computer
US5845324A (en) 1995-04-28 1998-12-01 Unisys Corporation Dual bus network cache controller system having rapid invalidation cycles and reduced latency for cache access
US5699548A (en) 1995-06-01 1997-12-16 Intel Corporation Method and apparatus for selecting a mode for updating external memory
US5586847A (en) 1995-06-06 1996-12-24 Mattern, Jr.; Charles J. Power tool adapter
US5918224A (en) 1995-07-26 1999-06-29 Borland International, Inc. Client/server database system with methods for providing clients with server-based bi-directional scrolling at the server
US5845280A (en) 1995-09-25 1998-12-01 Microsoft Corporation Method and apparatus for transmitting a file in a network using a single transmit request from a user-mode process to a kernel-mode process
US5844887A (en) 1995-11-30 1998-12-01 Scorpio Communications Ltd. ATM switching fabric
US5835943A (en) 1995-11-30 1998-11-10 Stampede Technologies, Inc. Apparatus and method for increased data access in a network file oriented caching system
US5864854A (en) 1996-01-05 1999-01-26 Lsi Logic Corporation System and method for maintaining a shared cache look-up table
US5978841A (en) 1996-03-08 1999-11-02 Berger; Louis Look ahead caching process for improved information retrieval response time by caching bodies of information before they are requested by the user
US6147976A (en) 1996-06-24 2000-11-14 Cabletron Systems, Inc. Fast network layer packet filter
US5944789A (en) 1996-08-14 1999-08-31 Emc Corporation Network file server maintaining local caches of file directory information in data mover computers
GB9618131D0 (en) 1996-08-30 1996-10-09 Sgs Thomson Microelectronics Improvements in or relating to an ATM switch
US5779429A (en) 1996-09-10 1998-07-14 Kendall Manufacturing, Inc. Mechanism allowing quick implement attachment to tractors
US5852717A (en) 1996-11-20 1998-12-22 Shiva Corporation Performance optimizations for computer networks utilizing HTTP
US6098096A (en) 1996-12-09 2000-08-01 Sun Microsystems, Inc. Method and apparatus for dynamic cache preloading across a network
US5873100A (en) 1996-12-20 1999-02-16 Intel Corporation Internet browser that includes an enhanced cache for user-controlled document retention
FR2759518B1 (en) 1997-02-07 1999-04-23 France Telecom METHOD AND DEVICE FOR ALLOCATING RESOURCES IN A DIGITAL PACKET TRANSMISSION NETWORK
US5878218A (en) 1997-03-17 1999-03-02 International Business Machines Corporation Method and system for creating and utilizing common caches for internetworks
US6044406A (en) 1997-04-08 2000-03-28 International Business Machines Corporation Credit-based flow control checking and correction method
US5933849A (en) 1997-04-10 1999-08-03 At&T Corp Scalable distributed caching system and method
DE29720616U1 (en) 1997-04-18 1998-08-20 Kaltenbach & Voigt Gmbh & Co, 88400 Biberach Handpiece for medical purposes, in particular for a medical or dental treatment facility, preferably for machining a tooth root canal
US5944780A (en) 1997-05-05 1999-08-31 At&T Corp Network with shared caching
US5991810A (en) 1997-08-01 1999-11-23 Novell, Inc. User name authentication for gateway clients accessing a proxy cache server
US6499064B1 (en) 1997-08-14 2002-12-24 International Business Machines Corporation Method of using decoupled chain of responsibility
US6138209A (en) 1997-09-05 2000-10-24 International Business Machines Corporation Data processing system and multi-way set associative cache utilizing class predict data structure and method thereof
US6041058A (en) 1997-09-11 2000-03-21 3Com Corporation Hardware filtering method and apparatus
US5978951A (en) 1997-09-11 1999-11-02 3Com Corporation High speed cache management unit for use in a bridge/router
JPH11122486A (en) 1997-10-17 1999-04-30 Sharp Corp Picture processor
JP4363676B2 (en) 1997-10-31 2009-11-11 株式会社東芝 Computer system
US6216167B1 (en) 1997-10-31 2001-04-10 Nortel Networks Limited Efficient path based forwarding and multicast forwarding
US6754206B1 (en) 1997-12-04 2004-06-22 Alcatel Usa Sourcing, L.P. Distributed telecommunications switching system and method
US6081883A (en) 1997-12-05 2000-06-27 Auspex Systems, Incorporated Processing system with dynamically allocatable buffer memory
US6105062A (en) 1998-02-26 2000-08-15 Novell, Inc. Method and system for pruning and grafting trees in a directory service
US6353614B1 (en) 1998-03-05 2002-03-05 3Com Corporation Method and protocol for distributed network address translation
US6289386B1 (en) 1998-05-11 2001-09-11 Lsi Logic Corporation Implementation of a divide algorithm for buffer credit calculation in a high speed serial channel
US6594701B1 (en) 1998-08-04 2003-07-15 Microsoft Corporation Credit-based methods and systems for controlling data flow between a sender and a receiver with reduced copying of data
JP4035235B2 (en) 1998-08-24 2008-01-16 キヤノン株式会社 Electronics
US6470013B1 (en) 1998-10-13 2002-10-22 Cisco Technology, Inc. Use of enhanced ethernet link-loop packets to automate configuration of intelligent linecards attached to a router
US6765919B1 (en) 1998-10-23 2004-07-20 Brocade Communications Systems, Inc. Method and system for creating and implementing zones within a fibre channel system
US20030069873A1 (en) 1998-11-18 2003-04-10 Kevin L. Fox Multiple engine information retrieval and visualization system
US6704318B1 (en) 1998-11-30 2004-03-09 Cisco Technology, Inc. Switched token ring over ISL (TR-ISL) network
US6584101B2 (en) 1998-12-04 2003-06-24 Pmc-Sierra Ltd. Communication method for packet switching systems
US6243746B1 (en) 1998-12-04 2001-06-05 Sun Microsystems, Inc. Method and implementation for using computer network topology objects
US6597689B1 (en) 1998-12-30 2003-07-22 Nortel Networks Limited SVC signaling system and method
US6438705B1 (en) * 1999-01-29 2002-08-20 International Business Machines Corporation Method and apparatus for building and managing multi-clustered computer systems
US6400730B1 (en) 1999-03-10 2002-06-04 Nishan Systems, Inc. Method and apparatus for transferring data between IP network devices and SCSI and fibre channel devices over an IP network
US6401120B1 (en) * 1999-03-26 2002-06-04 Microsoft Corporation Method and system for consistent cluster operational data in a server cluster using a quorum of replicas
US6757791B1 (en) 1999-03-30 2004-06-29 Cisco Technology, Inc. Method and apparatus for reordering packet data units in storage queues for reading and writing memory
US6289376B1 (en) 1999-03-31 2001-09-11 Diva Systems Corp. Tightly-coupled disk-to-CPU storage server
US6747949B1 (en) 1999-05-21 2004-06-08 Intel Corporation Register based remote data flow control
US6876668B1 (en) 1999-05-24 2005-04-05 Cisco Technology, Inc. Apparatus and methods for dynamic bandwidth allocation
US6252514B1 (en) 1999-06-07 2001-06-26 Convergent Technologies, Inc. Hot-swap assembly for computers
US6361343B1 (en) 1999-09-21 2002-03-26 Intel Corporation Circuit card retention mechanism
US6597699B1 (en) 1999-09-28 2003-07-22 Telefonaktiebolaget Lm Ericsson (Publ) Quality of service management in a packet data router system having multiple virtual router instances
US6532501B1 (en) 1999-09-30 2003-03-11 Silicon Graphics, Inc. System and method for distributing output queue space
DE19949874B4 (en) 1999-10-15 2004-09-23 Imi Norgren-Herion Fluidtronic Gmbh & Co. Kg Safety valve
US6687247B1 (en) 1999-10-27 2004-02-03 Cisco Technology, Inc. Architecture for high speed class of service enabled linecard
US6654895B1 (en) 1999-11-08 2003-11-25 Intel Corporation Adaptive power management in a computing system
US6662219B1 (en) * 1999-12-15 2003-12-09 Microsoft Corporation System for determining at subgroup of nodes relative weight to represent cluster by obtaining exclusive possession of quorum resource
US6922408B2 (en) 2000-01-10 2005-07-26 Mellanox Technologies Ltd. Packet communication buffering with dynamic flow control
US6731644B1 (en) 2000-02-14 2004-05-04 Cisco Technology, Inc. Flexible DMA engine for packet header modification
GB2360168B (en) 2000-03-11 2003-07-16 3Com Corp Network switch including hysteresis in signalling fullness of transmit queues
US6735174B1 (en) 2000-03-29 2004-05-11 Intel Corporation Method and systems for flow control of transmissions over channel-based switched fabric connections
US6657962B1 (en) 2000-04-10 2003-12-02 International Business Machines Corporation Method and system for managing congestion in a network
US6792456B1 (en) * 2000-05-08 2004-09-14 International Business Machines Corporation Systems and methods for authoring and executing operational policies that use event rates
US6601186B1 (en) 2000-05-20 2003-07-29 Equipe Communications Corporation Independent restoration of control plane and data plane functions
US20010037435A1 (en) 2000-05-31 2001-11-01 Van Doren Stephen R. Distributed address mapping and routing table mechanism that supports flexible configuration and partitioning in a modular switch-based, shared-memory multiprocessor computer system
US7174557B2 (en) * 2000-06-07 2007-02-06 Microsoft Corporation Method and apparatus for event distribution and event handling in an enterprise
AU2001271609A1 (en) 2000-06-30 2002-01-14 Kanad Ghose System and method for fast, reliable byte stream transport
US6865602B1 (en) 2000-07-24 2005-03-08 Alcatel Canada Inc. Network management support for OAM functionality and method therefore
US6424657B1 (en) 2000-08-10 2002-07-23 Verizon Communications Inc. Traffic queueing for remote terminal DSLAMs
US6847647B1 (en) 2000-09-26 2005-01-25 Hewlett-Packard Development Company, L.P. Method and apparatus for distributing traffic over multiple switched fiber channel routes
US6879559B1 (en) 2000-10-31 2005-04-12 Chiaro Networks, Ltd. Router line card protection using one-for-N redundancy
US6765871B1 (en) 2000-11-29 2004-07-20 Akara Corporation Fiber channel flow control method and apparatus for interface to metro area transport link
US6954463B1 (en) 2000-12-11 2005-10-11 Cisco Technology, Inc. Distributed packet processing architecture for network access servers
US6792507B2 (en) 2000-12-14 2004-09-14 Maxxan Systems, Inc. Caching system and method for a network storage system
US7035212B1 (en) 2001-01-25 2006-04-25 Optim Networks Method and apparatus for end to end forwarding architecture
US7010715B2 (en) 2001-01-25 2006-03-07 Marconi Intellectual Property (Ringfence), Inc. Redundant control architecture for a network device
US20030023709A1 (en) 2001-02-28 2003-01-30 Alvarez Mario F. Embedded controller and node management architecture for a modular optical network, and methods and apparatus therefor
US6731832B2 (en) 2001-02-28 2004-05-04 Lambda Opticalsystems Corporation Detection of module insertion/removal in a modular optical network, and methods and apparatus therefor
US6839750B1 (en) 2001-03-03 2005-01-04 Emc Corporation Single management point for a storage system or storage area network
US7079485B1 (en) 2001-05-01 2006-07-18 Integrated Device Technology, Inc. Multiservice switching system with distributed switch fabric
US6941367B2 (en) * 2001-05-10 2005-09-06 Hewlett-Packard Development Company, L.P. System for monitoring relevant events by comparing message relation key
US6985490B2 (en) 2001-07-11 2006-01-10 Sancastle Technologies, Ltd. Extension of fibre channel addressing
US6944829B2 (en) 2001-09-25 2005-09-13 Wind River Systems, Inc. Configurable user-interface component management system
US7190695B2 (en) 2001-09-28 2007-03-13 Lucent Technologies Inc. Flexible application of mapping algorithms within a packet distributor
US6845431B2 (en) 2001-12-28 2005-01-18 Hewlett-Packard Development Company, L.P. System and method for intermediating communication with a moveable media library utilizing a plurality of partitions
US6983303B2 (en) 2002-01-31 2006-01-03 Hewlett-Packard Development Company, L.P. Storage aggregator for enhancing virtualization in data storage networks
US6988149B2 (en) 2002-02-26 2006-01-17 Lsi Logic Corporation Integrated target masking

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5805785A (en) * 1996-02-27 1998-09-08 International Business Machines Corporation Method for monitoring and recovery of subsystems in a distributed/clustered system
US5999712A (en) * 1997-10-21 1999-12-07 Sun Microsystems, Inc. Determining cluster membership in a distributed computer system
US6199169B1 (en) * 1998-03-31 2001-03-06 Compaq Computer Corporation System and method for synchronizing time across a computer cluster
US6163855A (en) * 1998-04-17 2000-12-19 Microsoft Corporation Method and system for replicated and consistent modifications in a server cluster
US6446136B1 (en) * 1998-12-31 2002-09-03 Computer Associates Think, Inc. System and method for dynamic correlation of events
US7464147B1 (en) * 1999-11-10 2008-12-09 International Business Machines Corporation Managing a cluster of networked resources and resource groups using rule-based constraints in a scalable clustering environment
US6980537B1 (en) * 1999-11-12 2005-12-27 Itt Manufacturing Enterprises, Inc. Method and apparatus for communication network cluster formation and transmission of node link status messages with reduced protocol overhead traffic
US7010617B2 (en) * 2000-05-02 2006-03-07 Sun Microsystems, Inc. Cluster configuration repository
US7428723B2 (en) * 2000-05-22 2008-09-23 Verizon Business Global LLC Aggregating related events into a single bundle of events with incorporation of bundle into work protocol based on rules
US6618805B1 (en) * 2000-06-30 2003-09-09 Sun Microsystems, Inc. System and method for simplifying and managing complex transactions in a distributed high-availability computer system

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7853683B1 (en) * 2002-07-31 2010-12-14 Cisco Technology, Inc. Approach for canceling events
US20140047108A1 (en) * 2012-08-10 2014-02-13 Telefonaktiebolaget L M Ericsson (Publ) Self organizing network event reporting
US10277484B2 (en) * 2012-08-10 2019-04-30 Telefonaktiebolaget Lm Ericsson (Publ) Self organizing network event reporting
US10148503B1 (en) * 2015-12-29 2018-12-04 EMC IP Holding Company LLC Mechanism for dynamic delivery of network configuration states to protocol heads
CN106790626A (en) * 2016-12-31 2017-05-31 广州佳都信息技术研发有限公司 Implementation method for reliable distributed alarms

Also Published As

Publication number Publication date
US7379970B1 (en) 2008-05-27
US20080086547A1 (en) 2008-04-10

Similar Documents

Publication Publication Date Title
US20080065764A1 (en) Method and system for reduced distributed event handling in a network environment
US7630313B2 (en) Scheduled determination of network resource availability
JP4647234B2 (en) Method and apparatus for discovering network devices
CN111615066B (en) Broadcast-based distributed microservice registration and invocation method
KR100935782B1 (en) System, method, and computer program product for centralized management of an infiniband distributed system area network
JP3593528B2 (en) Distributed network management system and method
US20060153068A1 (en) Systems and methods providing high availability for distributed systems
US20050114352A1 (en) Method and system for detecting a dead server
US6219705B1 (en) System and method of collecting and maintaining historical top communicator information on a communication device
US20130067091A1 (en) Systems, methods and media for distributing peer-to-peer communications
US6992985B1 (en) Method and system for auto discovery of IP-based network elements
CN111510325B (en) Alarm information pushing method, server, client and system
EP1762069B1 (en) Method of selecting one server out of a server set
JP4566200B2 (en) Methods for supporting transactions
US6539540B1 (en) Methods and apparatus for optimizing simple network management protocol (SNMP) requests
JP2005237018A (en) Data transmission to network management system
CN110109933B (en) Information maintenance method, configuration management database system and storage medium
US10904327B2 (en) Method, electronic device and computer program product for searching for node
JP4673532B2 (en) Comprehensive alignment process in a multi-manager environment
CN101657994B (en) Discovery of disconnected components in a distributed communication network
US20020161613A1 (en) Message-address management program, recording medium carrying message-address management program, message-address management method, and message-address management apparatus
US8554785B2 (en) Method and system for managing user information in instant messaging systems
JP2000148539A (en) Fault detection method, computer system, component device, and recording medium
CN115277647A (en) Method and device for processing real-time session message, storage medium and terminal
KR100626664B1 (en) Policy-based QoS management server apparatus and QoS management method

Legal Events

Date Code Title Description
AS Assignment

Owner name: CIPHERMAX, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HUANG, RUOTAO;IYER, RAM GANESAN;REEL/FRAME:020057/0136;SIGNING DATES FROM 20020402 TO 20020403

Owner name: CIPHERMAX, INC., CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:MAXXAN SYSTEMS, INC.;REEL/FRAME:020057/0066

Effective date: 20070118

AS Assignment

Owner name: CIPHERMAX, INC., CALIFORNIA

Free format text: CHANGE OF NAME;ASSIGNOR:MAXXAN SYSTEMS, INC.;REEL/FRAME:020178/0247

Effective date: 20070118

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION