US20020089982A1 - Dynamic multicast routing facility for a distributed computing environment - Google Patents

Dynamic multicast routing facility for a distributed computing environment

Info

Publication number
US20020089982A1
Authority
US
United States
Prior art keywords
node, group, computing, multicast, nodes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/085,243
Inventor
Marcos Novaes
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to US10/085,243
Publication of US20020089982A1
Legal status: Abandoned

Classifications

    • H04L45/46 Cluster building (routing or path finding of packets in data switching networks)
    • H04L12/1877 Measures taken prior to transmission (multicast reliability mechanisms)
    • H04L12/4633 Interconnection of networks using encapsulation techniques, e.g. tunneling
    • H04L45/16 Multipoint routing

Definitions

  • This invention relates in general to distributed computing environments, and in particular, to a dynamic facility for ensuring multicast routing of messages within such an environment, irrespective of the failure of one or more established multicast routing nodes.
  • Many network environments enable messages to be forwarded from one site within the network to one or more other sites using a multicast protocol.
  • Typical multicast protocols send messages from one site to one or more other sites based on information stored within a message header.
  • One example of a system that includes such a network environment is a publish/subscribe system.
  • In publish/subscribe systems, publishers post messages and subscribers independently specify categories of events in which they are interested. The system takes the posted messages and includes in each message header the destination information of those subscribers indicating interest in the particular message. The system then uses the destination information in the message to forward the message through the network to the appropriate subscribers.
  • Multicast messages must be routed in order to reach multiple networks in a large distributed computing environment. Multicast routing is complicated by the fact that some older routers do not support such routing. In that case, routing is conventionally solved by manually configuring selected hosts (i.e., computing nodes) as “routing points”. Such routing points are capable of running host discovery protocols that enable them to configure their routing tables in such a way that all nodes of the system will then be reachable via multicast. In some cases, the multicast messages have to be routed through IP routers which do not support multicast routing. In such cases, a “tunnel” has to be configured such that two hosts in different networks can act as routing points for multicast messages.
  • Tunneling end-points are usually configured manually, for example, by a network administrator.
  • the invention comprises in one aspect a method for dynamically ensuring multicast messaging within a distributed computing environment.
  • the method includes: establishing multiple groups of computing nodes within the distributed computing environment; selecting one node of each group of computing nodes as a group leader node; forming a group of group leader nodes (GL_group) and selecting a group leader of the GL_group; and automatically creating a virtual interface for multicast messaging between the group leader node of the GL_group and at least one other group leader node within the GL_group, thereby establishing multicast routing between groups of nodes of the distributed computing environment.
  • the invention comprises a processing method for a distributed computing environment having multiple networks of computing nodes. Each network has at least one computing node. At least one computing node of the multiple networks of computing nodes functions as a multicast routing node.
  • the method includes: automatically responding to a failure at the at least one computing node functioning as multicast routing node to reassign the multicast routing function; and wherein the automatically responding includes dynamically reconfiguring the distributed computing environment to replace each failed multicast routing node of the at least one multicast routing node with another computing node of the multiple networks to maintain reachability of multicast messages to all functional computing nodes of the distributed computing environment.
  • the invention comprises a system for ensuring multicast messaging within a distributed computing environment.
  • the system includes multiple groups of computing nodes within the distributed computing environment, and means for selecting one node of each group of computing nodes as a group leader node.
  • the system further includes means for forming a group of group leader nodes (GL_group) and for selecting a group leader of the GL_group.
  • a mechanism is provided for automatically creating a virtual interface for multicast messaging between the group leader node of the GL_group and at least one other group leader node within the GL_group, thereby ensuring multicast routing between groups of nodes of the distributed computing environment.
  • A processing system is also provided for a distributed computing environment which includes multiple networks of computing nodes.
  • the multiple networks of computing nodes employ multicast messaging, with each network having at least one computing node, and at least one computing node of the multiple networks of computing nodes functioning as multicast routing node.
  • the system further includes means for automatically responding to a failure at the at least one computing node functioning as multicast routing node to reassign the multicast routing function.
  • the means for automatically responding includes a mechanism for dynamically reconfiguring the distributed computing environment to replace each failed multicast routing node of the at least one multicast routing node with another computing node of the multiple networks of computing nodes to maintain reachability of multicast messages to all functional computing nodes of the distributed computing environment.
  • An article of manufacture is also presented which includes a computer program product comprising a computer usable medium having computer readable program code means therein for use in ensuring multicast messaging within a distributed computing environment.
  • the computer readable program code means in the computer program product includes computer readable program code means for causing a computer to effect: establishing multiple groups of computing nodes within the distributed computing environment; selecting one node of each group of computing nodes as a group leader node; forming a group of group leader nodes (GL_group) and selecting a group leader of the GL_group; and automatically creating a virtual interface for multicast messaging between the group leader node of the GL_group and at least one other group leader node within the GL_group, thereby establishing multicast routing between groups of nodes of the distributed computing environment.
  • the invention includes an article of manufacture which includes a computer program product comprising a computer usable medium having computer readable program code means therein for maintaining multicast message reachability within a distributed computing environment having multiple networks of computing nodes employing multicast messaging.
  • Each network has at least one computing node, and at least one computing node of the multiple networks of computing nodes functions as multicast routing node.
  • the computer readable program code means in the computer program product includes computer readable program code means for causing a computer to effect: automatically responding to a failure at the at least one computing node functioning as the multicast routing node to reassign the multicast routing function; wherein the automatically responding comprises dynamically reconfiguring the distributed computing environment to replace each failed multicast routing node of the at least one multicast routing node with another computing node of the multiple networks of computing nodes to maintain multicast message reachability to all functional computing nodes of the distributed computing environment.
  • the present invention solves the problem of maintaining reachability of multicast messages in a distributed computing system having multiple networks of computing nodes.
  • The solution, referred to as a Dynamic Multicast Routing (DMR) facility, automatically selects another computing node from a network having a failed computing node operating as the multicast routing node.
  • the DMR facility provided herein ensures that only one node of a network will act as a routing point between networks, thereby avoiding host overhead and pollution of network messages inherent, for example, in making each node of the distributed computing environment capable of receiving and sending multicast messages.
  • the DMR facility utilizes Group Services to be notified immediately of a node failure or communication adapter failure; and automatically responds thereto.
  • the DMR facility described herein has multiple applications in a distributed computing environment such as a cluster or parallel system.
  • the DMR facility could be used when sending a multicast datagram to a known address for service.
  • An example of the need for a dynamic service facility is the location of the registry servers at boot time.
  • Another use of a DMR facility in accordance with this invention is in the distribution of a given file to a large number of nodes. For example, propagation of a password file by multicast is an efficient way to distribute information.
  • the DMR facility presented herein ensures that the multicast message gets routed to all subnets, independently of which nodes are down at any one time and independent of router box support.
  • FIG. 1 depicts one example of a distributed computing environment to incorporate the principles of the present invention
  • FIG. 2 depicts an expanded view of a number of the processing nodes of the distributed computing environment of FIG. 1;
  • FIG. 3 depicts one example of the components of a Group Services facility to be employed by one embodiment of a Dynamic Multicast Routing (DMR) facility in accordance with the principles of the present invention
  • FIG. 4 illustrates one example of a processor group resulting from the Group Services protocol to be employed by said one embodiment of a DMR facility in accordance with the principles of the present invention
  • FIG. 5 depicts another example of a distributed computing environment to employ a DMR facility in accordance with the principles of the present invention, wherein multiple groups of nodes or network groups are to be virtually interfaced for multicast messaging;
  • FIG. 6 depicts virtual interfaces or tunnels, established by a Dynamic Multicast Routing facility in accordance with the principles of the present invention, between a group leader node 2 and other group leader nodes 4 & 6 of the multiple network groups;
  • FIG. 7 is a flowchart of initialization processing in accordance with one embodiment of a Dynamic Multicast Routing facility pursuant to the principles of the present invention.
  • FIG. 8 is a flowchart of recovery processing in accordance with one embodiment of a Dynamic Multicast Routing facility pursuant to the principles of the present invention.
  • the techniques of the present invention are used in distributed computing environments in order to provide multi-computer applications that are highly-available. Applications that are highly-available are able to continue to execute after a failure. That is, the application is fault-tolerant and the integrity of customer data is preserved.
  • The Dynamic Multicast Routing facility of the present invention is herein referred to as the “DMR facility”.
  • A “host” comprises a computer which is capable of supporting network protocols.
  • A “node” is a processing unit, such as a host, in a computer network.
  • “Multicast” refers to an internet protocol (IP) multicast as the term is used in the above-incorporated Addison/Wesley publication entitled “TCP/IP Illustrated”.
  • A “daemon” is persistent software which runs detached from a controlling terminal.
  • A “distributed subsystem” is a group of daemons which run in different hosts.
  • Group Services is software present on International Business Machines Corporation's Parallel System Support Programs (PSSP) Software Suite (i.e., operating system of the Scalable Parallel (SP)), and IBM's High Availability Cluster Multi-Processing/Enhanced Scalability (HACMP/ES) Software Suite.
  • Group Services is a system-wide, fault-tolerant and highly-available service that provides a facility for coordinating, managing and monitoring changes to a subsystem running on one or more processors of a distributed computing environment.
  • Group Services provides an integrated framework for designing and implementing fault-tolerant subsystems and for providing consistent recovery of multiple subsystems.
  • Group Services offers a simple programming model based on a small number of core concepts. These concepts include a cluster-wide process group membership and synchronization service that maintains application specific information with each process group.
  • Distributed computing environment 100 includes, for instance, a plurality of frames 102 coupled to one another via a plurality of LAN gates 104 . Frames 102 and LAN gates 104 are described in detail below.
  • distributed computing environment 100 includes eight (8) frames, each of which includes a plurality of processing or computing nodes 106 .
  • each frame includes sixteen (16) processing nodes (a.k.a., processors).
  • Each processing node is, for instance, a RISC/6000 computer running AIX, i.e., a UNIX based operating system.
  • Each processing node within a frame is coupled to the other processing nodes of the frame via, for example, an internal LAN connection.
  • each frame is coupled to the other frames via LAN gates 104 .
  • each LAN gate 104 includes either a RISC/6000 computer, any computer network connection to the LAN, or a network router.
  • the distributed computing environment of FIG. 1 is only one example. It is possible to have more or less than eight frames, or more or less than sixteen nodes per frame. Further, the processing nodes do not have to be RISC/6000 computers running AIX. Some or all of the processing nodes can include different types of computers and/or different operating systems. All of these variations are considered a part of the claimed invention.
  • a Group Services subsystem incorporating the mechanisms of the present invention is distributed across a plurality of processing nodes of distributed computing environment 100 .
  • a Group Services daemon 200 (FIG. 2) is located within one or more of processing nodes 106 .
  • the Group Services daemons 200 are accessed by each process via an application programming interface 204 .
  • the Group Services daemons are collectively referred to as “Group Services”.
  • Group Services facilitates, for instance, communication and synchronization between multiple processes of a process group, and can be used in a variety of situations, including, for example, providing a distributed recovery synchronization mechanism.
  • a process 202 desirous of using the facilities of Group Services is coupled to a Group Services daemon 200 .
  • the process is coupled to Group Services by linking at least a part of the code associated with Group Services (e.g., the library code) into its own code.
  • Group Services 200 includes an internal layer 302 (FIG. 3) and an external layer 304 .
  • Internal layer 302 provides a limited set of functions for external layer 304 .
  • the limited set of functions of the internal layer can be used to build a richer and broader set of functions, which are implemented by the external layer and exported to the processes via the application programming interface.
  • the internal layer of Group Services (also referred to as a “metagroup layer”) is concerned with the Group Services daemons, and not the processes (i.e., the client processes) coupled to the daemons. That is, the internal layer focuses its efforts on the processors, which include the daemons.
  • the internal layer of Group Services implements functions on a per processor group basis.
  • Each processor group includes one or more processors having a Group Services daemon executing thereon.
  • the processors of a particular group are related in that they are executing related processes. (In one example, processes that are related provide a common function.)
  • For example, processor group X (400) includes processing node 1 and processing node 2, since each of these nodes is executing a process X, but it does not include processing node 3. Thus, processing nodes 1 and 2 are members of processor group X.
  • a processing node can be a member of none or any number of processor groups, and processor groups can have one or more members in common.
  • a processor requests to become a member of a particular processor group (e.g., processor group X) when a process related to that group (e.g., process X) requests to join a corresponding process group (e.g., process group X) and the processor is not aware of that corresponding process group. Since the Group Services daemon on the processor handling the request to join a particular process group is not aware of the process group, it knows that it is not a member of the corresponding processor group. Thus, the processor asks to become a member, so that the process can become a member of the process group.
  • Internal layer 302 (FIG. 3) implements a number of functions on a per processor group basis. These functions include, for example, maintenance of group leaders.
  • a group leader is selected for each processor group of the network.
  • the group leader is the first processor requesting to join a particular group.
  • the group leader is responsible for controlling activities associated with its processor group(s). For example, if a processing node, node 2 (FIG. 4), is the first node to request to join processor group X, then processing node 2 is the group leader and is responsible for managing the activities of processor group X. It is possible for processing node 2 to be the group leader of multiple processor groups.
  • If the group leader fails or leaves the group, group leader recovery must take place.
  • During recovery, a membership list for the processor group, which is ordered in the sequence in which processors joined that group, is scanned by one or more processors of the group for the next processor in the list.
  • the membership list is preferably stored in memory in each of the processing nodes of the processor group.
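  • As an illustrative sketch only (not part of the patent text), the selection and recovery rule just described, namely that the group leader is the first surviving processor on the join-ordered membership list, might be modeled as follows; the node names are hypothetical:

```python
# Hypothetical illustration of group-leader selection and recovery.
# The membership list is ordered by join sequence; the leader is the
# first live member, so recovery simply scans for the next survivor.

def elect_leader(membership, failed=frozenset()):
    """Return the first member of the join-ordered list that has not failed."""
    for member in membership:
        if member not in failed:
            return member
    return None  # no surviving members

# Initial election: node 2 joined processor group X first, so it leads.
members_x = ["node2", "node5", "node7"]
print(elect_leader(members_x))                    # -> node2

# Leader recovery: node 2 fails, and the next processor in the list takes over.
print(elect_leader(members_x, failed={"node2"}))  # -> node5
```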
  • the name server serves as a central location for storing certain information, including, for instance, a list of all processor groups of the network and a list of the group leaders for all the processor groups. This information is stored in the memory of the name server processing node.
  • the name server can be a processing node within the processor group or a processing node independent of the processor group.
  • multicast messages require routing in order to reach multiple networks.
  • the problem of maintaining multicast message reachability is often complicated by the fact that certain older routers do not support multicast routing.
  • This routing problem is conventionally solved by manually configuring the selected hosts and routers in the distributed system as “routing points”.
  • Such host routing points are capable of running host discovery protocols that enable them to configure their routing tables in such a way that all nodes in the system become reachable via multicast.
  • In some cases, multicast messages have to be routed through IP routers which do not support multicast routing.
  • In such cases, a virtual interface or “tunnel” has to be configured, such that two nodes in different networks can interface and act as routing points for multicast messages. Tunneling is described in greater detail in the above-incorporated publication by Gary Wright and Richard Stevens entitled “TCP/IP Illustrated”. Again, tunneling end-points are traditionally configured manually by a network administrator.
  • Presented herein is a Dynamic Multicast Routing (DMR) facility which utilizes the above-described Group Services software.
  • the Group Services software provides facilities for other distributed processes to form “groups”.
  • a group is a distributed facility which monitors the health of its members and is capable of executing protocols for them.
  • The DMR facility of the present invention utilizes, in one example, Group Services to monitor the health of the routing nodes and to execute its election protocols, which ultimately determine which node of a plurality of nodes in the group should act as an end-point for a tunnel for multicast routing.
  • a DMR facility in accordance with the present invention also employs the mrouted daemon (again, as specified in the above-incorporated publication by Gary Wright and Richard Stevens entitled “TCP/IP Illustrated”) to establish tunneling end-points.
  • the DMR facility of this invention utilizes the mrouted daemon in such a way that it does not require any of a node's host discovery mechanisms to be deployed; and does not alter the established IP routing tables of the node. This behavior is desirable because the IP routes are configured separately.
  • the DMR thus supports any configuration of IP routing, i.e., whether dynamic, static or custom made.
  • FIG. 5 depicts a further example of a distributed computing environment, denoted 500 , having a plurality of nodes 510 distributed among multiple networks (Network A, Network B, Network C).
  • the DMR facility described herein implements a routing topology, in which exactly one point in each network of a plurality of interconnected networks acts as a routing or tunneling agent.
  • a network of nodes is synonymous herein with a group of nodes.
  • the input information to the DMR facility is a collection of nodes with an arbitrary network configuration, where every node is reachable via conventional IP datagrams.
  • Output is a dynamic set of router nodes which are configured to route multicast datagrams, either via a real interface or a virtual interface (i.e., a tunnel). This dynamic set of nodes ensures that all functional nodes within the distributed computing environment are reachable via multicast.
  • In FIG. 5, solid lines represent actual network connections. These physical connections define three physical networks within the distributed computing environment: Network A comprising node 1, node 2, node 3 and a router 520; Network B including node 4, node 5, node 6 and router 520; and Network C having nodes 1 & 4.
  • Router 520 in FIG. 5 is assumed to comprise a specialized hardware element which is used only for network routing. Router 520 does not comprise a computing node in the sense that it can only execute a pre-determined number of protocols.
  • nodes 1 - 6 comprise processing or computing nodes as described above and execute the DMR facility of the present invention.
  • the circles around nodes 2 , 4 & 6 identify these nodes as multicast routing points or routing nodes selected by the DMR facility for multicast message forwarding as described further herein.
  • One aspect of FIG. 5 is that any two computing nodes could actually be used as routing points to tunnel across the router.
  • the DMR facility of this invention runs a special group protocol described below that ensures that only two nodes between two groups will be chosen. This DMR facility monitors the health of these chosen routing points, and does immediate, automatic reconfiguration in the case of failure. Because reconfiguration is automatic, the routing facility is referred to herein as “dynamic”.
  • the DMR process runs in every node of the system, i.e., every node of the distributed computing environment could potentially be selected as a multicast routing node.
  • the DMR process reads the IP address and subnet mask for each communication interface (i.e., adapter) which is configured in the machine (i.e., node) that the DMR process is running on. Every node has to have at least one communication interface in order for the node to be within one network of the multiple networks in the distributed computing environment.
  • the DMR process then uses the network ID as a group identifier in accordance with this invention.
  • Each DMR process will join as many groups as there are communication adapters in the node where it runs, again using the network IDs as the group identifiers.
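  • A minimal sketch of the networkID computation and per-adapter group join described above is given below; it uses Python's standard ipaddress module, and the adapter addresses and the join_group placeholder are illustrative assumptions rather than the actual Group Services interface:

```python
import ipaddress

def network_id(ip_address: str, subnet_mask: str) -> str:
    """networkID = IP_address & subnet_mask, used as the group identifier (groupID)."""
    net = ipaddress.IPv4Network(f"{ip_address}/{subnet_mask}", strict=False)
    return str(net.network_address)

# Hypothetical adapters configured on this node (one entry per communication interface).
adapters = [("9.114.66.10", "255.255.255.0"), ("192.168.3.21", "255.255.255.0")]

for ip, mask in adapters:
    group_id = network_id(ip, mask)   # e.g. "9.114.66.0"
    # join_group(group_id)            # placeholder for joining the Group Services group
    print("joining group", group_id)
```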
  • the DMR processes of the group act as a distributed subsystem. This means that the DMR processes are now aware of the existence of each other, and they run synchronized protocols.
  • When a DMR process joins a group, the process receives a membership list from Group Services.
  • the Group Services subsystem guarantees that the first element in the list is the process that has first successfully joined the group.
  • the DMR utilizes the ordering within the membership list to determine the group leader for each group.
  • After joining a group, the DMR process checks to see if it is the group leader of any group; that is, the process checks if it is the first member on any of the group membership lists. The processes which are appointed group leaders will then join another group, which consists only of group leaders. This special group is referred to herein as the “group leaders group” or “GL_group”.
  • the members of the GL_group utilize the same technique described above to elect a group leader; that is, they pick the first member identified on the GL_group membership list.
  • the leader of the GL_group is referred to herein as the “system leader”.
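  • The resulting two-level group structure can be pictured with join-ordered membership lists such as those below; the leaders (nodes 2, 4 and 6, with node 2 as system leader) follow the example of FIGS. 5 & 6, while the remaining membership details are hypothetical:

```python
# Join-ordered membership lists as they might be returned by Group Services
# for three network groups; group and node names are illustrative only.
network_groups = {
    "netA": ["node2", "node1", "node3"],
    "netB": ["node4", "node5", "node6"],
    "netC": ["node6", "node5"],
}

# Each network group leader is the first member of its membership list.
group_leaders = {gid: members[0] for gid, members in network_groups.items()}

# The group leaders in turn join the GL_group; its join-ordered list might be:
gl_group = ["node2", "node4", "node6"]

# The system leader is the leader of the GL_group, i.e. its first member.
system_leader = gl_group[0]
print(group_leaders)   # {'netA': 'node2', 'netB': 'node4', 'netC': 'node6'}
print(system_leader)   # node2
```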
  • Once the system leader has been determined, the tunneling end-points are created.
  • the system leader's DMR will start an mrouted process and configure it for tunneling using a configuration file and a refresh signal.
  • the system leader DMR will configure its local mrouted daemon to tunnel multicast datagrams from all of its configured communication interfaces to each of the group leaders of the various network groups, i.e., the groups which were first formed and which utilize the networkID as group name.
  • the other members of the GL_group which are leaders of some network group, will in turn also start an mrouted process configured to route multicast datagrams from the communication interface that they are the leader of to all communication interfaces of the system leader.
  • the resulting network topology is that the system leader acts as a routing point for all the leaders of all the network groups.
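  • The sketch below suggests one way a DMR process might generate and refresh such a tunneling configuration. The tunnel entry format ("tunnel <local-addr> <remote-addr>") follows common mrouted.conf usage, and the file path, addresses and use of SIGHUP as the refresh signal are assumptions for illustration, not details taken from the patent:

```python
import os
import signal
import subprocess

def write_tunnel_config(local_ifaces, remote_ifaces, path="/etc/mrouted.conf"):
    """Write one tunnel entry per (local, remote) interface pair."""
    lines = [f"tunnel {local} {remote} metric 1 threshold 1"
             for local in local_ifaces
             for remote in remote_ifaces]
    with open(path, "w") as conf:
        conf.write("\n".join(lines) + "\n")

def start_or_refresh_mrouted(pid=None):
    """Start mrouted, or send the refresh signal so it rereads its configuration."""
    if pid is None:
        subprocess.Popen(["mrouted"])
    else:
        os.kill(pid, signal.SIGHUP)

# System leader (hypothetical addresses): tunnel from all of its interfaces
# to the interface of each network group leader.
write_tunnel_config(["9.114.66.10", "192.168.3.21"], ["9.114.67.4", "9.114.68.6"])
start_or_refresh_mrouted()
```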
  • Node 2 operates to forward multicast messages to any node within Network A, node 4 forwards multicast messages to any node within Network B, and node 6 forwards multicast messages to any node within Network C.
  • Note that the same node could operate as group leader for multiple groups of nodes; for example, node 6 could have been group leader for both Network B and Network C.
  • FIG. 7 depicts a flowchart of the above-described initialization processing in accordance with the present invention.
  • the DMR facility 700 is started on each node of the distributed computing environment, and for each communication interface 710 the DMR facility reads its corresponding IP address and subnet mask to determine a networkID 720 .
  • The networkID, which is defined as IP_address & subnet_mask, is employed herein as a “group identifier”, or “groupID”.
  • The node then joins a Group Services group using the groupID 730 and determines whether a groupID has been determined for each of its communication interfaces 740. If not, the process repeats until each interface has a groupID determined for it and the node has joined the corresponding Group Services group identified by that groupID.
  • When the DMR process joins a group, it receives a membership list from Group Services. This membership list is then employed as described above to determine whether the node is a group leader of any group 750.
  • a node is a group leader if it is the first member on any of the membership lists of a group to which it belongs. If the node is a leader of a group, then the node joins the group of group leaders, i.e., the GL_group 760 . If the node is other than a group leader, or after the node has joined the GL_group, initialization continues as noted in the recovery processing of FIG. 8 770 .
  • recovery processing starts 800 with the DMR process inquiring whether it is the leader of a network group 810 . If the DMR process is not a group leader, then the node simply waits for a Group Services notification of a membership change 870 , such as the addition or deletion of a node to the group. Upon notification of a membership change, the recovery process repeats as shown.
  • If the DMR process is a group leader, the node joins the GL_group if not already a member 820, and determines whether it is the GL_group leader 830. If so, then the node builds a configuration file for the mrouted daemon for tunneling from all of this node's interfaces to all other members of the GL_group 840. Once the configuration file is established, the mrouted daemon is started, or signaled if it is already running 860.
  • If the DMR process is on a node which is not the GL_group leader, then the node builds the configuration file for mrouted to tunnel from the network that the process is a leader of to the GL_group leader 850, and again starts the mrouted daemon or signals it if it is already running 860. After completing tunneling, the node waits for Group Services to notify it of a membership change 870, after which processing repeats as indicated.
  • The DMR processes recover automatically from the failure of any node in the distributed computing environment by employing the group formation protocols of FIGS. 7 & 8. Any failure within the environment is immediately detected by the Group Services subsystem, which will then inform the DMR processes which belong to any of the groups that the failed node used to belong to. The surviving nodes will perform the same election mechanisms as described above. If the failed node was the group leader for a network group, a new leader is elected. Again, in one example, the new leader comprises the first listed node in the membership list of the affected group. If the failed node was the leader of the GL_group, a new leader is similarly chosen for that group. Whenever a group leader is elected, it re-establishes the tunnel end-points as described above.
  • the operational loop of the DMR process depicted in FIG. 8 is based on membership within the several groups employed. After initialization, the process joins the appropriate groups, and configures the mrouted daemon as indicated. When another process joins the group, or leaves the group due to a failure, all processes get notified by the Group Services of a membership change, and all processes will make another pass at the recovery loop of FIG. 8, updating the configuration as appropriate.
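  • A condensed sketch of one pass of this operational loop (FIG. 8) follows; the node object and its helper methods are hypothetical stand-ins for the Group Services notifications and the mrouted configuration steps described above:

```python
def on_membership_change(node):
    """One pass of the FIG. 8 recovery loop, run whenever Group Services
    reports that a member joined or left one of this node's groups.
    The node object and its methods are hypothetical."""
    if not node.is_network_group_leader():              # 810: non-leaders just wait
        return
    node.join_gl_group_if_needed()                      # 820
    if node.is_gl_group_leader():                       # 830: system leader
        # 840: tunnel from all local interfaces to all other GL_group members.
        node.build_mrouted_config(node.all_interfaces(),
                                  node.other_gl_member_interfaces())
    else:
        # 850: tunnel from the network this node leads to the GL_group leader.
        node.build_mrouted_config([node.led_interface()],
                                  node.gl_leader_interfaces())
    node.start_or_signal_mrouted()                      # 860; then wait again (870)
```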
  • the present invention can be included, for example, in an article of manufacture (e.g., one or more computer program products) having, for instance, computer usable media.
  • This media has embodied therein, for instance, computer readable program code means for providing and facilitating the capabilities of the present invention.
  • the articles of manufacture can be included as part of the computer system or sold separately.
  • At least one program storage device readable by machine tangibly embodying at least one program of instructions executable by the machine, to perform the capabilities of the present invention, can be provided.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A Dynamic Multicast Routing (DMR) facility is provided for a distributed computing environment having a plurality of networks of computing nodes. The DMR facility automatically creates virtual interfaces between selected computing nodes of the networks to ensure multicast message reachability to all functional computing nodes within the distributed computing environment. The DMR facility employs a group of group leader nodes (GL_group) among which virtual interfaces for multicast messaging are established. Upon failure of one of the group leader nodes, another computing node of the respective network having the failing group leader node is assigned group leader status for re-establishing virtual interfaces. Virtual interfaces are established between the group leader nodes such that redundancy in message routing is avoided.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS/PATENTS
  • This application is a divisional of U.S. patent application Ser. No. 09/238,202, filed Jan. 27, 1999, entitled “Dynamic Multicast Routing Facility For A Distributed Computing Environment”, the entirety of which is hereby incorporated herein by reference. [0001]
  • This application also contains subject matter which is related to the subject matter of the following applications and patents. Each of the below-listed applications and patents is hereby incorporated herein by reference in its entirety: [0002]
  • U.S. Ser. No. 08/640,305, filed Apr. 30, 1996, entitled “An Application Programming Interface Unifying Multiple Mechanisms”, now abandoned in favor of U.S. Pat. No. 6,026,426 issued Feb. 15, 2000; [0003]
  • U.S. Pat. No. 6,104,871, issued Aug. 15, 2000, entitled “Utilizing Batch Request to Present Membership Changes to Process Groups”; [0004]
  • U.S. Pat. No. 5,805,786, issued Sep. 8, 1998, entitled “Recovery of a Name Server Managing Membership of a Domain of Processors in a Distributed Computing Environment”; [0005]
  • U.S. Pat. No. 5,799,146, issued Aug. 25, 1998, entitled “Communications System Involving Groups of Processors of a Distributed Computing Environment”; [0006]
  • U.S. Pat. No. 5,793,962, issued Aug. 11, 1998, entitled “System for Managing Membership of a Group of Processors in a Distributed Computing Environment”; [0007]
  • U.S. Pat. No. 5,790,788, issued Aug. 4, 1998, entitled “Managing Group Events by a Name Server for a Group of Processors in a Distributed Computing Environment”; [0008]
  • U.S. Pat. No. 5,790,772, issued Aug. 4, 1998, entitled “Communications Method Involving Groups of Processors of a Distributed Computing Environment”; [0009]
  • U.S. Pat. No. 5,787,250, issued Jul. 28, 1998, entitled “Program Product for Managing Membership of a Group of Processors in a Distributed Computing Environment”; [0010]
  • U.S. Pat. No. 5,787,249, issued Jul. 28, 1998, entitled “Method for Managing Membership of a Group of Processors in a Distributed Computing Environment”; [0011]
  • U.S. Pat. No. 5,768,538, issued Jun. 16, 1998, entitled “Barrier Synchronization Method Wherein Members Dynamic Voting Controls the Number of Synchronization Phases of Protocols and Progression to Each New Phase”; [0012]
  • U.S. Pat. No. 5,764,875, issued Jun. 9, 1998, entitled “Communications Program Product Involving Groups of Processors of a Distributed Computing Environment”; [0013]
  • U.S. Pat. No. 5,748,958, issued May 5, 1998, entitled “System for Utilizing Batch Requests to Present Membership Changes to Process Groups”; [0014]
  • U.S. Pat. No. 5,704,032, issued Dec. 30, 1997, entitled “Method for Group Leader Recovery in a Distributed Computing Environment”; [0015]
  • U.S. Pat. No. 5,699,501, issued Dec. 16, 1997, entitled “System for Group Leader Recovery in a Distributed Computing Environment”; and [0016]
  • U.S. Pat. No. 5,696,896, issued Dec. 9, 1997, entitled “Program Product for Group Leader Recovery in a Distributed Computing Environment”. [0017]
  • TECHNICAL FIELD
  • This invention relates in general to distributed computing environments, and in particular, to a dynamic facility for ensuring multicast routing of messages within such an environment, irrespective of the failure of one or more established multicast routing nodes. [0018]
  • BACKGROUND OF THE INVENTION
  • Many network environments enable messages to be forwarded from one site within the network to one or more other sites using a multicast protocol. Typical multicast protocols send messages from one site to one or more other sites based on information stored within a message header. One example of a system that includes such a network environment is a publish/subscribe system. In publish/subscribe systems, publishers post messages and subscribers independently specify categories of events in which they are interested. The system takes the posted messages and includes in each message header the destination information of those subscribers indicating interest in the particular message. The system then uses the destination information in the message to forward the message through the network to the appropriate subscribers. [0019]
  • In large systems, there may be many subscribers interested in a particular message. Thus, a large list of destinations would need to be added to the message header for use in forwarding the message. The use of such a list, which can even be longer than the message itself, can clearly degrade system performance. Another approach is to use a multicast group, in which destinations are statically bound to a group name, and then that name is included in the message header. The message is sent to all those destinations statically bound to the name. This technique has the disadvantage of requiring static groups of destinations, which restricts flexibility in many publish/subscribe systems. Another disadvantage of static groups occurs upon failure of a destination node within the group. [0020]
  • Multicast messages must be routed in order to reach multiple networks in a large distributed computing environment. Multicast routing is complicated by the fact that some older routers do not support such routing. In that case, routing is conventionally solved by manually configuring selected hosts (i.e., computing nodes) as “routing points”. Such routing points are capable of running host discovery protocols that enable them to configure their routing tables in such a way that all nodes of the system will then be reachable via multicast. In some cases, the multicast messages have to be routed through IP routers which do not support multicast routing. In such cases, a “tunnel” has to be configured such that two hosts in different networks can act as routing points for multicast messages. For further information on “tunneling” reference an Addison/Wesley publication entitled “TCP/IP Illustrated,” by Gary Wright and Richard Stevens, ISBN 0-201-63354-X (1995), the entirety of which is hereby incorporated herein by reference. Again, tunneling end-points are usually configured manually, for example, by a network administrator. [0021]
  • The above-summarized solution has the weakness that the failure of any one such static routing point or node will isolate nodes of the corresponding subsystem. There is no recovery mechanism currently that can guarantee the reachability of all nodes given the failure of one or more nodes in the distributed computing environment. It could be argued that manual configuration of all nodes as routing points would allow survival of any failure. However, such a solution is still unsatisfactory because the deployment of each node as a routing node imposes unnecessary overhead, and significantly multiplies the number of messages required to be forwarded due to the increased number of routes between the nodes. The resulting degradation of transmission bandwidth is clearly unacceptable. [0022]
  • In view of the above, a need exists for a mechanism capable of monitoring the nodes of a distributed computing environment, and in particular, the routing nodes, and of automatically reacting to a failure of any routing node within the environment. Furthermore, it is desirable that only one node act as a routing point to/from a network, to avoid additional overhead and pollution of network messages. This invention addresses these needs by providing a dynamic multicast routing facility for the distributed processing environment. [0023]
  • DISCLOSURE OF THE INVENTION
  • Briefly described, the invention comprises in one aspect a method for dynamically ensuring multicast messaging within a distributed computing environment. The method includes: establishing multiple groups of computing nodes within the distributed computing environment; selecting one node of each group of computing nodes as a group leader node; forming a group of group leader nodes (GL_group) and selecting a group leader of the GL_group; and automatically creating a virtual interface for multicast messaging between the group leader node of the GL_group and at least one other group leader node within the GL_group, thereby establishing multicast routing between groups of nodes of the distributed computing environment. [0024]
  • In another aspect, the invention comprises a processing method for a distributed computing environment having multiple networks of computing nodes. Each network has at least one computing node. At least one computing node of the multiple networks of computing nodes functions as a multicast routing node. The method includes: automatically responding to a failure at the at least one computing node functioning as multicast routing node to reassign the multicast routing function; and wherein the automatically responding includes dynamically reconfiguring the distributed computing environment to replace each failed multicast routing node of the at least one multicast routing node with another computing node of the multiple networks to maintain reachability of multicast messages to all functional computing nodes of the distributed computing environment. [0025]
  • In yet another aspect, the invention comprises a system for ensuring multicast messaging within a distributed computing environment. The system includes multiple groups of computing nodes within the distributed computing environment, and means for selecting one node of each group of computing nodes as a group leader node. The system further includes means for forming a group of group leader nodes (GL_group) and for selecting a group leader of the GL_group. In addition, a mechanism is provided for automatically creating a virtual interface for multicast messaging between the group leader node of the GL_group and at least one other group leader node within the GL_group, thereby ensuring multicast routing between groups of nodes of the distributed computing environment. [0026]
  • In still another aspect, a processing system is provided for a distributed computing environment which includes multiple networks of computing nodes. The multiple networks of computing nodes employ multicast messaging, with each network having at least one computing node, and at least one computing node of the multiple networks of computing nodes functioning as multicast routing node. The system further includes means for automatically responding to a failure at the at least one computing node functioning as multicast routing node to reassign the multicast routing function. The means for automatically responding includes a mechanism for dynamically reconfiguring the distributed computing environment to replace each failed multicast routing node of the at least one multicast routing node with another computing node of the multiple networks of computing nodes to maintain reachability of multicast messages to all functional computing nodes of the distributed computing environment. [0027]
  • In a further aspect, an article of manufacture is presented which includes a computer program product comprising a computer usable medium having computer readable program code means therein for use in ensuring multicast messaging within a distributed computing environment. The computer readable program code means in the computer program product includes computer readable program code means for causing a computer to effect: establishing multiple groups of computing nodes within the distributed computing environment; selecting one node of each group of computing nodes as a group leader node; forming a group of group leader nodes (GL_group) and selecting a group leader of the GL_group; and automatically creating a virtual interface for multicast messaging between the group leader node of the GL_group and at least one other group leader node within the GL_group, thereby establishing multicast routing between groups of nodes of the distributed computing environment. [0028]
  • In a still further aspect, the invention includes an article of manufacture which includes a computer program product comprising a computer usable medium having computer readable program code means therein for maintaining multicast message reachability within a distributed computing environment having multiple networks of computing nodes employing multicast messaging. Each network has at least one computing node, and at least one computing node of the multiple networks of computing nodes functions as multicast routing node. The computer readable program code means in the computer program product includes computer readable program code means for causing a computer to effect: automatically responding to a failure at the at least one computing node functioning as the multicast routing node to reassign the multicast routing function; wherein the automatically responding comprises dynamically reconfiguring the distributed computing environment to replace each failed multicast routing node of the at least one multicast routing node with another computing node of the multiple networks of computing nodes to maintain multicast message reachability to all functional computing nodes of the distributed computing environment. [0029]
  • To restate, the present invention solves the problem of maintaining reachability of multicast messages in a distributed computing system having multiple networks of computing nodes. The solution, referred to as a Dynamic Multicast Routing (DMR) facility, automatically selects another computing node from a network having a failed computing node operating as the multicast routing node. Further, the DMR facility provided herein ensures that only one node of a network will act as a routing point between networks, thereby avoiding host overhead and pollution of network messages inherent, for example, in making each node of the distributed computing environment capable of receiving and sending multicast messages. In one embodiment, the DMR facility utilizes Group Services to be notified immediately of a node failure or communication adapter failure; and automatically responds thereto. [0030]
  • The DMR facility described herein has multiple applications in a distributed computing environment such as a cluster or parallel system. For example, the DMR facility could be used when sending a multicast datagram to a known address for service. An example of the need for a dynamic service facility is the location of the registry servers at boot time. Another use of a DMR facility in accordance with this invention is in the distribution of a given file to a large number of nodes. For example, propagation of a password file by multicast is an efficient way to distribute information. The DMR facility presented herein ensures that the multicast message gets routed to all subnets, independently of which nodes are down at any one time and independent of router box support.[0031]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The above-described objects, advantages and features of the present invention, as well as others, will be more readily understood from the following detailed description of certain preferred embodiments of the invention, when considered in conjunction with the accompanying drawings in which: [0032]
  • FIG. 1 depicts one example of a distributed computing environment to incorporate the principles of the present invention; [0033]
  • FIG. 2 depicts an expanded view of a number of the processing nodes of the distributed computing environment of FIG. 1; [0034]
  • FIG. 3 depicts one example of the components of a Group Services facility to be employed by one embodiment of a Dynamic Multicast Routing (DMR) facility in accordance with the principles of the present invention; [0035]
  • FIG. 4 illustrates one example of a processor group resulting from the Group Services protocol to be employed by said one embodiment of a DMR facility in accordance with the principles of the present invention; [0036]
  • FIG. 5 depicts another example of a distributed computing environment to employ a DMR facility in accordance with the principles of the present invention, wherein multiple groups of nodes or network groups are to be virtually interfaced for multicast messaging; [0037]
  • FIG. 6 depicts virtual interfaces or tunnels, established by a Dynamic Multicast Routing facility in accordance with the principles of the present invention, between a group leader node 2 and other group leader nodes 4 & 6 of the multiple network groups; [0038]
  • FIG. 7 is a flowchart of initialization processing in accordance with one embodiment of a Dynamic Multicast Routing facility pursuant to the principles of the present invention; and [0039]
  • FIG. 8 is a flowchart of recovery processing in accordance with one embodiment of a Dynamic Multicast Routing facility pursuant to the principles of the present invention.[0040]
  • BEST MODE FOR CARRYING OUT THE INVENTION
  • In one embodiment, the techniques of the present invention are used in distributed computing environments in order to provide multi-computer applications that are highly-available. Applications that are highly-available are able to continue to execute after a failure. That is, the application is fault-tolerant and the integrity of customer data is preserved. [0041]
  • It is important in highly-available systems to be able to coordinate, manage and monitor changes to subsystems (for example, process groups) running on processing nodes within the distributed computing environment. In accordance with the principles of the present invention, a facility is provided for dynamically or automatically accomplishing this in a distributed computing environment employing multicast routing of data messages between nodes. The Dynamic Multicast Routing facility (herein referred to as the “DMR facility”) of the present invention employs, in one example, the concepts referred to as “Group Services” in the above-incorporated U.S. patent applications and Letters Patent. [0042]
  • As used herein, a “host” comprises a computer which is capable of supporting network protocols, and a “node” is a processing unit, such as a host, in a computer network. “Multicast” refers to an internet protocol (IP) multicast as the term is used in the above-incorporated Addison/Wesley publication entitled “TCP/IP Illustrated”. A “daemon” is persistent software which runs detached from a controlling terminal. “Distributed subsystem” is a group of daemons which run in different hosts. “Group Services” is software present on International Business Machines Corporation's Parallel System Support Programs (PSSP) Software Suite (i.e., operating system of the Scalable Parallel (SP)), and IBM's High Availability Cluster Multi-Processing/Enhanced Scalability (HACMP/ES) Software Suite. [0043]
  • Group Services is a system-wide, fault-tolerant and highly-available service that provides a facility for coordinating, managing and monitoring changes to a subsystem running on one or more processors of a distributed computing environment. Group Services provides an integrated framework for designing and implementing fault-tolerant subsystems and for providing consistent recovery of multiple subsystems. Group Services offers a simple programming model based on a small number of core concepts. These concepts include a cluster-wide process group membership and synchronization service that maintains application specific information with each process group. [0044]
  • Although as noted above, in one example, the mechanisms of the present invention are implemented employing the Group Services facility, the mechanisms of this invention could be used in or with various other facilities, and thus, Group Services is only one example. The use of the term “Group Services” in explaining one embodiment of the present invention is for convenience only. [0045]
  • In one embodiment, the mechanisms of the present invention are incorporated and used in a distributed computing environment, such as the one depicted in FIG. 1. Distributed computing environment 100 includes, for instance, a plurality of frames 102 coupled to one another via a plurality of LAN gates 104. Frames 102 and LAN gates 104 are described in detail below. [0046]
  • In the example shown, distributed computing environment 100 includes eight (8) frames, each of which includes a plurality of processing or computing nodes 106. In one instance, each frame includes sixteen (16) processing nodes (a.k.a., processors). Each processing node is, for instance, a RISC/6000 computer running AIX, i.e., a UNIX based operating system. Each processing node within a frame is coupled to the other processing nodes of the frame via, for example, an internal LAN connection. Additionally, each frame is coupled to the other frames via LAN gates 104. [0047]
  • As examples, each LAN gate 104 includes either a RISC/6000 computer, any computer network connection to the LAN, or a network router. However, these are only examples. It will be apparent to those skilled in the relevant art that there are other types of LAN gates, and that other mechanisms can be used to couple the frames to one another. [0048]
  • In addition to the above, the distributed computing environment of FIG. 1 is only one example. It is possible to have more or less than eight frames, or more or less than sixteen nodes per frame. Further, the processing nodes do not have to be RISC/6000 computers running AIX. Some or all of the processing nodes can include different types of computers and/or different operating systems. All of these variations are considered a part of the claimed invention. [0049]
  • In one embodiment, a Group Services subsystem incorporating the mechanisms of the present invention is distributed across a plurality of processing nodes of distributed computing environment 100. In particular, in one example, a Group Services daemon 200 (FIG. 2) is located within one or more of processing nodes 106. The Group Services daemons 200 are accessed by each process via an application programming interface 204. The Group Services daemons are collectively referred to as “Group Services”. [0050]
  • Group Services facilitates, for instance, communication and synchronization between multiple processes of a process group, and can be used in a variety of situations, including, for example, providing a distributed recovery synchronization mechanism. A process 202 (FIG. 2) desirous of using the facilities of Group Services is coupled to a Group Services daemon 200. In particular, the process is coupled to Group Services by linking at least a part of the code associated with Group Services (e.g., the library code) into its own code. [0051]
  • In one embodiment, Group Services 200 includes an internal layer 302 (FIG. 3) and an external layer 304. Internal layer 302 provides a limited set of functions for external layer 304. The limited set of functions of the internal layer can be used to build a richer and broader set of functions, which are implemented by the external layer and exported to the processes via the application programming interface. The internal layer of Group Services (also referred to as a “metagroup layer”) is concerned with the Group Services daemons, and not the processes (i.e., the client processes) coupled to the daemons. That is, the internal layer focuses its efforts on the processors, which include the daemons. In one example, there is only one Group Services daemon on a processing node; however, a subset or all of the processing nodes within the distributed computing environment can include Group Services daemons. [0052]
  • The internal layer of Group Services implements functions on a per processor group basis. There may be a plurality of processor groups in the distributed computing environment. Each processor group includes one or more processors having a Group Services daemon executing thereon. The processors of a particular group are related in that they are executing related processes. (In one example, processes that are related provide a common function.) For example, referring to FIG. 4, processor group X (400) includes processing node 1 and processing node 2, since each of these nodes is executing a process X, but it does not include processing node 3. Thus, processing nodes 1 and 2 are members of processor group X. A processing node can be a member of none or any number of processor groups, and processor groups can have one or more members in common. [0053]
  • In order to become a member of a processor group, the processor needs to request to be a member of that group. A processor requests to become a member of a particular processor group (e.g., processor group X) when a process related to that group (e.g., process X) requests to join a corresponding process group (e.g., process group X) and the processor is not aware of that corresponding process group. Since the Group Services daemon on the processor handling the request to join a particular process group is not aware of the process group, it knows that it is not a member of the corresponding processor group. Thus, the processor asks to become a member, so that the process can become a member of the process group. [0054]
  • Internal layer 302 (FIG. 3) implements a number of functions on a per processor group basis. These functions include, for example, maintenance of group leaders. [0055]
  • A group leader is selected for each processor group of the network. In one example, the group leader is the first processor requesting to join a particular group. As described herein, the group leader is responsible for controlling activities associated with its processor group(s). For example, if a processing node, node 2 (FIG. 4), is the first node to request to join processor group X, then processing node 2 is the group leader and is responsible for managing the activities of processor group X. It is possible for processing node 2 to be the group leader of multiple processor groups. [0056]
  • If the group leader is removed from the processor group for any reason (for instance, the processor requests to leave the group, the processor fails, or the Group Services daemon on the processor fails), then group leader recovery must take place. In one example, in order to select a new group leader, the membership list for the processor group, which is ordered in the sequence in which processors joined that group, is scanned by one or more processors of the group for the next processor in the list. The membership list is preferably stored in memory in each of the processing nodes of the processor group. Once selected, the new group leader informs, in one embodiment, a name server that it is the new group leader. The name server is, for instance, one of the processing nodes within the distributed computing environment designated for that role, and serves as a central location for storing certain information, including, for instance, a list of all processor groups of the network and a list of the group leaders for all the processor groups. This information is stored in the memory of the name server processing node. The name server can be a processing node within the processor group or a processing node independent of the processor group. [0057]
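By way of illustration only, the leader-selection rule just described can be sketched as follows. This is a minimal sketch, not the Group Services implementation; the function and argument names are assumptions made for this example, and the membership list is taken to be ordered by join time, as stated above.

    def elect_group_leader(membership, failed=()):
        """Return the group leader: the first member of the join-ordered
        membership list that has not failed or left the group."""
        for node in membership:
            if node not in failed:
                return node
        return None  # no surviving members remain in the group

    # Node 2 joined first, so it is the leader; if node 2 fails, the next
    # processor in join order (node 1 here) becomes the new group leader.
    assert elect_group_leader([2, 1, 3]) == 2
    assert elect_group_leader([2, 1, 3], failed={2}) == 1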
  • In large clustered systems, multicast messages require routing in order to reach multiple networks. As noted initially, the problem of maintaining multicast message reachability is often complicated by the fact that certain older routers do not support multicast routing. This routing problem is conventionally solved by manually configuring the selected hosts and routers in the distributed system as “routing points”. Such host routing points are capable of running host discovery protocols that enable them to configure their routing tables in such a way that all nodes in the system become reachable via multicast. [0058]
  • In certain cases, multicast messages have to be routed through IP routers which do not support multicast routing. In such cases, a virtual interface or “tunnel” has to be configured, such that two nodes in different networks can interface and act as routing points for multicast messages. Tunneling is described in greater detail in the above-incorporated publication by Gary Wright and Richard Stevens entitled “TCP/IP Illustrated”. Again, tunneling end-points are traditionally configured manually by a network administrator. [0059]
  • In accordance with the principles of the present invention, a Dynamic Multicast Routing (DMR) facility is provided which utilizes the above-described Group Services software. As noted, the Group Services software provides facilities for other distributed processes to form “groups”. A group is a distributed facility which monitors the health of its members and is capable of executing protocols for them. The DMR facility of the present invention utilizes, in one example, Group Services to monitor the health of the routing nodes and to execute its election protocols, which ultimately determine which node of a plurality of nodes in the group should act as an end-point of a tunnel for multicast routing. [0060]
  • A DMR facility in accordance with the present invention also employs the mrouted daemon (again, as specified in the above-incorporated publication by Gary Wright and Richard Stevens entitled “TCP/IP Illustrated”) to establish tunneling end-points. The DMR facility of this invention utilizes the mrouted daemon in such a way that it does not require any of a node's host discovery mechanisms to be deployed; and does not alter the established IP routing tables of the node. This behavior is desirable because the IP routes are configured separately. The DMR thus supports any configuration of IP routing, i.e., whether dynamic, static or custom made. [0061]
  • FIG. 5 depicts a further example of a distributed computing environment, denoted 500, having a plurality of nodes 510 distributed among multiple networks (Network A, Network B, Network C). The DMR facility described herein implements a routing topology in which exactly one point in each network of a plurality of interconnected networks acts as a routing or tunneling agent. For reasons described below, a network of nodes is synonymous herein with a group of nodes. The input information to the DMR facility is a collection of nodes with an arbitrary network configuration, where every node is reachable via conventional IP datagrams. Output is a dynamic set of router nodes which are configured to route multicast datagrams, either via a real interface or a virtual interface (i.e., a tunnel). This dynamic set of nodes ensures that all functional nodes within the distributed computing environment are reachable via multicast. [0062]
  • In the example of FIG. 5, solid lines represent actual network connections. These physical connections define three physical networks within the distributed computing environment: Network A, comprising node 1, node 2, node 3 and a router 520; Network B, including node 4, node 5, node 6 and router 520; and Network C, having nodes 1 and 4. Router 520 in FIG. 5 is assumed to comprise a specialized hardware element which is used only for network routing. Router 520 does not comprise a computing node in the sense that it can only execute a pre-determined number of protocols. In contrast, nodes 1-6 comprise processing or computing nodes as described above and execute the DMR facility of the present invention. The circles around nodes 2, 4 and 6 identify these nodes as multicast routing points or routing nodes selected by the DMR facility for multicast message forwarding as described further herein. [0063]
  • One aspect of FIG. 5 is that any two computing nodes could actually be used as routing points to tunnel across the router. The DMR facility of this invention runs a special group protocol, described below, that ensures that only two nodes (one tunneling end-point in each group) are chosen between any two groups. This DMR facility monitors the health of these chosen routing points, and performs immediate, automatic reconfiguration in the case of failure. Because reconfiguration is automatic, the routing facility is referred to herein as “dynamic”. [0064]
  • One detailed embodiment of a technique in accordance with the principles of the present invention to dynamically determine tunneling end-points for the forwarding of multicast datagrams can be summarized as follows: [0065]
  • The DMR process runs in every node of the system, i.e., every node of the distributed computing environment could potentially be selected as a multicast routing node. [0066]
  • At initialization time, the DMR process reads the IP address and subnet mask for each communication interface (i.e., adapter) which is configured in the machine (i.e., node) that the DMR process is running on. Every node has to have at least one communication interface in order for the node to be within one network of the multiple networks in the distributed computing environment. [0067]
  • The DMR process performs a bitwise (Boolean) AND operation of the IP address and subnet mask of each communication interface, obtaining in this way a network ID. Specifically, networkID=IP_address & subnet_mask. (A sketch of this computation, together with the group-joining and leader-election steps below, follows this list.) [0068]
  • The DMR process then uses the network ID as a group identifier in accordance with this invention. Each DMR process will join as many groups as there are communication adapters in the node where it runs, again using the network IDs as the group identifiers. [0069]
  • Once the node joins a group, the DMR processes of the group act as a distributed subsystem. This means that the DMR processes are now aware of the existence of each other, and they run synchronized protocols. [0070]
  • When a DMR process joins a group, the process receives a membership list from Group Services. The Group Services subsystem guarantees that the first element in the list is the process that has first successfully joined the group. The DMR utilizes the ordering within the membership list to determine the group leader for each group. [0071]
  • After joining a group, the DMR process checks to see if it is the group leader of any group; that is, the process checks if it is the first member on any of the group membership lists. The processes which are appointed group leaders will then join another group, which consists only of group leaders. This special group is referred to herein as the “group leaders group” or “GL_group”. [0072]
  • The members of the GL_group utilize the same technique described above to elect a group leader; that is, they pick the first member identified on the GL_group membership list. [0073]
  • The leader of the GL_group is referred to herein as the “system leader”. Once a DMR process is appointed a system leader, the tunneling end-points are created. The system leader's DMR will start an mrouted process and configure it for tunneling using a configuration file and a refresh signal. The system leader DMR will configure its local mrouted daemon to tunnel multicast datagrams from all of its configured communication interfaces to each of the group leaders of the various network groups, i.e., the groups which were first formed and which utilize the networkID as group name. [0074]
  • The other members of the GL_group, which are leaders of some network group, will in turn also start an mrouted process configured to route multicast datagrams from the communication interface that they are the leader of to all communication interfaces of the system leader. [0075]
  • The resulting network topology is that the system leader acts as a routing point for all the leaders of all the network groups. [0076]
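The initialization steps listed above can be outlined in code. The sketch below is illustrative only and makes several assumptions: the interface list is supplied by the caller, the join_group callable stands in for the Group Services join operation (which is not shown), and the helper names are invented for this example.

    import ipaddress

    def network_id(ip_address, subnet_mask):
        """networkID = IP_address & subnet_mask (a bitwise AND of the two)."""
        ip = int(ipaddress.IPv4Address(ip_address))
        mask = int(ipaddress.IPv4Address(subnet_mask))
        return str(ipaddress.IPv4Address(ip & mask))

    def initialize_dmr(node, interfaces, join_group):
        """Join one group per configured communication interface, using the
        network ID as the group identifier, then join the GL_group if this
        node heads any of the returned membership lists.

        interfaces -- sequence of (ip_address, subnet_mask) pairs
        join_group -- callable standing in for the Group Services join;
                      it returns the join-ordered membership list
        """
        led_groups = []
        for ip, mask in interfaces:
            group_id = network_id(ip, mask)        # e.g. "9.114.12.0"
            membership = join_group(group_id, node)
            if membership and membership[0] == node:
                led_groups.append(group_id)        # first joiner leads the group

        is_system_leader = False
        if led_groups:                             # group leaders join the GL_group
            gl_membership = join_group("GL_group", node)
            is_system_leader = (gl_membership[0] == node)
        return led_groups, is_system_leader

    # For example, network_id("9.114.12.87", "255.255.255.0") == "9.114.12.0".

The system leader then configures tunnels to every other member of the GL_group, and each remaining group leader configures a tunnel back to the system leader, as described in the paragraphs that follow.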
  • Applying the above procedure to the distributed computing environment 500 of FIG. 5 results in the topology shown in FIG. 6. This topology is arrived at by assuming that node 2 is the first listed node in the membership list of the group comprising the nodes of Network A, node 6 is the first listed node in the membership list of the nodes comprising Network B, and node 4 is the first listed node in the membership list of the nodes comprising Network C. Further, the topology is obtained by assuming that node 2 is the first listed group leader in the membership list for the GL_group comprising nodes 2, 4 and 6. Again, mrouted daemons at nodes 2, 4 and 6 are employed to establish the multicast tunnel connections or interfaces between these nodes. Node 2 operates to forward multicast messages to any node within Network A, node 6 forwards multicast messages to any node within Network B, and node 4 forwards multicast messages to any node within Network C. Note that, although not shown, the same node could operate as group leader for multiple groups of nodes. For example, node 4, which has interfaces on both Network B and Network C, could have been group leader for both of those networks. [0077]
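For concreteness, the FIG. 5 example can be written out as data. The membership orderings below are the ones assumed in the paragraph above, not values taken from a running system:

    # Join-ordered membership lists assumed for the FIG. 5 example.
    memberships = {
        "Network A": [2, 1, 3],   # node 2 joined first -> leader of Network A
        "Network B": [6, 4, 5],   # node 6 joined first -> leader of Network B
        "Network C": [4, 1],      # node 4 joined first -> leader of Network C
    }
    leaders = {net: members[0] for net, members in memberships.items()}

    # The group leaders join the GL_group; node 2 is assumed to have joined
    # first, so it is the system leader and anchors every tunnel (FIG. 6).
    gl_group = [2, 4, 6]
    system_leader = gl_group[0]
    tunnels = [(leader, system_leader) for leader in gl_group if leader != system_leader]

    print(leaders)   # {'Network A': 2, 'Network B': 6, 'Network C': 4}
    print(tunnels)   # [(4, 2), (6, 2)]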
  • FIG. 7 depicts a flowchart of the above-described initialization processing in accordance with the present invention. The DMR facility 700 is started on each node of the distributed computing environment, and for each communication interface 710 the DMR facility reads the corresponding IP address and subnet mask to determine a networkID 720. The networkID, which is defined as IP_address & subnet_mask, is employed herein as a “group identifier”, or “groupID”. After determining a groupID, the node joins a Group Services group using the groupID 730 and determines whether a groupID has been determined for each of its communication interfaces 740. If not, the process repeats until each interface has a groupID determined for it, and the node has joined the corresponding Group Services group identified by that groupID. [0078]
  • When the DMR process joins a group, it receives a membership list from Group Services. This membership list is then employed as described above to determine whether the node is a group leader of any group 750. Again, in one example, a node is a group leader if it is the first member on any of the membership lists of a group to which it belongs. If the node is a leader of a group, then the node joins the group of group leaders, i.e., the GL_group 760. If the node is other than a group leader, or after the node has joined the GL_group, initialization continues as noted in the recovery processing of FIG. 8 770. [0079]
  • In the example of FIG. 8, recovery processing starts 800 with the DMR process inquiring whether it is the leader of a network group 810. If the DMR process is not a group leader, then the node simply waits for a Group Services notification of a membership change 870, such as the addition of a node to, or removal of a node from, the group. Upon notification of a membership change, the recovery process repeats as shown. [0080]
  • Assuming that the DMR process is running on a node that is a group leader, then the node joins the GL_group if not already a member 820, and determines whether it is the GL_group leader 830. If so, then the node builds a configuration file for the mrouted daemon for tunneling from all of this node's interfaces to all other members of the GL_group 840. Once the configuration file is established, the mrouted daemon is started, or signaled if it is already running 860. If the DMR process is on a node which is not the GL_group leader, then the node builds the configuration file for mrouted to tunnel from the network that the process is a leader of to the GL_group leader 850, and again starts the mrouted daemon or signals it if it is already running 860. Once tunneling is configured, the node waits for Group Services to notify it of a membership change 870, after which processing repeats as indicated. [0081]
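The configuration-file and signaling steps of FIG. 8 can be sketched as follows. The "tunnel local remote" line format follows the general mrouted configuration-file convention, but the exact options accepted vary by mrouted version, and the metric and threshold values, the file path, and the use of SIGHUP as the refresh signal are assumptions made for this illustration rather than details taken from the patent.

    import os
    import signal

    def build_mrouted_config(local_interfaces, remote_endpoints,
                             path="/tmp/dmr_mrouted.conf"):
        """Write one tunnel entry from every local interface to every remote
        tunneling end-point, as the GL_group leader does toward all other
        group leaders (a non-leader writes a single local/remote pair)."""
        lines = []
        for local in local_interfaces:
            for remote in remote_endpoints:
                lines.append("tunnel %s %s metric 1 threshold 1" % (local, remote))
        with open(path, "w") as conf:
            conf.write("\n".join(lines) + "\n")
        return path

    def refresh_mrouted(pid):
        """Ask a running mrouted daemon to re-read its configuration;
        SIGHUP is assumed here, and starting a daemon that is not yet
        running is left out of this sketch."""
        os.kill(pid, signal.SIGHUP)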
  • In accordance with the present invention, the DMR processes recover automatically from the failure of any node in the distributed computing environment by employing the group formation protocols of FIGS. 7 & 8. Any failure within the environment is immediately detected by the Group Services subsystem, which will then inform the DMR processes which belong to any of the groups that the failed node used to belong to. The surviving nodes will perform the same election mechanisms as described above. If the failed node was the group leader for a network group, a new leader is elected. Again, in one example, the new leader comprises the first listed node in the membership list of the affected group. If the failed node was the leader of the GL_group, a new leader is similarly chosen for that group. Whenever a group leader is elected, it re-establishes the tunnel end-points as described above. [0082]
  • The operational loop of the DMR process depicted in FIG. 8 is based on membership within the several groups employed. After initialization, the process joins the appropriate groups and configures the mrouted daemon as indicated. When another process joins the group, or leaves the group due to a failure, all processes are notified by Group Services of a membership change, and all processes will make another pass through the recovery loop of FIG. 8, updating the configuration as appropriate. [0083]
  • The present invention can be included, for example, in an article of manufacture (e.g., one or more computer program products) having, for instance, computer usable media. This media has embodied therein, for instance, computer readable program code means for providing and facilitating the capabilities of the present invention. The articles of manufacture can be included as part of the computer system or sold separately. [0084]
  • Additionally, at least one program storage device readable by machine, tangibly embodying at least one program of instructions executable by the machine, to perform the capabilities of the present invention, can be provided. [0085]
  • The flow diagrams depicted herein are provided by way of example. There may be variations to these diagrams or the steps (or operations) described herein without departing from the spirit of the invention. For instance, in certain cases, the steps may be performed in differing order, or steps may be added, deleted or modified. All of these variations are considered to comprise part of the present invention as recited in the appended claims. [0086]
  • While the invention has been described in detail herein in accordance with certain preferred embodiments thereof, many modifications and changes therein may be effected by those skilled in the art. Accordingly, it is intended by the appended claims to cover all such modifications and changes as fall within the true spirit and scope of the invention. [0087]

Claims (11)

What is claimed is:
1. A processing method for a distributed computing environment having multiple networks of computing nodes employing multicast messaging, each network having at least one computing node, at least one computing node of said multiple networks of computing nodes functioning as a multicast routing node, said method comprising:
automatically responding to a failure at said at least one computing node functioning as said multicast routing node to reassign said multicast routing function; and
wherein said automatically responding comprises dynamically reconfiguring said distributed computing environment to replace each failed multicast routing node of said at least one multicast routing node with another computing node of said multiple networks of computing nodes to maintain multicast message reachability to all functional computing nodes of said distributed computing environment.
2. The processing method of claim 1, wherein said at least one computing node functioning as said multicast routing node comprises multiple computing nodes functioning as multiple multicast routing nodes and said distributed computing environment comprises a plurality of groups of computing nodes, each group comprising one network of said multiple networks, and wherein each computing node functioning as multicast routing node comprises a group leader for multicast routing of a respective group of computing nodes, each group leader being coupled via a virtual interface to at least one other group leader of a group of computing nodes of the distributed computing environment, and wherein said automatically responding to said failure comprises automatically selecting a new group leader from functioning computing nodes of the respective group of computing nodes having said group leader failure.
3. The processing method of claim 2, wherein said dynamically reconfiguring comprises establishing a virtual interface from said new group leader to at least one other group leader within the distributed computing environment, said virtual interface comprising a multicast messaging tunnel between said group leaders, said multicast messaging tunnel being established using an mrouted daemon.
4. The processing method of claim 3, wherein said dynamically reconfiguring comprises ensuring only one computing node of each group of computing nodes is a group leader functioning as said multicast routing node for said group of computing nodes, thereby avoiding redundancy in routing of multicast messages between any two networks of computing nodes.
5. A processing system for a distributed computing environment, said processing system comprising:
multiple networks of computing nodes within the distributed computing environment, said multiple networks of computing nodes employing multicast messaging, with each network having at least one computing node, and at least one computing node of the multiple networks of computing nodes functioning as a multicast routing node;
means for automatically responding to a failure at said at least one computing node functioning as said multicast routing node to reassign said multicast routing function, wherein said means for automatically responding comprises means for dynamically reconfiguring said distributed computing environment to replace each failed multicast routing node of said at least one multicast routing node with another computing node of said multiple networks of computing nodes to maintain reachability of multicast messages to all functional computing nodes of said distributed computing environment.
6. The system of claim 5, wherein said at least one computing node functioning as said multicast routing node comprises multiple computing nodes functioning as multiple multicast routing nodes and said distributed computing environment comprises a plurality of groups of computing nodes, each group comprising one network of said multiple networks, and wherein each computing node functioning as multicast routing node comprises a group leader for multicast routing of a respective group of computing nodes, each group leader being coupled via a virtual interface to at least one other group leader of a group of computing nodes of the distributed computing environment, and wherein said means for automatically responding to said failure comprises means for automatically selecting a new group leader from functioning computing nodes of the respective group of computing nodes when said failure comprises a group leader failure.
7. The system of claim 6, wherein said means for dynamically reconfiguring comprises means for establishing a virtual interface from said new group leader to at least one other group leader within the distributed computing environment, said virtual interface comprising a multicast messaging tunnel between said group leaders, said multicast messaging tunnel being established using an mrouted daemon.
8. The system of claim 7, wherein said means for dynamically reconfiguring comprises means for ensuring only one computing node of each group of computing nodes is a group leader functioning as said multicast routing node for said group of computing nodes, thereby avoiding redundancy in routing of multicast messages between any two networks of computing nodes.
9. A processing system for a distributed computing environment comprising:
multiple networks of computing nodes within the distributed computing environment, said multiple networks of computing nodes employing multicast messaging, with each network having at least one computing node, and at least one computing node of the multiple networks of computing nodes functioning as a multicast routing node;
a processor associated with the distributed computing environment; and
code executable by said processor associated with said distributed computing environment, said code causing said processor to effect:
automatically responding to a failure at said at least one computing node functioning as said multicast routing node to reassign said multicast routing function; and
wherein said automatically responding comprises dynamically reconfiguring said distributed computing environment to replace each failed multicast routing node of said at least one multicast routing node with another computing node of said multiple networks of computing nodes to maintain reachability of multicast messages to all functional computing nodes of said distributed computing environment.
10. An article of manufacture comprising:
a computer program product comprising a computer usable medium having computer readable program code means therein for maintaining multicast message reachability within a distributed computing environment having multiple networks of computing nodes employing multicast messaging, each network having at least one computing node, and at least one computing node of the multiple networks of computing nodes functioning as a multicast routing node, said computer readable program code means in said computer program product comprising:
(i) computer readable program code means for causing a computer to effect automatically responding to a failure at said at least one computing node functioning as said multicast routing node to reassign said multicast routing function; and
(ii) wherein said computer readable program code means for causing a computer to effect automatically responding comprises computer readable program code means for causing a computer to effect dynamically reconfiguring said distributed computing environment to replace each failed multicast routing node of said at least one multicast routing node with another computing node of said multiple networks of computing nodes to maintain multicast message reachability to all functional computing nodes of said distributed computing environment.
11. The article of manufacture of claim 10, wherein said computer readable program code means for causing a computer to effect dynamically reconfiguring comprises computer readable program code means for causing a computer to effect ensuring only one computing node of each group of computing nodes functions as a multicast routing node for said group of computing nodes, thereby avoiding redundancy in routing of multicast messages between any two networks of computing nodes.
US10/085,243 1999-01-27 2002-02-28 Dynamic multicast routing facility for a distributed computing environment Abandoned US20020089982A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/085,243 US20020089982A1 (en) 1999-01-27 2002-02-28 Dynamic multicast routing facility for a distributed computing environment

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/238,202 US6507863B2 (en) 1999-01-27 1999-01-27 Dynamic multicast routing facility for a distributed computing environment
US10/085,243 US20020089982A1 (en) 1999-01-27 2002-02-28 Dynamic multicast routing facility for a distributed computing environment

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/238,202 Division US6507863B2 (en) 1999-01-27 1999-01-27 Dynamic multicast routing facility for a distributed computing environment

Publications (1)

Publication Number Publication Date
US20020089982A1 true US20020089982A1 (en) 2002-07-11

Family

ID=22896906

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/238,202 Expired - Fee Related US6507863B2 (en) 1999-01-27 1999-01-27 Dynamic multicast routing facility for a distributed computing environment
US10/085,243 Abandoned US20020089982A1 (en) 1999-01-27 2002-02-28 Dynamic multicast routing facility for a distributed computing environment

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/238,202 Expired - Fee Related US6507863B2 (en) 1999-01-27 1999-01-27 Dynamic multicast routing facility for a distributed computing environment

Country Status (1)

Country Link
US (2) US6507863B2 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050063409A1 (en) * 2003-09-18 2005-03-24 Nokia Corporation Method and apparatus for managing multicast delivery to mobile devices involving a plurality of different networks
US20060080462A1 (en) * 2004-06-04 2006-04-13 Asnis James D System for Meta-Hop routing
US20070112963A1 (en) * 2005-11-17 2007-05-17 International Business Machines Corporation Sending routing data based on times that servers joined a cluster
US20120057591A1 (en) * 2010-09-07 2012-03-08 Check Point Software Technologies Ltd. Predictive synchronization for clustered devices
US20180205566A1 (en) * 2006-07-05 2018-07-19 Conversant Wireless Licensing S.A R.L. Group communication
WO2020111989A1 (en) * 2018-11-27 2020-06-04 Telefonaktiebolaget Lm Ericsson (Publ) Automatic and dynamic adaptation of grouping in a data processing system

Families Citing this family (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3223355B2 (en) * 1998-11-12 2001-10-29 株式会社エヌ・ティ・ティ・ドコモ Communication control method, communication control device, recording medium, and data terminal
US7139790B1 (en) * 1999-08-17 2006-11-21 Microsoft Corporation Weak leader election
US7035918B1 (en) * 1999-09-03 2006-04-25 Safenet Canada. Inc. License management system and method with multiple license servers
US7020717B1 (en) * 1999-09-29 2006-03-28 Harris-Exigent, Inc. System and method for resynchronizing interprocess communications connection between consumer and publisher applications by using a shared state memory among message topic server and message routers
US7702732B1 (en) 1999-09-29 2010-04-20 Nortel Networks Limited Methods for auto-configuring a router on an IP subnet
US6820133B1 (en) * 2000-02-07 2004-11-16 Netli, Inc. System and method for high-performance delivery of web content using high-performance communications protocol between the first and second specialized intermediate nodes to optimize a measure of communications performance between the source and the destination
US6735200B1 (en) * 2000-03-21 2004-05-11 International Business Machines Corporation Method and apparatus for monitoring the availability of nodes in a communications network
US6993587B1 (en) * 2000-04-07 2006-01-31 Network Appliance Inc. Method and apparatus for election of group leaders in a distributed network
US6751747B2 (en) * 2000-05-02 2004-06-15 Nortel Networks Limited System, device, and method for detecting and recovering from failures in a multicast communication system
US7657887B2 (en) * 2000-05-17 2010-02-02 Interwoven, Inc. System for transactionally deploying content across multiple machines
US6879587B1 (en) * 2000-06-30 2005-04-12 Intel Corporation Packet processing in a router architecture
US6968359B1 (en) * 2000-08-14 2005-11-22 International Business Machines Corporation Merge protocol for clustered computer system
US7194549B1 (en) * 2000-09-06 2007-03-20 Vulcan Patents Llc Multicast system using client forwarding
JP2004519024A (en) * 2000-09-08 2004-06-24 ゴー アヘッド ソフトウェア インコーポレイテッド System and method for managing a cluster containing multiple nodes
US6839752B1 (en) 2000-10-27 2005-01-04 International Business Machines Corporation Group data sharing during membership change in clustered computer system
US7185099B1 (en) 2000-11-22 2007-02-27 International Business Machines Corporation Apparatus and method for communicating between computer systems using a sliding send window for ordered messages in a clustered computing environment
US7769844B2 (en) 2000-12-07 2010-08-03 International Business Machines Corporation Peer protocol status query in clustered computer system
US7051070B2 (en) * 2000-12-18 2006-05-23 Timothy Tuttle Asynchronous messaging using a node specialization architecture in the dynamic routing network
US8505024B2 (en) * 2000-12-18 2013-08-06 Shaw Parsing Llc Storing state in a dynamic content routing network
US6839865B2 (en) * 2000-12-29 2005-01-04 Road Runner System and method for multicast stream failover
US20020174172A1 (en) * 2001-03-29 2002-11-21 Hatalkar Atul N. Mechanism to control compilation and communication of the client-device profile by using unidirectional messaging over a broadcast channel
US20050160088A1 (en) * 2001-05-17 2005-07-21 Todd Scallan System and method for metadata-based distribution of content
KR20020023100A (en) * 2001-05-28 2002-03-28 박현제 System for virtual multicast network depolyment
US6889338B2 (en) * 2001-08-15 2005-05-03 Nortel Networks Limited Electing a master server using election periodic timer in fault-tolerant distributed dynamic network systems
US7231461B2 (en) * 2001-09-14 2007-06-12 International Business Machines Corporation Synchronization of group state data when rejoining a member to a primary-backup group in a clustered computer system
US7194002B2 (en) * 2002-02-01 2007-03-20 Microsoft Corporation Peer-to-peer based network performance measurement and analysis system and method for large scale networks
US7133368B2 (en) * 2002-02-01 2006-11-07 Microsoft Corporation Peer-to-peer method of quality of service (QoS) probing and analysis and infrastructure employing same
US7089323B2 (en) * 2002-06-21 2006-08-08 Microsoft Corporation Method for multicasting a message on a computer network
US7516202B2 (en) * 2002-07-10 2009-04-07 Nortel Networks Limited Method and apparatus for defining failover events in a network device
US7260611B2 (en) * 2002-11-21 2007-08-21 Microsoft Corporation Multi-leader distributed system
US8380822B2 (en) * 2002-12-10 2013-02-19 Sharp Laboratories Of America, Inc. Systems and methods for object distribution in a communication system
CN100346605C (en) * 2003-06-26 2007-10-31 华为技术有限公司 A method and system for multicast source control
US7359335B2 (en) * 2003-07-18 2008-04-15 International Business Machines Corporation Automatic configuration of network for monitoring
US8086747B2 (en) * 2003-09-22 2011-12-27 Anilkumar Dominic Group-to-group communication over a single connection
US7525902B2 (en) * 2003-09-22 2009-04-28 Anilkumar Dominic Fault tolerant symmetric multi-computing system
CA2594267C (en) * 2005-01-06 2012-02-07 J. Barry Thompson End-to-end publish/subscribe middleware architecture
CA2594036A1 (en) 2005-01-06 2006-07-13 Tervela, Inc. Intelligent messaging application programming interface
US8102846B2 (en) * 2005-03-31 2012-01-24 Alcatel Lucent Method and apparatus for managing a multicast tree using a multicast tree manager and a content server
US7911977B2 (en) * 2005-05-31 2011-03-22 Cisco Technology, Inc. Designated router assignment per multicast group address/range
US7817609B2 (en) * 2006-03-27 2010-10-19 Ka Lun Eddie Law Multi-channel wireless networks
US9680880B2 (en) * 2006-07-11 2017-06-13 Alcatel-Lucent Usa Inc. Method and apparatus for supporting IP multicast
US20100085916A1 (en) * 2007-01-31 2010-04-08 Noosphere Communications, Inc. Systems and Methods for Hybrid Wired and Wireless Universal Access Networks
CN101378354B (en) * 2007-08-28 2010-12-08 华为技术有限公司 Method and device for forwarding multicast message
US20090158273A1 (en) * 2007-12-18 2009-06-18 Thanabalan Thavittupitchai Paul Systems and methods to distribute software for client receivers of a content distribution system
JP5074290B2 (en) * 2008-05-13 2012-11-14 株式会社日立国際電気 Redundancy switching system, redundancy management device and application processing device
US8677342B1 (en) * 2008-10-17 2014-03-18 Honeywell International Inc. System, method and apparatus for replacing wireless devices in a system
US8281317B1 (en) * 2008-12-15 2012-10-02 Open Invention Network Llc Method and computer readable medium for providing checkpointing to windows application groups
US9825822B1 (en) * 2014-02-13 2017-11-21 Amazon Technologies, Inc. Group networking in an overlay network
US10367676B1 (en) * 2015-09-28 2019-07-30 Amazon Technologies, Inc. Stable leader selection for distributed services

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5583997A (en) * 1992-04-20 1996-12-10 3Com Corporation System for extending network resources to remote networks
US5365523A (en) * 1992-11-16 1994-11-15 International Business Machines Corporation Forming and maintaining access groups at the lan/wan interface
US5361256A (en) * 1992-11-27 1994-11-01 International Business Machines Corporation Inter-domain multicast routing
US5426637A (en) * 1992-12-14 1995-06-20 International Business Machines Corporation Methods and apparatus for interconnecting local area networks with wide area backbone networks
US5444694A (en) * 1993-03-19 1995-08-22 Thomson-Csf Method for the reconfiguration of a meshed network
US5331637A (en) * 1993-07-30 1994-07-19 Bell Communications Research, Inc. Multicast routing using core based trees
US5461609A (en) * 1993-10-13 1995-10-24 General Datacomm Advanced Research Centre Ltd. Packet data network switch having internal fault detection and correction
US5473599A (en) * 1994-04-22 1995-12-05 Cisco Systems, Incorporated Standby router protocol
US5649091A (en) * 1994-06-15 1997-07-15 U.S. Philips Corporation Local area network redundant pieces of interconnection equipment a false physical address and a logical address in common to form a unique entity
US5659685A (en) * 1994-12-13 1997-08-19 Microsoft Corporation Method and apparatus for maintaining network communications on a computer capable of connecting to a WAN and LAN
US5583862A (en) * 1995-03-28 1996-12-10 Bay Networks, Inc. Method and apparatus for routing for virtual networks
US5661719A (en) * 1995-10-19 1997-08-26 Ncr Corporation Method for activating a backup network management station in a network management system
US5831975A (en) * 1996-04-04 1998-11-03 Lucent Technologies Inc. System and method for hierarchical multicast routing in ATM networks
US5799146A (en) * 1996-04-30 1998-08-25 International Business Machines Corporation Communications system involving groups of processors of a distributed computing environment
US5938732A (en) * 1996-12-09 1999-08-17 Sun Microsystems, Inc. Load balancing and failover of network services
US6393483B1 (en) * 1997-06-30 2002-05-21 Adaptec, Inc. Method and apparatus for network interface card load balancing and port aggregation
US6256747B1 (en) * 1997-09-25 2001-07-03 Hitachi, Ltd. Method of managing distributed servers and distributed information processing system using the method
US6614757B1 (en) * 1998-11-23 2003-09-02 3Com Corporation Method of local flow control in an asynchronous transfer mode network utilizing PNNI routing protocol
US6515973B1 (en) * 1998-11-24 2003-02-04 Rockwell Collins, Inc. Method of establishing a soft circuit between a source node and a destination node in a network of nodes to allow data to be transmitted therebetween

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050063409A1 (en) * 2003-09-18 2005-03-24 Nokia Corporation Method and apparatus for managing multicast delivery to mobile devices involving a plurality of different networks
WO2005029876A2 (en) * 2003-09-18 2005-03-31 Nokia Corporation Method and apparatus for managing multicast delivery to mobile devices involving a plurality of different networks
WO2005029876A3 (en) * 2003-09-18 2006-03-23 Nokia Corp Method and apparatus for managing multicast delivery to mobile devices involving a plurality of different networks
US20060080462A1 (en) * 2004-06-04 2006-04-13 Asnis James D System for Meta-Hop routing
US7730294B2 (en) * 2004-06-04 2010-06-01 Nokia Corporation System for geographically distributed virtual routing
US20070112963A1 (en) * 2005-11-17 2007-05-17 International Business Machines Corporation Sending routing data based on times that servers joined a cluster
US20180205566A1 (en) * 2006-07-05 2018-07-19 Conversant Wireless Licensing S.A R.L. Group communication
US10594501B2 (en) * 2006-07-05 2020-03-17 Conversant Wireless Licensing S.a.r.l. Group communication
US8406233B2 (en) * 2010-09-07 2013-03-26 Check Point Software Technologies Ltd. Predictive synchronization for clustered devices
US8902900B2 (en) 2010-09-07 2014-12-02 Check Point Software Technologies Ltd. Predictive synchronization for clustered devices
US20120057591A1 (en) * 2010-09-07 2012-03-08 Check Point Software Technologies Ltd. Predictive synchronization for clustered devices
WO2020111989A1 (en) * 2018-11-27 2020-06-04 Telefonaktiebolaget Lm Ericsson (Publ) Automatic and dynamic adaptation of grouping in a data processing system
US11539584B2 (en) 2018-11-27 2022-12-27 Telefonaktiebolaget Lm Ericsson (Publ) Automatic and dynamic adaptation of grouping in a data processing system

Also Published As

Publication number Publication date
US20020165977A1 (en) 2002-11-07
US6507863B2 (en) 2003-01-14

Similar Documents

Publication Publication Date Title
US6507863B2 (en) Dynamic multicast routing facility for a distributed computing environment
US11588886B2 (en) Managing replication of computing nodes for provided computer networks
Oktian et al. Distributed SDN controller system: A survey on design choice
US8189579B1 (en) Distributed solution for managing periodic communications in a multi-chassis routing system
US7859992B2 (en) Router redundancy in data communication networks
US8213336B2 (en) Distributed data center access switch
US7370223B2 (en) System and method for managing clusters containing multiple nodes
US7609619B2 (en) Active-active data center using RHI, BGP, and IGP anycast for disaster recovery and load distribution
US6981034B2 (en) Decentralized management architecture for a modular communication system
US8200803B2 (en) Method and system for a network management framework with redundant failover methodology
US6996617B1 (en) Methods, systems and computer program products for non-disruptively transferring a virtual internet protocol address between communication protocol stacks
US10033622B2 (en) Controller-based dynamic routing in a software defined network environment
US20060206611A1 (en) Method and system for managing programs with network address
US20050141506A1 (en) Methods, systems and computer program products for cluster workload distribution
JP2006202280A (en) Virtual multicast routing for cluster having state synchronization
US10447652B2 (en) High availability bridging between layer 2 networks
JP4789425B2 (en) Route table synchronization method, network device, and route table synchronization program
WO2010034608A1 (en) System and method for configuration of processing clusters
WO2013020459A1 (en) Distributed cluster processing system and message processing method thereof
WO2007110942A1 (en) Server management program in network system
WO2005076580A1 (en) A method, apparatus and system of organizing servers
JP4133738B2 (en) High-speed network address takeover method, network device, and program
Al-Theneyan et al. Enhancing Jini for use across non-multicastable networks
Choi et al. Design and Implementation of Fault-Tolerant LISP Mapping System
CN115242708A (en) Multicast table item processing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION