EP1062766A1 - Method, apparatus, and medium for minimal time multicast graft/join restoration - Google Patents
Method, apparatus, and medium for minimal time multicast graft/join restoration
- Publication number
- EP1062766A1 (application EP99912580A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- network node
- information
- network
- multicast
- reconnect
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/28—Routing or path finding of packets in data switching networks using route fault recovery
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/02—Details
- H04L12/16—Arrangements for providing special services to substations
- H04L12/18—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
- H04L12/185—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast with management of multicast group membership
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/08—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
- H04L43/0805—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability
- H04L43/0811—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters by checking availability by checking connectivity
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/16—Multipoint routing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/40—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass for recovering from a failure of a protocol instance or entity, e.g. service redundancy protocols, protocol state redundancy or protocol service redirection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/02—Details
- H04L12/16—Arrangements for providing special services to substations
- H04L12/18—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
- H04L12/1863—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast comprising mechanisms for improved reliability, e.g. status reports
Definitions
- the invention generally relates to IP multicast technology. More particularly, the invention relates to re-establishing, with minimal delay, a link between a user's local area network and a multicast content channel.
- PTP point-to-point type protocols
- An example of a point-to-point protocol is TCP/IP.
- Using PTP protocols to send data becomes increasingly inefficient in terms of bandwidth utilization as final destinations are located closer to each other. Packets traveling to similar destinations may traverse similar paths in a network, thus consuming extraneous bandwidth along the way.
- IP multicasting is a mechanism provided by a network that facilitates the transmission of a single packet to a plurality of destinations.
- a network node (router or switch)
- the network node duplicates and transmits the packet down as many pathways as needed to forward the packet to all predetermined locations specified by the multicast communication channel.
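The duplication step described above can be sketched as follows. This is an illustrative fragment, not the patent's implementation; the entry layout and function names (`forward_multicast`, `oifs`, `send`) are invented for the example.

```python
# Sketch (not from the patent): a node's multicast forwarding entry lists the
# outgoing interfaces for a multicast channel; the node sends one copy of the
# packet per interface, duplicating it at the branch point.

def forward_multicast(packet, entry, send):
    """Duplicate `packet` onto every outgoing interface in the entry.

    `entry` is a dict with an "oifs" list of interface names; `send` is a
    callable (interface, packet) -> None. All names here are illustrative.
    """
    for oif in entry["oifs"]:
        send(oif, packet)  # one copy per downstream pathway

# Example: a node with three downstream interfaces emits three copies.
sent = []
forward_multicast(b"datagram", {"oifs": ["if1", "if2", "if3"]},
                  lambda oif, pkt: sent.append((oif, pkt)))
```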
- The class of controlling protocols utilized by network nodes to build and maintain multicast communication channels (or routing trees) is referred to as IP multicast routing protocols.
- the class of controlling protocols utilized between destination machines and their first hop network nodes are called Group Management Protocols, such as IGMP.
- IGMP Group Management Protocol
- the IGMP specification (identified as Internet Group Management Protocol, Version 2, Network Working Group of the Internet Engineering Task Force, November 1997), as is known in the art, is hereby incorporated by reference.
- PIM dense mode is a multicast routing protocol that controls the maintenance of multicast communication channels utilized for transmissions to multicast groups which are densely distributed across a network.
- Applicable networks include intranets, extranets and the Internet, and other equivalent networks.
- PIM dense mode uses reverse path multicasting (RPM), also called reverse path forwarding (RPF), to establish and maintain routing trees.
- RPM is a technique in which a multicast packet (or datagram) is forwarded if the receiving interface on a network node is the one used to forward unicast datagrams to the source of the multicast datagram.
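The RPF test described above reduces to a single comparison. The sketch below assumes a hypothetical `unicast_route` lookup table (destination address to outgoing interface); it is illustrative only, not the patent's implementation.

```python
# Sketch of the RPF test: accept a multicast datagram only if it arrived on
# the interface the unicast routing table would use to reach the datagram's
# source. `unicast_route` is a hypothetical {source_ip: interface} table.

def rpf_accept(src, in_iface, unicast_route):
    """Return True if `in_iface` is the interface used to reach `src`."""
    return unicast_route.get(src) == in_iface

unicast_route = {"10.0.0.1": "eth0"}
# A datagram from 10.0.0.1 arriving on eth0 passes the RPF check...
assert rpf_accept("10.0.0.1", "eth0", unicast_route)
# ...while the same datagram arriving on eth1 is dropped (likely a loop).
assert not rpf_accept("10.0.0.1", "eth1", unicast_route)
```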
- the first step in establishing a routing tree according to the PIM dense mode specification is to broadcast the multicast datagram to all PIM DM enabled routers in the network, such that a routing tree is formed that provides a multicast communication channel on which datagrams are carried. This broadcast process is repeatedly performed every three minutes.
- Figure 1 shows an example of setting up a routing tree in a PIM dense mode IP multicast system.
- Network 101 includes a plurality of network nodes 102-107.
- Figure 1 includes LAN 108 as attached to network node 105 and workstations 109 and 110.
- the communication protocol between network node 105 and workstations 109 and 110 on LAN 108 may include the Internet Group Management Protocol, IGMP.
- IGMP Internet Group Management Protocol
- other communication protocols are available.
- There are several versions of IGMP, mainly version numbers 1, 2 and 3, the last of which is a draft proposal in the research community.
- Other Management Protocols exist.
- the Cisco Corporation has developed a protocol called CGMP (Cisco Group Management Protocol) which is similar to IGMP, described above.
- IGMP is used to tell network node 105 (leaf node) that workstations 109 and 110 exist and continue to have interest in the particular multicast communication channel on which datagrams are forwarded to them.
- network node 102 receives a datagram from sending host 102A and transmits it to downstream network nodes 103, 104, and 107 as illustrated by transmission paths 111, 112, and 113. Having received the datagram, network nodes 103, 104 and 107 retransmit the datagram to other network nodes to which they are connected, except to the router from which the datagram was received. In this case, network node 103 transmits the datagram to network node 104 via path 114 and to network node 105 via path 115. Network node 104 retransmits the datagram to network node 103 via path 118, network node 105 via path 120, network node 106 via path 121, and network node 107 via path 119.
- Network node 107 retransmits the datagram to network node 104 via path 116 and network node 106 via path 122. Finally, depending on circumstances, for example, when the datagram was received by network node 106, it may retransmit the datagram to network node 107 via path 117.
- the second step in establishing PIM DM routing trees includes pruning back all branches that lead to end stations that have not expressed interest in attaching to the multicast communication channel.
- Figure 2 shows an example of the pruning process in action in accordance with PIM dense mode. In Figure 2, it is assumed that either of workstations 109 or 110 has expressed an interest in attaching to the multicast communication channel (that is currently in the broadcast phase, as shown in Figure 1).
- Router 105 sends prune message 203 to router 104, but does not send a prune message to router 103, because the RPF (the best reverse path to the source of the multicast data) interface on router 105 is the one on which router 103 is connected. Router 105 will not prune itself from the multicast channel in which it is interested. It is assumed, although not shown, that router 106 serves at least one end user workstation that has expressed interest in the multicast channel. As a result, router 106 does not prune its interface towards router 107, because router 106 knows that router 107 is router 106's RPF to the sending host 102A. Router 107, however, does send prune message 205 to router 104.
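The prune decisions in the Figure 2 example can be sketched as a small rule: a router prunes toward any upstream neighbor that is not on its RPF path, and prunes its RPF neighbor only when no local or downstream receiver is interested. This is a simplified illustration, not the full PIM DM state machine.

```python
# Simplified prune logic for the Figure 2 example (illustrative only): a
# router sends a Prune toward a neighbor when that neighbor is not on its
# RPF path, or when it is but no local/downstream receiver is interested.

def neighbors_to_prune(rpf_neighbor, upstream_neighbors, has_interest):
    prunes = []
    for n in upstream_neighbors:
        if n != rpf_neighbor:
            prunes.append(n)          # redundant path: always prune
        elif not has_interest:
            prunes.append(n)          # RPF path, but nobody listening
    return prunes

# Router 105: RPF neighbor is 103, also hears the stream from 104, and
# serves interested workstations -> it prunes only 104.
assert neighbors_to_prune("103", ["103", "104"], has_interest=True) == ["104"]
```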
- the RPF the best reverse path to the source of the multicast data
- FIG. 2 shows a mechanism used to establish routing trees
- other techniques to establish routing trees are known in the art as related to other IP multicasting protocols, including PIM sparse mode, CBT, and MOSPF, for example.
- PIM sparse mode is another applicable Multicast Routing Protocol (Protocol Independent Multicast-Sparse Mode: Protocol Specification, September 9, 1997), hereby incorporated by reference.
- the resulting network as created by routing tables stored in the various network nodes is shown in Figure 3A
- the resulting multicast channel is shown in Figure 3B.
- Datagrams originating from sending host 102A are transmitted from network node 102 to network nodes 103 and 107 via paths 301 and 302, respectively.
- the datagrams are transmitted to network nodes 105 and 106, via paths 303 and 305, respectively.
- the network node preferably has at least one destination end user attached to it and that the destination end user has expressed an interest in attaching to the multicast communication channel (via IGMP or some similar Group Management Protocol).
- multicast routing communication channels terminate at network nodes
- multicast communication channels terminate at end user workstations.
- network node 104 may remain connected to receive a multicast channel so as to be available for quick connection to the multicast channel by another end user.
- One feature of multicast systems includes the concept of groups. Groups are sets of destinations that participate in a multicast communication channel. The channel generally originates with a single content provider.
- FIG. 4 shows network node 102 (of network 101, not shown for simplicity) receiving a multicast datagram from sending host 102A and sending the datagram via path 301 to network node 103.
- Network node 103 transmits the datagram to network nodes 403, 404, and 405.
- All the network nodes, including 102, 103, 403, 404, 405, and 105, participated in group G1 during the broadcast phase.
- After the pruning phase only network nodes 102, 103, 403, 404, and 405 remain participating in G1 (105 having pruned back the channel).
- when workstation 109, which is connected to network node 105 (part of network 101), desires to join group G1 409, it sends a request via IGMP to network node 105 specifying that it wishes to join group G1.
- Network node 105 is considered to be a leaf node, in that it is directly connected via a LAN to end users that have expressed interest in the particular multicast channel, and there are no further routers downstream of router 105 on the LAN.
- Network node 105 interprets the request from workstation 109 as a command to attempt to join group G1.
- Network node 105 sends a Graft message with the payload G1, S1 to network node 103 via path 401.
- network node 103 replies with a Graft acknowledge (Graft-Ack) signal to network node 105 via path 402.
- Graft-Ack a Graft acknowledge
- the system may have to wait between 0 and 3 minutes (the PIM dense mode broadcast phase cycle time) or for an average of 1.5 minutes until the multicast system enters the broadcast phase such that state is reestablished in network nodes and the channel can re-establish itself.
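The 0-to-3-minute figure quoted above follows from simple arithmetic: if a disconnect occurs at a uniformly random point within the 3-minute re-broadcast cycle, the expected wait until the next broadcast phase is half the cycle.

```python
# Expected reconnect delay under the conventional scheme: a disconnect lands
# at a uniformly random point in the re-broadcast cycle, so on average the
# system waits half a cycle for the next broadcast phase.

CYCLE_MINUTES = 3.0
average_wait = CYCLE_MINUTES / 2  # uniform arrival within the cycle
print(average_wait)  # 1.5
```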
- a difficulty here, especially in environments with a need for minimal disconnect time, is that the delay associated with re-establishing a connection to network node 105, if it becomes separated from group G1, may take an average of 1.5 minutes.
- the cause of the separation may be due to a variety of reasons including transmission line failure, local router failure, and related problems.
- This reconnect delay is the average time in which PIM dense mode enters the re-broadcasting phase as shown in Figure 1.
- the delay associated with waiting until the overall multicast system re-establishes itself is unacceptable.
- merely decreasing the multicast system's re-broadcast interval substantially increases the amount of non-usable broadcast information sent to all end network nodes, regardless of their interest in the particular communication channel.
- This approach can be very costly in terms of inefficient use of network bandwidth. The result of this delay may be one reason that systems requiring a fast re-establishment time (high availability requirements) have not readily embraced multicast technology as a solution for efficient information transfer.
- the dotted line to network node 105' in Figure 3 A indicates that while 105 and 105' are considered separate network nodes, as part of the PIM DM protocol, one of them (i.e., 105') has pruned itself from the multicast channel 101, leaving the remaining network node (105) as the "forwarder" as represented in Figure 3B.
- the process to elect which network node is to be the forwarder to LAN 108 is called Assertion.
- Routers 105 and 105' execute the Assert process, as specified in the PIM dense mode specification, and only one of the network nodes becomes the forwarder; the other network node(s) on the LAN that lose the assertion process prune themselves back, e.g. 105'.
- when network node 105' has pruned itself from the multicast channel, it needs to wait for the re-broadcast phase (see Figure 1, implemented every three minutes) before network node 105' can begin receiving new datagrams again (see Figure 3A). When this occurs, the assertion process is repeated and the network nodes that lose the assertion process prune themselves back.
- the existence of two client side network nodes (leaf nodes in this case) 105 and 105' is to make LAN 108 more robust in being able to handle router or switch failures (network node). However, at most only one of the two network nodes 105 and 105' is active in receiving datagrams from network node 103 and forwarding them onto the LAN 108, at any given time, because the other node (e.g., 105') was pruned out of the multicast channel during the pruning phase. Note that during the assertion process, both network nodes 105 and 105' can be acting as forwarder to LAN 108, until one of the network nodes wins the election process and becomes the forwarder for the LAN 108.
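The election of a single forwarder can be sketched as below. This is a deliberately simplified stand-in for the Assert process described above: it prefers the lower route metric and breaks ties with the higher IP address. The actual PIM tie-breaking rules (including metric preference) are in the PIM DM specification; all names here are illustrative.

```python
# Hypothetical sketch of electing one forwarder among routers on a LAN, in
# the spirit of the Assert process: lower metric wins, ties broken by the
# higher IP address. (The real PIM rules also compare metric preferences.)
import ipaddress

def elect_forwarder(candidates):
    """candidates: list of (metric, ip_string); returns the winning ip_string."""
    return min(candidates,
               key=lambda c: (c[0], -int(ipaddress.ip_address(c[1]))))[1]

# Routers 105 and 105' advertise equal metrics; the higher address wins and
# the loser prunes itself back, leaving a single forwarder for the LAN.
assert elect_forwarder([(10, "192.0.2.5"), (10, "192.0.2.6")]) == "192.0.2.6"
```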
- network node 105 is the forwarder for the LAN 108, and in the event that network node 105 becomes disabled, workstations 109 and/or 110 on LAN 108 will have to wait until the PIM DM re-broadcast phase initiates, so as to forward packets to network node 105' (an average of 1.5 minutes).
- network node 105' will elect itself as the forwarder (since no other network node has challenged it and the assert election process is not triggered in this case) and begin to forward packets onto the LAN 108.
- the hosts on LAN 108 may experience large gaps in messages, and general data loss.
- the maximum waiting interval is, as above, 3 minutes, with an average waiting interval of 1.5 minutes.
- the average waiting interval of 1.5 minutes is unacceptable as volumes of data may back up, resulting in excessive data loss, memory usage and delay in attempting to re-establish the data to the hosts on LAN 108.
- users of real-time applications that require up-to-the-second information cannot tolerate even modest periods of disconnection from the service, e.g. instrument traders using a realtime financial application.
- the present invention overcomes the above-described problems by providing a fast re-establishment of a lost connection to a multicast group.
- the end user's system stores relevant connection information so that, when needed, it directs a network node to re-establish its connection to one or more multicast groups.
- Each LAN end user on network 108 may elect a director for the LAN 108 that is responsible for coordinating the recovery of the one or more multicast channels to the LAN. Alternatively, a director may not be elected, and each end user may act independently and redundantly perform the operations that the director would handle.
- Advantages of including a director include limiting the task of initiating re- establishment of the multicast channel to one designated entity, rather than several end user workstations
- Advantages of not including a director include redundant processing so that if any portion of the LAN 108 fails, each workstation (109, 110) may be able to re-establish its connection with network 101.
- when referencing the director, it will be understood that the director is intended to refer to any workstation performing the functions of the director, including but not limited to any end user's workstation redundantly performing the director's functions where LAN 108 has no director and each end user's workstation acts independently.
- the director stores the group information as received from various sources. For example, the director may store this information in a source, group pair (also referred to as an S, G pair).
- the director also monitors sender "liveness" regarding a monitored group. Liveness, for purposes herein, refers to the monitoring of information received from a source. By monitoring liveness information, the receiver (director or workstation) knows when to expect information from the source. That is, when the source is idle (has no data to send), liveness (heartbeat) messages are sent at predefined intervals or at decaying intervals to inform all end user systems that the channel is still alive.
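The liveness check described above can be sketched as a timestamp table keyed by (S, G) pair. This is an illustrative sketch, not the patent's implementation; class and method names are invented for the example.

```python
# Sketch of the director's liveness monitoring: record the arrival time of
# each datagram or heartbeat per (S, G) pair, and flag the channel as lost
# once nothing has arrived within the expected interval.

class LivenessMonitor:
    def __init__(self, expected_interval):
        self.expected_interval = expected_interval
        self.last_seen = {}  # (S, G) -> time of last datagram/heartbeat

    def heard_from(self, sg_pair, now):
        self.last_seen[sg_pair] = now

    def is_lost(self, sg_pair, now):
        last = self.last_seen.get(sg_pair)
        return last is None or (now - last) > self.expected_interval

mon = LivenessMonitor(expected_interval=5.0)
mon.heard_from(("S1", "G1"), now=0.0)
assert not mon.is_lost(("S1", "G1"), now=4.0)   # heartbeat still fresh
assert mon.is_lost(("S1", "G1"), now=10.0)      # missed: time to Graft/Join
```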
- Various sender driven liveness messages are specified in Holbrook et al.'s "Log-Based Receiver" (H. Holbrook, S.
- the director determines that the connection to the source has been lost.
- the director retrieves the previously stored information (the S, G pair), forms a Graft message, and transmits a Graft message to the all routers address (224.0.0.13) or to a specific leaf router address, if the target leaf router address is well known (configured or learned) by the director.
- the all routers address is specified in the PIM DM specification.
- the Graft message could be sent using any defined or reserved address that delivers the message to the LAN network nodes that can process the Graft and reestablish the multicast channel to the LAN.
- the network node or network nodes receiving the Graft message acts accordingly to re-establish the connection to the group by immediately forwarding the Graft upstream on the Reverse Path Forwarding (RPF) interface, as specified in the PIM DM specification.
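A minimal sketch of forming the Graft message discussed above follows. The 4-byte PIM common header (version, type, reserved, checksum) matches the layout described later with Figure 6; the (S, G) payload encoding is deliberately simplified to plain text here, since the real encoded-address formats are in the PIM specification. All function names are illustrative.

```python
import struct

# Sketch of forming a PIM Graft (type 6) for the all-PIM-routers group
# 224.0.0.13. The (S, G) payload encoding is simplified; only the common
# header (version/type byte, reserved byte, 16-bit checksum) is realistic.

ALL_ROUTERS = "224.0.0.13"
PIM_VERSION, PIM_GRAFT = 2, 6

def internet_checksum(data):
    """Standard ones'-complement sum over 16-bit words."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_graft(source, group):
    payload = ("%s,%s" % (source, group)).encode()  # simplified (S, G) payload
    header = struct.pack("!BBH", (PIM_VERSION << 4) | PIM_GRAFT, 0, 0)
    csum = internet_checksum(header + payload)
    return struct.pack("!BBH", (PIM_VERSION << 4) | PIM_GRAFT, 0, csum) + payload

msg = build_graft("10.0.0.1", "239.1.1.1")
assert msg[0] >> 4 == PIM_VERSION and msg[0] & 0x0F == PIM_GRAFT
```

Actually transmitting `msg` to 224.0.0.13 would require a raw socket opened for IP protocol 103 (PIM) with elevated privileges, which is omitted here.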
- RPF Reverse Path Forwarding
- the director is acting like a network node by forwarding a Graft to its leaf router(s).
- the receiving network node does not distinguish between the director and another node, in this case, and processes the Graft as if it were received from a network node, as specified in the PIM dense mode specification.
- the result is that the multicast channel is more rapidly re-established to the LAN.
- a director may quickly re-establish a connection to a group with minimal delay.
- the Internet is used as an example of a network of computers which benefits from IP multicast technology.
- the invention as described herein may be readily applied to internets, extranets, WANs and other networks which may benefit from multicasting.
- it is readily apparent that the advantages described herein are applicable to other IP multicasting protocols including PIM sparse mode, CBT, and MOSPF, and other equivalent IP multicasting protocols.
- the reference to PIM dense mode is made by way of example only. For example, in PIM sparse mode, the director would send a Join message rather than a Graft, when the liveness information has not been received as expected. While PIM sparse mode is not a broadcast and prune protocol, as is PIM dense mode, the rapid recovery mechanisms described herein are applicable in achieving rapid multicast channel recovery.
- Figure 1 is a network diagram showing the broadcast phase of a conventional multicast system.
- Figure 2 is a network diagram showing the pruning phase of a conventional multicast system.
- Figures 3A and 3B are network diagrams showing the remaining network after the broadcast and pruning phases of a conventional multicast system.
- Figure 4 is a network diagram showing a Graft in a conventional multicast system.
- FIG. 5 is a network diagram of a Graft originating with an end user according to embodiments of the present invention.
- Figure 6 is a packet format diagram of a multicast system as used in conjunction with embodiments of the present invention.
- Figure 7 is a network diagram of the present invention including an embodiment of an end user's system according to embodiments of the present invention.
- Figure 8 is a network diagram of the present invention showing Graft messages as contemplated by the present invention.
- FIG. 5 shows an array of workstations 109 and 110 on network 108 (for example, a LAN) connected via network node 105 to network 101 (not shown for simplicity). While only two workstations are shown, it is understood that many workstations may be connected to network 108.
- One of the workstations, 109 or 110, may be elected as director for a particular S,G pair. As the director officiates on numerous occasions, the director may also be referred to as a host of the network 108.
- workstation 109 is the elected director.
- workstation 110 is the designated assistant director.
- the assistant director stores information similar to workstation 109 so as to replace workstation 109 in the event workstation 109 fails.
- Workstation 109 stores information 507 in internal memory. While not shown for simplicity, workstations 109 and 110 may contain various forms of internal memory including RAM, ROM, replaceable storage devices (for example, diskettes, hard drives, and CD-ROMs), and equivalents thereof.
- Information 507 contains a list of which workstations are the director and assistant director (wrk1 and wrk2, respectively) and a list of S, G pairs that list the source (for example, S1) of various groups (for example, G1, G2, through Gn). Workstation 110 may store a similar set of information 508.
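A plausible in-memory shape for information 507/508 as described above is sketched below. The field names are invented for illustration; the patent specifies only that the director/assistant identities and the (S, G) pairs are stored.

```python
# Illustrative layout for the stored information 507/508: the LAN's director
# and assistant director, plus the list of (S, G) pairs needed to restore
# each multicast channel. Field names are hypothetical.

info_507 = {
    "director": "wrk1",
    "assistant": "wrk2",
    "sg_pairs": [
        ("S1", "G1"),
        ("S1", "G2"),
        # one entry per multicast channel the LAN must be able to rejoin
    ],
}

# On a detected loss, the director walks this list and issues one Graft
# (or Join, in sparse mode) per stored pair.
assert ("S1", "G1") in info_507["sg_pairs"]
```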
- the director derives sender liveness information from information received in connection with the received liveness datagrams, the latter (liveness or heartbeat mechanisms) being discussed in the Holbrook et al. "Log-Based Receiver". From this sender liveness information, the director determines when to expect datagrams from a source S regarding group G. Once the datagrams or liveness messages are not received as expected, the director transmits a Graft or Join message 501 in PIM DM or PIM SM, respectively, with the appropriate payload (for example, Sn, G1) to the all routers address (as found in the PIM DM specification). Alternatively, the director transmits the Graft message 501 to a specific leaf router, or any other all local routers address.
- if the director did not receive the liveness messages as expected because the forwarder for the multicast channel, e.g. router 105, failed, then the other router, e.g. 105' (or routers), will receive the Graft message and forward it to network node 103 (or on their respective RPF interface), which will cause the quick re-establishment of the multicast channel to LAN 108.
- Assistant director workstation 110, in addition to being able to perform tasks similar to that of director workstation 109, monitors director workstation 109 in its performance of its director duties. So, if and when director workstation 109 fails, workstation 110 may act as a backup.
- a workstation and equivalent systems may be referred to as information handlers.
- the director generates the S,G pair(s) by monitoring received datagrams for the respective multicast channel(s).
- the header information in a received IP Multicast packet contains the sender's identity (here, the identity of the source end user's workstation) and the group address G, carried in the IP destination field of the IP header.
- the workstation stores the sender's information in conjunction with the group over which the datagram was received
- the sender transmits the following: a source address, a destination address, the source port, and the destination port.
- the source address is the IP address of the host sending the information out to receivers.
- this source address is the S of the S, G pair
- the destination address is the IP address of the multicast communication channel, referred to as a group.
- This group is the group G of the S, G pair
- end user workstations can further refine the information that is passed to the end user application for a given G by filtering those datagrams that are received on destination port numbers which are of interest.
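Learning the (S, G) pair from a received datagram, as described above, amounts to reading fixed offsets of the IPv4 header (source at byte 12, destination at byte 16) plus the UDP destination port for filtering. The sketch below is illustrative; the function name is invented, and error handling for options/fragments is omitted.

```python
import socket
import struct

# Sketch of learning the (S, G) pair from a received datagram: S is the IPv4
# source address, G the (multicast) destination address, and the UDP
# destination port can further filter traffic of interest.

def learn_sg(ip_packet):
    ihl = (ip_packet[0] & 0x0F) * 4                 # IPv4 header length
    src, dst = ip_packet[12:16], ip_packet[16:20]
    dport = struct.unpack("!H", ip_packet[ihl + 2:ihl + 4])[0]
    return socket.inet_ntoa(src), socket.inet_ntoa(dst), dport

# Minimal hand-built IPv4+UDP packet: 10.0.0.1 -> 239.1.1.1, dst port 5000.
pkt = bytes([0x45, 0, 0, 28, 0, 0, 0, 0, 64, 17, 0, 0]) \
    + socket.inet_aton("10.0.0.1") + socket.inet_aton("239.1.1.1") \
    + struct.pack("!HHHH", 4000, 5000, 8, 0)
assert learn_sg(pkt) == ("10.0.0.1", "239.1.1.1", 5000)
```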
- FIG. 6 shows an example of PIM packet formats for control messages as sent from workstations 109 and 110.
- Bits 0-3 relate to the PIM version, bits 4-7 identify the type of control message, bits 8-15 are reserved, and bits 16-31 are checksum.
- The following list defines PIM message types: 0 Hello, 1 Register, 2 Register-Stop, 3 Join/Prune, 4 Bootstrap, 5 Assert, 6 Graft, 7 Graft-Ack, 8 Candidate-RP-Advertisement
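The bit layout described with Figure 6 (version in bits 0-3, type in bits 4-7, reserved in bits 8-15, checksum in bits 16-31) can be checked with a short decoder. This sketch is illustrative; the function name is invented.

```python
import struct

# Decoding the 32-bit PIM control header: version (bits 0-3), type (bits
# 4-7), reserved (bits 8-15), checksum (bits 16-31). In PIM version 2,
# type 6 is Graft and type 7 is Graft-Ack.

def parse_pim_header(data):
    first, reserved, checksum = struct.unpack("!BBH", data[:4])
    return {"version": first >> 4, "type": first & 0x0F,
            "reserved": reserved, "checksum": checksum}

hdr = parse_pim_header(bytes([(2 << 4) | 6, 0, 0x12, 0x34]))
assert hdr == {"version": 2, "type": 6, "reserved": 0, "checksum": 0x1234}
```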
- the Graft control message as part of the present invention is initially formatted in workstation 109 or workstation 110, depending on which workstation(s) is attempting to re-establish the link to group G.
- the other control messages can be found in the PIM dense mode or PIM sparse mode specification, referenced herein.
- Figure 7 shows an embodiment where more than one router is connected to LAN 108.
- two routers 701 and 702 connect LAN 108 to router 105.
- the interaction between routers (network nodes) 701 and 702 and router (network node) 105 and workstations 109 and 110 is as described above.
- workstation 109 may be the only workstation.
- LAN 108 may be supporting a single workstation 109 in connection to network 101.
- the term LAN may refer to a device which allows a workstation to connect to both network nodes 701 and 702 simultaneously.
- Figure 9 shows network node 105, network nodes 701 and 702, and LAN 108 as described with respect to Figure 7. Also included in the arrangement are network nodes 703 and 704 also connected to network node 105 and LAN 705. Next, both LANs 108 and 705 are connected to workstation 109. In this embodiment, both LANs 108 and 705 support a single (or multiple) workstations. By using this arrangement, workstation 109 (and other similarly connected workstations) is provided with multiple paths to connect to network node 105.
- An example of when the arrangement of Figure 9 may be used is in applications where redundant paths to the network 101 are required, so as to achieve higher availability of services provided by sources such as 706.
- Figure 8 shows a network level diagram of network 801 as including a variety of sources S n and a variety of end user workstations 810-812.
- Source S1 transmits group G1 datagrams to receivers S4 and S5 (S4 and S5 are receivers with respect to group G1, but are senders with respect to the groups they source) on multicast channel S1,G1.
- the combination of receivers S4 and S5 comprises group G1.
- source S2 sends content on group G1 (on multicast channel S2,G1) as shown by the combined use of G1 from sources S1 and S2.
- the content of G1 may originate with S1 and use S2 as a backup for content.
- Source S1 also provides information to receiver S3.
- receiver S3 reformulates and retransmits the datagrams to receivers of network 809.
- Network 809 may be a LAN or individual groups of receivers so long as they may be collectively referred to as group G5.
- receiver S4 reformulates and retransmits received datagrams to network 808 as group G3.
- receiver S5 reformulates and retransmits received datagrams to network 807 as group G4.
- When receiver 810 (for example, a workstation with the functionality of workstation 109 of Figure 5) from network 809 wishes to re-establish a connection to group G5, receiver 810 transmits a Graft message as described above to its leaf router containing the S3,G5 pair information.
- the network node (not shown) in between S3 and LAN 809 receives G (here, G5) and sends receiver 810 the multicast channel corresponding to S3,G5.
- the Graft sent by the director specifically re-establishes only the specific S,G pair and not all *,G pairs, as is done with simply an IGMP join message from the host. Alternatively, all *,G pairs are re-established and unwanted ones are then dropped.
- The examples of Figure 8 are relative to the datagrams originating with sources S1 and S2.
- when a network receives information from alternative sources, for example, network 808 receiving group Gn datagrams (not shown for simplicity) from source S3, the Graft S, G pair information from receiver 811 takes the form of (S3, Gn).
- delay times in re-establishing a connection to a group may be minimized through sensing a fault in received (or not received) information, determining how to quickly reconnect, and reconnecting to a group through the use of information stored in a monitoring station.
- the techniques of the present invention apply to any network including intranets and extranets.
- the present invention may also be implemented in a peer-to-peer computing environment or in a multi-user host system having a mainframe or a minicomputer.
- the computer network in which the invention is implemented should be broadly construed to include any multicast computer network from which a client can retrieve a channel in a multicast environment.
Abstract
The present invention provides a method, apparatus, and medium for quickly re-establishing a lost multicast connection between an end user and a group in a multicast environment. The end user monitors the liveness of the received information as well as retains a list of the multicast communication channels required for re-establishment of a connection to a group. Through the use of an end user-originated Graft or Join based on the stored list of multicast communication channel identifiers (S, G), when the received information is no longer live, an end user may quickly rejoin a multicast group with minimal down time.
Description
Method, Apparatus, And Medium For Minimal Time Multicast
Graft/Join Restoration
Background Of The Invention 1. Technical Field
The invention generally relates to IP multicast technology. More particularly, the invention relates to re-establishing a link, with minimal delay, between a user's local area network and a multicast content channel. 2. Related Information As the Internet becomes increasingly burdened with traffic, solutions for relieving at least some of the burden on the current Internet infrastructure are being sought. Current IP packet transmissions are sent point to point using point-to-point (PTP) protocols, as is well known in the art. An example of a point-to-point protocol is TCP/IP. Using PTP protocols to send data becomes increasingly inefficient in terms of bandwidth utilization as final destinations are located closer to each other. Packets traveling to similar destinations may traverse similar paths in a network, thus consuming extraneous bandwidth along the way. Also, the transmitting host faces a mounting burden as it attempts to service the numerous destinations on a timely basis. One solution is the use of IP multicast technology. Also referred to as IP multicasting, this technology is a mechanism provided by a network that facilitates the transmission of a single packet to a plurality of destinations. When a network node (router or switch) handling the multicast packet in
transit determines that the end destinations to which the packet is heading no longer use a similar pathway, the network node duplicates and transmits the packet down as many pathways as needed to forward the packet to all predetermined locations specified by the multicast communication channel. The class of controlling protocols utilized by network nodes that build and maintain multicast communication channels (or routing trees) is referred to as IP multicast routing protocols. The class of controlling protocols utilized between destination machines and their first-hop network nodes is called Group Management Protocols, such as IGMP. The IGMP specification (identified as Internet Group Management Protocol, Version 2, Network Working Group of the Internet Engineering Task Force, November 1997), as is known in the art, is hereby incorporated by reference.
Multiple IP multicast routing protocols exist, including, but not limited to, PIM dense mode, PIM sparse mode, CBT, and MOSPF. For example, PIM (Protocol Independent Multicast) dense mode is a multicast routing protocol that controls the maintenance of multicast communication channels utilized for transmissions to multicast groups which are densely distributed across a network. Applicable networks include intranets, extranets, the Internet, and other equivalent networks. PIM dense mode uses reverse path multicasting (RPM), also called reverse path forwarding (RPF), to establish and maintain routing trees. RPM is a technique in which a multicast packet (or datagram) is forwarded only if the receiving interface on a network node is the one used to forward unicast datagrams to the source of the multicast datagram.
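The RPF check just described can be sketched as below. The unicast routing table, prefixes, and interface names are hypothetical illustrations; a real router consults its actual unicast routing table.

```python
# Sketch of a reverse-path-forwarding (RPF) check: a multicast datagram is
# forwarded only if it arrived on the interface this node would itself use
# to send unicast traffic toward the datagram's source.

UNICAST_ROUTES = {
    "10.1.0.0/16": "eth0",   # hypothetical: best unicast path toward 10.1/16
    "10.2.0.0/16": "eth1",
}

def _ip_to_int(addr):
    a, b, c, d = (int(x) for x in addr.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def rpf_interface(source_ip):
    """Interface this node would use to send unicast traffic toward source_ip."""
    for prefix, iface in UNICAST_ROUTES.items():
        net, bits = prefix.split("/")
        mask = ~((1 << (32 - int(bits))) - 1) & 0xFFFFFFFF
        if _ip_to_int(source_ip) & mask == _ip_to_int(net) & mask:
            return iface
    return None

def rpf_check(source_ip, arrival_iface):
    """Forward a multicast datagram only if it arrived on the RPF interface."""
    return rpf_interface(source_ip) == arrival_iface
```

A datagram from 10.1.5.5 arriving on eth0 passes the check; the same datagram arriving on eth1 is dropped.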
The first step in establishing a routing tree according to the PIM dense mode specification (Protocol Independent Multicast Version 2, Internet Engineering Task Force, April 2, 1997), hereby incorporated by reference, is to broadcast the multicast datagram to all PIM DM-enabled routers in the network, such that a routing tree is formed that provides a multicast communication channel on which datagrams are carried. This broadcast process is repeatedly performed every three minutes. Figure 1 shows an example of setting up a routing tree in a PIM dense mode IP multicast system. Network 101 includes a plurality of network nodes 102-107. Also, Figure 1 includes LAN 108 as attached to network node 105 and workstations 109 and 110. As is well known in the art, the communication protocol between network node 105 and workstations 109 and 110 on LAN 108 may include the Internet Group Management Protocol, IGMP. In addition, other communication protocols are available. For example, there are several versions of IGMP, mainly version numbers 1, 2, and 3, the last of which is a draft proposal in the research community. Other Group Management Protocols exist. For example, the Cisco Corporation has developed a protocol called CGMP (Cisco Group Management Protocol), which is similar to IGMP, described above. Here, IGMP is used to tell network node 105 (leaf node) that workstations 109 and 110 exist and continue to have interest in the particular multicast communication channel on which datagrams are forwarded to them. In particular, network node 102 receives a datagram from sending host 102A and transmits it to downstream network nodes 103, 104, and 107 as illustrated by transmission paths 111, 112, and 113. Having received the datagram, network nodes
103, 104, and 107 retransmit the datagram to other network nodes to which they are connected, except to the router from which the datagram was received. In this case, network node 103 transmits the datagram to network node 104 via path 114 and to network node 105 via path 115. Network node 104 retransmits the datagram to network node 103 via path 118, network node 105 via path 120, network node 106 via path 121, and network node 107 via path 119. Network node 107 retransmits the datagram to network node 104 via path 116 and network node 106 via path 122. Finally, depending on circumstances, for example, when the datagram was received by network node 106, it may retransmit the datagram to network node 107 via path 117. The second step in establishing PIM DM routing trees includes pruning back all branches that lead to end stations that have not expressed interest in attaching to the multicast communication channel. Figure 2 shows an example of the pruning process in action in accordance with PIM dense mode. In Figure 2, it is assumed that either of workstations 109 or 110 has expressed an interest in attaching to the multicast communication channel (that is currently in the broadcast phase, as shown in Figure 1). Router 105 sends prune message 203 to router 104, but does not send a prune message to router 103, because the RPF (the best reverse path to the source of the multicast data) interface on router 105 is the one on which router 103 is connected. Router 105 will not prune itself from the multicast channel in which it is interested. It is assumed, although not shown, that router 106 serves at least one end user workstation that has expressed interest in the multicast channel. As a result, router 106 does not prune its interface towards router 107, because the interface of
router 106 knows that router 107 is router 106's RPF to the sending host 102A. Router 107, however, does send prune message 205 to router 104.
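The flood-and-prune behavior walked through above (Figures 1 and 2) can be sketched as a small graph computation. The adjacency list mirrors the figures; the set of interested leaf routers is a hypothetical input, and the neighbor ordering stands in for the assert/metric tie-break that the figures resolve between equal-cost paths.

```python
from collections import deque

# Topology of network 101 from Figures 1 and 2 (node numbers from the text).
# Neighbor order acts as a tie-break between equal-cost reverse paths,
# standing in for the real protocol's assert/metric comparison.
ADJ = {
    102: [103, 107, 104],
    103: [102, 104, 105],
    104: [102, 103, 105, 106, 107],
    105: [103, 104],
    106: [104, 107],
    107: [102, 104, 106],
}

def flood(root):
    """Broadcast phase: each node forwards to every neighbor except the one
    the datagram arrived from; returns the set of nodes reached."""
    reached, frontier = {root}, [(root, None)]
    while frontier:
        node, came_from = frontier.pop()
        for nbr in ADJ[node]:
            if nbr != came_from and nbr not in reached:
                reached.add(nbr)
                frontier.append((nbr, node))
    return reached

def prune(root, interested_leaves):
    """Prune phase: keep only nodes on the reverse paths from interested
    leaf routers back toward the source's first-hop node."""
    parent, q = {root: None}, deque([root])
    while q:                       # BFS assigns each node its RPF neighbor
        node = q.popleft()
        for nbr in ADJ[node]:
            if nbr not in parent:
                parent[nbr] = node
                q.append(nbr)
    keep = set()
    for leaf in interested_leaves:
        while leaf is not None:
            keep.add(leaf)
            leaf = parent[leaf]
    return keep
```

With leaf routers 105 and 106 interested, `prune(102, [105, 106])` yields the tree of Figure 3B, with node 104 pruned out.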
While Figure 2, as relating to PIM dense mode, shows a mechanism used to establish routing trees, other techniques to establish routing trees are known in the art as related to other IP multicasting protocols, including PIM sparse mode, CBT, and MOSPF, for example. In particular, PIM sparse mode is another applicable Multicast Routing Protocol (Protocol Independent Multicast-Sparse Mode: Protocol Specification, September 9, 1997), hereby incorporated by reference.
The resulting network as created by routing tables stored in the various network nodes is shown in Figure 3A. The resulting multicast channel is shown in Figure 3B. Datagrams originating from sending host 102A are transmitted from network node 102 to network nodes 103 and 107 via paths 301 and 302, respectively. Next, the datagrams are transmitted to network nodes 105 and 106, via paths 303 and 305, respectively. For a multicast routing communication channel to terminate at a network node, the network node preferably has at least one destination end user attached to it, and that destination end user has expressed an interest in attaching to the multicast communication channel (via IGMP or some similar Group Management Protocol). It is noted that, while multicast routing communication channels terminate at network nodes, multicast communication channels terminate at end user workstations. As shown in Figure 3A, there are no end users attached to network node 104. Accordingly, while it is possible to establish and maintain a communication path to network node 104 regarding a multicast channel, the lack of
end users connected to network node 104 suggests that the connection for this channel should be pruned back. In an alternative embodiment of the invention, network node 104 may remain connected to receive a multicast channel so as to be available for quick connection to the multicast channel by another end user. One feature of multicast systems includes the concept of groups. Groups are sets of destinations that participate in a multicast communication channel. The channel generally originates with a single content provider. However, multiple content providers may combine to provide content on the same multicast communication channel. Accordingly, to specify a content channel precisely, one needs to know the source of the information as well as the group identifier to which the channel is directed. Figure 4 shows network node 102 (of network 101, not shown for simplicity) receiving a multicast datagram from sending host 102A and sending the datagram via path 301 to network node 103. Network node 103 transmits the datagram to network nodes 403, 404, and 405. Originally, all the network nodes, including 102, 103, 403, 404, 405, and 105, participated in group G1 during the broadcast phase. After the pruning phase, only network nodes 102, 103, 403, 404, and 405 remain participating in G1 (105 having pruned back the channel).
Next, if workstation 109 (or 110), which is connected to network node 105, part of network 101, desires to join group G1 409, it sends a request via IGMP to network node 105 specifying that it wishes to join group G1. Network node 105 is considered to be a leaf node, in that it is directly connected via a LAN to end users that have expressed interest in the particular multicast channel, and there are no
further routers downstream of router 105 on the LAN. Network node 105 interprets the request from workstation 109 as a command to attempt to join group G1. Network node 105 sends a Graft message with the payload G1, S1 to network node 103 via path 401. In response, network node 103 replies with a Graft acknowledge (Graft-Ack) signal to network node 105 via path 402. When network node 103 receives the next datagram destined for group G1 409, it transmits the datagram to network node 105, as network node 105 is now part of group G1 409.
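The join sequence just described can be sketched as follows. The class and method names are illustrative, not from any real router implementation; the cached (S, G) state stands for what the leaf node retained from the previous broadcast phase.

```python
# Sketch: a leaf router receives an IGMP membership report, looks up its
# cached (S, G) state from the broadcast phase, and sends a Graft upstream,
# which the upstream node answers with a Graft-Ack.

class LeafRouter:
    def __init__(self, cached_state, upstream):
        self.cached_state = cached_state  # {group: source} retained state
        self.upstream = upstream          # upstream PIM neighbor (e.g. node 103)
        self.members = set()

    def on_igmp_join(self, group):
        """A workstation asked to join `group`: graft back onto the tree."""
        self.members.add(group)
        source = self.cached_state.get(group)
        if source is None:
            # No retained (S, G) state: the leaf does not know where to send
            # the Graft and must wait for the next broadcast phase.
            return None
        return self.upstream.on_graft(source, group)

class UpstreamRouter:
    def on_graft(self, source, group):
        # Re-attach the downstream interface and acknowledge.
        return ("Graft-Ack", source, group)
```

The `None` return mirrors the caveat in the following paragraph: without retained state, the leaf node cannot send a Graft at all.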
It is an assumption in the above that the state G1, S1 is retained indefinitely in network node 105 from the previous broadcast phase. However, it is possible that the state may not exist in network node 105, in which case network node 105 cannot send a Graft towards network node 103, because network node 105 does not know from which network node it can obtain the specific multicast channel.
Further, in the above-described system, as well as in other IP multicast systems, if a leaf network node fails, the system may have to wait between 0 and 3 minutes (the PIM dense mode broadcast phase cycle time), or for an average of 1.5 minutes, until the multicast system enters the broadcast phase such that state is re-established in network nodes and the channel can re-establish itself. A difficulty here, especially in environments with a need for minimal disconnect time, is that the delay associated with re-establishing a connection to network node 105, if it becomes separated from group G1, may take an average of 1.5 minutes. The cause of the separation may be due to a variety of reasons including transmission line failure, local router failure, and related problems. This reconnect delay is the average time in which
PIM dense mode enters the re-broadcasting phase as shown in Figure 1. In systems that require faster reconnect times, for example, in financial arenas where the 1.5-minute average delay freezes a trader's information stream so the trader cannot act as desired, the delay associated with waiting until the overall multicast system re-establishes itself is unacceptable. Further, merely decreasing the multicast system's re-broadcast interval substantially increases the amount of non-usable broadcast information sent to all end network nodes, regardless of their interest in the particular communication channel. This approach can be very costly in terms of inefficient use of network bandwidth. The result of this delay may be one reason that systems requiring a fast re-establishment time (high availability requirements) have not readily embraced multicast technology as a solution for efficient information transfer.
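The delay figures quoted above follow directly from the 3-minute re-broadcast cycle: a failure occurs at a uniformly random point in the cycle, so the wait until the next broadcast phase is uniform on [0, 3] minutes with a mean of 1.5 minutes, and shrinking the cycle multiplies flood traffic proportionally. A short worked computation (helper names are hypothetical):

```python
# Worked arithmetic behind the reconnect-delay and overhead figures above.

CYCLE_MIN = 3.0   # PIM dense mode re-broadcast cycle, in minutes

def average_wait(cycle=CYCLE_MIN):
    """Mean wait until the next broadcast phase: uniform on [0, cycle]."""
    return cycle / 2.0

def broadcast_overhead_factor(new_cycle, old_cycle=CYCLE_MIN):
    """Shrinking the cycle multiplies flood traffic by old/new."""
    return old_cycle / new_cycle
```

For example, halving the cycle to 1.5 minutes halves the average wait but doubles the broadcast traffic sent to uninterested nodes.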
In networks with only one leaf router serving a LAN, if that leaf router fails, connectivity to the group is lost. For example, if network node 105 becomes disabled, LAN 108 is prevented from connecting to group G1. One possible solution is to utilize a more redundant system, in which there are multiple leaf network nodes. For example, one could add another network node 105' to the LAN 108, connecting it to network node 103 through path 306 as shown in Figure 3A. In the above-described environment, one of these network nodes 105 and 105' (for example, 105') will have been pruned out of the multicast channel during the pruning phase of the PIM DM protocol, resulting in the multicast channel as shown in Figure 3B.
The dotted line to network node 105' in Figure 3A indicates that, while 105 and 105' are considered separate network nodes, as part of the PIM DM protocol one of them (i.e., 105') has pruned itself from the multicast channel in network 101, leaving the remaining network node (105) as the "forwarder," as represented in Figure 3B. The process to elect which network node is to be the forwarder to LAN 108 is called Assertion. Routers 105 and 105' execute the Assert process, as specified in the PIM dense mode specification, and only one of the network nodes becomes the forwarder; the other network node(s) on the LAN that lose the assertion process prune themselves back, e.g., 105'. Accordingly, when network node 105' has pruned itself from the multicast channel, it needs to wait for the re-broadcast phase (see Figure 1; implemented every three minutes) before network node 105' can begin receiving new datagrams again (see Figure 3A). When this occurs, the assertion process is repeated and the network nodes that lose the assertion process prune themselves back.
The existence of two client-side network nodes (leaf nodes in this case) 105 and 105' makes LAN 108 more robust in being able to handle router or switch (network node) failures. However, at most one of the two network nodes 105 and 105' is active in receiving datagrams from network node 103 and forwarding them onto the LAN 108 at any given time, because the other node (e.g., 105') was pruned out of the multicast channel during the pruning phase. Note that during the assertion process, both network nodes 105 and 105' can be acting as forwarder to LAN 108, until one of the network nodes wins the election process and becomes the forwarder for the LAN 108. If network node 105 is the forwarder for the LAN 108,
and in the event that network node 105 becomes disabled, workstations 109 and/or 110 on LAN 108 will have to wait until the PIM DM re-broadcast phase initiates, so as to forward packets to network node 105' (an average of 1.5 minutes). When 105' begins to forward packets onto LAN 108, network node 105' will elect itself as the forwarder (since no other network node has challenged it and the assert election process is not triggered in this case) and begins to forward packets onto the LAN 108. While waiting for reconnection to the multicast channel, the hosts on LAN 108 may experience large gaps in messages, and general data loss. Again, the maximum waiting interval is, as above, 3 minutes, with an average waiting interval of 1.5 minutes. In applications that require high availability, with or without high throughput characteristics, the average waiting interval of 1.5 minutes is unacceptable, as volumes of data may back up, resulting in excessive data loss, memory usage, and delay in attempting to re-establish the data to the hosts on LAN 108. Furthermore, users of real-time applications that require up-to-the-second information cannot tolerate even modest periods of disconnection from the service, e.g., instrument traders using a real-time financial application.
Summary Of The Invention
The present invention overcomes the above-described problems by providing a fast re-establishment of a lost connection to a multicast group. To quickly reconnect an end user (or an end user's local area network, or LAN), the end user's system stores relevant connection information so that, when needed, it directs a network node to re-establish its connection to one or more multicast groups.
Each LAN end user on network 108 may elect a director for the LAN 108 that is responsible for coordinating the recovery of the one or more multicast channels to the LAN. Alternatively, a director may not be elected, and each end user may act independently and redundantly perform the operations that the director would handle. Advantages of including a director include limiting the task of initiating re-establishment of the multicast channel to one designated entity, rather than several end user workstations. Advantages of not including a director include redundant processing, so that if any portion of the LAN 108 fails, each workstation (109, 110) may be able to re-establish its connection with network 101. To this end, when referencing the director, it will be understood that the director is intended to refer to any workstation performing the functions of the director, including but not limited to any end user's workstation redundantly performing the director's functions where LAN 108 has no director and each end user's workstation acts independently.
The director stores the group information as received from various sources. For example, the director may store this information in a source, group pair (also referred to as an S, G pair). The director also monitors sender "liveness" regarding a monitored group. Liveness, for purposes herein, refers to the monitoring of information received from a source. By monitoring liveness information, the receiver (director or workstation) knows when to expect information from the source. That is, when the source is idle (has no data to send), liveness (heartbeat) messages are sent at predefined intervals or at decaying intervals to inform all end user systems that the channel is still alive. Various sender-driven liveness messages are specified in
Holbrook et al.'s "Log-Based Receiver" (H. Holbrook, S. Singhal, D. Cheriton: Log-Based Receiver-Reliable Multicast for Distributed Interactive Simulation. Computer Communication Review, Vol. 25, No. 4, Proceedings of the ACM SIGCOMM'95, August 1995) and are incorporated herein by reference. Once the information has not been received as expected, the director determines that the connection to the source has been lost. Next, the director retrieves the previously stored information (the S, G pair), forms a Graft message, and transmits the Graft message to the all routers address (224.0.0.13) or to a specific leaf router address, if the target leaf router address is well known (configured or learned) by the director. The all routers address is specified in the PIM DM specification. Note that the Graft message could be sent using any defined or reserved address that delivers the message to the LAN network nodes that can process the Graft and re-establish the multicast channel to the LAN. The network node or network nodes receiving the Graft message act accordingly to re-establish the connection to the group by immediately forwarding the Graft upstream on the Reverse Path Forwarding (RPF) interface, as specified in the PIM DM specification. In this scenario, the director is acting like a network node by forwarding a Graft to its leaf router(s). The receiving network node does not distinguish between the director and another node in this case, and processes the Graft as if it were received from a network node, as specified in the PIM dense mode specification. The result is that the multicast channel is more rapidly re-established to the LAN. Thus, by using the system as embodied by the
invention, a director may quickly re-establish a connection to a group with minimal delay.
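The director's recovery behavior summarized above can be sketched as a small monitoring loop. The grace period of two heartbeat intervals and the callback signature are assumptions for illustration; only the all-routers address (224.0.0.13) comes from the PIM DM specification.

```python
# Sketch of the director's recovery loop: track sender liveness per (S, G)
# pair and, on a missed deadline, emit a Graft toward the all-PIM-routers
# address so a leaf router can forward it upstream on its RPF interface.

import time

ALL_PIM_ROUTERS = "224.0.0.13"   # from the PIM DM specification

class Director:
    def __init__(self, heartbeat_interval, send_graft):
        self.heartbeat_interval = heartbeat_interval
        self.send_graft = send_graft   # callable(dest_addr, source, group)
        self.last_seen = {}            # (S, G) -> time of last datagram

    def on_datagram(self, source, group, now=None):
        """Any datagram (data or heartbeat) refreshes liveness for (S, G)."""
        self.last_seen[(source, group)] = now if now is not None else time.time()

    def check_liveness(self, now=None):
        """Graft every channel whose sender has gone silent too long."""
        now = now if now is not None else time.time()
        grafted = []
        for (source, group), seen in self.last_seen.items():
            if now - seen > 2 * self.heartbeat_interval:  # assumed grace period
                self.send_graft(ALL_PIM_ROUTERS, source, group)
                grafted.append((source, group))
        return grafted
```

In PIM sparse mode, the same loop would emit a Join instead of a Graft, per the alternative noted below.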
For purposes herein, the Internet is used as an example of a network of computers which benefits from IP multicast technology. However, it will be understood that the invention as described herein may be readily applied to intranets, extranets, WANs, and other networks which may benefit from multicasting. Further, while the invention is described in connection with PIM dense mode, it is readily apparent that the advantages described herein are applicable to other IP multicasting protocols, including PIM sparse mode, CBT, and MOSPF, and other equivalent IP multicasting protocols. Accordingly, the reference to PIM dense mode is made by way of example only. For example, in PIM sparse mode, the director would send a Join message rather than a Graft, when the liveness information has not been received as expected. While PIM sparse mode is not a broadcast and prune protocol, as is PIM dense mode, the rapid recovery mechanisms described herein are applicable in achieving rapid multicast channel recovery.
Brief Description Of The Drawings
In the following text and drawings, wherein similar reference numerals denote similar elements throughout the several views thereof, the present invention is explained with reference to illustrative embodiments. Figure 1 is a network diagram showing a broadcast phase of a conventional multicast system.
Figure 2 is a network diagram showing a pruning phase of a conventional multicast system.
Figures 3A and 3B are network diagrams showing the remaining network after the broadcast and pruning phases of a conventional multicast system. Figure 4 is a network diagram showing a Graft in a conventional multicast system.
Figure 5 is a network diagram of a Graft originating with an end user according to embodiments of the present invention.
Figure 6 is a packet format diagram of a multicast system as used in conjunction with embodiments of the present invention.
Figure 7 is a network diagram of the present invention including an embodiment of an end user's system according to embodiments of the present invention.
Figure 8 is a network diagram of the present invention showing Graft messages as contemplated by the present invention.
Detailed Description Of The Preferred Embodiments
The present invention relates to quickly re-establishing a connection to a multicast group. For purposes herein, a multicast group should be understood to also relate to a multicast communication channel. Figure 5 shows an array of workstations 109 and 110 on network 108 (for example, a LAN) connected via network node 105 to network 101 (not shown for simplicity). While only two workstations are shown, it is understood that many
workstations may be connected to network 108. One of the workstations, 109 or 110, may be elected as director for a particular S,G pair. As the director officiates on numerous occasions, the director may be also referred to as a host of the network 108. In this example, workstation 109 is the elected director. In case director workstation 109 fails, workstation 110 is the designated assistant director. The assistant director stores information similar to workstation 109 so as to replace workstation 109 in the event workstation 109 fails.
Workstation 109 stores information 507 in internal memory. While not shown for simplicity, workstations 109 and 110 may contain various forms of internal memory including RAM, ROM, replaceable storage devices (for example, diskettes, hard drives, and CD-ROMs), and equivalents thereof. Information 507 contains a list of which workstations are the director and assistant director (wrk1 and wrk2, respectively) and a list of S, G pairs that list the source (for example, S1) of various groups (for example, G1, G2, through Gn). Workstation 110 may store a similar set of information 508.
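One possible in-memory layout for information 507/508 is sketched below. The field names are hypothetical; the text specifies only what is stored (the director/assistant designations and the S, G list), not its layout.

```python
# Sketch of the stored information 507/508: elected director, assistant
# director, and the retained (S, G) pairs used to re-graft channels.

info_507 = {
    "director": "wrk1",
    "assistant_director": "wrk2",
    "sg_pairs": [
        ("S1", "G1"),
        ("S1", "G2"),
        ("S1", "Gn"),
    ],
}

def sources_for(info, group):
    """All retained sources for a given group identifier."""
    return [s for (s, g) in info["sg_pairs"] if g == group]
```

On a detected failure, the director would look up `sources_for(info_507, group)` to form the (S, G) payload of its Graft.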
The director derives sender liveness information from the received liveness datagrams, the latter (liveness or heartbeat
mechanisms) being discussed in the Holbrook et al. "Log-Based Receiver." From this sender liveness information, the director determines when to expect datagrams from a source S regarding group G. Once the datagrams or liveness messages are not received as expected, the director transmits a Graft or Join message 501 (in PIM DM or PIM SM, respectively) with the appropriate payload (for example, Sn, G1) to the all routers address (as found in the PIM DM specification). Alternatively, the director transmits the Graft message 501 to a specific leaf router, or any other all local routers address. If the director did not receive the liveness messages as expected because the forwarder for the multicast channel, e.g., router 105, failed, then the other router, e.g., 105' (or routers) will receive the Graft message and forward it to network node 103 (or on their respective RPF interfaces), which will cause the quick re-establishment of the multicast channel to LAN 108. Assistant director workstation 110, in addition to being able to perform tasks similar to those of director workstation 109, monitors director workstation 109 in the performance of its director duties. So, if and when director workstation 109 fails, workstation 110 may act as a backup. As noted above, if the PIM sparse mode multicast routing protocol were used instead of PIM dense mode, then the director would send a Join message instead of a Graft. For purposes of simplicity, a workstation and equivalent systems may be referred to as information handlers. The director generates the S,G pair(s) by monitoring received datagrams for the respective multicast channel(s). The header information in a received IP Multicast packet contains the sender's identity (here the identity of the source end user's
workstation), and the group address G, carried in the IP destination field of the IP header. The workstation stores the sender's information in conjunction with the group over which the datagram was received. For example, in a UDP header, the sender transmits the following: a source address, a destination address, the source port, and the destination port. In a multicast environment, the source address is the IP address of the host sending the information out to receivers. For purposes herein, this source address is the S of the S, G pair. The destination address is the IP address of the multicast communication channel, referred to as a group. This group is the group G of the S, G pair. Also, end user workstations can further refine the information that is passed to the end user application for a given G by filtering those datagrams that are received on destination port numbers which are of interest to the receiving application(s). Workstations 109 and 110 would derive the S, G pair from datagrams received from an ongoing multicast channel via network node 105. When workstation 109 desires to connect to a multicast channel G1 (including routers 102 and 103 and workstations 506), it requests connection to this channel G1 via IGMP as described above. At this point, network node 105 determines which sources (S1, S2, etc.) correspond to the requested channel. Next, network node 105 forwards all channels to LAN 108 which correspond to the specified G. So, if S1,G1 and S2,G1 multicast channels exist in the network, then network node 105 forwards both S1,G1 and S2,G1 to workstation 109. The application layer in workstation 109 is then responsible for sorting out which, if any, of S1,G1 and S2,G1 is the actual channel desired. It is possible that the application may desire to obtain all of the S, G pairs.
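The derivation of an (S, G) pair from packet headers, as described above, can be sketched as follows. The byte offsets assume a 20-byte IPv4 header with no options; a real implementation would honor the IHL field.

```python
# Sketch: S is the IP source address, G is the (multicast) IP destination
# address, and the UDP destination port can further filter datagrams for
# the receiving application.

import socket
import struct

def extract_sg(packet):
    """Return ((S, G), dest_port) from a raw IPv4/UDP multicast datagram."""
    src = socket.inet_ntoa(packet[12:16])     # IPv4 source address -> S
    dst = socket.inet_ntoa(packet[16:20])     # IPv4 destination (group) -> G
    dest_port = struct.unpack("!H", packet[22:24])[0]  # UDP destination port
    return (src, dst), dest_port
```

For a datagram from 10.1.5.5 to group 224.1.2.3, this yields the pair ("10.1.5.5", "224.1.2.3") plus the port used for application-level filtering.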
After workstations 109 and 110 join group G1, group G1 is understood to include routers 102, 103, and 105 and workstations 506, 109, and 110.
Figure 6 shows an example of PIM packet formats for control messages as sent from workstations 109 and 110. Bits 0-3 relate to the PIM version, bits 4-7 identify the type of control message, bits 8-15 are reserved, and bits 16-31 are the checksum. The following list defines PIM message types:
0 = Hello
1 = Register
2 = Register-Stop
3 = Join/Prune
4 = Bootstrap
5 = Assert
6 = Graft
7 = Graft-Ack
8 = Candidate-RP-Advertisement
Here, the Graft control message, as part of the present invention, is initially formatted in workstation 109 or workstation 110, depending on which workstation(s) is attempting to re-establish the link to group G. The other control messages can be found in the PIM dense mode or PIM sparse mode specifications, referenced herein.
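The 32-bit header of Figure 6 and the message-type list above can be packed as sketched below. The checksum is left at zero for illustration; a real implementation computes the one's-complement Internet checksum over the message.

```python
# Sketch: pack the 4-byte PIM control header of Figure 6 — 4 bits of
# version, 4 bits of type, 8 reserved bits, and a 16-bit checksum.

import struct

PIM_TYPES = {
    "Hello": 0, "Register": 1, "Register-Stop": 2, "Join/Prune": 3,
    "Bootstrap": 4, "Assert": 5, "Graft": 6, "Graft-Ack": 7,
    "Candidate-RP-Advertisement": 8,
}

def pim_header(msg_type, version=2, checksum=0):
    """Pack the header for the given control-message type (version in the
    high nibble of the first byte, type in the low nibble)."""
    first_byte = (version << 4) | PIM_TYPES[msg_type]
    return struct.pack("!BBH", first_byte, 0, checksum)
```

A version-2 Graft header thus begins with the byte 0x26.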
Figure 7 shows an embodiment where more than one router is connected to LAN 108. In Figure 7, two routers 701 and 702 connect LAN 108 to router 105. The
interaction between routers (network nodes) 701 and 702 and router (network node) 105 and workstations 109 and 110 is as described above. In some applications, workstation 109 may be the only workstation. In this case, LAN 108 may be supporting a single workstation 109 in connection to network 101. To this end, the term LAN may refer to a device which allows a workstation to connect to both network nodes 701 and 702 simultaneously.
Another embodiment of the system described herein is shown with respect to Figure 9. Figure 9 shows network node 105, network nodes 701 and 702, and LAN 108 as described with respect to Figure 7. Also included in the arrangement are network nodes 703 and 704, also connected to network node 105, and LAN 705. Next, both LANs 108 and 705 are connected to workstation 109. In this embodiment, both LANs 108 and 705 support a single (or multiple) workstations. By using this arrangement, workstation 109 (and other similarly connected workstations) is provided with multiple paths to connect to network node 105. An example of when the arrangement of Figure 9 may be used is in applications where redundant paths to the network 101 are required, so as to achieve higher availability of services provided by sources such as 706.
Figure 8 shows a network level diagram of network 801 as including a variety of sources Sn and a variety of end user workstations 810-812. Source S1 transmits group G1 datagrams to receivers S4 and S5 (S4 and S5 are receivers with respect to group G1, but are senders with respect to the groups they source) on multicast channel S1,G1. Notably, the combination of receivers S4 and S5 comprises group G1.
Also, source S2 sends content on group G1 (on multicast channel S2,G1), as shown by the combined use of G1 from sources S1 and S2. Alternatively, the content of G1 may originate with S1 and use S2 as a backup for content. Source S1 also provides information to receiver S3. Having received datagrams for group G2, receiver S3 reformulates and retransmits the datagrams to receivers of network 809. Network 809 may be a LAN or individual groups of receivers so long as they may be collectively referred to as group G5. Similarly, receiver S4 reformulates and retransmits received datagrams to network 808 as group G3. Finally, receiver S5 reformulates and retransmits received datagrams to network 807 as group G4. When receiver 810 (for example, a workstation with the functionality of workstation 109 of Figure 5) from network 809 wishes to re-establish a connection to group G5, receiver 810 transmits a Graft message as described above to its leaf router containing the S3,G5 pair information. The network node (not shown) in between S3 and LAN 809 receives the Graft (here, specifying G5) and sends receiver 810 the multicast channel corresponding to S3,G5. In one embodiment, the Graft sent by the director specifically re-establishes only the specific S,G pair and not all *,G pairs, as is done with simply an IGMP join message from the host. Alternatively, all *,G pairs are re-established and then unwanted ones are dropped.
Similar operations occur for workstations 811 and 812, which generate to their respective leaf routers Graft messages specifying the S4,G3 pair and the S5,G4 pair, respectively.
It should be noted that the relationships as described between S and G for Figure 8 are relative to the datagrams originating with sources S1 and S2. When a network receives information from alternative sources, for example, network 808 receiving group Gn datagrams (not shown for simplicity) from source S3, the Graft S,G pair information from receiver 811 takes the form of (S3,Gn).
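To make the source-specific Graft concrete, the following sketch builds a minimal PIM-style Graft message for one (S, G) pair. It is not from the patent: the packet layout is deliberately simplified (a real Graft wraps the addresses in the encoded-unicast and encoded-group formats of the PIM specification, which this sketch omits), but the first byte — PIM version 2 in the high nibble, message type 6 (Graft) in the low nibble — and the one's-complement checksum follow the PIM header convention.

```python
import socket
import struct

def internet_checksum(data: bytes) -> int:
    """Standard 16-bit one's-complement checksum used by PIM headers."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_graft(source_ip: str, group_ip: str) -> bytes:
    """Simplified PIM Graft for a single (S, G) pair.

    Byte 0 carries PIM version 2 (high nibble) and type 6, Graft
    (low nibble); byte 1 is reserved; bytes 2-3 hold the checksum
    computed over the whole message with the checksum field zeroed.
    """
    header = struct.pack("!BBH", (2 << 4) | 6, 0, 0)
    body = socket.inet_aton(source_ip) + socket.inet_aton(group_ip)
    checksum = internet_checksum(header + body)
    return struct.pack("!BBH", (2 << 4) | 6, 0, checksum) + body


# A receiver re-establishing the S3,G5 channel would carry the specific
# source and group addresses (values here are illustrative).
pkt = build_graft("10.0.0.3", "232.1.1.5")
assert pkt[0] == 0x26               # version 2, type 6 (Graft)
assert internet_checksum(pkt) == 0  # checksum verifies over the full message
```

Because the message names a specific source as well as a group, the upstream router can restore exactly the S,G branch, rather than all *,G state as a plain IGMP join would.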
It is readily understood that the method described above may be implemented through the use of a computer-readable medium. For example, one may store computer-implemented steps in a medium which is then read by any of workstations 109 and 110. Further, the process steps may be transferred into an internal storage of workstations 109 and 110 (or into alternative network nodes, for instance) through a different input into LAN 108 or network 101.
Using the above-described system, delay times in re-establishing a connection to a group may be minimized by sensing a fault in received (or missing) information, determining how to quickly reconnect, and reconnecting to a group through the use of information stored in a monitoring station.
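The sense-fault, retrieve, reconnect sequence above can be sketched as a small monitoring component. This is an illustrative sketch, not the patent's implementation: the class and method names are hypothetical, and the channel keys and addresses are made-up examples. It stores the (S, G) reconnect information per channel, tracks when each channel last delivered a datagram, and hands back the stored pairs for any channel that has gone silent beyond a timeout.

```python
import time

class InformationHandler:
    """Hypothetical sketch of the monitoring station: stores (S, G)
    reconnect information and detects disconnects by datagram silence."""

    def __init__(self, timeout: float = 2.0):
        self.timeout = timeout
        self.reconnect_info = {}   # channel id -> (source, group)
        self.last_seen = {}        # channel id -> time of last datagram

    def register(self, channel, source, group):
        """Store reconnect information for a channel being received."""
        self.reconnect_info[channel] = (source, group)
        self.last_seen[channel] = time.monotonic()

    def datagram_received(self, channel):
        """Called by the monitoring device on each arriving datagram."""
        self.last_seen[channel] = time.monotonic()

    def check_disconnects(self, now=None):
        """Return the stored (S, G) pairs of channels silent past the
        timeout; these are the pairs to place in Graft messages."""
        now = time.monotonic() if now is None else now
        return [self.reconnect_info[c]
                for c, t in self.last_seen.items()
                if now - t > self.timeout]


handler = InformationHandler(timeout=2.0)
handler.register("S3,G5", "10.0.0.3", "232.1.1.5")
assert handler.check_disconnects() == []        # channel still live
silent = handler.last_seen["S3,G5"] + 5.0       # simulate 5 s of silence
assert handler.check_disconnects(now=silent) == [("10.0.0.3", "232.1.1.5")]
```

Keeping the reconnect information in the handler's memory is what allows the Graft to be issued immediately on detection, rather than waiting for routing protocol timers to rebuild the tree.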
It will be apparent to those skilled in the art that the present invention need not be utilized in conjunction with the Internet. It is also envisioned that the techniques of the present invention apply to any network, including intranets and extranets. Further, the present invention may also be implemented in a peer-to-peer computing environment or in a multi-user host system having a mainframe or a minicomputer. Thus, the computer network in which the invention is implemented should be broadly construed to include any multicast computer network from which a client can retrieve a channel in a multicast environment.
In the foregoing specification, the present invention has been described with reference to specific exemplary embodiments thereof. In particular, reference has been made to the PIM dense mode specification. Although the invention has been described in terms of a preferred embodiment, those skilled in the art will recognize that various modifications, embodiments or variations of the invention can be practiced within the spirit and scope of the invention as set forth in the appended claims. Some variations include the use of the PIM sparse mode, CBT, and MOSPF IP multicast protocols and other multicast protocols. All are considered within the sphere, spirit, and scope of the invention. The specification and drawings are, therefore, to be regarded in an illustrative rather than restrictive sense. Accordingly, it is not intended that the invention be limited except as may be necessary in view of the appended claims.
Claims
1. A system for re-establishing a connection to a multicast channel comprising: a first network node receiving multicast data from a network; a second network node connected to said network; an information handler connected to said first network node and said second network node, said information handler including a memory for storing reconnect information; and one or more monitoring devices that monitor the reception of said data for a disconnect of said first network node from said multicast channel and alert said information handler upon occurrence or detection of said disconnect, wherein, upon reception of said alert, said information handler retrieves said reconnect information and transmits said information to at least said second network node so as to reconnect said information handler to said multicast channel.
2. The system according to claim 1, wherein said information handler transmits said information to a reserved address specifying all routers.
3. The system according to claim 1, wherein the transmission of said information is in the form of a well formed packet.

4. The system according to claim 3, wherein the well formed packet is in the form of a PIM dense mode Graft.

5. The system according to claim 3, wherein the well formed packet is in the form of a PIM sparse mode Join.

6. The system according to claim 1, wherein said reconnect information includes the source of said data and the group to which said data belong.

7. The system according to claim 1, wherein said information handler is a workstation.

8. The system according to claim 1, wherein said network routing protocol is based on the PIM dense mode specification.

9. The system according to claim 1, wherein said network routing protocol is based on the PIM sparse mode specification.
10. The system according to claim 1, wherein said first network node and said second network node are the same node.
11. A method for re-establishing a connection to a multicast channel comprising the steps of: receiving in a first network node multicast data from a network; storing in a memory in an information handler, which is connected to said first network node, reconnect information related to establishing a reconnection to said multicast channel; monitoring the reception of said data for a disconnect of said first network node from said multicast channel; alerting said information handler upon occurrence or detection of said disconnect; retrieving said reconnect information by said information handler upon reception of said alert; and transmitting said information to at least one of said first network node and a second network node to reconnect at least one of said first network node and said second network node to said multicast channel.
12. The method according to claim 11, wherein said transmitting step transmits said reconnect information to a reserved address specifying all routers.

13. The method according to claim 11, wherein said transmitting step transmits said reconnect information in the form of a well formed packet.

14. The method according to claim 13, wherein the well formed packet is in the form of a PIM dense mode Graft.

15. The method according to claim 13, wherein the well formed packet is in the form of a PIM sparse mode Join.

16. The method according to claim 11, wherein said reconnect information includes the source of said data and the group to which said data belong.

17. The method according to claim 11, wherein said information handler is a workstation.

18. The method according to claim 11, wherein said network routing protocol is based on the PIM dense mode specification.

19. The method according to claim 11, wherein said network routing protocol is based on the PIM sparse mode specification.
20. A computer-readable medium having computer-executable instructions for performing the steps comprising: receiving at a first network node multicast data related to a multicast channel from a network; storing reconnect information in a memory in an information handler, which is connected to said first network node; monitoring the reception of said data for a disconnect of said first network node from said multicast channel; alerting said information handler upon occurrence or detection of said disconnect; retrieving said reconnect information by said information handler upon reception of said alert; and transmitting said reconnect information to at least one of said first network node and a second network node to reconnect at least one of said first network node and said second network node to said multicast channel.
21. The computer-readable medium according to claim 20, wherein said transmitting step transmits said reconnect information to a reserved address specifying all routers.
22. The computer-readable medium according to claim 20, wherein said transmitting step transmits said reconnect information in the form of a well formed packet.
23. The computer-readable medium according to claim 22, wherein the well formed packet is in the form of a PIM dense mode Graft.
24. The computer-readable medium according to claim 22, wherein the well formed packet is in the form of a PIM sparse mode Join.
25. The computer-readable medium according to claim 20, wherein said reconnect information includes the source of said data and the group to which said data belong.
26. The computer-readable medium according to claim 20, wherein said information handler is a network node.
27. The computer-readable medium according to claim 20, wherein said information handler is a workstation.
28. The computer-readable medium according to claim 20, wherein said network is based on the PIM dense mode specification.
29. The computer-readable medium according to claim 20, wherein said network is based on the PIM sparse mode specification.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US3937598A | 1998-03-16 | 1998-03-16 | |
US39375 | 1998-03-16 | ||
PCT/US1999/005731 WO1999048246A1 (en) | 1998-03-16 | 1999-03-16 | Method, apparatus, and medium for minimal time multicast graft/join restoration |
Publications (1)
Publication Number | Publication Date |
---|---|
EP1062766A1 true EP1062766A1 (en) | 2000-12-27 |
Family
ID=21905126
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP99912580A Withdrawn EP1062766A1 (en) | 1998-03-16 | 1999-03-16 | Method, apparatus, and medium for minimal time multicast graft/join restoration |
Country Status (4)
Country | Link |
---|---|
EP (1) | EP1062766A1 (en) |
JP (1) | JP2002507857A (en) |
AU (1) | AU3092699A (en) |
WO (1) | WO1999048246A1 (en) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6687846B1 (en) | 2000-03-30 | 2004-02-03 | Intel Corporation | System and method for error handling and recovery |
US6880090B1 (en) * | 2000-04-17 | 2005-04-12 | Charles Byron Alexander Shawcross | Method and system for protection of internet sites against denial of service attacks through use of an IP multicast address hopping technique |
WO2001091397A2 (en) * | 2000-05-22 | 2001-11-29 | Ladr It Corporation | Method and system for stopping hacker attacks |
US7020709B1 (en) * | 2000-06-30 | 2006-03-28 | Intel Corporation | System and method for fault tolerant stream splitting |
US7318107B1 (en) * | 2000-06-30 | 2008-01-08 | Intel Corporation | System and method for automatic stream fail-over |
US6651141B2 (en) | 2000-12-29 | 2003-11-18 | Intel Corporation | System and method for populating cache servers with popular media contents |
US9167036B2 (en) | 2002-02-14 | 2015-10-20 | Level 3 Communications, Llc | Managed object replication and delivery |
KR100552506B1 (en) * | 2003-03-28 | 2006-02-14 | 삼성전자주식회사 | method for construction of CBT direction based for overlay multicast CBT based |
CN104967974B (en) * | 2008-02-26 | 2019-07-30 | 艾利森电话股份有限公司 | Method and apparatus for reliable broadcast/multicast service |
WO2009106131A1 (en) | 2008-02-26 | 2009-09-03 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and apparatus for reliable broadcast/multicast service |
US9762692B2 (en) | 2008-04-04 | 2017-09-12 | Level 3 Communications, Llc | Handling long-tail content in a content delivery network (CDN) |
US10924573B2 (en) | 2008-04-04 | 2021-02-16 | Level 3 Communications, Llc | Handling long-tail content in a content delivery network (CDN) |
WO2009123868A2 (en) | 2008-04-04 | 2009-10-08 | Level 3 Communications, Llc | Handling long-tail content in a content delivery network (cdn) |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
FR2721465A1 (en) * | 1994-06-15 | 1995-12-22 | Trt Telecom Radio Electr | Local area interconnection system and equipment for use in such a system. |
-
1999
- 1999-03-16 JP JP2000537343A patent/JP2002507857A/en active Pending
- 1999-03-16 AU AU30926/99A patent/AU3092699A/en not_active Abandoned
- 1999-03-16 EP EP99912580A patent/EP1062766A1/en not_active Withdrawn
- 1999-03-16 WO PCT/US1999/005731 patent/WO1999048246A1/en not_active Application Discontinuation
Non-Patent Citations (1)
Title |
---|
See references of WO9948246A1 * |
Also Published As
Publication number | Publication date |
---|---|
JP2002507857A (en) | 2002-03-12 |
WO1999048246A1 (en) | 1999-09-23 |
AU3092699A (en) | 1999-10-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP0980608B1 (en) | Multicast switching | |
US7944811B2 (en) | Multiple multicast forwarder prevention during NSF recovery of control failures in a router | |
US8953604B2 (en) | Root node redundancy for multipoint-to-multipoint transport trees | |
US9077551B2 (en) | Selection of multicast router interfaces in an L2 switch connecting end hosts and routers, which is running IGMP and PIM snooping | |
Levine et al. | Improving internet multicast with routing labels | |
US7860093B2 (en) | Fast multicast convergence at secondary designated router or designated forwarder | |
US6654371B1 (en) | Method and apparatus for forwarding multicast data by relaying IGMP group membership | |
US8879429B2 (en) | Acknowledgement-based rerouting of multicast traffic | |
US7768913B1 (en) | Delivering and receiving multicast content across a unicast network | |
US20020143951A1 (en) | Method and system for multicast to unicast bridging | |
US9106569B2 (en) | System and method that routes flows via multicast flow transport for groups | |
US20060050643A1 (en) | Router for multicast redundant routing and system for multicast redundancy | |
US8599851B2 (en) | System and method that routes flows via multicast flow transport for groups | |
US20030193958A1 (en) | Methods for providing rendezvous point router redundancy in sparse mode multicast networks | |
EP1804423A2 (en) | Method for rapidly recovering multicast service and network device | |
US7660268B2 (en) | Determining the presence of IP multicast routers | |
WO1999048246A1 (en) | Method, apparatus, and medium for minimal time multicast graft/join restoration | |
JP3824906B2 (en) | INTERNET CONNECTION METHOD, ITS DEVICE, AND INTERNET CONNECTION SYSTEM USING THE DEVICE | |
Ballardie et al. | Core Based Tree (CBT) Multicast | |
Cisco | Configuring IP Multicast Layer 3 Switching | |
CN114915588B (en) | Upstream multicast hop UMH extension for anycast deployment | |
Jia et al. | Efficient internet multicast routing using anycast path selection | |
Xylomenos et al. | IP multicasting for point-to-point local distribution | |
Nugraha et al. | Multicast communication for video broadcasting service over IPv4 network using IP option |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
17P | Request for examination filed |
Effective date: 20001011 |
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): BE CH DE ES FR GB IT LI NL |
17Q | First examination report despatched |
Effective date: 20010320 |
18D | Application deemed to be withdrawn |
Effective date: 20010731 |
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
R18D | Application deemed to be withdrawn (corrected) |
Effective date: 20010731 |
R18D | Application deemed to be withdrawn (corrected) |
Effective date: 20011002 |