CN118283539A - Multicast switching method and device, electronic equipment and storage medium - Google Patents

Multicast switching method and device, electronic equipment and storage medium

Info

Publication number
CN118283539A
Authority
CN
China
Prior art keywords
link
multicast
ipmc
resource
standby
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211657445.8A
Other languages
Chinese (zh)
Inventor
林洪冰
Current Assignee
Ruijie Networks Co Ltd
Original Assignee
Ruijie Networks Co Ltd
Application filed by Ruijie Networks Co Ltd filed Critical Ruijie Networks Co Ltd
Priority to CN202211657445.8A
Publication of CN118283539A


Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application discloses a multicast switching method, an apparatus, an electronic device and a storage medium, belonging to the field of communications technology. In the method, a network device with the PIM FRR function enabled receives multicast data of a multicast source over both a primary link and a standby link. When the primary link fails, the data forwarding plane immediately switches to the standby link and begins sending the multicast data received over it, while the control plane promotes the standby link to the new primary link, instead of forwarding the standby link's data only after the control plane and the data forwarding plane have completed switching in sequence. This greatly reduces multicast data loss during multicast switching.

Description

Multicast switching method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of communications technologies, and in particular, to a multicast switching method, an apparatus, an electronic device, and a storage medium.
Background
As a communication mode parallel to unicast (Unicast) and broadcast (Broadcast), multicast (Multicast) technology efficiently delivers data from a single sender to multiple receivers, realizing efficient point-to-multipoint data transmission.
Current network devices rely on unicast routing to establish multicast forwarding entries. When a link or node in the network fails, the multicast entries can only be rebuilt after unicast routing has reconverged: control-plane unicast convergence takes time, multicast entries are added and deleted only after the control-plane protocol converges, and the data forwarding plane then takes additional time to install the entries. The whole process is lengthy and causes excessive multicast data loss.
Therefore, the prior art suffers from excessive multicast data loss during multicast switching.
Disclosure of Invention
The embodiments of the present application provide a multicast switching method, an apparatus, an electronic device and a storage medium, to solve the prior-art problem of excessive multicast data loss during multicast switching.
In a first aspect, an embodiment of the present application provides a multicast switching method, including:
after establishing a Protocol Independent Multicast Fast Reroute (PIM FRR) multicast table entry for a multicast source, a network device receives multicast data of the multicast source over a primary link and a standby link, respectively;
if the primary link is determined to have failed, switching to the standby link at the data forwarding plane to forward the multicast data received over the standby link, and switching the standby link to be the new primary link at the control plane, so as to update the PIM FRR multicast table entry to a PIM multicast table entry.
In some embodiments, after determining that the primary link has failed, the method further includes:
when the failure type of the primary link is determined to be a physical link failure, performing port flap damping to attempt to clear the physical link failure of the primary link;
if it is determined that the physical link failure of the primary link cannot be cleared, switching to the standby link at the data forwarding plane to forward the multicast data received over the standby link.
In some embodiments, the multicast data received over the standby link is forwarded according to the following steps:
switching the standby link from its corresponding non-forwardable first IPMC resource to a forwardable second IPMC resource, and switching the primary link from its corresponding forwardable third IPMC resource to the non-forwardable first IPMC resource;
submitting the multicast data received over the standby link to the forwardable egress pointed to by the second IPMC resource, so that it is sent out through that egress, and submitting the multicast data received over the primary link to the non-forwardable egress pointed to by the first IPMC resource, so that it is discarded.
In some embodiments, after the data forwarding plane switches to the standby link, the method further includes:
if it is determined that the original primary link has recovered from the failure, switching the original primary link from its corresponding non-forwardable first IPMC resource to the forwardable third IPMC resource, and switching the original standby link from its corresponding forwardable second IPMC resource to the non-forwardable first IPMC resource;
submitting the multicast data received over the original primary link to the forwardable egress pointed to by the third IPMC resource, so that it is sent out through that egress, and submitting the multicast data received over the original standby link to the non-forwardable egress pointed to by the first IPMC resource, so that it is discarded.
In some embodiments, each time a PIM FRR multicast table entry is established, the detection parameters required for remote link failure detection are set automatically based on the link information of the primary link in that entry.
In a second aspect, an embodiment of the present application provides a multicast switching device, including:
a receiving module, configured to receive multicast data of a multicast source over a primary link and a standby link, respectively, after a Protocol Independent Multicast Fast Reroute (PIM FRR) multicast table entry for the multicast source has been established;
a switching module, configured to, if the primary link fails, switch to the standby link at the data forwarding plane to forward the multicast data received over the standby link, and switch the standby link to be the new primary link at the control plane, so as to update the PIM FRR multicast table entry to a PIM multicast table entry.
In some embodiments, the device further includes:
a failure handling module, configured to, after it is determined that the primary link has failed and the failure type is a physical link failure, perform port flap damping to attempt to clear the physical link failure of the primary link;
the switching module is further configured to, if it is determined that the physical link failure of the primary link cannot be cleared, switch to the standby link at the data forwarding plane to forward the multicast data received over the standby link.
In some embodiments, the switching module is specifically configured to forward the multicast data received over the standby link according to the following steps:
switching the standby link from its corresponding non-forwardable first IPMC resource to a forwardable second IPMC resource, and switching the primary link from its corresponding forwardable third IPMC resource to the non-forwardable first IPMC resource;
submitting the multicast data received over the standby link to the forwardable egress pointed to by the second IPMC resource, so that it is sent out through that egress, and submitting the multicast data received over the primary link to the non-forwardable egress pointed to by the first IPMC resource, so that it is discarded.
In some embodiments, the device further comprises a recovery module configured to:
after the data forwarding plane has switched to the standby link, if it is determined that the original primary link has recovered from the failure, switch the original primary link from its corresponding non-forwardable first IPMC resource to the forwardable third IPMC resource, and switch the original standby link from its corresponding forwardable second IPMC resource to the non-forwardable first IPMC resource;
submit the multicast data received over the original primary link to the forwardable egress pointed to by the third IPMC resource, so that it is sent out through that egress, and submit the multicast data received over the original standby link to the non-forwardable egress pointed to by the first IPMC resource, so that it is discarded.
In some embodiments, each time a PIM FRR multicast table entry is established, the detection parameters required for remote link failure detection are set automatically based on the link information of the primary link in that entry.
In a third aspect, an embodiment of the present application provides an electronic device, including: at least one processor, and a memory communicatively coupled to the at least one processor, wherein:
the memory stores a computer program executable by the at least one processor, so that the at least one processor can perform the multicast switching method described above.
In a fourth aspect, an embodiment of the present application provides a storage medium storing a computer program which, when executed by a processor of an electronic device, enables the electronic device to perform the multicast switching method described above.
In the embodiments of the present application, a network device with the PIM FRR function enabled receives multicast data of a multicast source over both a primary link and a standby link. When the primary link fails, the data forwarding plane immediately switches to the standby link and begins sending the multicast data received over it, while the control plane promotes the standby link to the new primary link, instead of forwarding the standby link's data only after the control plane and the data forwarding plane have completed switching in sequence, so multicast data loss during multicast switching can be greatly reduced.
Drawings
The accompanying drawings, which are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiments of the application and together with the description serve to explain the application and do not constitute a limitation on the application. In the drawings:
Fig. 1 is a schematic diagram of the relationship between the primary/standby links and IPMC resources when the primary link is normal according to an embodiment of the present application;
Fig. 2 is a schematic diagram of the relationship between the primary/standby links and IPMC resources when the primary link fails according to an embodiment of the present application;
Fig. 3 is a schematic diagram of the switching procedure upon a physical link failure according to an embodiment of the present application;
Fig. 4 is a schematic diagram of the switching procedure upon a remote link failure according to an embodiment of the present application;
Fig. 5 is a schematic diagram of a multicast topology according to an embodiment of the present application;
Fig. 6 is a schematic diagram of the PIM FRR multicast data forwarding process according to an embodiment of the present application;
Fig. 7 is a schematic diagram of the switching procedure upon a local link failure according to an embodiment of the present application;
Fig. 8 is a schematic diagram of the switching procedure upon a remote link failure according to an embodiment of the present application;
Fig. 9 is a schematic diagram of the revert procedure upon recovery of a local failure of the primary link according to an embodiment of the present application;
Fig. 10 is a flowchart of a multicast switching method according to an embodiment of the present application;
Fig. 11 is a schematic structural diagram of a multicast switching apparatus according to an embodiment of the present application;
Fig. 12 is a schematic diagram of the hardware structure of an electronic device for implementing the multicast switching method according to an embodiment of the present application.
Detailed Description
In order to solve the problem of excessive multicast data loss during multicast switching in the prior art, the embodiment of the application provides a multicast switching method, a device, electronic equipment and a storage medium.
The preferred embodiments of the present application are described below with reference to the accompanying drawings. It should be understood that the preferred embodiments described herein are for illustration and explanation only and do not limit the present application, and that the embodiments of the present application and the features therein may be combined with each other where no conflict arises.
To facilitate understanding of the present application, the following technical terms are introduced:
1. Protocol Independent Multicast (PIM), an intra-domain multicast routing protocol. The multicast source sends a message to the group address; the message is forwarded hop by hop through the network devices and finally reaches the group members. On a three-layer network device, multicast routing table entries can be created and maintained using PIM to support multicast forwarding.
2. A multicast message is transmitted from one point to multiple points, and the forwarding path forms a tree structure. This forwarding path is called a multicast distribution tree (Multicast Distribution Tree, MDT), of which there are two types:
RPT (RP Tree, shared tree): rooted at the rendezvous point RP (Rendezvous Point), with the designated routers DR (Designated Router) connecting the group members as leaves.
SPT (Shortest Path Tree): rooted at the designated router DR connected to the multicast source, with the rendezvous point RP or the designated routers DR connecting the group members as leaves.
3. Data Link Detection Protocol (DLDP), a fast detection protocol for Ethernet link failures. The general Ethernet link detection mechanism detects link connectivity only from the state of the physical connection, through physical-layer auto-negotiation. For three-layer data connectivity detection, however, it has limitations, for example in the scenario where the physical connection state is normal but three-layer data communication is abnormal. By employing DLDP, reliable three-layer link detection information can be provided. Moreover, after detecting a failure, DLDP can close the logical state of the three-layer interface, prompting rapid convergence of three-layer protocols.
1) Detection mode
DLDP detection modes include: active mode and passive mode, wherein:
active mode: the mode of actively transmitting the ICMP detection message is defined as an active mode by default.
Passive mode: refers to a mode of passively receiving ICMP probe messages. That is, the DLDP component does not actively initiate an ICMP Echo message to detect, and only needs to respond ICMP REPLY message after receiving the ICMP Echo message, and judges whether the interface has a path fault by judging whether the ICMP Echo message is received or not in a specified time. The effect of detecting the link paths by the two devices is met, and meanwhile, bandwidth resources and device CPU resources are saved.
2) Cross-network segment detection
If the DLDP component needs to detect the reachability of a non-directly-connected segment IP, the next-hop IP of the interface can be configured so that the DLDP component can obtain the next-hop Media Access Control (MAC) address through Address Resolution Protocol (ARP) messages, and correctly encapsulate and send the ICMP message. In this case, however, the response message must be prevented from returning over another link; otherwise the DLDP component would wrongly judge that the interface received no ICMP reply.
3) Detection time
When the network device receives no response message from the peer within a period of 'detection interval' x 'retransmission count', the three-layer link is considered failed, and the logical state of the three-layer interface is actively closed (shutdown), regardless of whether the physical link is actually connected. Once three-layer communication recovers, the logical state of the three-layer interface is restored to the UP state.
Detection interval: the transmission interval of DLDP detection messages (ICMP Echo).
Retransmission count: the number of transmissions required before DLDP declares a detection failure.
4) Number of recovery times
If the detected link is unstable, for example ping fails three times, succeeds once, and then fails several more times, judging by single-probe results would make the DLDP detection result go UP and DOWN repeatedly, exacerbating network instability.
The recovery count indicates how many consecutive DLDP detection message responses must be received before the link is set from the DOWN state back to the UP state. The default recovery count is 3, i.e. the link is set to the UP state only after 3 consecutive successful pings. Although this reduces the sensitivity of link detection, it increases stability, and in practical applications the parameters can be adjusted to the network conditions.
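The detection-time and recovery-count logic above can be sketched as a small state machine. The following Python sketch uses hypothetical names (it is not code from the application): the link is declared DOWN only after 'retransmission count' consecutive missed replies and restored to UP only after 'recovery count' consecutive successful ones, so a flapping link does not oscillate.

```python
# Illustrative DLDP-style link state machine (hypothetical names,
# not the patent's implementation).

class DldpLinkState:
    """Tracks a three-layer link as UP/DOWN from ICMP echo results."""

    def __init__(self, retransmit_count=3, recovery_count=3):
        self.retransmit_count = retransmit_count  # consecutive misses before DOWN
        self.recovery_count = recovery_count      # consecutive hits before UP again
        self.state = "UP"
        self._misses = 0
        self._hits = 0

    def on_probe_result(self, reply_received: bool) -> str:
        if reply_received:
            self._misses = 0
            if self.state == "DOWN":
                self._hits += 1
                if self._hits >= self.recovery_count:
                    self.state = "UP"    # restored only after N straight replies
                    self._hits = 0
        else:
            self._hits = 0
            if self.state == "UP":
                self._misses += 1
                if self._misses >= self.retransmit_count:
                    self.state = "DOWN"  # declared after interval x retries elapses
                    self._misses = 0
        return self.state
```

With a 1 s detection interval and the default counts of 3, a failure is declared roughly 3 s after the last reply; a link that alternates between two misses and one hit never accumulates 3 consecutive misses, so its state stays UP instead of flapping.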
5) Ignoring ICMP detected link failures
After the DLDP function is enabled, the DLDP agent sends ICMP Echo messages for path detection and returns the detection state to the DLDP component; by default, if a link failure is detected, the DLDP component actively shuts down the corresponding three-layer port.
In some application scenarios, the DLDP agent is required to advertise the detected state to other functional modules, and the DLDP component must not shut down the three-layer port even if a state anomaly is detected. In this scenario, the DLDP component ignores the transition of the three-layer link state from UP to DOWN, but it still does not ignore the transition from DOWN to UP.
4. Control plane: controls and manages the operation of all network protocols, such as the spanning tree protocol, VLAN protocols, the ARP protocol, various routing protocols, and multicast protocols. Through these network protocols, the control plane gives the router/switch accurate knowledge of the network devices, links and interaction protocols in the overall network environment, and makes timely adjustments to keep the network operating normally when conditions change. Before data is processed and forwarded, the control plane provides the data forwarding plane with the necessary network information and forwarding lookup table entries.
5. Data forwarding plane: processes information mainly with hardware resources. The basic task of a network device is to process and forward various types of data on different ports; the specific processing and forwarding procedures, such as the execution of L2/L3/ACL/QoS/multicast/security-protection functions, belong to the data forwarding plane.
6. Route convergence: the state in which all routers in a routing domain agree on the current network structure and route forwarding. When the topology of the network changes, all relevant routers learn of the change and adjust accordingly.
In the related art, when a link or node in the network fails, unicast routing must reconverge before the re-establishment of multicast table entries is triggered. This process takes a long time and causes excessive multicast data loss: actual measurement shows that switching 1000 table entries from the primary link to a standby link takes 17 s, which cannot meet the stringent requirement of convergence within 3 s.
Multicast control-plane protocol convergence depends on unicast convergence and can only proceed after it, so unicast convergence time is incurred. After the control-plane protocol converges, the multicast table entries are issued, added and deleted, incurring the time the data forwarding plane needs to process them. The main time costs of the whole process are: 1. link failure detection; 2. unicast route convergence; 3. multicast route convergence; 4. multicast forwarding table installation. Multicast data is lost from step 1 through step 4.
Therefore, the embodiments of the present application provide a multicast switching method to reduce multicast packet loss when a link fails. In this method, a network device such as a switch or router supports the PIM FRR function: PIM FRR maintains multicast table entries for both the primary link and a standby link. While the primary link is healthy, multicast data from the primary link is sent and multicast data from the standby link is discarded; when the primary link fails, the multicast data from the standby link can be forwarded immediately, without re-establishing the multicast table entries, which shortens the interruption of the multicast data.
PIM FRR advantage: the primary and standby links are installed in advance, so when the primary link fails the device switches quickly to the standby link without waiting for protocol convergence.
The main time costs of the whole process are then: 1. link failure detection; 2. multicast fast switch; 3. unicast route convergence; 4. multicast route convergence; 5. multicast forwarding table installation. Multicast data is lost only from step 1 through step 2.
To achieve fast switching with little packet loss, the multicast groups can be maintained by means of IP Multicast (IPMC) resources (the fast switch is performed in software at the data forwarding plane and is applicable to all chips).
Specifically, one IPMC resource is reserved as a non-forwardable IPMC without any egress. Assume the source ports of the primary and standby links correspond to the multicast routes (S, G, V1) and (S, G, V2), respectively. After the PIM FRR function is enabled, a PIM FRR multicast table entry is established on the network device, where (S, G, V1) is the primary-link multicast route prefix, (S, G, V2) is the standby-link multicast route prefix, IPMC1 + PORTi (i between 1 and m) is the primary-link egress information, IPMC2 + PORTj (j between 1 and n) is the standby-link egress information, and IPMC_DROP is the IPMC resource without egress information, pointing to a non-forwardable empty egress. Here S is the multicast source IP, G is the multicast group IP, and V is the source-port VLAN; the source address SIP and group address GIP of the primary and standby multicast groups are identical, while the source-port VLANs of the two links differ, and VLAN and ifx (the device port number) map one-to-one.
Referring to fig. 1, when the primary link is normal, the primary-link multicast route prefix (S, G, V1) corresponds to the forwardable IPMC1 resource (i.e. the third IPMC resource), and the standby-link multicast route prefix (S, G, V2) corresponds to the non-forwardable IPMC_DROP resource (i.e. the first IPMC resource). The network device receives multicast data on both links: the multicast data from the primary link is submitted to the forwardable egress PORTi pointed to by the IPMC1 resource and forwarded through PORTi, while the multicast data from the standby link is submitted to the non-forwardable empty egress pointed to by the IPMC_DROP resource and thereby discarded.
Referring to fig. 2, when a failure of the primary link is detected, the data forwarding plane performs the fast switch merely by changing the correspondence between the multicast prefixes and the IPMC resources: the primary-link multicast route prefix (S, G, V1) is changed from the forwardable IPMC1 resource to the non-forwardable IPMC_DROP resource, and the standby-link multicast route prefix (S, G, V2) is changed from the non-forwardable IPMC_DROP resource to the forwardable IPMC2 resource (i.e. the second IPMC resource). Subsequently, the multicast data received over the standby link is submitted to the forwardable egress PORTj pointed to by the IPMC2 resource and sent out through PORTj, while any multicast data still received over the primary link is submitted to the non-forwardable empty egress pointed to by the IPMC_DROP resource and discarded.
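The prefix-to-IPMC remapping described above can be sketched as follows. This is a minimal illustration with hypothetical names, not the device's actual code; the point is that the fast switch and the revert each re-point two bindings and never add or delete a table entry.

```python
# Illustrative sketch of the IPMC fast switch (hypothetical names).

# Reserved non-forwardable resource with no egress ports (first IPMC resource).
IPMC_DROP = ("IPMC_DROP", [])

class IpmcTable:
    def __init__(self, primary_prefix, standby_prefix, ipmc1_ports, ipmc2_ports):
        self.ipmc1 = ("IPMC1", ipmc1_ports)  # forwardable, primary egress (third)
        self.ipmc2 = ("IPMC2", ipmc2_ports)  # forwardable, standby egress (second)
        self.primary, self.standby = primary_prefix, standby_prefix
        # Normal state (fig. 1): primary forwards, standby traffic is dropped.
        self.binding = {primary_prefix: self.ipmc1, standby_prefix: IPMC_DROP}

    def fast_switch(self):
        """Primary failed (fig. 2): drop primary traffic, forward standby."""
        self.binding[self.primary] = IPMC_DROP
        self.binding[self.standby] = self.ipmc2

    def revert(self):
        """Primary recovered: forward primary traffic, drop standby traffic."""
        self.binding[self.primary] = self.ipmc1
        self.binding[self.standby] = IPMC_DROP

    def egress(self, prefix):
        """Ports the packet is submitted to; an empty list means discard."""
        return self.binding[prefix][1]
```

With (S, G, V1)/(S, G, V2) bound as in fig. 1, calling fast_switch() reproduces the state of fig. 2, and revert() restores the fig. 1 state after the primary link recovers.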
During the fast switch, the data forwarding plane only changes the forwarding state of the links; it does not delete the multicast table entry. After the control plane converges, the data forwarding plane and the control plane are brought back into consistency. In addition, the fast switch is not performed when the multicast group has only one link, when the failed link is the standby link, or for the standby link when the primary link is already in a failed state.
It should be noted that the prefixes of both the primary and standby links are installed in the hardware of the network device (e.g. an ASIC chip); otherwise, the multicast data of the standby link would be sent to the CPU as unknown multicast.
In practical applications, link failures are divided into local link failures and remote link failures; the multicast switching processes for these two failure types are described below.
1. Local link failure
The port component of the network device is responsible for local link detection. The link failure information of a port is announced directly by the control plane port component and the data forwarding plane port component to the data forwarding plane multicast component, which performs the fast switch for all multicast groups whose source port ifx is that port.
Local link failures include physical link failures and logical link failures. When a physical link fails, a logical link failure also occurs; when a logical link fails, the physical link has not necessarily failed. The two failure cases are described separately below.
Referring to fig. 3, the handover procedure upon physical link failure includes the steps of:
1.1. The ASIC chip senses that the physical link has failed and announces the physical link failure information to the data forwarding plane interface component;
1.2. the data forwarding plane interface component receives the physical link failure information and performs port flap damping; if the failure cannot be cleared after this processing, it notifies the data forwarding plane multicast component of the failure information;
1.3. after receiving the failure information, the data forwarding plane multicast component performs the fast switch and switches to sending the multicast data from the standby link.
2.1. The control plane interface component senses a logical link failure (a shutdown configured on the interface, a routing protocol being disabled, a link failure computed by a control-plane protocol, etc.) and announces the logical link failure information to the data forwarding plane multicast component;
2.2. after receiving the failure information, the data forwarding plane multicast component performs the fast switch and switches to sending the multicast data from the standby link.
3.1. The control plane interface component senses the logical link failure (a shutdown configured on the interface, a routing protocol being disabled, a link failure computed by a control-plane protocol, etc.), announces the logical link failure information to the control plane unicast component, and triggers it to perform unicast route calculation (3.1 and 2.1 are performed simultaneously);
3.2. after the unicast route calculation is completed (i.e. unicast routing has converged), the control plane unicast component notifies the control plane multicast component that the unicast route of the multicast source IP address has changed and triggers it to perform multicast route calculation;
3.3. after the multicast route calculation is completed (i.e. multicast routing has converged), the control plane multicast component notifies the data forwarding plane multicast component that multicast routing has converged and the standby link is promoted to be the new primary link;
3.4. the data forwarding plane multicast component installs the new multicast table entry on the ASIC chip, leaving only the primary link (the original standby link).
In the above flow, steps 1.3 and 2.2 are identical and are performed only once.
The switching procedure upon a logical link failure omits steps 1.1-1.3 in fig. 3; the remaining steps are the same and are not repeated here.
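The ordering of the steps above is what limits packet loss to the detection-plus-fast-switch window: the data forwarding plane fast-switches as soon as the failure is announced (step 1.3/2.2), while control-plane convergence (steps 3.1-3.4) finishes later and only then rewrites the table entry. A minimal sketch of that ordering (hypothetical names, illustrative only):

```python
# Illustrative ordering of local-failure handling (hypothetical names,
# not the device's code).

def handle_local_failure(failure_type, damping_clears_fault=False):
    """Returns the ordered actions taken for a local link failure."""
    actions = []
    if failure_type == "physical":
        actions.append("port damping")          # step 1.2
        if damping_clears_fault:
            return actions                      # fault cleared, no switch needed
    actions.append("fast switch to standby")    # step 1.3 / 2.2 (performed once)
    # Control-plane convergence runs in parallel and finishes afterwards:
    actions += ["unicast convergence",          # steps 3.1-3.2
                "multicast convergence",        # step 3.3
                "install new entry (standby is new primary)"]  # step 3.4
    return actions
```

Note how for a logical failure the fast switch is the very first action, matching the statement that steps 1.1-1.3 are omitted in that case.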
It should be noted that the ASIC chip is a chip inside the network device, and the data forwarding plane interface component, data forwarding plane multicast component, control plane interface component, control plane unicast component and control plane multicast component are all pre-installed in the network device.
2. Remote link failure
Remote link failure detection uses DLDP, which supports both same-segment and cross-segment detection. DLDP can be configured with multiple IPs; when none of the IPs yields an ICMP response, the interface is determined to be in the DOWN state, and once any IP resumes communication, the interface is considered to be back in the UP state. The DLDP function is configured under the interface, with parameters such as the next-hop IP, MAC address, sending interval, retransmission count and recovery count chosen according to the actual environment.
The interface of the main link is configured with the DLDP function to detect link connectivity to the multicast source sip, i.e., whether the multicast source IP is reachable through the source port ifx. When the main link fails, DLDP directly announces the state to the data forwarding plane multicast component, which performs a fast switchover for all multicast groups matching ifx and sip.
Fig. 4 is a schematic diagram of a multicast switching flow when a remote link fails, including the following steps:
4.1, the control plane DLDP component senses the logical link failure at the remote end and notifies the data forwarding plane DLDP component.
4.2, the data forwarding plane DLDP component receives the link fault information and passes it to the data forwarding plane multicast component.
4.3, the data forwarding plane multicast component receives the link fault information, performs a fast switchover, and switches to forwarding the multicast data received from the backup link.
Meanwhile, control plane switching is performed, which comprises the steps of 5.1-5.4:
5.1, the control plane DLDP component senses the logical link fault of the remote link (e.g., a shutdown configured under an interface, a routing protocol being disabled, or a link fault computed by the control plane protocol), notifies the control plane unicast component, and triggers it to perform unicast route calculation (5.1 and 4.1 are performed simultaneously).
5.2, after the control plane unicast component completes its calculation (i.e., unicast route convergence), it notifies the control plane multicast component that the unicast route of the multicast source IP address has changed, triggering the control plane multicast component to perform multicast route calculation.
5.3, after the control plane multicast component completes its calculation (i.e., multicast route convergence), it computes a new multicast table entry according to the unicast route and notifies the data forwarding plane multicast component of the new entry, in which the backup link is promoted to the new main link.
5.4, the data forwarding plane multicast component installs the new multicast table entry on the ASIC chip, leaving only one main link (the original backup link).
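The key property of the flow in fig. 4 is that the data forwarding plane fast-cut (steps 4.1-4.3) restores traffic immediately, while control plane convergence (steps 5.1-5.4) completes afterwards in the background. A minimal sketch under that assumption, with illustrative names:

```python
events = []  # records the order in which the two planes act

def data_plane_fast_cut(entry):
    # Steps 4.1-4.3: start forwarding from the backup incoming interface at once,
    # without waiting for any route recalculation.
    entry["forwarding_iif"] = entry["backup_iif"]
    events.append("fast_cut")

def control_plane_converge(entry):
    # Steps 5.2-5.4: unicast/multicast reconvergence promotes the backup link
    # to the new primary and collapses the entry to a single-link entry.
    entry["primary_iif"] = entry["backup_iif"]
    entry["backup_iif"] = None
    events.append("converged")

entry = {"primary_iif": "Te0/4", "backup_iif": "Te0/3", "forwarding_iif": "Te0/4"}
data_plane_fast_cut(entry)       # traffic is already restored here...
control_plane_converge(entry)    # ...before control-plane convergence finishes
```

Because the fast cut happens first, multicast loss is bounded by the data-plane switchover time rather than by route convergence.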
In addition, each time a PIM FRR multicast table entry is established, the control plane multicast component may establish a DLDP session in the control plane DLDP component based on the link information of the current main link, and may delete the DLDP session from the control plane DLDP component when that main link fails.
It should be noted that the ASIC chip is an internal chip of the network device, and the control plane DLDP component, the data forwarding plane DLDP component, the data forwarding plane multicast component, and the control plane multicast component are all pre-installed in the network device.
The following describes embodiments of the present application in connection with specific embodiments.
Fig. 5 is a schematic diagram of a topology structure of multicast according to an embodiment of the present application, including a multicast source, a receiver, and network devices R1, R2, R3, R4 located between the multicast source and the receiver.
Typical scenarios are:
1. PIM FRR multicast data forwarding: if R4 in the topology enables the PIM FRR function, R4 sends primary and backup joins toward the multicast source according to the unicast backup FRR route and establishes primary and backup multicast forwarding table entries, so that R4 receives one copy of multicast data from each of the main and backup links; the data forwarding plane selects the multicast data from the main link for forwarding and discards the multicast data from the backup link.
2. Local link failure detection and protection: if the main link R3->R4 in the topology fails, R4 quickly detects the local interface link fault and immediately selects the multicast data from the backup link for forwarding, reducing the interruption time of the multicast data.
3. Remote link failure detection and protection: if the main link R1->R3 in the topology fails, R4 quickly detects the remote link fault and immediately selects the multicast data from the backup link for forwarding, reducing the interruption time of the multicast data.
4. Multicast data switch-back after main link failure recovery: when the main link R1->R3->R4 in the topology recovers, the PIM protocol layer senses the route change, starts a switch-back flow, and smoothly converges onto the currently optimal main path.
These several scenarios are each described in detail below.
1. PIM FRR multicast data forwarding
PIM FRR forwarding means that, for an (S, G) table entry, a backup route (unicast backup FRR route or ECMP route) is looked up; if one exists, a backup incoming interface is added according to the backup route, a PIM join message is sent out the backup incoming interface, and primary/backup FRR multicast forwarding table entries are established. When the network device receives the two copies of multicast data from the main and backup links, the data forwarding plane forwards the multicast data from the main link and discards the multicast data from the backup link.
Referring to fig. 6, PIM FRR multicast data forwarding includes the following steps:
1) The user starts the three-layer multicast function on R1, R2, R3 and R4, and starts the unicast FRR function, PIM SPT function and PIM FRR function on R4.
2) The receiver sends an IGMP join message to R4 requesting to receive multicast data of multicast group G of multicast source S. And the multicast source S sends the multicast data stream with the multicast source S and the multicast group G to the R1.
3) After receiving the IGMP message, R4 looks up the route to multicast source S, and if the primary next-hop outgoing interface is Te0/4, sends the PIM join message out Te0/4.
4) After receiving the PIM join message, R3 looks up the route to multicast source S, and if the primary next-hop outgoing interface is Te0/3, sends the PIM join message out Te0/3.
5) After receiving the PIM join message, R1 creates a multicast table entry (S, G) with egress Te0/3 and forwards the multicast stream to egress Te0/3.
6) After receiving the multicast stream, R3 creates a multicast table entry (S, G) with egress Te0/4 and forwards the multicast stream to egress Te0/4.
7) R4 receives the multicast stream, performs SPT switchover, and generates a multicast table entry (S, G) with primary incoming interface Te0/4 and egress Te0/5, forwarding the multicast stream to egress Te0/5.
8) R4 determines whether the PIM FRR function is enabled; if so, it looks up whether multicast source S has a backup FRR route or ECMP route, and if one exists, jumps to step 9; otherwise the flow ends.
9) R4 finds that multicast source S has a backup FRR route or ECMP route with backup incoming interface Te0/3, so it sends a PIM join message out the backup incoming interface Te0/3, generates a PIM FRR multicast table entry (S, G), adds the backup incoming interface to the PIM forwarding table, and enables link detection toward multicast source S on the main link.
10) Similarly to steps 3) and 4), R2 receives the PIM join message and forwards it to R1; R1 adds the egress Te0/2 to its (S, G) table entry after receiving the PIM join message, and R2 adds the egress Te0/3 to its (S, G) table entry after receiving the multicast stream. The establishment of the main and backup multicast table entries is thus complete, and R4 receives the two copies of multicast data.
11) R4 forwards the multicast data from the primary incoming interface and discards the multicast data from the backup incoming interface.
That is, the multicast data is forwarded in the manner shown in fig. 1, where V1 is the primary incoming interface, V2 is the backup incoming interface, i of IPMC1_PORTi is 1, and the single IPMC1_PORT is the egress.
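Steps 8) and 9) above can be sketched as a small routine: after SPT switchover, the device checks for a backup FRR/ECMP route toward the source and, if one exists, adds a backup incoming interface and sends a PIM join on it. The route-table layout and function names below are assumptions for illustration only.

```python
def build_pim_frr_entry(sg, primary_iif, backup_routes, send_join):
    """sg is the (S, G) pair; backup_routes maps source S -> backup incoming
    interface if a backup FRR/ECMP route exists. send_join is called with
    the backup interface to emit the PIM join toward the backup path."""
    entry = {"sg": sg, "primary_iif": primary_iif, "backup_iif": None}
    backup_iif = backup_routes.get(sg[0])   # step 8: look up a backup route for S
    if backup_iif is not None:
        entry["backup_iif"] = backup_iif    # step 9: add the backup incoming interface
        send_join(backup_iif)               # step 9: PIM join out the backup interface
    return entry

joins = []
entry = build_pim_frr_entry(("S", "G"), "Te0/4",
                            backup_routes={"S": "Te0/3"},
                            send_join=joins.append)
# No backup route: the flow ends with a normal single-link entry
entry_no_backup = build_pim_frr_entry(("S2", "G"), "Te0/1",
                                      backup_routes={}, send_join=joins.append)
```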
2. Local link failure detection and protection
After the primary and backup FRR multicast table entries are established, the network device with primary and backup routes receives two copies of the multicast data. The data forwarding plane selects the multicast data from the main link for forwarding and discards the multicast data from the backup link. By detecting the link state of the primary incoming interface, a local failure of the main link can be quickly perceived, and the multicast data from the backup link is immediately selected for forwarding, reducing the interruption time of the multicast data.
Referring to fig. 7, taking the example of the local link R3- > R4 failure after the PIM FRR multicast table entry has been created, the local link failure process includes the steps of:
1) Initially, the PIM FRR multicast table entry on R4 has been established; R4 receives the two copies of multicast data simultaneously and selects only the multicast data from the main link to forward to the receiver.
2) When the R3->R4 link fails, R4 detects the failure of the primary incoming interface of the PIM FRR multicast table entry; the data forwarding plane immediately performs a fast switchover and selects the multicast data from the backup incoming interface for forwarding (i.e., steps 2.1-2.2 or steps 1.1-1.3 in fig. 3 are performed).
That is, the multicast data is forwarded in the manner shown in fig. 2, where V1 is the primary incoming interface, V2 is the backup incoming interface, i of IPMC2_PORTi is 1, and the single IPMC2_PORT is the egress of the backup link.
3) The PIM protocol component in R4 detects the main link DOWN, computes the converged multicast table entry, and updates it to the data forwarding plane (i.e., performs steps 3.1-3.4 in fig. 3); at this point the entry is updated from a PIM FRR multicast table entry to a normal multicast table entry (i.e., a PIM multicast table entry), and the primary incoming interface is updated to Te0/3.
4) R3 detects its egress link DOWN; since no other device receives the (S, G) multicast stream, it deletes the (S, G) table entry and sends a PIM prune message to the upstream R1.
5) After receiving the PIM prune message, R1 deletes the egress Te0/3 of its (S, G) table entry.
3. Remote link failure detection and protection
After the primary and backup FRR multicast table entries are established, the network device with primary and backup routes receives two copies of the multicast data. The data forwarding plane selects the multicast data from the main link for forwarding and discards the multicast data from the backup link. By detecting whether the multicast source of the main link is reachable, a remote failure of the main link can be quickly perceived, and the multicast data from the backup link is immediately selected for forwarding, reducing the interruption time of the multicast data.
Referring to fig. 8, taking as an example a remote link R1- > R3 failure after the PIM FRR multicast table entry has been created, the remote link failure process includes the steps of:
1) Initially, the PIM FRR multicast table entry on R4 has been established; R4 receives the two copies of multicast data simultaneously, selects only the multicast data from the main link to forward to the receiver, and discards the multicast data from the backup link.
2) After the establishment of the PIM FRR multicast table entry is completed, R4 enables link detection toward multicast source S on the main link. When the R1->R3 link fails, the link fault of the primary incoming interface of the PIM FRR multicast table entry is detected, the data forwarding plane immediately performs a fast switchover, and the multicast data from the backup incoming interface is selected for forwarding (i.e., steps 4.1-4.3 in fig. 4 are performed).
That is, the multicast data is forwarded in the manner shown in fig. 2, where V1 is the primary incoming interface, V2 is the backup incoming interface, i of IPMC2_PORTi is 1, and the single IPMC2_PORT is the egress of the backup link.
3) After the unicast route from R4 to the multicast source changes, the PIM protocol component in R4 detects the unicast route change, sends a PIM prune message to R3, computes the converged multicast table entry, and updates it to the data forwarding plane. At this point, the entry is updated from an FRR multicast table entry to a normal multicast table entry, the primary incoming interface is updated to Te0/3, and R4 disables link detection toward the multicast source (i.e., steps 5.1-5.4 in fig. 4 are performed).
4) R3 deletes the (S, G) table entry after detecting that its ingress link toward the multicast source is DOWN. If R3 then receives the PIM prune message after the table entry has been deleted, the prune message is not processed.
5) After receiving the PIM prune message, R1 deletes the egress Te0/3 of its (S, G) table entry.
4. Multicast data back-cut after main link failure recovery
When the main link fault recovers, the PIM protocol component in R4 senses the route change, starts the multicast table entry switch-back flow, and smoothly converges onto the currently optimal main path.
Referring to fig. 9, taking local fault recovery of the main link as an example (the remote fault recovery flow is similar), the table entry switch-back procedure includes the following steps:
1) After the fault recovers, the unicast route reconverges, the unicast component announces that the route to multicast source S has changed, and the main link is the higher-priority route.
2) And after receiving the notification of unicast route convergence, the PIM protocol component in R4 re-sends a PIM joining message to the unicast main inlet interface, so that multicast data reaches R4 from the main inlet interface.
3) And after receiving the PIM joining message, R3 finds that no (S, G) forwarding table item exists currently, and continues to send the PIM joining message to the upstream R1.
4) After receiving the PIM joining message, R1 finds the existing multicast list item (S, G), then increases the outlet Te0/3, and forwards the multicast data to the outlet Te0/3.
5) R3 creates multicast list item (S, G) after receiving multicast data, and the outlet is Te0/4, and forwards multicast data to outlet Te0/4.
6) After receiving the multicast data, R4 finds that Te0/4 is the latest Reverse Path Forwarding (RPF) interface, updates the primary incoming interface of the table entry to Te0/4 and the backup incoming interface to Te0/3, updates the table entry to the data forwarding plane, and re-enables link detection on the main link.
7) The data forwarding plane updates the normal multicast table entry back into a PIM FRR multicast table entry, forwards the multicast data from the primary incoming interface, and discards the multicast data from the backup incoming interface.
The scheme of the embodiment of the application is applicable to all chips and can be deployed on chips that do not support hardware failover. Experiments show that, for 1000 table entries completing the switchover from the main link to the backup link, the scheme shortens the switching time from 17 s to 1 s, meeting the high-convergence requirement that packet loss for 1000 groups lasts less than 3 s.
Fig. 10 is a flowchart of a multicast switching method according to an embodiment of the present application, where the method includes the following steps.
In step 1001, after the PIM FRR multicast table entry of the multicast source is established, multicast data of the multicast source is received through the primary link and the backup link, respectively.
In step 1002, if it is determined that the main link fails, the method switches to the backup link at the data forwarding plane to forward the multicast data received through the backup link, and switches the backup link to a new main link at the control plane to update the PIM FRR multicast table entry to a PIM multicast table entry.
That is, when the main link fails, the data forwarding plane and the control plane are switched in parallel, instead of waiting until the control plane is switched and then switching is performed on the data forwarding plane, so that the loss of multicast data during switching can be reduced.
In specific implementation, the failure types of the main link include the following:
first, the primary link locally experiences a logical link failure.
The fault can be detected through the control plane interface component, and when the data forwarding plane is switched to the standby link, the control plane interface component can be controlled to send the fault information of the logic link fault to the data forwarding plane multicast component, and then the data forwarding plane multicast component is controlled to forward the multicast data received through the standby link outwards after receiving the fault information.
Second, physical link failures occur locally on the primary link.
This fault can be detected by the chip. When the data forwarding plane switches to the backup link, the chip can be controlled to send the fault information of the physical link fault to the data forwarding plane interface component, which performs port oscillation suppression processing in an attempt to clear the physical link fault of the main link. After it is determined that the suppression processing cannot clear the physical link fault of the main link, the fault information of the physical link fault is sent to the data forwarding plane multicast component, which, after receiving the fault information, is controlled to forward the multicast data received through the backup link outwards.
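The port-oscillation (flap) suppression step above can be sketched as a simple damping rule: a physical DOWN is held for a damping window, and only if the port does not come back UP within the window is the failure propagated to the multicast component. The window semantics and callback name are assumptions, not from the patent.

```python
def handle_phy_down(port_recovered_within_window, notify_multicast):
    """Return True if the physical failure was propagated after damping.

    port_recovered_within_window: whether the port came back UP before the
    damping window expired (i.e., the DOWN was just a flap).
    notify_multicast: callback that delivers the fault information to the
    data forwarding plane multicast component.
    """
    if port_recovered_within_window:
        return False          # flap suppressed: the fault is considered cleared
    notify_multicast()        # damping expired: report the physical link fault
    return True

notified = []
flap_result = handle_phy_down(True, lambda: notified.append("phy_down"))
real_result = handle_phy_down(False, lambda: notified.append("phy_down"))
```

This keeps a momentary port flap from triggering an unnecessary multicast switchover, while a sustained failure still reaches the multicast component.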
Third, the primary link fails to a remote link.
The fault can be detected by the control plane DLDP component, and when the data forwarding plane is switched to the standby link, the control plane DLDP component can be controlled to send the fault information of the remote link fault to the data forwarding plane DLDP component, the data forwarding plane DLDP component can be controlled to send the fault information of the remote logic link fault to the data forwarding plane multicast component, and then the data forwarding plane multicast component can be controlled to forward the multicast data received through the standby link after receiving the fault information.
In any of the above fault cases, the data forwarding plane multicast component can be controlled to forward the multicast data received through the backup link outwards according to the following steps:
switching the backup link from its corresponding non-forwardable first IPMC resource to a forwardable second IPMC resource, and switching the main link from its corresponding forwardable third IPMC resource to the non-forwardable first IPMC resource;
and submitting the multicast data received through the backup link to the forwardable egress pointed to by the forwardable second IPMC resource, so that the multicast data received through the backup link is sent outwards through the forwardable egress, and submitting the multicast data received through the main link to the non-forwardable egress pointed to by the non-forwardable first IPMC resource, so that the multicast data received through the main link is discarded.
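The IPMC-resource swap in the two steps above can be sketched as follows: each link holds an IPMC resource, the swap moves the backup link onto a forwardable resource and the main link onto the non-forwardable one, and traffic simply follows whatever resource its link currently holds. The resource identifiers and function names are illustrative assumptions.

```python
# Illustrative resource names: one non-forwardable resource whose egress
# discards traffic, and two forwardable resources pointing at real egresses.
NON_FWD, FWD_BACKUP, FWD_PRIMARY = "ipmc1", "ipmc2", "ipmc3"

def switch_to_backup(link_resource):
    """The two steps above: backup -> forwardable, primary -> non-forwardable."""
    link_resource["backup"] = FWD_BACKUP    # backup link now points at a real egress
    link_resource["primary"] = NON_FWD      # primary link's traffic will be discarded
    return link_resource

def dispatch(packet_link, link_resource):
    # A packet is handed to the egress pointed to by its link's IPMC resource.
    return "forward" if link_resource[packet_link] != NON_FWD else "drop"

res = {"primary": FWD_PRIMARY, "backup": NON_FWD}   # normal FRR state
switch_to_backup(res)                                # main link has failed
backup_action = dispatch("backup", res)    # multicast arriving on the backup link
primary_action = dispatch("primary", res)  # multicast arriving on the failed primary
```

Because only the resource bindings change, the switchover touches two table updates per entry rather than rewriting the egress lists themselves.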
And after the data forwarding plane is switched to the standby link, if the original main link is determined to be recovered from the fault, the original main link can be switched back.
Specifically, the original main link is switched from its corresponding non-forwardable first IPMC resource to the forwardable third IPMC resource, and the original backup link is switched from the forwardable second IPMC resource to the non-forwardable first IPMC resource; then the multicast data received through the original main link is submitted to the forwardable egress pointed to by the forwardable third IPMC resource, so that it is sent outwards through the forwardable egress, and the multicast data received through the original backup link is submitted to the non-forwardable egress pointed to by the non-forwardable first IPMC resource, so that it is discarded.
In addition, it should be noted that, after switching back to the original main link, there are again two links; the PIM FRR multicast table entry is re-established, local and remote detection of the main link is enabled, and the DLDP session is re-established in the control plane DLDP component.
For better remote detection, whenever a PIM FRR multicast table entry is established (i.e., the main link is updated), the detection parameters required by the control plane DLDP component for link detection may be set automatically based on the link information of the main link in the current PIM FRR multicast table entry. In this way, the detection parameters of the control plane DLDP component change with the main link, improving the flexibility of remote detection.
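The automatic parameter setup described above can be sketched as deriving a DLDP session from the current main link's information whenever a PIM FRR entry is (re)established, so the detection target tracks main-link changes. All field names and default values below are assumptions for illustration.

```python
def make_dldp_session(primary_link, interval_ms=1000, retries=3):
    """Derive DLDP detection parameters from the main link's information
    in the current PIM FRR multicast table entry."""
    return {
        "interface": primary_link["iif"],         # detect on the primary incoming interface
        "target_ip": primary_link["source_ip"],   # probe reachability of the multicast source
        "next_hop": primary_link["next_hop"],
        "interval_ms": interval_ms,               # probe sending interval
        "retries": retries,                       # retransmission count before declaring DOWN
    }

sessions = {}
link = {"iif": "Te0/4", "source_ip": "192.0.2.1", "next_hop": "198.51.100.1"}
sessions[link["iif"]] = make_dldp_session(link)
# When the main link fails and a new entry is later re-established, the old
# session is deleted and a new one is derived from the new main link.
del sessions["Te0/4"]
```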
Based on the same technical concept, the embodiment of the application also provides a multicast switching device, and the principle of solving the problem of the multicast switching device is similar to that of the multicast switching method, so that the implementation of the multicast switching device can refer to the implementation of the multicast switching method, and the repetition is omitted.
Fig. 11 is a schematic structural diagram of a multicast switching device according to an embodiment of the present application, which includes a receiving module 1101 and a switching module 1102.
A receiving module 1101, configured to receive multicast data of a multicast source through a main link and a standby link after a protocol independent multicast fast reroute PIM FRR multicast table entry of the multicast source is established;
And a switching module 1102, configured to switch to the backup link at the data forwarding plane if it is determined that the main link fails, so as to forward the multicast data received through the backup link outwards, and switch the backup link to a new main link at the control plane, so as to update the PIM FRR multicast table entry to a PIM multicast table entry.
In some embodiments, further comprising:
the fault processing module 1103 is configured to, after determining that the main link has a fault, perform port oscillation suppression processing in an attempt to relieve the physical link fault of the main link when the fault type of the main link is determined to be a physical link fault;
and the switching module 1102 is further configured to switch to the backup link at the data forwarding plane to forward the multicast data received through the backup link outwards if it is determined that the physical link fault of the main link cannot be relieved.
In some embodiments, the switching module 1102 is specifically configured to forward the multicast data received through the backup link outwards according to the following steps:
switching the backup link from its corresponding non-forwardable first IPMC resource to a forwardable second IPMC resource, and switching the main link from its corresponding forwardable third IPMC resource to the non-forwardable first IPMC resource;
and submitting the multicast data received through the backup link to the forwardable egress pointed to by the second IPMC resource, so that it is sent outwards through the forwardable egress, and submitting the multicast data received through the main link to the non-forwardable egress pointed to by the first IPMC resource, so that it is discarded.
In some embodiments, a recovery module 1104 is also included for:
after the data forwarding plane has switched to the backup link, if it is determined that the original main link has recovered from the fault, switching the original main link from its corresponding non-forwardable first IPMC resource to the forwardable third IPMC resource, and switching the original backup link from the second IPMC resource to the non-forwardable first IPMC resource;
and submitting the multicast data received through the original main link to the forwardable egress pointed to by the third IPMC resource, so that it is sent outwards through the forwardable egress, and submitting the multicast data received through the original backup link to the non-forwardable egress pointed to by the first IPMC resource, so that it is discarded.
In some embodiments, detection parameters required for remote link failure detection are automatically set each time a PIM FRR multicast table entry is established based on link information of a primary link in the PIM FRR multicast table entry.
The division of the modules in the embodiments of the present application is merely a schematic logical function division, and there may be other division manners in actual implementation. In addition, the functional modules in the embodiments of the present application may be integrated in one processor, may exist separately and physically, or two or more modules may be integrated in one module. The coupling of the individual modules to each other may be achieved through interfaces that are typically electrical communication interfaces, though mechanical or other forms of interfaces are not excluded. Thus, the modules illustrated as separate components may or may not be physically separate, may be located in one place, or may be distributed in different locations on the same or different devices. The integrated modules may be implemented in hardware or as software functional modules.
Having described the multicast switching method and apparatus of an exemplary embodiment of the present application, next, an electronic device according to another exemplary embodiment of the present application is described.
An electronic device 130 implemented according to such an embodiment of the present application is described below with reference to fig. 12. The electronic device 130 shown in fig. 12 is merely an example and should not be construed as limiting the functionality and scope of use of embodiments of the present application.
As shown in fig. 12, the electronic device 130 is embodied in the form of a general-purpose electronic device. Components of electronic device 130 may include, but are not limited to: the at least one processor 131, the at least one memory 132, and a bus 133 connecting the various system components, including the memory 132 and the processor 131.
Bus 133 represents one or more of several types of bus structures, including a memory bus or memory controller, a peripheral bus, a processor, and a local bus using any of a variety of bus architectures.
Memory 132 may include readable media in the form of volatile memory such as Random Access Memory (RAM) 1321 and/or cache memory 1322, and may further include Read Only Memory (ROM) 1323.
Memory 132 may also include a program/utility 1325 having a set (at least one) of program modules 1324, such program modules 1324 include, but are not limited to: an operating system, one or more application programs, other program modules, and program data, each or some combination of which may include an implementation of a network environment.
The electronic device 130 may also communicate with one or more external devices 134 (e.g., keyboard, pointing device, etc.), one or more devices that enable a user to interact with the electronic device 130, and/or any device (e.g., router, modem, etc.) that enables the electronic device 130 to communicate with one or more other electronic devices. Such communication may occur through an input/output (I/O) interface 135. Also, electronic device 130 may communicate with one or more networks such as a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the Internet, through network adapter 136. As shown, network adapter 136 communicates with other modules for electronic device 130 over bus 133. It should be appreciated that although not shown, other hardware and/or software modules may be used in connection with electronic device 130, including, but not limited to: microcode, device drivers, redundant processors, external disk drive arrays, RAID systems, tape drives, data backup storage systems, and the like.
In an exemplary embodiment, a storage medium is also provided; when the computer program in the storage medium is executed by a processor of an electronic device, the electronic device is able to perform the above multicast switching method. Alternatively, the storage medium may be a non-transitory computer-readable storage medium, such as a ROM, random access memory (RAM), CD-ROM, magnetic tape, floppy disk, or optical data storage device.
In an exemplary embodiment, the electronic device of the present application may include at least one processor, and a memory communicatively connected to the at least one processor, where the memory stores a computer program executable by the at least one processor, and the computer program when executed by the at least one processor causes the at least one processor to perform the steps of any of the multicast switching methods provided by the embodiments of the present application.
In an exemplary embodiment, a computer program product is also provided, which, when executed by an electronic device, is capable of carrying out any one of the exemplary methods provided by the application.
Also, a computer program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, a RAM, a ROM, an erasable programmable read-Only Memory (EPROM), flash Memory, optical fiber, compact disc read-Only Memory (Compact Disk Read Only Memory, CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The program product for multicast switching in embodiments of the present application may take the form of a CD-ROM and include program code that can run on a computing device. However, the program product of the present application is not limited thereto, and in this document, a readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electromagnetic, optical, or any suitable combination of the foregoing. A readable signal medium may also be any readable medium, other than a readable storage medium, that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, radio frequency (RF), etc., or any suitable combination of the foregoing.
Program code for carrying out operations of the present application may be written in any combination of one or more programming languages, including object oriented programming languages such as Java or C++ and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user's computing device through any kind of network, such as a local area network (Local Area Network, LAN) or wide area network (Wide Area Network, WAN), or may be connected to an external computing device (e.g., over the internet using an internet service provider).
It should be noted that although several units or sub-units of the apparatus are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, the features and functions of two or more of the elements described above may be embodied in one element in accordance with embodiments of the present application. Conversely, the features and functions of one unit described above may be further divided into a plurality of units to be embodied.
Furthermore, although the operations of the methods of the present application are depicted in the drawings in a particular order, this neither requires nor implies that these operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. A method for multicast switching, comprising:
after establishing a protocol independent multicast fast reroute (PIM FRR) multicast table entry for a multicast source, a network device receives multicast data of the multicast source through a main link and a standby link respectively;
if the main link is determined to have failed, switching to the standby link at the data forwarding plane to forward the multicast data received through the standby link outwards, and switching the standby link to a new main link at the control plane so as to update the PIM FRR multicast table entry to a PIM multicast table entry.
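As an illustration only (all names are hypothetical; a real device programs hardware forwarding tables rather than Python objects), the ordering claimed above — the data forwarding plane switches first so the copies already arriving on the standby link flow immediately, and the control plane then promotes the standby link — might be sketched as:

```python
# Hypothetical sketch of the claimed two-phase switchover.
# All names are illustrative, not taken from any real device API.

class PimFrrEntry:
    def __init__(self, source, group, main_link, standby_link):
        self.source, self.group = source, group
        self.main_link, self.standby_link = main_link, standby_link
        self.forwarding_link = main_link  # data-plane state: whose copies are sent out

    def on_main_link_failure(self):
        # Phase 1 (data forwarding plane): start forwarding the copies already
        # arriving on the standby link; no control-plane signalling is needed,
        # so very little multicast data is lost.
        self.forwarding_link = self.standby_link
        # Phase 2 (control plane): promote the standby link to the new main
        # link, collapsing the PIM FRR entry into an ordinary PIM entry.
        self.main_link, self.standby_link = self.standby_link, None

entry = PimFrrEntry("10.0.0.1", "239.1.1.1", "link-A", "link-B")
entry.on_main_link_failure()
assert entry.forwarding_link == "link-B" and entry.main_link == "link-B"
```

The point of the ordering is that phase 1 is a purely local data-plane action, so traffic resumes before any control-plane convergence completes.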
2. The method of claim 1, further comprising, after determining that the primary link has failed:
when the fault type of the main link is determined to be a physical link fault, performing port oscillation (flap) suppression processing to attempt to clear the physical link fault of the main link;
if the physical link fault of the main link fails to clear, switching to the standby link at the data forwarding plane to forward the multicast data received through the standby link.
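A minimal sketch of the suppression step in claim 2, under assumed names and an assumed hold-down timeout: on a physical link fault, wait briefly in case the port recovers on its own before committing to the standby link.

```python
import time

# Hypothetical port-flap (oscillation) suppression. The hold-down timeout,
# poll interval, and function names are illustrative assumptions.

def handle_physical_fault(link_is_up, hold_down_s=0.2, poll_s=0.05):
    """Return True if the fault cleared during suppression (keep the main
    link), or False if the caller should switch to the standby link at the
    data forwarding plane."""
    deadline = time.monotonic() + hold_down_s
    while time.monotonic() < deadline:
        if link_is_up():      # fault cleared: no switchover needed
            return True
        time.sleep(poll_s)
    return False              # fault did not clear: switch over
```

Suppression avoids churning the forwarding state on a link that is merely flapping, at the cost of a bounded extra delay before switchover.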
3. The method of claim 1, wherein the multicast data received through the standby link is forwarded outwards according to the following steps:
switching the non-forwardable first IPMC resource corresponding to the standby link to a forwardable second IPMC resource, and switching the forwardable third IPMC resource corresponding to the main link to the non-forwardable first IPMC resource;
and submitting the multicast data received through the standby link to the forwardable egress pointed to by the second IPMC resource, so as to send that data outwards through the forwardable egress, and submitting the multicast data received through the main link to the non-forwardable egress pointed to by the first IPMC resource, so as to discard that data.
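The IPMC-resource swap in claim 3 can be modelled as follows (a sketch under assumed names; in hardware an IPMC resource is a multicast replication entry, here reduced to an egress set, with the "non-forwardable" resource pointing at an empty drop set):

```python
# Hypothetical model of the IPMC-resource swap. Swapping which resource a
# link uses redirects that link's traffic in one step, without touching
# per-packet state. All names are illustrative.

DROP = frozenset()                      # non-forwardable egress (discard)
EGRESS = frozenset({"port1", "port2"})  # forwardable egress set

ipmc = {
    "first":  DROP,    # non-forwardable resource
    "second": EGRESS,  # forwardable resource for the standby link
    "third":  EGRESS,  # forwardable resource for the main link
}

link_resource = {"standby": "first", "main": "third"}

def switch_to_standby():
    # standby: first (drop) -> second (forwardable), so its copies go out
    link_resource["standby"] = "second"
    # main: third (forwardable) -> first (drop), so its copies are discarded
    link_resource["main"] = "first"

def egress_for(link):
    return ipmc[link_resource[link]]

switch_to_standby()
assert egress_for("standby") == EGRESS and egress_for("main") == DROP
```

Claim 4's recovery path is the mirror image: the recovered main link goes back to the forwardable third resource and the standby link back to the non-forwardable first resource.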
4. The method of claim 3, further comprising, after switching to the standby link at the data forwarding plane:
if the fault of the original main link is determined to have recovered, switching the non-forwardable first IPMC resource corresponding to the original main link to the forwardable third IPMC resource, and switching the forwardable second IPMC resource corresponding to the original standby link to the non-forwardable first IPMC resource;
and submitting the multicast data received through the original main link to the forwardable egress pointed to by the third IPMC resource, so as to send that data outwards through the forwardable egress, and submitting the multicast data received through the original standby link to the non-forwardable egress pointed to by the first IPMC resource, so as to discard that data.
5. The method according to any one of claims 1-4, wherein each time a PIM FRR multicast table entry is established, the detection parameters required for remote link failure detection are set automatically based on the link information of the main link in said PIM FRR multicast table entry.
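Claim 5 might look roughly like the following sketch, where the detection parameters (all field names and defaults are assumptions, in the style of a BFD-like liveness probe) are derived from the entry's main-link information rather than configured by hand:

```python
# Hypothetical derivation of remote-link-failure detection parameters from a
# PIM FRR entry's main link. Field names and defaults are illustrative.

def detection_params_for(entry):
    main = entry["main_link"]
    return {
        "local_addr":  main["local_addr"],     # taken from the entry's main link
        "peer_addr":   main["upstream_addr"],  # upstream neighbor on that link
        "interval_ms": 50,                     # assumed default probe interval
        "multiplier":  3,                      # assumed failure multiplier
    }

entry = {"main_link": {"local_addr": "192.0.2.1", "upstream_addr": "192.0.2.2"}}
params = detection_params_for(entry)
assert params["peer_addr"] == "192.0.2.2"
```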
6. A multicast switching device, comprising:
a receiving module, configured to receive multicast data of a multicast source through a main link and a standby link respectively after a protocol independent multicast fast reroute (PIM FRR) multicast table entry of the multicast source is established;
and a switching module, configured to, if the main link fails, switch to the standby link at the data forwarding plane to forward the multicast data received through the standby link outwards, and switch the standby link to a new main link at the control plane so as to update the PIM FRR multicast table entry to a PIM multicast table entry.
7. The apparatus as recited in claim 6, further comprising:
a fault processing module, configured to, after the main link is determined to have failed, perform port oscillation suppression processing to attempt to clear the physical link fault of the main link when the fault type of the main link is determined to be a physical link fault;
and the switching module is further configured to switch to the standby link at the data forwarding plane to forward the multicast data received through the standby link if the physical link fault of the main link fails to clear.
8. The apparatus of claim 6, wherein the switching module is specifically configured to forward the multicast data received through the standby link outwards according to the following steps:
switching the non-forwardable first IPMC resource corresponding to the standby link to a forwardable second IPMC resource, and switching the forwardable third IPMC resource corresponding to the main link to the non-forwardable first IPMC resource;
and submitting the multicast data received through the standby link to the forwardable egress pointed to by the second IPMC resource, so as to send that data outwards through the forwardable egress, and submitting the multicast data received through the main link to the non-forwardable egress pointed to by the first IPMC resource, so as to discard that data.
9. An electronic device, comprising: at least one processor, and a memory communicatively coupled to the at least one processor, wherein:
The memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-5.
10. A storage medium, characterized in that a computer program in the storage medium, when executed by a processor of an electronic device, is capable of performing the method of any of claims 1-5.
CN202211657445.8A 2022-12-22 2022-12-22 Multicast switching method and device, electronic equipment and storage medium Pending CN118283539A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211657445.8A CN118283539A (en) 2022-12-22 2022-12-22 Multicast switching method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN118283539A true CN118283539A (en) 2024-07-02

Family

ID=91636503



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination