CN115801552A - Protection switching method and network equipment - Google Patents

Protection switching method and network equipment

Info

Publication number
CN115801552A
Authority
CN
China
Prior art keywords
network device
tunnel
pseudo wire
notification message
fault notification
Prior art date
Legal status
Pending
Application number
CN202111060965.6A
Other languages
Chinese (zh)
Inventor
韦锋
姚鹏
薛伟
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN202111060965.6A
Publication of CN115801552A

Abstract

The application provides a protection switching method and network device, and belongs to the field of network technologies. In the method, with tunnel OAM not enabled, an intermediate node sends a fault notification message to an end point of the tunnel when the tunnel fails, so that a fault of an intermediate link or intermediate node is conveyed to the end point and the end point is triggered to perform PW APS switching. On one hand, tunnel OAM does not need to be enabled, which saves OAM resources as well as the bandwidth that OAM messages would otherwise occupy. On the other hand, service protection switching can be triggered promptly when an intermediate link or intermediate node fails, which avoids service loss and improves reliability.

Description

Protection switching method and network equipment
Technical Field
The present application relates to the field of network technologies, and in particular, to a protection switching method and a network device.
Background
Pseudowire (PW) automatic protection switching (APS) is a network protection mechanism. PW APS protects a failed working PW with a pre-established protection PW.
Currently, each node on a tunnel enables operation, administration and maintenance (OAM) detection of the tunnel, and PW switching is triggered through the tunnel OAM mechanism when the tunnel fails.
However, enabling tunnel OAM consumes OAM resources, and once tunnel OAM is enabled, OAM messages must be sent periodically even when there is no fault, which occupies considerable bandwidth.
Disclosure of Invention
The embodiment of the application provides a protection switching method and network equipment, which can save bandwidth resources. The technical scheme is as follows.
In a first aspect, a protection switching method is provided. According to the method, when a first tunnel fails, a first network device generates a first fault notification message. The first fault notification message indicates that the first tunnel fails and triggers a second network device to switch a first service flow from a first pseudo wire to a second pseudo wire. The first network device is an intermediate node through which the first tunnel passes, the second network device is an end point of the first tunnel, the first tunnel carries the first pseudo wire, none of the nodes through which the first tunnel passes enables tunnel OAM detection, and the second pseudo wire is a pseudo wire configured between the second network device and a third network device. The first network device then sends the first fault notification message to the second network device.
In the above method, with tunnel OAM not enabled, the intermediate node sends a fault notification message to the end point of the tunnel when the tunnel fails, so that the fault of the intermediate link or intermediate node is conveyed to the end point and the end point is triggered to perform PW APS switching. On one hand, tunnel OAM does not need to be enabled, which saves OAM resources as well as the bandwidth that OAM messages would otherwise occupy. On the other hand, service protection switching can be triggered promptly when an intermediate link or intermediate node fails, which avoids service loss and improves reliability.
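For illustration only (this is not the patent's implementation; names such as Tunnel, FaultNotification, and send_to_endpoint are hypothetical), the sketch below shows how an intermediate node might map a local link fault to the tunnels crossing that link and emit one fault notification per affected tunnel end point:

```python
# Hypothetical sketch of the intermediate-node behaviour described above.
from dataclasses import dataclass

@dataclass
class Tunnel:
    tunnel_id: int        # the same tunnel ID is configured on every node of the tunnel
    out_label: int        # tunnel label used to forward towards the end point
    endpoint_lsr_id: str  # LSR ID of the tunnel end point (the second network device)

@dataclass
class FaultNotification:
    tunnel_id: int
    tunnel_label: int
    endpoint_lsr_id: str
    fault_type: str       # e.g. "bit-error", "link-down", "frame-error", "congestion"
    timestamp: float

def on_link_fault(failed_link: str, tunnels_by_link: dict, fault_type: str, now: float):
    """When a local link fails, notify the end point of every tunnel that crosses it."""
    for tunnel in tunnels_by_link.get(failed_link, []):
        notification = FaultNotification(
            tunnel_id=tunnel.tunnel_id,
            tunnel_label=tunnel.out_label,
            endpoint_lsr_id=tunnel.endpoint_lsr_id,
            fault_type=fault_type,
            timestamp=now,
        )
        send_to_endpoint(notification)   # forwarded hop by hop along the tunnel label

def send_to_endpoint(notification: FaultNotification):
    print(f"FDI for tunnel {notification.tunnel_id} -> {notification.endpoint_lsr_id}")
```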
Optionally, before the first network device generates the first fault notification packet, the method further includes:
and the first network equipment determines that the first tunnel fails according to the failure of the first link, wherein the first tunnel passes through the first link.
The intermediate node sends a fault notification message to the end point of the tunnel when it detects that a link has failed, which helps the tunnel end point sense in time that an intermediate link of the PW has failed.
Optionally, the failure comprises at least one of an error code, a link disconnection, a frame error, and congestion.
Optionally, the first link is an upstream link of the first network device, and the second network device is an egress end point of the first tunnel.
The intermediate node sends a fault notification message to the tunnel egress endpoint when the upstream link fails, so that the fault of the upstream link can be transmitted to the egress endpoint, and the egress endpoint is triggered to perform PW APS switching.
Optionally, the first link is a downstream link of the first network device, and the second network device is an ingress point of the first tunnel.
The intermediate node sends a fault notification message to the tunnel ingress end point when the downstream link fails, so that the fault of the downstream link is conveyed to the ingress end point and the ingress end point is triggered to perform PW APS switching.
Optionally, the first fault notification packet is further configured to trigger the second network device to send a second fault notification packet to a third network device, where the second fault notification packet indicates that the first tunnel fails, and triggers the third network device to switch the first service traffic from the first pseudo wire to the second pseudo wire, where the third network device is another endpoint of the first tunnel.
By sending the fault notification message to one end point of the tunnel and having that end point send another fault notification message to the opposite end, the intermediate node triggers both end points of the tunnel to perform PW APS switching. This supports the double-ended switching mechanism of PW APS and switches the service flows in both directions from the failed path to the protection path, which prevents forward and return traffic from traveling over different paths, protects the service flows in both directions, and further improves reliability.
Optionally, when the first tunnel fails, the method further includes:
the first network equipment generates a second fault notification message, the second fault notification message indicates that the first tunnel fails, and triggers the third network equipment to switch the first service flow from the first pseudo wire to the second pseudo wire;
and the first network device sends the second fault notification message to a third network device, wherein the third network device is the other end point of the first tunnel.
The intermediate node sends fault notification messages to both end points of the tunnel to trigger both of them to perform PW APS switching. This supports the double-ended switching mechanism of PW APS, switches the service flows in both directions from the failed path to the protection path, protects the service flows in both directions, and further improves reliability.
Optionally, the first tunnel further carries a third pseudo wire, the first fault notification packet is further used to trigger the second network device to switch the second service traffic from the third pseudo wire to a fourth pseudo wire, and the third pseudo wire is a pseudo wire configured between the second network device and the third network device.
With this method, the intermediate node triggers switching of the pseudo wires carried on the tunnel with a single fault notification message, which reduces the overall switching delay of the pseudo wires and improves switching efficiency. In addition, after the tunnel fails, every pseudo wire carried on the tunnel is affected and the service flows they carry may be damaged, so triggering a batch switch of multiple pseudo wires on the tunnel with one fault notification message further reduces the probability of service loss and improves reliability.
Optionally, the method further comprises:
the first network device determines that a second tunnel fails according to the failure of the first link, wherein the second tunnel passes through the first link and carries a fifth pseudo wire;
the first network device generates a third fault notification message, the third fault notification message indicates that the second tunnel fails and triggers a fourth network device to switch a third service flow from the fifth pseudo wire to a sixth pseudo wire, and the fourth network device is an end point of the second tunnel;
and the first network device sends the third fault notification message to the fourth network device.
With this implementation, in a scenario where multiple tunnels pass through the same link, the intermediate node notifies the end points of all tunnels passing through the link after it detects the link failure, which triggers switching of the pseudo wires carried on those tunnels and avoids service loss on them. None of the tunnels passing through the intermediate node needs to enable tunnel OAM detection, which further saves OAM resources and bandwidth.
Optionally, the first fault notification message is a forward defect indication (FDI) message or an alarm indication signal (AIS) message.
By using OAM messages such as FDI or AIS messages to convey the fault, existing OAM protocol messages are reused and implementation complexity is reduced.
Optionally, the method further comprises:
and, for as long as the first tunnel remains faulty, the first network device periodically sends the first fault notification message to the second network device.
With this implementation, the intermediate node keeps sending the fault notification message until the fault is cleared, so the tunnel end point can determine that the tunnel has recovered from the fault state once it stops receiving the fault notification message, which is simple and convenient.
Optionally, the first fault notification packet further includes a label of the first tunnel.
The label of the first tunnel is used to indicate the next hop of the first network device to forward the first fault notification packet to the second network device. For example, the first tunnel is a Label Switched Path (LSP) established based on multi-protocol label switching (MPLS), the label of the first tunnel is an MPLS label, and the label of the first tunnel is carried in an MPLS header of the first failure notification packet.
By carrying the label of the first tunnel in the first fault notification message, the first network device tells each node between the first network device and the second network device to forward the message according to the label, which ensures that the first fault notification message is forwarded along the tunnel to the second network device and reduces implementation complexity.
Optionally, the first fault notification packet further includes an identifier of the first tunnel.
The identifier of the first tunnel is used to identify the first tunnel. By carrying the identifier of the first tunnel in the first fault notification message, the first network device indicates which tunnel has failed. After receiving the first fault notification message, the second network device can therefore determine from the identifier that the failed tunnel is the first tunnel and switch the pseudo wires carried on it.
Optionally, the first fault notification packet further includes at least one of a type of a fault and a timestamp of the first network device detecting the fault.
By carrying the fault type and the time at which the fault was detected in the first fault notification message, the first network device helps locate the specific fault.
In a second aspect, a protection switching method is provided, described from the perspective of a second network device. The second network device receives a first fault notification message from a first network device, where the first fault notification message indicates that a first tunnel fails, the first network device is an intermediate node through which the first tunnel passes, the first tunnel carries a first pseudo wire, none of the nodes through which the first tunnel passes enables tunnel OAM detection, and the second network device is an end point of the first tunnel. The second network device switches a first service flow from the first pseudo wire to a second pseudo wire, where the second pseudo wire is a pseudo wire configured between the second network device and a third network device.
In the above method, with tunnel OAM not enabled, the intermediate node sends a fault notification message to the end point of the tunnel when the tunnel fails, so that the fault of the intermediate link or intermediate node is conveyed to the end point and the end point is triggered to perform PW APS switching. On one hand, tunnel OAM does not need to be enabled, which saves OAM resources as well as the bandwidth that OAM messages would otherwise occupy. On the other hand, service protection switching can be triggered promptly when an intermediate link or intermediate node fails, which avoids service loss and improves reliability.
Optionally, after the second network device receives the first fault notification packet, the method further includes:
the second network equipment generates a second fault notification message, wherein the second fault notification message is used for indicating that the first tunnel has a fault;
the second network device sends a second failure notification message to the third network device to trigger the third network device to switch the first service traffic from the first pseudo wire to the second pseudo wire, where the third network device is another endpoint of the first tunnel.
With this implementation, after receiving the fault notification message from the intermediate node, the tunnel end point not only performs PW APS switching locally but also sends a fault notification message to the other end point of the tunnel to trigger switching at the opposite end. This supports the double-ended switching mechanism of PW APS: service flows in both directions are switched from the failed path to the protection path, forward and return traffic are prevented from traveling over different paths, the service flows in both directions are protected, and reliability is further improved.
Optionally, the second network device is an egress end point of the first tunnel, and the third network device is an ingress end point of the tunnel. That is, the intermediate node sends the fault notification message to the egress end point of the tunnel, and after the egress end point of the tunnel receives the fault notification message of the intermediate node, the egress end point of the tunnel not only performs PW APS switching on its own end, but also sends the fault notification message to the ingress end point of the tunnel to trigger the ingress end point of the tunnel to perform PW APS switching.
With this implementation, PW APS switching can be performed at both ends of the tunnel when an upstream link fails, which prevents forward and return traffic from traveling over different paths, protects traffic in both directions, and further improves reliability.
Optionally, the second network device is an ingress end point of the first tunnel and the third network device is an egress end point of the tunnel. That is, the intermediate node sends the fault notification message to the ingress end point of the tunnel, and after receiving it, the ingress end point not only performs PW APS switching locally but also sends a fault notification message to the egress end point of the tunnel to trigger the egress end point to perform PW APS switching.
With this implementation, both ends of the tunnel can be triggered to perform PW APS switching when a downstream link fails, which prevents forward and return traffic from traveling over different paths, protects traffic in both directions, and further improves reliability.
Optionally, the first fault notification message is an FDI message or an AIS message.
Optionally, the first tunnel further carries a third pseudo wire, and after the second network device receives the first failure notification packet from the first network device, the method further includes:
the second network device switches the second service flow from the third pseudo wire to a fourth pseudo wire, wherein the fourth pseudo wire is a pseudo wire configured between the second network device and the third network device.
In this way, when one tunnel carries multiple pseudo wires, the tunnel end point can switch the pseudo wires on the tunnel in a batch, which reduces the overall switching delay of the pseudo wires and improves pseudo wire switching efficiency.
Optionally, after the second network device receives the first fault notification packet, the method further includes:
and the second network equipment sets the OAM state of each pseudo wire borne by the first tunnel to be a fault state.
In a third aspect, a network device is provided, where the network device has the functions of the first network device in the first aspect or any optional manner of the first aspect. The network device comprises at least one unit configured to implement the method provided by the first aspect or any one of the optional manners of the first aspect. In some embodiments, the units in the network device are implemented in software, and the units in the network device are program modules. In other embodiments, the units in the network device are implemented in hardware or firmware. For specific details of the network device provided in the third aspect, reference may be made to the first aspect or any optional manner of the first aspect, which is not described herein again.
In a fourth aspect, a network device is provided, where the network device has the functions of the second network device in the second aspect or any optional manner of the second aspect. The network device comprises at least one unit configured to implement the method provided by the second aspect or any one of the optional manners of the second aspect. In some embodiments, the units in the network device are implemented in software, and the units in the network device are program modules. In other embodiments, the units in the network device are implemented in hardware or firmware. For specific details of the network device provided in the fourth aspect, reference may be made to the second aspect or any optional manner of the second aspect, which is not described herein again.
In a fifth aspect, a network device is provided, where the network device includes a processor and a network interface, where the processor is configured to execute instructions to cause the network device to perform the method provided by the first aspect or any one of the optional manners of the first aspect, and the network interface is configured to receive or send a message. For specific details of the network device provided in the fifth aspect, reference may be made to the first aspect or any optional manner of the first aspect, and details are not described here.
A sixth aspect provides a network device, where the network device includes a processor and a network interface, where the processor is configured to execute instructions to cause the network device to perform the method provided by the second aspect or any one of the optional manners of the second aspect, and the network interface is configured to receive or send a message. For specific details of the network device provided by the sixth aspect, reference may be made to the second aspect or any optional manner of the second aspect, which is not described herein again.
In a seventh aspect, a computer-readable storage medium is provided, where at least one instruction is stored in the storage medium, and when the instruction is executed on a computer, the instruction causes the computer to perform the method provided by the first aspect or any one of the optional manners of the first aspect.
In an eighth aspect, a computer-readable storage medium is provided, wherein at least one instruction is stored in the storage medium, and when the instruction is executed on a computer, the instruction causes the computer to perform the method provided by the second aspect or any one of the alternatives of the second aspect.
In a ninth aspect, there is provided a computer program product comprising one or more computer program instructions which, when loaded and executed by a computer, cause the computer to perform the method of the first aspect or any one of the alternatives described above.
In a tenth aspect, there is provided a computer program product comprising one or more computer program instructions which, when loaded and executed by a computer, cause the computer to perform the method provided by the second aspect or any one of the alternatives of the second aspect.
In an eleventh aspect, a chip is provided, which includes a memory and a processor, where the memory is used to store computer instructions, and the processor is used to call and execute the computer instructions from the memory to perform the method in the first aspect and any possible implementation manner of the first aspect.
In a twelfth aspect, there is provided a chip comprising a memory for storing computer instructions and a processor for calling up and executing the computer instructions from the memory to perform the method provided by the second aspect or any one of the alternatives of the second aspect.
In a thirteenth aspect, a network device is provided, the network device comprising: a main control board and an interface board. The main control board includes: a first processor and a first memory. The interface board includes: a second processor, a second memory, and an interface card. The main control board is coupled with the interface board.
The first memory may be configured to store program code, and the first processor is configured to call the program code in the first memory to perform the following: when a first tunnel fails, a first fault notification message is generated, the first fault notification message indicates that the first tunnel fails and triggers a second network device to switch a first service flow from a first pseudo wire to a second pseudo wire, the first network device is an intermediate node through which the first tunnel passes, the second network device is an endpoint of the first tunnel, the first tunnel bears the first pseudo wire, each node through which the first tunnel passes does not enable tunnel operation management and maintenance OAM detection, and the second pseudo wire is a pseudo wire configured between the second network device and a third network device.
The second memory may be configured to store program code, and the second processor may be configured to invoke the program code in the second memory to trigger the interface card to perform the following: and sending the first fault notification message to the second network equipment.
Optionally, the first processor is further configured to call the program code in the first memory to perform the following operations: and determining that the first tunnel fails according to the failure of the first link, wherein the first tunnel passes through the first link.
Optionally, the first processor is further configured to call the program code in the first memory to perform the following operations: when the first tunnel fails, generating a second fault notification message, wherein the second fault notification message indicates that the first tunnel fails and triggers the third network device to switch the first service flow from the first pseudo wire to the second pseudo wire;
the second processor is further configured to invoke program code in the second memory to trigger the interface card to: and sending the second fault notification message to the third network device, wherein the third network device is the other end point of the first tunnel.
Optionally, the first processor is further configured to call the program code in the first memory to perform the following operations: determining that a second tunnel fails according to the failure of the first link, wherein the second tunnel passes through the first link and bears a fifth pseudo wire; generating a third fault notification message, where the third fault notification message indicates that the second tunnel fails and triggers a fourth network device to switch a third service traffic from the fifth pseudo wire to the sixth pseudo wire, and the fourth network device is an endpoint of the second tunnel;
the second processor is further configured to invoke program code in the second memory to trigger the interface card to perform the following: and sending the third fault notification message to the fourth network device.
Optionally, the network device includes a main control board and an interface board, a central processing unit is disposed on the main control board, a network processor and a physical interface are disposed on the interface board, and the main control board is coupled to the interface board.
In a possible implementation manner, an inter-process communication protocol (IPC) channel is established between the main control board and the interface board, and the main control board and the interface board communicate with each other through the IPC channel.
In a fourteenth aspect, a network device is provided, which includes: a main control board and an interface board. The main control board includes: a first processor and a first memory. The interface board includes: a second processor, a second memory, and an interface card. The main control board is coupled with the interface board.
The second memory may be for storing program code, and the second processor may be for calling the program code in the second memory to trigger the interface card to: receiving a first fault notification message from a first network device, where the first fault notification message indicates that a first tunnel fails, the first network device is an intermediate node through which the first tunnel passes, the first tunnel carries a first pseudo wire, each node through which the first tunnel passes does not enable operation, administration and maintenance (OAM) detection of the tunnel, and the second network device is an endpoint of the first tunnel.
The first memory may be configured to store program code, and the first processor is configured to call the program code in the first memory to perform the following: switching a first service flow from the first pseudo wire to a second pseudo wire, wherein the second pseudo wire is a pseudo wire configured between the second network equipment and third network equipment.
Optionally, the first processor is further configured to call the program code in the first memory to perform the following operations: generating a second fault notification message, wherein the second fault notification message is used for indicating that the first tunnel has a fault;
optionally, the second processor is configured to call the program code in the second memory, and to trigger the interface card to perform the following operations: and sending a second fault notification message to the third network device to trigger the third network device to switch the first service flow from the first pseudo wire to the second pseudo wire, wherein the third network device is the other end point of the first tunnel.
Optionally, the first tunnel further carries a third pseudowire, and the first processor is further configured to invoke program code in the first memory to perform the following operations: switching a second service flow from the third pseudo wire to a fourth pseudo wire, wherein the fourth pseudo wire is a pseudo wire configured between the second network device and a third network device.
In a possible implementation manner, an inter-process communication protocol (IPC) channel is established between the main control board and the interface board, and the main control board and the interface board communicate with each other through the IPC channel.
A fifteenth aspect provides a network system, which includes the network device of the third aspect and the network device of the fourth aspect; alternatively, the network system includes the network device according to the fifth aspect and the network device according to the sixth aspect; alternatively, the network system includes the network device according to the thirteenth aspect and the network device according to the fourteenth aspect.
Drawings
FIG. 1 is a schematic diagram of an exemplary application scenario provided by an embodiment of the present application;
fig. 2 is a flowchart of a protection switching method according to an embodiment of the present application;
fig. 3 is a flowchart of a protection switching method according to an embodiment of the present application;
fig. 4 is a flowchart of a protection switching method according to an embodiment of the present application;
fig. 5 is a flowchart of a protection switching method provided in an embodiment of the present application;
fig. 6 is a flowchart of a protection switching method according to an embodiment of the present application;
fig. 7 is a schematic diagram of a format of an FDI packet according to an embodiment of the present application;
fig. 8 is a schematic format diagram of an AIS message according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a network device according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a network device according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a network device according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a network device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, the following detailed description of the embodiments of the present application will be made with reference to the accompanying drawings.
Some concepts of terms related to the embodiments of the present application are explained below.
(1) Error code (bit error)
An error code (bit error) means that the data received by a network device differs from the data that was sent. It usually shows up as packet errors that the network device detects on an interface using a cyclic redundancy check (CRC) algorithm.
Error code faults are detected, for example, as follows: the network device detects error codes in the inbound direction of an interface using the CRC check algorithm and calculates the error code rate. If the error code rate exceeds the error alarm threshold of the interface, the network device determines that an error code fault has occurred on the interface. If the error code rate of the interface later drops below the alarm recovery threshold, the network device considers the error code fault on the interface recovered. The error code rate is the probability of packet errors or loss, calculated using statistical methods.
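As a rough illustration of the alarm/recovery thresholds described above (a sketch only; the class name and threshold values are assumptions, not values from this application):

```python
# Raise an error-code fault when the measured error rate exceeds the alarm
# threshold, and clear it only after the rate drops below the recovery threshold.
class BitErrorMonitor:
    def __init__(self, alarm_threshold: float, recovery_threshold: float):
        assert recovery_threshold <= alarm_threshold
        self.alarm_threshold = alarm_threshold
        self.recovery_threshold = recovery_threshold
        self.faulty = False

    def update(self, crc_errored_packets: int, total_packets: int) -> bool:
        """Return True while the interface is considered to have an error-code fault."""
        error_rate = crc_errored_packets / max(total_packets, 1)
        if not self.faulty and error_rate > self.alarm_threshold:
            self.faulty = True           # error rate crossed the alarm threshold
        elif self.faulty and error_rate < self.recovery_threshold:
            self.faulty = False          # fault considered recovered
        return self.faulty

# Example thresholds (illustrative): 1e-3 alarm, 1e-4 recovery
monitor = BitErrorMonitor(alarm_threshold=1e-3, recovery_threshold=1e-4)
print(monitor.update(crc_errored_packets=5, total_packets=1000))   # True  (5e-3 > 1e-3)
print(monitor.update(crc_errored_packets=0, total_packets=10000))  # False (0 < 1e-4)
```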
(2) Tunnel (tunnel)
A tunnel is a channel between two PEs that transports traffic. Tunneling encapsulates packets of one protocol within another protocol. There are many types of tunnels, including, without limitation, multiprotocol label switching (MPLS) tunnels, generic routing encapsulation (GRE) tunnels, segment routing (SR) tunnels, and IP-in-IP tunnels. Normally, the two edge nodes of a tunnel and each intermediate node through which the tunnel passes hold configuration information of the tunnel, so every hop on the tunnel is aware of the tunnel.
(3) Pseudowires (PW)
A pseudowire is a virtual connection between two provider edge nodes (PEs). Pseudowires are carried over tunnels. Multiple pseudowires are optionally carried over the same tunnel, which reduces the number of tunnels that need to be deployed in the network. Optionally, pseudowires correspond to services, and one pseudowire is used for transmitting the service flow corresponding to that pseudowire. Typically, the two edge nodes of a pseudowire hold the pseudowire's configuration information while its intermediate nodes do not, so the edge nodes of the pseudowire are aware of it and the intermediate nodes are not.
(4)PW APS
PW APS is a network protection mechanism. One possible implementation of PW APS protects a failed working PW with a pre-created protection PW. Specifically, a working PW and a protection PW are created between two network devices. When the working PW is in the normal state, service messages are transmitted on the working PW. When the working PW fails, APS protection switching occurs and service messages are transmitted on the protection PW.
The working PW is also called the primary PW, and the protection PW is also called the standby PW. The protection PW protects the working PW, and the working PW and the protection PW take different paths. The application scenarios of PW APS include the same-source same-sink scenario and the same-source different-sink scenario. In the same-source same-sink scenario, PW APS is implemented by configuring two PWs that have the same source node, the same sink node, and different paths. In the same-source different-sink scenario, PW APS is implemented by configuring two PWs that have the same source node, different sink nodes, and different paths.
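A minimal sketch of the working/protection selection described above (illustrative only; real PW APS also involves APS protocol coordination between the two end points, and revertive behaviour is simply assumed here):

```python
class PwApsGroup:
    def __init__(self, working_pw: str, protection_pw: str):
        self.working_pw = working_pw
        self.protection_pw = protection_pw
        self.working_failed = False

    @property
    def active_pw(self) -> str:
        # Traffic follows the working PW unless it is marked as failed.
        return self.protection_pw if self.working_failed else self.working_pw

    def on_working_fault(self):
        self.working_failed = True    # APS switch: traffic moves to the protection PW

    def on_working_recovered(self):
        self.working_failed = False   # revertive behaviour assumed for simplicity

group = PwApsGroup(working_pw="PW-1", protection_pw="PW-2")
group.on_working_fault()
print(group.active_pw)  # "PW-2"
```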
(5) Forward defect indication (FDI) messages
An FDI message is an operation, administration and maintenance (OAM) message. It is generated by an intermediate node of a path and is used to notify the egress node of the path of a fault on an intermediate node or intermediate link of the path.
(6) Alarm Indication Signal (AIS) message
The AIS message belongs to an OAM message, and is used to transmit fault information.
(7)MPLS
MPLS is a tunneling technique. MPLS sits between the link layer and the network layer in the Transmission Control Protocol (TCP)/Internet Protocol (IP) stack; it provides connection services to the IP layer and obtains services from the link layer. MPLS replaces IP forwarding with label switching.
The basic principle of MPLS forwarding is that when an IP packet enters an MPLS network, an ingress edge router of the MPLS network analyzes the content of the IP packet and adds appropriate labels to the IP packet, and intermediate nodes in the MPLS network forward data according to the labels. When the IP packet leaves the MPLS network, the label is deleted by the egress edge router.
The basic principle of implementing a PW over MPLS is as follows. After the ingress point of the PW receives a service packet, it selects the corresponding PW and tunnel for the packet according to a forwarding table and encapsulates the packet with two MPLS labels: the outer label identifies the tunnel carrying the PW, and the inner label identifies the PW. On receiving the packet, an intermediate node swaps the outer label and leaves the inner label unchanged. After the packet has been forwarded to the egress point of the PW by means of the outer label, the egress point forwards it to the corresponding CE device according to the inner label.
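The double-label encapsulation described above can be sketched as follows (a simplification: the label stack is modeled as a plain list rather than an on-wire encoding, and all label values are hypothetical):

```python
def encapsulate_at_ingress(payload: bytes, pw_label: int, tunnel_label: int) -> dict:
    # Outer (tunnel) label sits at the top of the stack, inner (PW) label below it.
    return {"label_stack": [tunnel_label, pw_label], "payload": payload}

def swap_at_intermediate(packet: dict, new_tunnel_label: int) -> dict:
    stack = packet["label_stack"].copy()
    stack[0] = new_tunnel_label          # only the outer tunnel label is replaced
    return {"label_stack": stack, "payload": packet["payload"]}

def decapsulate_at_egress(packet: dict) -> tuple:
    tunnel_label, pw_label = packet["label_stack"]
    # The inner PW label tells the egress which attachment circuit / CE to forward to.
    return pw_label, packet["payload"]

pkt = encapsulate_at_ingress(b"service frame", pw_label=100, tunnel_label=2001)
pkt = swap_at_intermediate(pkt, new_tunnel_label=2002)
print(decapsulate_at_egress(pkt))  # (100, b'service frame')
```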
(8) MPLS label
A label is the information used for forwarding in MPLS. It is a short, fixed-length connection identifier that usually has only local meaning and is encapsulated between the link layer and the network layer. A label is 4 bytes long and consists of 4 fields: a label value (Label) field, a traffic class (TC) field, a bottom-of-stack flag (S) field, and a time-to-live (TTL) field. The Label field occupies 20 bits and carries the label value. The TC field occupies 3 bits. The S field occupies 1 bit and carries the bottom-of-stack flag; MPLS supports multiple label layers (label nesting), and an S value of 1 indicates the bottom label. The TTL field occupies 8 bits and has the same meaning as the TTL field in an IP header.
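Assuming the standard 32-bit MPLS label entry layout described above, a label can be packed and unpacked as follows (illustrative helpers, not code from this application):

```python
def pack_mpls_label(label: int, tc: int, s: int, ttl: int) -> bytes:
    assert 0 <= label < 2**20 and 0 <= tc < 8 and s in (0, 1) and 0 <= ttl < 256
    entry = (label << 12) | (tc << 9) | (s << 8) | ttl
    return entry.to_bytes(4, "big")

def unpack_mpls_label(data: bytes) -> dict:
    entry = int.from_bytes(data[:4], "big")
    return {
        "label": entry >> 12,          # 20-bit label value
        "tc": (entry >> 9) & 0x7,      # 3-bit traffic class
        "s": (entry >> 8) & 0x1,       # bottom-of-stack flag (1 = bottom label)
        "ttl": entry & 0xFF,           # 8-bit time to live
    }

print(unpack_mpls_label(pack_mpls_label(label=2001, tc=0, s=1, ttl=64)))
```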
(9) Label switching router (LSR)
An LSR is a network device capable of MPLS label switching and packet forwarding, also called an MPLS node. LSRs are the basic elements of an MPLS network and support the MPLS protocol. LSRs located at the edge of the MPLS domain and connected to other networks are called label edge routers (LERs).
(10) Tunnel identifier (tunnel ID) and tunnel label
The tunnel ID identifies a tunnel. Typically, the same tunnel ID is configured on every node that the tunnel passes through. The tunnel label is used for forwarding and usually takes the form of an MPLS label. Tunnel labels are divided into tunnel in-labels and tunnel out-labels.
A tunnel in-label is generally unique within a network element. A tunnel out-label may differ depending on the next hop, and the out-label of the current network element is the in-label of the next-hop network element.
(11) Multi-chassis PW (MC-PW) protection pair
MC-PW means that two PWs with the same source node and different sink nodes are configured, and MC-PW technology is used to form a protection pair.
In a bearer network of a Packet Transport Network (PTN), there are various levels of protection, of which the most common and important protection is APS protection.
APS protection mainly includes tunnel automatic protection switching (tunnel APS) protection, PW APS protection, and ring APS protection.
At present, after PW APS protection is configured, if tunnel OAM detection is not configured and an error code or another fault occurs on an intermediate node or intermediate link, the end points of the PW cannot sense the fault and therefore cannot trigger PW APS protection switching, which may cause continuous service loss. If tunnel OAM detection is configured, OAM messages must be sent periodically and link states must be detected, which occupies OAM resources and link bandwidth; when the traffic volume is large, the amount of resources consumed is considerable, and the configuration is relatively complex.
In view of this, in some embodiments of the present application, tunnel OAM detection is not configured; when an intermediate node detects an error code, it generates an FDI message and sends it to a tunnel end point. After the tunnel end point recognizes the FDI message, PW APS protection switching is triggered, which improves service reliability, saves resources, and simplifies configuration.
The following illustrates a network system architecture according to an embodiment of the present application.
Fig. 1 is a schematic diagram of a network architecture provided in this embodiment.
The various network devices in the network architecture shown in fig. 1 are, for example, routers, switches, etc. Optionally, each network device in the network architecture shown in fig. 1 supports MPLS label technology. Alternatively, the network architecture shown in fig. 1 is applied to a PTN network.
Specifically, the network architecture includes network device 101, network device 102, network device 103, network device 104, network device 105, network device 106, network device 107, network device 108, network device 109, and network device 110.
Network device 101, network device 102, network device 103, network device 107, and network device 108 process layer-2 services. Network device 104 and network device 109 handle both layer-2 and layer-3 services. Network device 105 and network device 110 handle layer-3 services. Network device 106 is a serving gateway (SGW) or a mobility management entity (MME).
Fig. 1 contains two tunnels. The solid lines in fig. 1 show a working tunnel that passes through nodes including network device 101, network device 102, network device 103, and network device 104. The entry point of the working tunnel is network device 101, the exit point of the working tunnel is network device 104, and the intermediate nodes of the working tunnel are network device 102 and network device 103.
The dotted lines in fig. 1 show a protection tunnel. The nodes that the protection tunnel passes through include network device 101, network device 107, network device 108, and network device 109, where the ingress end point of the protection tunnel is network device 101, the egress end point of the protection tunnel is network device 109, and the intermediate nodes of the protection tunnel are network device 107 and network device 108.
In the networking shown in fig. 1, network device 101 is configured with PW APS, and network device 104 and network device 109 are configured with MC-PW APS. The link between network device 104 and network device 109 belongs to a protection path shared by the two devices; when the working PW of either device fails, that device switches the PW to the shared protection path. Network device 104 and network device 109 may negotiate with each other to keep their states consistent.
The tunnels in fig. 1 passing through 2 intermediate nodes is merely an example; a tunnel may pass through more or fewer intermediate nodes. For example, a tunnel may pass through only one intermediate node, or through tens, hundreds, or more intermediate nodes. This embodiment does not limit the number of intermediate nodes.
The following illustrates a method flow of the embodiments of the present application.
Fig. 2 is a flowchart of a protection switching method according to an embodiment of the present application. The method shown in fig. 2 includes the following steps S202 to S208.
The method shown in fig. 2 optionally applies to a scenario where multiple tunnels pass through the same intermediate node. To distinguish the different tunnels, they are referred to as the "first tunnel" and the "second tunnel".
Since a tunnel typically passes through multiple network devices, the terms first network device, second network device, and third network device are used to distinguish them. The first network device is an intermediate node through which the first tunnel passes, and the second network device and the third network device are both end points of the first tunnel.
The method illustrated in fig. 2 is optionally applied in scenarios where multiple pseudowires are configured, with different pseudowires optionally carrying different service flows. To distinguish the service flows carried by different pseudo wires, they are referred to as the first service flow and the second service flow.
The network deployment scenario on which the method of fig. 2 is based is optionally as described above with respect to fig. 1. For example, referring to fig. 1, a first network device in the method shown in fig. 2 is network device 103 in fig. 1, a second network device in the method shown in fig. 2 is network device 104 in fig. 1, and a third network device in the method shown in fig. 2 is network device 101 in fig. 1.
Step S202, when the first tunnel has a fault, the first network device generates a first fault notification message.
The first tunnel is a tunnel on which the first network device is located. The first tunnel can be of many types. Optionally, the first tunnel is a tunnel established based on MPLS technology, for example an LSP tunnel. Alternatively, the first tunnel is a tunnel established based on SR-MPLS or segment routing over internet protocol version 6 (SRv6) technology, for example an SR traffic engineering (SR-TE) policy tunnel or an SR best-effort (SR-BE) tunnel. Alternatively, the first tunnel is a tunnel established based on GRE, VXLAN, or other technologies; this embodiment does not limit the type of the first tunnel.
Each node through which the first tunnel passes does not enable tunnel OAM detection. That is, neither of the two endpoints of the first tunnel (e.g., the second network device and the third network device) nor each intermediate node through which the first tunnel passes enables tunnel OAM detection.
Typically, when tunnel OAM detection is enabled, the ingress end point of the tunnel periodically sends OAM messages to the egress end point regardless of whether the tunnel is in a normal or a faulty state, and each intermediate node that the tunnel passes through must forward those OAM messages towards the egress end point. In other words, with tunnel OAM detection enabled, the nodes on the tunnel must send OAM messages periodically even when the tunnel is in a normal state, which occupies a large amount of link bandwidth.
One consequence of no node on the first tunnel enabling tunnel OAM detection is that, when the first tunnel is in a normal state, the ingress end point of the tunnel (for example, the third network device) does not send OAM messages to the egress end point (for example, the second network device), and the intermediate nodes that the tunnel passes through (for example, the first network device) do not have to forward OAM messages from the ingress end point to the egress end point, which saves link bandwidth.
The first tunnel carries a first pseudowire. The first pseudowire is a pseudowire configured between the second network device and the third network device. That is, the endpoints of the first pseudowire and the endpoints of the first tunnel are the same, and the two endpoints of the first pseudowire are also the second network device and the third network device.
The sender of the first fault notification message is the intermediate node that detects the fault (the first network device in this embodiment), and its destination is an end point of the tunnel (the second network device in this embodiment). The destination IP address field of the first fault notification message contains the IP address of the second network device. The first fault notification message indicates that the first tunnel fails and triggers the second network device to switch the first service flow from the first pseudo wire to the second pseudo wire.
The protocol type of the first fault notification message can vary. Optionally, the first fault notification message is an OAM message. Optionally, it is an FDI message whose format complies with the Y.1711 protocol; the FDI message is an OAM message. Optionally, it is an AIS message whose format complies with the Y.1731 protocol; the AIS message is an OAM message. Optionally, the first fault notification message is a tunnel-level OAM message (tunnel OAM message), specifically a tunnel FDI message or a tunnel AIS message. Alternatively, the first fault notification message is a message used for notifying faults in a protocol other than OAM, for example a bidirectional forwarding detection (BFD) message, a ping detection message, a two-way active measurement protocol (TWAMP) message, or an in-situ flow information telemetry (iFit) message; this embodiment does not limit the protocol type of the first fault notification message.
The content of the first fault notification message can vary. The following illustrates what the first fault notification message may contain.
In some embodiments, the first fault notification message includes a fault notification indication, which indicates that a fault has occurred. In one possible implementation, the first fault notification message is an FDI message that includes a function type field, and the function type field carries the fault notification indication. In another example, the first fault notification message is an AIS message that includes an operation code (OpCode) field, and the OpCode field carries the fault notification indication.
In some embodiments, the first failure notification message includes a label of the first tunnel. For example, the first failure notification packet includes an LSP LAB field, and the LSP LAB field carries a label of the first tunnel.
The label of the first tunnel is the information used to direct the forwarding. Specifically, the label of the first tunnel is used to indicate a next hop of the first network device to forward the first fault notification packet to the second network device. The next hop of the first network device is, for example, the next node of the first network device on the first tunnel.
Optionally, the label of the first tunnel is in the form of an MPLS label. In one possible implementation, the first fault notification message is an MPLS or SR-MPLS message; it includes an MPLS header, and the label of the first tunnel is carried in that MPLS header. For the concept of MPLS labels, see item (8) in the term explanations above.
Optionally, the forwarding guidance provided by the label of the first tunnel is implemented through label switching in MPLS. Optionally, as the first fault notification message is forwarded, each hop updates the value of the label of the first tunnel, replacing the node's in-label with the node's out-label, where the out-label of a node is the in-label of its next hop. Taking the first network device and its next hop as an example, the value of the label of the first tunnel in the first fault notification message is the out-label of the first network device and also the in-label of the next hop of the first network device. After the next hop of the first network device receives the first fault notification message, it searches its label forwarding table according to the label of the first tunnel to obtain the outgoing interface and out-label corresponding to that label. It then replaces the value of the label of the first tunnel in the first fault notification message with the out-label it found and forwards the updated message through the outgoing interface it found, so that the message is forwarded to the following node on the tunnel. This continues at each intermediate node through which the first tunnel passes until the first fault notification message reaches the second network device.
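The hop-by-hop label handling described above can be sketched as follows (hypothetical table contents, interface names, and field names; not the patent's implementation):

```python
label_forwarding_table = {
    # in_label: (out_interface, out_label) -- illustrative values only
    2001: ("GigabitEthernet0/0/1", 2002),
    2002: ("GigabitEthernet0/0/2", 2003),
}

def forward_fault_notification(packet: dict) -> dict:
    """Look up the incoming tunnel label, swap it for the out-label, and forward."""
    in_label = packet["tunnel_label"]
    out_interface, out_label = label_forwarding_table[in_label]
    forwarded = dict(packet, tunnel_label=out_label)   # swap in-label for out-label
    print(f"forwarding fault notification on {out_interface} with label {out_label}")
    return forwarded

pkt = {"tunnel_label": 2001, "tunnel_id": 7, "fault_type": "link-down"}
pkt = forward_fault_notification(pkt)   # label 2001 -> 2002 towards the next hop
```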
Optionally, the label of the first tunnel is replaced by a segment list (SID list) of the first tunnel. In one possible implementation, the first fault notification message includes a segment routing header (SRH), where the SRH contains the SID list of the first tunnel, and the SID list includes the SID of each downstream node of the first network device on the first tunnel.
By carrying the label of the first tunnel or the SID list of the first tunnel in the first fault notification message, the first network device tells each of its downstream nodes on the first tunnel how to forward the message, so that the first fault notification message is forwarded along the tunnel to the second network device, ensuring that the fault of the intermediate link is delivered to the edge (the second network device).
In some embodiments, the first fault notification message includes an identifier of the first tunnel (tunnel ID), which identifies the first tunnel. The first network device carries the identifier of the first tunnel in the first fault notification message; after receiving the message, the second network device can determine from the identifier that the failed tunnel is the first tunnel and switch the pseudo wires carried on it. Optionally, the first fault notification message includes a trail termination source identifier (TTSI) field, and the TTSI field carries the identifier of the first tunnel. The identifier of the first tunnel configured on each node of the first tunnel is optionally the same; for example, the identifiers of the first tunnel configured on the first network device, the second network device, and the third network device are all the same.
In some embodiments, the first fault notification message includes an LSR ID of the second network device. By carrying the LSR ID of the second network device in the first fault notification message, the destination node of the first fault notification message is identified as the second network device. In one possible implementation, the first fault notification message includes a TTSI field, and the TTSI field carries the LSR ID of the second network device. Optionally, the LSR ID of the second network device is replaced by the SID of the second network device. In one possible implementation, the first fault notification message includes an IPv6 header, and a destination address field of the IPv6 header includes the SID of the second network device.
There are various implementations of how the first network device obtains the label of the first tunnel, the identifier of the first tunnel, and the LSR ID of the second network device. Optionally, a network administrator performs a tunnel configuration operation on the first network device and inputs configuration information of the first tunnel, where the configuration information of the first tunnel includes the label of the first tunnel, the identifier of the first tunnel, and the LSR ID of the second network device. The first network device obtains and stores the configuration information of the first tunnel according to the tunnel configuration operation of the network administrator.
The configuration information of the tunnel is generally used to instruct the network device to forward the service traffic through the egress interface corresponding to the tunnel. For example, when the first tunnel is in a normal state, the configuration information of the first tunnel is used to indicate that the first network device forwards the first service traffic through an egress interface corresponding to the first tunnel on the first network device after receiving the first service traffic. In this embodiment, the configuration information of the tunnel is not only used to instruct the network device to forward the service traffic through the corresponding egress interface of the tunnel, but also used to support the network device to send a failure notification message to the tunnel endpoint when finding that the intermediate link of the tunnel fails. Specifically, in order to implement forwarding of the traffic flow through the tunnel, each node through which the tunnel passes is generally configured with information such as an IP address of an egress end point of the tunnel, and therefore the intermediate node can send the fault notification packet to the end point of the tunnel by using the configuration of the tunnel.
Alternatively, the label of the first tunnel, the identifier of the first tunnel, or the LSR ID of the second network device is sent to the first network device by another network element other than the first network device (for example, an upstream node or a downstream node of the first network device on the tunnel, or a controller), and the first network device receives this information from the other network element.
In some embodiments, the first fault notification message includes a type of the fault. The type of the fault is used to indicate whether the fault that occurred is an error code, a link disconnection, a frame error, or congestion.
In some embodiments, the first fault notification message includes a timestamp of the detection of the fault by the first network device.
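Bringing together the fields described above (tunnel label, tunnel identifier, destination LSR ID, fault type, and timestamp), the following sketch assembles a fault notification from locally stored tunnel configuration. All field names, label values, and addresses are hypothetical illustrations, not the actual message encoding.

```python
import time
from dataclasses import dataclass

@dataclass
class TunnelConfig:
    """Hypothetical per-tunnel configuration held by an intermediate node."""
    out_label: int         # label of the tunnel towards the next hop
    tunnel_id: int         # identifier of the tunnel
    endpoint_lsr_id: str   # LSR ID of the tunnel end point to be notified
    egress_interface: str  # egress interface corresponding to the tunnel on this node

def build_fault_notification(cfg: TunnelConfig, fault_type: str) -> dict:
    """Assemble the fields the text says a fault notification may carry."""
    return {
        "indicator": "FAULT",           # fault notification indicator
        "tunnel_label": cfg.out_label,
        "tunnel_id": cfg.tunnel_id,     # carried, for example, in a TTSI-like field
        "dest_lsr_id": cfg.endpoint_lsr_id,
        "fault_type": fault_type,       # error code / link down / frame error / congestion
        "timestamp": time.time(),       # time at which the fault was detected
    }

if __name__ == "__main__":
    cfg = TunnelConfig(out_label=1001, tunnel_id=7,
                       endpoint_lsr_id="10.0.0.4", egress_interface="ge-0/0/2")
    print(build_fault_notification(cfg, fault_type="error_code"))
```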
There are many cases of fault types that can trigger the first fault notification message. For example, the first fault notification message is triggered by any fault that can be sensed by the intermediate node. For example, when a connectivity fault or a network performance fault occurs on the first tunnel, the first network device generates the first fault notification message. For example, when at least one of an error code, a link disconnection, a frame error, and congestion occurs on the first tunnel, the first network device generates the first fault notification message. In this embodiment, the type of the fault triggering the first fault notification message is not limited.
Step S204, the first network equipment sends a first fault notification message to the second network equipment.
The second network device is an end point (endpoint) of the first tunnel and is also an end point of the first pseudowire. An end point is also referred to as an end node of the tunnel.
Optionally, the second network device is the ingress point of the first tunnel. When acting as the ingress point, the second network device is responsible for directing traffic onto the first tunnel. For example, when the second network device receives a service packet from the user side, the second network device encapsulates a tunnel header corresponding to the first tunnel (such as an MPLS header, a GRE header, or an SRH) onto the service packet and forwards the service packet encapsulated with the tunnel header. The tunnel header contains path information of the first tunnel, for example, an identifier of each node through which the first tunnel passes. The tunnel header is used to instruct each intermediate node through which the first tunnel passes to forward the service packet to the egress end point of the first tunnel.
Optionally, the second network device is an egress endpoint of the first tunnel. In the case of serving as the egress end point, in some embodiments, when the second network device receives the service packet, which is sent by the upstream node and encapsulated with the tunnel header, the second network device decapsulates the tunnel header, so as to restore the encapsulation format of the service packet to the original encapsulation format before entering the first tunnel, and the second network device forwards the decapsulated service packet to the user side.
Step S206, the second network device receives the first fault notification packet from the first network device.
In some embodiments, after the second network device receives the first fault notification message, the second network device identifies the content of the first fault notification message and determines, according to the content of the first fault notification message (such as the fault notification indicator and the identifier of the first tunnel), that the first tunnel has failed. The second network device queries the correspondence between tunnels and pseudowires and determines that the pseudowires corresponding to the first tunnel include the first pseudowire. The second network device sets the OAM state of the first pseudowire to a failure state, thereby triggering the switching of the first pseudowire. The OAM state is used to identify whether the corresponding pseudowire is in a failure state or a normal state. In the process of forwarding traffic, a pseudowire whose OAM state is the normal state is selected to send the traffic.
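A minimal sketch of this end-point behavior follows, assuming in-memory tables that map a tunnel to the pseudowires it carries and each working pseudowire to its protection pseudowire; all identifiers and table layouts are hypothetical.

```python
# Hypothetical tables kept by the end point of the tunnel.
TUNNEL_TO_PWS = {7: ["pw-1", "pw-3"]}              # tunnel id -> pseudowires carried on it
PW_OAM_STATE = {"pw-1": "normal", "pw-2": "normal", "pw-3": "normal", "pw-4": "normal"}
PROTECTION_PW = {"pw-1": "pw-2", "pw-3": "pw-4"}   # working pseudowire -> protection pseudowire

def handle_fault_notification(message: dict) -> list:
    """Mark every pseudowire on the failed tunnel as failed and return the switch pairs."""
    switched = []
    for pw in TUNNEL_TO_PWS.get(message["tunnel_id"], []):
        PW_OAM_STATE[pw] = "failed"                # triggers PW APS switching for this pseudowire
        switched.append((pw, PROTECTION_PW[pw]))
    return switched

if __name__ == "__main__":
    print(handle_fault_notification({"indicator": "FAULT", "tunnel_id": 7}))
    # [('pw-1', 'pw-2'), ('pw-3', 'pw-4')]
```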
Step S208, the second network device switches the first service traffic from the first pseudowire to the second pseudowire.
The first service traffic is the service traffic originally carried by the first pseudowire before the first tunnel fails. Optionally, the service flows related to this embodiment are all understood as bidirectional packet flows; for example, the service flow sent by user equipment A to user equipment B and the service flow sent by user equipment B to user equipment A are optionally understood as the same service flow. The first service traffic is optionally one service flow or a collection of multiple service flows. In other words, pseudowire switching refers to switching for the service flows in both directions. After switching to the second pseudowire, for the sending direction, the second network device switches from sending the first service traffic through the first pseudowire to sending the first service traffic through the second pseudowire; for the receiving direction, the second network device switches from receiving the first service traffic over the first pseudowire to receiving the first service traffic over the second pseudowire.
The second pseudowire is a pseudowire configured between the second network device and a third network device. The relationship between the second pseudowire and the first pseudowire is illustrated below.
The second pseudowire and the first pseudowire have the same endpoints, which are both a second network device and a third network device.
The second pseudowire and the first pseudowire are located on different paths. Therefore, by switching the first service flow from the first pseudo wire to the second pseudo wire, the path for transmitting the first service flow is switched from the failed path to the other path, thereby avoiding the service flow from being damaged due to the failure of the path and realizing the effect of protecting the service flow.
Optionally, all intermediate nodes traversed by the path on which the second pseudowire is located are different from the intermediate nodes traversed by the path on which the first pseudowire is located. Alternatively, some of the intermediate nodes traversed by the path on which the second pseudowire is located are different from the intermediate nodes traversed by the path on which the first pseudowire is located, while the other intermediate nodes traversed by the two paths are the same.
Optionally, the second pseudowire and the first pseudowire are carried on different tunnels. For example, a first pseudowire is carried over a first tunnel and a second pseudowire is carried over a second tunnel.
Optionally, the second pseudowire and the first pseudowire belong to the same protection group. When the first tunnel does not fail, the first pseudo wire is a working pseudo wire in the protection group, and the second pseudo wire is a protection pseudo wire in the protection group. After the switch, the second pseudowire becomes the working pseudowire in the protection group.
Optionally, the second pseudowire is used to protect the first pseudowire. For example, when a first pseudowire is in a working state, a second pseudowire is in an idle state, and the second pseudowire does not carry service traffic; when the first pseudo wire is in a failure state, the second pseudo wire bears the service flow originally borne by the first pseudo wire. Alternatively, while the first pseudowire is in the working state, the second pseudowire is also in the working state. For example, when the first tunnel does not fail, the second pseudowire is used to carry other traffic than the first traffic, such as the second pseudowire is used to carry traffic with a lower priority than the first traffic.
In a possible implementation, after receiving a first service traffic sent by a user equipment, a second network device checks a PW APS state, where the PW APS state includes an OAM state of a first pseudowire and an OAM state of a second pseudowire. And the second network equipment determines to use the second pseudo wire to bear the first service flow according to the condition that the OAM state of the first pseudo wire is a fault state and the OAM state of the second pseudo wire is a normal state, and then the second network equipment packages the label of the second pseudo wire to the first service flow and forwards the first service flow containing the label of the second pseudo wire, so that the first service flow is transmitted through the second pseudo wire.
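The selection and encapsulation logic of this implementation can be sketched as follows. The pseudowire names, label values, and helper functions are hypothetical, and the label stack is represented as a plain dictionary for readability.

```python
def select_pseudowire(working_pw: str, protection_pw: str, oam_state: dict) -> str:
    """Use the working pseudowire while its OAM state is normal, otherwise the protection one."""
    return working_pw if oam_state.get(working_pw) == "normal" else protection_pw

def encapsulate(payload: bytes, pw_label: int, tunnel_label: int) -> dict:
    """Represent pushing the PW label (inner) and the tunnel label (outer) onto the payload."""
    return {"tunnel_label": tunnel_label, "pw_label": pw_label, "payload": payload}

if __name__ == "__main__":
    oam_state = {"pw-1": "failed", "pw-2": "normal"}
    pw_labels = {"pw-1": 2001, "pw-2": 2002}                 # hypothetical labels per pseudowire
    chosen = select_pseudowire("pw-1", "pw-2", oam_state)    # -> "pw-2"
    print(encapsulate(b"user frame", pw_label=pw_labels[chosen], tunnel_label=3001))
```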
In the method provided in this embodiment, when the tunnel OAM is not enabled, the intermediate node sends a failure notification packet to the end point of the tunnel when the tunnel fails, so as to transmit the failure of the intermediate link or the intermediate node to the end point and trigger the end point to perform PW APS switching. On one hand, tunnel OAM does not need to be enabled, so that OAM resources are saved, and bandwidth resources occupied by OAM messages are also saved. On the other hand, the service protection switching can be triggered in time when the intermediate link or the intermediate node fails, so that the service is prevented from being damaged, and the reliability is improved.
In some embodiments, the first network device further performs the following step S201 before performing step S202. The complete flow chart including step S201 and the method shown in fig. 2 can refer to fig. 3.
Step S201, the first network device determines that the first tunnel fails according to the failure of the first link.
The first link is a link through which the first tunnel passes. The first link is a physical link or a virtual link. Optionally, the first link is a link directly connected to the first network device. After detecting that the first link fails, the first network device determines that the first tunnel fails according to the fact that the first tunnel is a tunnel configured on the first link.
Optionally, the first link is an upstream link of the first network device. Upstream is, for example, the direction from the first network device to the ingress point of the first tunnel. When the first link is not failed, the first service flow reaches the first network equipment through the first link, and the first network equipment receives the first service flow from the first link.
Optionally, the first link is a downstream link of the first network device. Downstream is, for example, the direction from the first network device to the egress point of the first tunnel. And when the first link does not have a fault, the first network equipment sends the first service flow through the first link after receiving the first service flow.
The failure of the first link includes, but is not limited to, at least one of an error code, a link disconnection, a frame error, and congestion. For the specific implementation of detecting an error code on the first link, reference may be made to (1) in the term interpretation section. The implementation of detecting a frame error on the first link is similar to the implementation of detecting an error code, except that an error code is usually a failure of the physical layer and is checked through a field in the PPDU, whereas a frame error is usually a failure of the data link layer and is checked through the frame check sequence (FCS) field at the end of the frame. Detecting congestion may be accomplished by determining whether the bandwidth utilization exceeds a threshold or whether the queue length exceeds a threshold.
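A simple sketch of the threshold-based congestion check mentioned above is given below; the threshold values are arbitrary examples chosen for illustration, not values taken from this application.

```python
def congestion_detected(bandwidth_utilization: float, queue_length: int,
                        util_threshold: float = 0.9, queue_threshold: int = 10_000) -> bool:
    """Flag congestion when either measurement exceeds its configured threshold."""
    return bandwidth_utilization > util_threshold or queue_length > queue_threshold

if __name__ == "__main__":
    print(congestion_detected(bandwidth_utilization=0.95, queue_length=1_200))  # True
    print(congestion_detected(bandwidth_utilization=0.40, queue_length=50))     # False
```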
Failures include, without limitation, port level failures or board level failures.
A failure at the port level refers to one or more ports detecting a failure. For example, the first network device includes a first port, where the first port is configured to receive a packet on a first link or send a packet through the first link, a tunnel configured on the first port includes a first tunnel, and step S201 specifically includes: and the first network equipment determines that the first tunnel has a fault according to the fault detected by the first port.
The single board level failure refers to a hardware failure of the single board. For example, the first network device includes a first board, where the first board includes a first port, the first port is configured to receive a packet on a first link or send the packet through the first link, a tunnel configured on the first port includes a first tunnel, and step S201 specifically includes: and the first network equipment determines that the first tunnel fails according to the failure of the first single board.
In some embodiments, the first network device determines a destination device of the fault notification message or a sending direction of the fault notification message according to whether the failed link is an upstream link or a downstream link, which is described in the following first and second cases.
Case one, failure of upstream link
If the first link with the fault is an upstream link of the first network device, the first network device determines the egress point of the first tunnel as a destination device of the first fault notification message, and the first network device sends the first fault notification message to the egress point of the first tunnel, so that the fault of the upstream link can be transmitted to the egress point, and the egress point is triggered to perform PW APS switching. That is, when the first link is an upstream link of the first network device, the second network device is an egress node of the first tunnel.
For example, referring to fig. 1, the first network device is, for example, the network device 103 in fig. 1, the upstream link (first link) of the network device 103 is the link between the network device 103 and the network device 102 in fig. 1, and the first fault notification message is, for example, an FDI message. If port A of network device 103 detects the presence of an error code, network device 103 sends an FDI message to network device 104.
Case two, failure of downstream link
If the first link with the fault is a downstream link of the first network device, the first network device determines an entry end point of the first tunnel as a destination device of the first fault notification message, and the first network device sends the first fault notification message to the entry end point of the first tunnel, so that the fault of the downstream link can be transmitted to the entry end point, and the entry end point is triggered to perform PW APS switching. That is, when the first link is a downstream link of the first network device, the second network device is an ingress point of the first tunnel.
For example, referring to fig. 1, the first network device is, for example, the network device 103 in fig. 1, a downstream link (first link) of the network device 103 is a link between the network device 103 and the network device 104 in fig. 1, and the first fault notification packet is, for example, an FDI packet. If port B of network device 103 detects an error, network device 103 sends an FDI message to network device 101.
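The choice of destination end point in case one and case two can be summarized by the following sketch; the direction strings, LSR IDs, and dictionary layout are hypothetical.

```python
def fault_notification_destination(failed_link_direction: str, tunnel: dict) -> str:
    """Return the end point that should receive the fault notification."""
    if failed_link_direction == "upstream":      # case one: notify the egress end point
        return tunnel["egress_endpoint"]
    if failed_link_direction == "downstream":    # case two: notify the ingress end point
        return tunnel["ingress_endpoint"]
    raise ValueError("direction must be 'upstream' or 'downstream'")

if __name__ == "__main__":
    first_tunnel = {"ingress_endpoint": "10.0.0.1", "egress_endpoint": "10.0.0.4"}
    print(fault_notification_destination("upstream", first_tunnel))    # 10.0.0.4
    print(fault_notification_destination("downstream", first_tunnel))  # 10.0.0.1
```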
Taking the first pseudowire carried on the first tunnel as an example, the switching procedure for one pseudowire on one tunnel is introduced above. Optionally, when a plurality of pseudowires are carried in one tunnel, switching is performed on each pseudowire in the tunnel based on the above method flow.
For ease of understanding, the following description will take two pseudowires, a first pseudowire and a third pseudowire, as an example, to support multiple pseudowires in a tunnel.
The third pseudowire and the first pseudowire are carried over the same tunnel (first tunnel), the third pseudowire and the first pseudowire having the same endpoints. The third pseudowire is a pseudowire configured between the second network device and the third network device. Optionally, when the first tunnel fails, the third pseudowire and the first pseudowire are used for carrying different service flows. The service traffic that the third pseudowire is responsible for carrying is described as the second service traffic.
In the case where the first pseudowire and the third pseudowire are carried on the first tunnel, the first fault notification message is used not only to trigger the second network device to switch the first service traffic from the first pseudowire to the second pseudowire, but also to trigger the second network device to switch the second service traffic from the third pseudowire to the fourth pseudowire. After the second network device receives the first fault notification message, the second network device not only switches the first service traffic from the first pseudowire to the second pseudowire, but also switches the second service traffic from the third pseudowire to the fourth pseudowire. Specifically, the second network device identifies the content of the first fault notification message and determines, according to the content of the first fault notification message (such as the fault notification indicator and the identifier of the first tunnel), that the first tunnel has failed. The second network device queries the correspondence between tunnels and pseudowires and determines that the pseudowires corresponding to the first tunnel include the first pseudowire and the third pseudowire. The second network device sets the OAM state of the first pseudowire and the OAM state of the third pseudowire to the failure state, thereby triggering the switching of the two pseudowires, namely the first pseudowire and the third pseudowire.
The fourth pseudowire is a pseudowire configured between the second network device and the third network device. The relationship between the fourth pseudowire and the third pseudowire may refer to a description of the relationship between the second pseudowire and the first pseudowire.
Optionally, the first tunnel further carries other pseudowires besides the first pseudowire and the third pseudowire, and the switching procedure of the other pseudowires carried on the first tunnel may refer to the introduction of the third pseudowire. Optionally, after receiving the first failure notification message, the second network device checks each pseudo wire borne by the first tunnel, and the second network device sets the OAM state of each pseudo wire borne by the first tunnel to a failure state, thereby triggering each pseudo wire borne by the first tunnel to switch. For example, the second network device switches the service traffic originally responsible for carrying by all pseudowires in the first tunnel to the pseudowires in other tunnels.
In this embodiment, the time sequence of switching different pseudowires carried in the same tunnel is not limited. Taking the first pseudowire and the third pseudowire as an example, the second network device optionally switches the second service traffic from the third pseudowire to the fourth pseudowire while switching the first service traffic from the first pseudowire to the second pseudowire; or the second network device switches the first service flow from the first pseudo wire to the second pseudo wire and then switches the second service flow from the third pseudo wire to the fourth pseudo wire; or the second network device switches the second service flow from the third pseudo wire to the fourth pseudo wire and then switches the first service flow from the first pseudo wire to the second pseudo wire.
By the method, the intermediate node triggers the switching of the pseudo wires on the tunnel by sending a fault notification message, thereby reducing the overall switching time delay of the pseudo wires and improving the pseudo wire switching efficiency. In addition, after the tunnel fails, all the pseudo wires borne by the tunnel are affected, and the service flows borne by all the pseudo wires on the tunnel are possibly damaged, so that a plurality of pseudo wires on the tunnel are triggered to be switched in batch through one failure notification message, the probability of service damage is further reduced, and the reliability is improved.
In the above, with reference to the method shown in fig. 2, how to transfer the failure detected by the intermediate node to one of the endpoints of the tunnel, and trigger the endpoint to perform PW APS protection switching is described. Optionally, another endpoint of the tunnel is also triggered to perform PW APS protection switching, so as to support a dual-end switching mechanism of PW APS. The following description will be made in detail with reference to two implementations, which are described in detail in the following implementation (1) and implementation (2).
In the implementation mode (1), an end point receiving the fault notification message of the intermediate node triggers an opposite end to carry out PW APS protection switching.
For example, the first fault notification message is also used to trigger the second network device to send a second fault notification message to the third network device, so as to trigger the following steps S209 to S212. Referring to fig. 4, a complete flowchart including the following steps S209 to S212 and the method shown in fig. 2 can be obtained. For the steps in fig. 4 that are the same as in fig. 2, reference may be made to the description of the method shown in fig. 2.
Step S209, the second network device generates a second fault notification message.
The second failure notification message indicates that the first tunnel fails and triggers the third network device to switch the first service flow from the first pseudo wire to the second pseudo wire. The initiator of the second failure notification packet is an end point (in this embodiment, the second network device) of the first tunnel, and the responder of the second failure notification packet is another end point (in this embodiment, the third network device) of the first tunnel.
In some embodiments, the intermediate node sends a failure notification message to the egress end point of the tunnel through the downstream link, and the egress end point of the tunnel sends another failure notification message to the ingress end point of the tunnel through the upstream link in response to the failure notification message sent by the intermediate node, so as to trigger the ingress end point of the tunnel to perform PW APS protection switching. When the method is adopted, the responder (i.e., the second network device) of the first fault notification message is the outgoing end point of the first tunnel, the initiator (i.e., the second network device) of the second fault notification message is the outgoing end point of the first tunnel, and the responder (i.e., the third network device) of the second fault notification message is the incoming end point of the first tunnel.
In other embodiments, the intermediate node sends a failure notification message to the ingress end of the tunnel through the upstream link, and the ingress end of the tunnel responds to the failure notification message sent by the intermediate node, and sends another failure notification message to the egress end of the tunnel through the downstream link, so as to trigger the egress end of the tunnel to perform PW APS protection switching. When the method is adopted, a responder (i.e., the second network device) of the first fault notification message is an ingress end point of the first tunnel, an initiator (i.e., the second network device) of the second fault notification message is an ingress end point of the first tunnel, and a responder (i.e., the third network device) of the second fault notification message is an egress end point of the first tunnel.
There are multiple possibilities for the protocol type of the second fault notification message. Optionally, the second fault notification message is an OAM message. Optionally, the second fault notification message is specifically a BDI message or an AIS message. The second fault notification message includes at least one of the label of the first tunnel, the identifier of the first tunnel, or the LSR ID of the third network device.
Optionally, the second fault notification message is a pseudowire-level OAM message (PW OAM message). For example, the second fault notification message is specifically a PW backward defect indication (BDI) message. The main difference between a PW BDI message and a tunnel BDI message is that the PW BDI message carries one more layer of PW label than the tunnel BDI message. The PW BDI message also typically contains the label of the tunnel. In the case where the second fault notification message is a PW BDI message, the second fault notification message includes the label of the first tunnel and the label of the first pseudowire. The label of the first tunnel is used to identify that the tunnel carrying the pseudowire is the first tunnel. The label of the first pseudowire is used to identify that the pseudowire is the first pseudowire. The label of the first tunnel is the outer layer and the label of the first pseudowire is the inner layer. The second fault notification message is specifically used to indicate that the first pseudowire on the first tunnel has failed.
Optionally, the second failure notification message is a tunnel-level OAM message (tunnel OAM message). For example, the second failure notification message is specifically a tunnel BDI message.
Step S210, the second network device sends a second failure notification message to the third network device.
Step S211, the third network device receives the second fault notification packet.
Step S212, the third network device switches the first service traffic from the first pseudowire to the second pseudowire, and the third network device is another endpoint of the first tunnel.
In the implementation manner (1), when a tunnel carries multiple pseudo wires, an endpoint of the tunnel sends a failure notification message to another endpoint specifically includes the following two implementation manners, which are detailed in the implementation manners (1-1) to (1-2).
Implementation (1-1) an endpoint of a tunnel sends a plurality of pseudowire level failure notification messages to another endpoint. The fault notification message of each pseudo wire level is used for triggering one pseudo wire switch on the tunnel. The fault notification message of each pseudo wire level carries the label of the corresponding pseudo wire.
For example, the first tunnel carries a first pseudo wire and a third pseudo wire, the second network device generates a second fault notification message and a fourth fault notification message, and the second fault notification message and the fourth fault notification message are both pseudo wire-level fault notification messages (e.g., PW BDI messages). And the second network equipment sends a second fault notification message and a fourth fault notification message to the third network equipment. And after receiving the second fault notification message, the third network equipment identifies the label of the first pseudo wire in the second fault notification message, so as to determine that the first pseudo wire is the pseudo wire to be switched. The third network device switches the first traffic flow from the first pseudowire to the second pseudowire. And after receiving the fourth fault notification message, the third network equipment identifies the label of the third pseudo wire in the fourth fault notification message, so as to determine that the third pseudo wire is the pseudo wire to be switched. The third network device switches the second traffic flow from the third pseudowire to the fourth pseudowire.
The second failure notification message includes a label of the first tunnel and a label of the first pseudo wire. The second failure notification message is used for indicating that the first pseudo wire on the first tunnel fails, and the second failure notification message is used for triggering the third network device to switch the first service flow from the first pseudo wire to the second pseudo wire.
The fourth failure notification packet includes a label of the first tunnel and a label of the third pseudo wire. The fourth failure notification message is used for indicating that the third pseudo wire on the first tunnel fails, and the fourth failure notification message is used for triggering the third network device to switch the second service flow from the third pseudo wire to the fourth pseudo wire.
In the implementation manner (1-2), an endpoint of a tunnel sends a tunnel-level failure notification message (e.g., a tunnel BDI message) to another endpoint, where the tunnel-level failure notification message is used to trigger all pseudowire switching on the tunnel.
For example, the first tunnel carries a first pseudo wire and a third pseudo wire, the second network device generates a second fault notification message, and the second fault notification message is a tunnel-level fault notification message. And the second network equipment sends a second fault notification message to the third network equipment. And after receiving the second fault notification message, the third network equipment identifies the identifier of the first tunnel in the second fault notification message, so that the first pseudo wire and the third pseudo wire on the first tunnel are determined to be pseudo wires to be switched. The third network device switches the first traffic flow from the first pseudowire to the second pseudowire and switches the second traffic flow from the third pseudowire to the fourth pseudowire.
The second failure notification message includes a label of the first tunnel. The second failure notification message is used for indicating that a failure occurs in the first tunnel, and the second failure notification message is used for triggering the third network device to switch the first service flow from the first pseudo wire to the second pseudo wire and switch the second service flow from the third pseudo wire to the fourth pseudo wire.
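A sketch contrasting implementation (1-1) and implementation (1-2) at the receiving end point follows. Whether the message carries a pseudowire label is modeled here by the presence of a hypothetical "pw" key; the tables and identifiers are illustrative only.

```python
# Hypothetical tables at the far end point (the third network device).
TUNNEL_TO_PWS = {7: ["pw-1", "pw-3"]}
PROTECTION_PW = {"pw-1": "pw-2", "pw-3": "pw-4"}

def handle_bdi(message: dict) -> list:
    """Return the pseudowires onto which traffic is switched at the far end point."""
    if "pw" in message:                        # implementation (1-1): pseudowire-level BDI
        return [PROTECTION_PW[message["pw"]]]
    # implementation (1-2): a tunnel-level BDI switches every pseudowire on the tunnel
    return [PROTECTION_PW[pw] for pw in TUNNEL_TO_PWS[message["tunnel_id"]]]

if __name__ == "__main__":
    print(handle_bdi({"tunnel_id": 7, "pw": "pw-1"}))  # ['pw-2']
    print(handle_bdi({"tunnel_id": 7}))                # ['pw-2', 'pw-4']
```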
In the implementation mode (2), the middle node which detects the fault triggers the two endpoints to carry out PW APS protection switching.
Optionally, implementation (2) is applicable to the case where the type of the failure is one that affects the service but still allows packets to be forwarded.
For example, in the case where the failure is an error code, a frame error, congestion, or the like, the implementation (2) may be adopted.
In the case of implementation (2), step S202 in the method shown in fig. 4 may be replaced with the following step S202", and step S210 in the method shown in fig. 4 may be replaced with the following step S210'. The complete flowchart including the following steps S202" to S212 may refer to fig. 5.
Step S202", when the first tunnel fails, the first network device generates a first fault notification message and a second fault notification message.
The second failure notification message indicates that the first tunnel fails and triggers the third network device to switch the first service flow from the first pseudo wire to the second pseudo wire. Optionally, the second failure notification message is a tunnel-level OAM message (tunnel OAM message). For example, the second failure notification message is specifically a tunnel BDI message.
Step S210', the first network device sends a second failure notification packet to a third network device, where the third network device is another endpoint of the first tunnel.
In this embodiment, the timing of step S210' and step S204 is not limited. Optionally, step S204 is executed first and then step S210' is executed; or step S210' is executed first and then step S204 is executed; or step S204 and step S210' are performed simultaneously.
Step S211, the third network device receives the second failure notification message.
Step S212, the third network device switches the first service traffic from the first pseudowire to the second pseudowire.
If only one of the two endpoints of the PW performs PW switching, but the other endpoint does not perform PW switching, a situation that traffic in one direction flows away from the working PW and traffic in the other direction flows away from the protection PW may occur. Through the implementation mode (1) and the implementation mode (2), the fault detected by the intermediate node is transmitted to the two endpoints of the tunnel, and the two endpoints of the tunnel are triggered to perform PW APS switching, so that a dual-end switching mechanism of PW APS is supported, service flows in two directions are supported to be switched from a fault path to a protection path, the service flows in the two directions are protected, and the reliability is further improved.
In the above, the first tunnel is taken as an example to describe how the intermediate node, when detecting a link failure, notifies the end point of a tunnel passing through the link of the failure. In many scenarios, multiple tunnels may be configured on one link, that is, multiple tunnels share a common link. When the intermediate node detects a link failure, the intermediate node optionally notifies the endpoints of the multiple tunnels passing through the link of the failure respectively, so as to trigger switching of the pseudowires carried by the multiple tunnels. Next, the flow for the case of multiple tunnels is described by taking the first tunnel and the second tunnel as an example. In the case of multiple tunnels, in the method shown in fig. 2, step S201 may be replaced by the following step S201' and step S202 may be replaced by the following step S202', and the method shown in fig. 2 further includes the following steps S203, S205, and S207. The complete flowchart including steps S201' to S207 and the method shown in fig. 2 can refer to fig. 6.
Step S201', the first network device determines that both the first tunnel and the second tunnel have a failure according to the failure of the first link.
After detecting that the first link fails, the first network device determines that both the first tunnel and the second tunnel fail according to the fact that both the first tunnel and the second tunnel are tunnels configured on the first link.
The second tunnel passes through the first link, and the second tunnel carries a fifth pseudowire. Optionally, each node traversed by the second tunnel does not enable OAM detection.
Step S202', the first network device generates a first fault notification message and a third fault notification message.
The third failure notification message indicates that the second tunnel fails, and triggers the fourth network device to switch the third service traffic from the fifth pseudo wire to the sixth pseudo wire. The third fault notification message may optionally be an FDI message or an AIS message.
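The generation of one fault notification per tunnel configured on the failed link (step S202') can be sketched as follows; the link name, labels, and end-point addresses are hypothetical.

```python
# Hypothetical mapping from a link to the tunnels configured on it.
LINK_TO_TUNNELS = {
    "link-1": [
        {"tunnel_id": 7, "out_label": 1001, "endpoint": "10.0.0.4"},  # first tunnel
        {"tunnel_id": 9, "out_label": 2001, "endpoint": "10.0.0.6"},  # second tunnel
    ],
}

def notifications_for_failed_link(link: str) -> list:
    """Build one fault notification per tunnel that passes through the failed link."""
    return [
        {"indicator": "FAULT", "tunnel_id": t["tunnel_id"],
         "tunnel_label": t["out_label"], "dest": t["endpoint"]}
        for t in LINK_TO_TUNNELS.get(link, [])
    ]

if __name__ == "__main__":
    for message in notifications_for_failed_link("link-1"):
        print(message)
```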
Step S203, the first network device sends a third failure notification message to the fourth network device.
The fourth network device is an endpoint of the second tunnel. Optionally, the first link is an upstream link of the first network device, and the fourth network device is an egress end point of the second tunnel; or, the first link is a downstream link of the first network device, and the fourth network device is an ingress point of the second tunnel.
Optionally, the fourth network device and the second network device are different devices. Alternatively, the fourth network device and the second network device are the same device. In other words, the second network device is both an endpoint of the first tunnel and an endpoint of the second tunnel.
The timing of step S203 and step S204 is not limited in this embodiment. Optionally, step S204 is executed first and then step S203 is executed; or step S203 is executed first and then step S204 is executed; or step S204 and step S203 are performed simultaneously.
Step S205, the fourth network device receives the third failure notification message from the first network device.
In some embodiments, after the fourth network device receives the third fault notification message, the fourth network device identifies the content of the third fault notification message, and determines that the second tunnel has a fault according to the content of the third fault notification message (e.g., the fault notification indicator and the identifier of the second tunnel). And the fourth network equipment inquires the corresponding relation between the tunnel and the pseudo wires and determines that the pseudo wires corresponding to the second tunnel comprise a fifth pseudo wire. And the fourth network equipment sets the OAM state of the fifth pseudo wire to be a fault state, so that the fifth pseudo wire is triggered to be switched.
Step S207, the fourth network device switches the third service traffic from the fifth pseudowire to the sixth pseudowire.
The specific process of the fourth network device for switching the pseudo wire may refer to the process of the second network device for switching the pseudo wire.
Optionally, one or more pseudowires are carried over the second tunnel in addition to the fifth pseudowire. Optionally, the fourth network device switches each pseudo wire loaded on the second tunnel under the trigger of the third failure notification packet when the second tunnel carries multiple pseudo wires.
In the method provided by this embodiment, after detecting that one link fails, the intermediate node sends fault notification messages respectively to the endpoints of the multiple tunnels passing through the failed link, so as to trigger switching of all pseudowires carried by the multiple tunnels, thereby avoiding damage to the services in the multiple tunnels. In addition, none of the multiple tunnels passing through the intermediate node needs to enable tunnel OAM, which further saves OAM resources and bandwidth resources. In particular, 100% of the OAM resources may be saved. Taking the example that the sending period of the OAM message is 3.3 ms and the data amount is 74 bytes, in the case where one network element passes through 1000 tunnels, the saved bandwidth resource is 74 × 8 × 1000/3.3 = 166 Mbit/s.
The above embodiments describe a mechanism for triggering pseudowire switch, and how to trigger pseudowire switch back is described below.
Optionally, the intermediate node stops sending the fault notification message to trigger the pseudowire back-switch when the fault is eliminated, which is described in detail below.
For the first network device that detects the failure, the first network device periodically sends the first fault notification message to the second network device for the duration of the failure of the first tunnel. That is, the first network device continuously sends the first fault notification message from when it determines that the first tunnel has failed until the failure is eliminated. When the first tunnel recovers from the failure state, the first network device stops sending the first fault notification message to the second network device. Optionally, the first network device determines that the first tunnel has recovered from the failure state according to the elimination of the failure on the first link.
Optionally, the period at which the first network device sends the first fault notification message is consistent with the period at which the tunnel FDI message is sent when tunnel OAM is enabled. For example, the sending period of the first fault notification message is 3.3 milliseconds (ms), 10 ms, 100 ms, or 1 second (s).
For the second network device that switches the pseudowire, the second network device counts the duration for which the first fault notification message has not been received and determines whether this duration reaches a set duration. When the duration for which the first fault notification message has not been received reaches the set duration, the second network device switches the first service traffic from the second pseudowire back to the first pseudowire. The set duration is, for example, 3 minutes.
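The timeout-based switch-back at the end point can be sketched with a simple monitor; the hold time of 3 minutes follows the example above, and the class and method names are hypothetical.

```python
import time

class SwitchBackMonitor:
    """Switch traffic back to the working pseudowire if no fault notification
    has been received for hold_time seconds (3 minutes in the example above)."""

    def __init__(self, hold_time: float = 180.0):
        self.hold_time = hold_time
        self.last_notification = time.monotonic()

    def on_fault_notification(self) -> None:
        """Record that a fault notification was just received."""
        self.last_notification = time.monotonic()

    def should_switch_back(self) -> bool:
        """True when the set duration has elapsed without a fault notification."""
        return time.monotonic() - self.last_notification >= self.hold_time

if __name__ == "__main__":
    monitor = SwitchBackMonitor(hold_time=0.1)   # shortened hold time for the demo
    time.sleep(0.2)
    print(monitor.should_switch_back())          # True: switch back to the working pseudowire
```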
Implementing the pseudowire switch-back in the above manner, on the one hand, conforms to the protocol standard and, on the other hand, is convenient.
Alternatively, the intermediate node triggers the pseudowire switch-back by actively sending a switch-back request when the failure is eliminated, which is explained in detail below.
For the first network device that detects the failure, when the first tunnel recovers from the failure state, the first network device generates a switch-back request message, and the switch-back request message instructs the second network device to switch the service packets carried by the second pseudowire back to the first pseudowire; the first network device sends the switch-back request message to the second network device. For the second network device that switches the pseudowire, when the second network device receives the switch-back request message, the second network device switches the first service traffic from the second pseudowire back to the first pseudowire.
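A minimal sketch of this switch-back request mechanism is given below, with one function for the intermediate node and one for the end point; the message format and names are hypothetical.

```python
def on_failure_cleared(tunnel_id: int) -> dict:
    """Intermediate node: build a switch-back request once the failure is eliminated."""
    return {"indicator": "SWITCH_BACK", "tunnel_id": tunnel_id}

def on_switch_back_request(request: dict, tunnel_to_pws: dict) -> list:
    """End point: return the pseudowires whose traffic moves back to the working pseudowire."""
    return tunnel_to_pws.get(request["tunnel_id"], [])

if __name__ == "__main__":
    request = on_failure_cleared(tunnel_id=7)
    print(on_switch_back_request(request, {7: ["pw-1"]}))  # ['pw-1']
```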
The method shown in FIG. 2 above is illustrated below with reference to an example. The FDI message or the AIS message in the following example 1 is an example of the fault notification message in the method shown in fig. 2, and the error code in the following example 1 is an example of the fault.
Example 1
Example 1 includes the following step one and step two.
Step one, after the intermediate node detects the error code, the intermediate node generates an FDI message and sends the FDI message to the egress end point of the tunnel.
For example, referring to fig. 1, after port A of the network device 103 detects that an error code exists, the network device 103 identifies the tunnel configured on port A. The network device 103 generates an FDI message according to the ingress label of the tunnel. The message generated by network device 103 is identical to the message generated when tunnel OAM is enabled.
The implementation of generating the FDI message according to the tunnel label is consistent with the implementation of generating the FDI message when tunnel OAM is enabled; reference may be made to the introduction of the Y.1711 standard. Specifically, the network device 103 adds the tunnel label to the FDI message to form the FDI message of the tunnel. The tunnel label in the FDI message is used to identify which tunnel the FDI message corresponds to. A link usually has only one egress port, and a plurality of tunnels are optionally configured on the port; because the ingress label of each tunnel is unique, the specific tunnel can be identified through the tunnel label.
Optionally, the FDI packet generated by the network device 103 is an FDI packet conforming to the y.1711 protocol. For example, referring to fig. 7, fig. 7 shows a format schematic diagram of an FDI packet, and some key fields in the FDI packet are described below with reference to fig. 7.
The Destination Address (DA) field in the FDI message is used to carry the MAC address of the link peer in the sending direction of the FDI message, that is, the MAC address of the next-hop network element connected to the egress port used when the FDI message is sent.
The Source Address (SA) field in the FDI message carries the MAC address of the local end of the link in the sending direction of the FDI message, that is, the MAC address of the egress port used when the FDI message is sent.
The Label Switched Path (LSP) Lab field in the FDI message carries the tunnel entry label.
The function type (function type) field in the FDI packet carries FDI information, where the FDI information is used to identify that the packet is an FDI packet, and the value of the FDI information is 0x02.
The TTSI field in the FDI message occupies 0 to 20 bytes. The TTSI field is used to carry the LSR ID and the tunnel ID of the tunneled endpoint.
A Virtual Local Area Network (VLAN) field in the FDI packet carries a default VLAN number of an egress port used when the FDI packet is sent.
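For readability, the FDI fields listed above can be grouped into a simple container as below. This is only an illustration of what each field holds; it is not the Y.1711 on-the-wire encoding, and all example values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class FdiFields:
    """Simplified container for the FDI fields discussed above (not the Y.1711 wire format)."""
    dest_mac: str         # DA: MAC address of the link peer in the sending direction
    src_mac: str          # SA: MAC address of the local egress port
    vlan: int             # default VLAN of the egress port
    lsp_label: int        # LSP Lab: ingress label of the tunnel
    function_type: int    # 0x02 identifies the packet as an FDI packet
    endpoint_lsr_id: str  # carried in the TTSI field
    tunnel_id: int        # carried in the TTSI field

if __name__ == "__main__":
    fdi = FdiFields(dest_mac="00:11:22:33:44:55", src_mac="00:aa:bb:cc:dd:ee",
                    vlan=100, lsp_label=1001, function_type=0x02,
                    endpoint_lsr_id="10.0.0.4", tunnel_id=7)
    print(fdi)
```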
The FDI message can be replaced by an AIS message of the Y.1731 protocol. For example, referring to fig. 8, fig. 8 shows a format diagram of an AIS message, and some key fields of the AIS message are described below with reference to fig. 8.
The DA field in the AIS message is used for carrying the MAC address of the link opposite end in the transmitting direction of the AIS message, namely the MAC address of the next-hop network element connected with the output port used when the AIS message is transmitted.
The SA field in the AIS message carries the MAC address of the local end of the link of the AIS message transmission direction, i.e., the MAC address of the egress port used when transmitting the AIS message.
The LSP Lab field in the AIS message carries a tunnel entry label.
The OPCode field in the AIS message carries AIS information; the AIS information is used to identify that the message is an AIS message, and its value is 0x21.
The VLAN field in the AIS message carries the default VLAN number of the egress port used when sending the AIS message.
The following introduces a method for acquiring some information in the FDI message or the AIS message.
The LSR ID and tunnel ID are obtained from the tunnel configuration. Specifically, the intermediate node of the tunnel holds tunnel configuration information that includes LSR IDs of both endpoints of the tunnel and a tunnel ID. And the intermediate node of the tunnel acquires the LSR ID and the tunnel ID from the tunnel configuration information in the process of generating the FDI message.
The MAC address of the link peer is obtained in advance through Address Resolution Protocol (ARP) packet learning. Specifically, the network device 103 and its next hop are each configured with an IP address. The network device 103 and its next hop send ARP packets to each other before sending service packets. After receiving the ARP packet sent by the link peer, the network device 103 obtains the MAC address of the link peer from the ARP packet and generates an ARP table according to the MAC address, where the ARP table stores the MAC address of the link peer. The network device 103 queries the ARP table to obtain the MAC address in the process of generating the FDI message.
In the process of forwarding the FDI message or the AIS message from the intermediate node to the tunnel endpoint, the DA field, the SA field, the VLAN field, and the LSP LAB field change with each hop of forwarding the message. The function type field in the FDI message and the OPCode field in the AIS message remain unchanged during the forwarding process, that is, the function type field in the FDI message or the OPCode field in the AIS message received by the tunnel endpoint has the same content as the function type field in the FDI message or the OPCode field in the AIS message sent by the intermediate node.
While the error code persists, the network device 103 continues to send FDI messages. When the error code is cleared, the network device 103 no longer sends FDI messages.
And step two, the end points of the tunnel identify the FDI messages and trigger PW APS switching.
For example, referring to fig. 1, after the network device 104 receives the FDI message sent by the network element that detected the failure (the network device 103), the network device 104 identifies the FDI message, sets the FDI status (FDI status) to 1, and then performs the following steps (1) to (3).
And (1) checking all PWs borne by the tunnel.
And (2) setting the near-end state of the PW OAM to SD (signal degrade, indicating that an error code exists), thereby triggering the PW APS switching.
And (3) on the basis of step (2), the network device 104 also sends a PW BDI message to the peer end (the network device 101). The network device 101 identifies the PW indicated by the PW BDI message, and under the trigger of the PW BDI message, the network device 101 performs PW APS switching on the PW.
If the network device 104 does not receive the FDI message within 3 minutes, the network device 104 sets the FDI state to 0, restores the PW OAM state to normal, and the network device 104 does not send any PW BDI message any more, and triggers the network device 101 to perform PW APS switchback.
This embodiment takes an example in which the network device 104 sends a PW BDI message to the network device 101. In other embodiments, network device 104 sends a tunnel BDI message to network device 101. Network device 101 identifies the tunnel indicated by the tunnel BDI message. Under the trigger of the tunnel BDI packet, the network device 101 performs PW APS switching on all PWs loaded on the tunnel.
The FDI state is a flag bit used in software processing, specifically a flag bit used by software to identify whether the PW OAM state is normal. When the value of the FDI state is 1, a fault exists and switching is needed. When the value of the FDI state is 0, no fault exists. Setting the FDI state in the PW APS flow is an optional step. In other embodiments, after receiving the FDI message, the network device 104 does not perform the step of setting the FDI state, but directly sets the PW OAM states corresponding to all PWs in the tunnel to the failure state.
Fig. 9 is a schematic structural diagram of a network device according to an embodiment of the present application, and the network device 500 shown in fig. 9 is used to implement a method flow at an intermediate node side in an embodiment of the present application. The network device 500 comprises a processing unit 501 and a transmitting unit 502.
Optionally, from the application scenario shown in fig. 1, the network device 500 shown in fig. 9 is the network device 103 in fig. 1.
Optionally, in view of fig. 2, the network device 500 shown in fig. 9 is a first network device in the method flow shown in fig. 2. The processing unit 501 is configured to support the network device 500 to execute step S202 in fig. 2. The sending unit 502 is configured to support the network device 500 to execute step S204 in fig. 2.
Optionally, referring to fig. 3, the network device 500 shown in fig. 9 is a first network device in the method flow shown in fig. 3. The processing unit 501 is configured to support the network device 500 to perform step S201 and step S202 in fig. 3. The sending unit 502 is configured to support the network device 500 to execute step S204 in fig. 3.
Optionally, in conjunction with fig. 4, the network device 500 shown in fig. 9 is the first network device in the method flow shown in fig. 4. The processing unit 501 is configured to support the network device 500 to execute step S202 in fig. 4. The sending unit 502 is configured to support the network device 500 to execute step S204 in fig. 4.
Optionally, in conjunction with fig. 5, the network device 500 shown in fig. 9 is the first network device in the method flow shown in fig. 5. The processing unit 501 is used to support the network device 500 to execute step S202 ″ in fig. 5. The sending unit 502 is used to support the network device 500 to perform step S204 and step S210' in fig. 5.
Optionally, in conjunction with fig. 6, the network device 500 shown in fig. 9 is the first network device in the method flow shown in fig. 6. The processing unit 501 is used to support the network device 500 to execute step S201 'and step S202' in fig. 6. The sending unit 502 is configured to support the network device 500 to perform step S203 and step S204 in fig. 6.
The apparatus embodiment depicted in fig. 9 is merely illustrative, and for example, the division of the above units is only one type of logical functional division, and other division manners may be available in actual implementation, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. Each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The various elements in network device 500 are implemented in whole or in part by software, hardware, firmware, or any combination thereof.
In the case of software implementation, for example, the processing unit 501 is implemented by a software functional unit generated by at least one processor 701 in fig. 11 after reading program codes stored in a memory 702. For another example, the processing unit 501 is implemented by a software functional unit generated by the central processing unit 811 of the main control board 810 in fig. 12 reading the program code stored in the memory 812.
In the case of a hardware implementation, for example, the above units in fig. 9 are implemented by different hardware in the network device. For example, the processing unit 501 is implemented by a part of the processing resources (for example, one core or two cores in a multi-core processor) in at least one processor 701 in fig. 11, or the processing unit 501 is implemented by a field-programmable gate array (FPGA), a coprocessor, or another programmable device. The sending unit 502 is implemented by, for example, the network interface 703 in fig. 11. As another example, the sending unit 502 is implemented by the physical interface card 833 of the interface board 830 in fig. 12.
Fig. 10 is a schematic structural diagram of a network device according to an embodiment of the present application, and the network device 600 shown in fig. 10 is used to implement a method flow at a tunnel endpoint side according to an embodiment of the present application. The network device 600 comprises a receiving unit 601 and a processing unit 602. Optionally, the network device 600 further includes a sending unit 603.
Optionally, in the application scenario shown in fig. 1, the network device 600 shown in fig. 10 is the network device 104 or the network device 101 in fig. 1.
Optionally, referring to fig. 2, the network device 600 shown in fig. 10 is a second network device in the method flow shown in fig. 2. The receiving unit 601 is configured to support the network device 600 to execute step S206 in fig. 2. The processing unit 602 is configured to support the network device 600 to execute step S208 in fig. 2.
Optionally, referring to fig. 3, the network device 600 shown in fig. 10 is a second network device in the method flow shown in fig. 3. The receiving unit 601 is used to support the network device 600 to execute step S206 in fig. 3. The processing unit 602 is configured to support the network device 600 to execute step S208 in fig. 3.
Optionally, referring to fig. 4, the network device 600 shown in fig. 10 is a second network device in the method flow shown in fig. 4. The receiving unit 601 is used to support the network device 600 to execute step S206 in fig. 4. The processing unit 602 is configured to support the network device 600 to perform step S208 and step S209 in fig. 4. The sending unit 603 is configured to support the network device 600 to perform step S210 in fig. 4. Alternatively, the network device 600 shown in fig. 10 is a third network device in the method flow shown in fig. 4. The receiving unit 601 is used to support the network device 600 to execute step S211 in fig. 4. The processing unit 602 is configured to support the network device 600 to execute step S212 in fig. 4.
Optionally, referring to fig. 5, the network device 600 shown in fig. 10 is a second network device in the method flow shown in fig. 5. The receiving unit 601 is used to support the network device 600 to execute step S206 in fig. 5. The processing unit 602 is configured to support the network device 600 to execute step S208 in fig. 5. Alternatively, network device 600 shown in fig. 10 is a third network device in the method flow shown in fig. 5. The receiving unit 601 is used to support the network device 600 to execute step S211 in fig. 5. The processing unit 602 is configured to support the network device 600 to execute step S212 in fig. 5.
Optionally, referring to fig. 6, the network device 600 shown in fig. 10 is a second network device in the method flow shown in fig. 6. The receiving unit 601 is used to support the network device 600 to execute step S206 in fig. 6. The processing unit 602 is configured to support the network device 600 to execute step S208 in fig. 6. Alternatively, network device 600 shown in fig. 10 is a third network device in the method flow shown in fig. 6. The receiving unit 601 is used to support the network device 600 to execute step S205 in fig. 6. The processing unit 602 is configured to support the network device 600 to execute step S207 in fig. 6.
The apparatus embodiment depicted in fig. 10 is merely illustrative, and for example, the above described division of units is only one logical division, and in actual implementation, there may be other divisions, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. Each functional unit in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The various elements in network device 600 are implemented in whole or in part by software, hardware, firmware, or any combination thereof.
In the case of software implementation, for example, the processing unit 602 is implemented by a software functional unit generated after at least one processor 701 in fig. 11 reads the program code stored in the memory 702. For another example, the processing unit 602 is implemented by a software functional unit generated after the central processing unit 811 of the main control board 810 in fig. 12 reads the program code stored in the memory 812.
In the case of a hardware implementation, the above units in fig. 10 are implemented by different hardware in a network device. For example, the processing unit 602 is implemented by a part of the processing resources (for example, one core or two cores of a multi-core processor) in the at least one processor 701 in fig. 11, or by a programmable device such as a field-programmable gate array (FPGA) or a coprocessor. The receiving unit 601 and the sending unit 603 are implemented by, for example, the network interface 703 in fig. 11. As another example, the receiving unit 601 and the sending unit 603 are implemented by the physical interface card 833 of the interface board 830 in fig. 12.
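Similarly, the following is a minimal, assumption-based sketch of the endpoint-side units in fig. 10: a fault notification handed over by the receiving unit causes the processing unit to switch the service traffic from the working pseudo wire to the protection pseudo wire. All identifiers (faultNotification, pwSelector, onFaultNotification) are illustrative, not from the embodiments.

```go
// Illustrative sketch only: models the tunnel endpoint switching a service
// flow to the protection pseudo wire when a fault notification arrives.
package main

import "fmt"

type faultNotification struct{ tunnelID string }

// pwSelector stands in for the PW APS state kept at the tunnel endpoint.
type pwSelector struct {
	workingPW, protectionPW string
	activePW                string
	workingTunnelID         string // tunnel that carries the working pseudo wire
}

// onFaultNotification is the analogue of the processing unit 602: if the
// failed tunnel carries the working pseudo wire, switch to the protection one.
func (s *pwSelector) onFaultNotification(n faultNotification) {
	if n.tunnelID == s.workingTunnelID && s.activePW == s.workingPW {
		s.activePW = s.protectionPW
	}
}

func main() {
	sel := &pwSelector{workingPW: "pw1", protectionPW: "pw2", activePW: "pw1", workingTunnelID: "tunnel-1"}
	// Analogue of the receiving unit 601 delivering the notification.
	sel.onFaultNotification(faultNotification{tunnelID: "tunnel-1"})
	fmt.Println("active pseudo wire:", sel.activePW) // prints pw2 after switching
}
```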
The following illustrates a basic hardware structure of a network device.
Fig. 11 is a schematic structural diagram of a network device according to an embodiment of the present application.
Optionally, in the application scenario shown in fig. 1, the network device 700 shown in fig. 11 is the network device 103, the network device 104, or the network device 101 in fig. 1.
Optionally, referring to fig. 2, the network device 700 shown in fig. 11 is a first network device in the method flow shown in fig. 2. The processor 701 is configured to support the network device 700 to execute step S202 in fig. 2. The network interface 703 is used to support the network device 700 to execute step S204 in fig. 2. Alternatively, the network device 700 shown in fig. 11 is the second network device in the method flow shown in fig. 2. The network interface 703 is used to support the network device 700 to execute step S206 in fig. 2. The processor 701 is configured to support the network device 700 to execute step S208 in fig. 2.
Optionally, referring to fig. 3, the network device 700 shown in fig. 11 is a first network device in the method flow shown in fig. 3. The processor 701 is configured to support the network device 700 to perform step S201 and step S202 in fig. 3. The network interface 703 is used to support the network device 700 to execute step S204 in fig. 3. Alternatively, network device 700 shown in fig. 11 is a second network device in the method flow shown in fig. 3. The network interface 703 is used to support the network device 700 to execute step S206 in fig. 3. The processor 701 is configured to support the network device 700 to execute step S208 in fig. 3.
Optionally, referring to fig. 4, the network device 700 shown in fig. 11 is the first network device in the method flow shown in fig. 4. The processor 701 is configured to support the network device 700 to execute step S202 in fig. 4. The network interface 703 is used to support the network device 700 to execute step S204 in fig. 4. Alternatively, network device 700 shown in fig. 11 is a second network device in the method flow shown in fig. 4. The network interface 703 is used to support the network device 700 to execute step S206 in fig. 4. The processor 701 is configured to support the network device 700 to perform step S208 and step S209 in fig. 4. The network interface 703 is used to support the network device 700 to perform step S210 in fig. 4. Alternatively, network device 700 shown in fig. 11 is a third network device in the method flow shown in fig. 4. The network interface 703 is used to support the network device 700 to execute step S211 in fig. 4. The processor 701 is configured to support the network device 700 to execute step S212 in fig. 4.
Optionally, referring to fig. 5, the network device 700 shown in fig. 11 is a first network device in the method flow shown in fig. 5. The processor 701 is configured to support the network device 700 to execute step S202″ in fig. 5. The network interface 703 is used to support the network device 700 to perform steps S204 and S210' in fig. 5. Alternatively, network device 700 shown in fig. 11 is a second network device in the method flow shown in fig. 5. The network interface 703 is used to support the network device 700 to execute step S206 in fig. 5. The processor 701 is configured to support the network device 700 to execute step S208 in fig. 5. Alternatively, network device 700 shown in fig. 11 is a third network device in the method flow shown in fig. 5. The network interface 703 is used to support the network device 700 to execute step S211 in fig. 5. The processor 701 is configured to support the network device 700 to execute step S212 in fig. 5.
Optionally, referring to fig. 6, the network device 700 shown in fig. 11 is the first network device in the method flow shown in fig. 6. The processor 701 is configured to support the network device 700 to perform step S201 'and step S202' in fig. 6. The network interface 703 is used to support the network device 700 to perform steps S203 and S204 in fig. 6. Alternatively, network device 700 shown in fig. 11 is a second network device in the method flow shown in fig. 6. The network interface 703 is used to support the network device 700 to execute step S206 in fig. 6. The processor 701 is configured to support the network device 700 to execute step S208 in fig. 6. Alternatively, network device 700 shown in fig. 11 is a third network device in the method flow shown in fig. 6. The network interface 703 is used to support the network device 700 to execute step S205 in fig. 6. The processor 701 is configured to support the network device 700 to execute step S207 in fig. 6.
The network device 700 includes at least one processor 701, a memory 702, and at least one network interface 703.
The processor 701 is, for example, a central processing unit (CPU), a network processor (NP), a graphics processing unit (GPU), a neural-network processing unit (NPU), a data processing unit (DPU), a microprocessor, or one or more integrated circuits configured to implement the solutions of the present application. For example, the processor 701 includes an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD is, for example, a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof.
The memory 702 is, for example, but not limited to, a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including a compact disc, a laser disc, an optical disc, a digital versatile disc, a Blu-ray disc, and the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Optionally, the memory 702 is separate and is coupled to the processor 701 through the internal connection 704. Alternatively, the memory 702 and the processor 701 are integrated.
The network interface 703 is any transceiver or similar apparatus, and is configured to communicate with another device or a communication network. The network interface 703 includes, for example, at least one of a wired network interface or a wireless network interface. The wired network interface is, for example, an Ethernet interface. The Ethernet interface is, for example, an optical interface, an electrical interface, or a combination thereof. The wireless network interface is, for example, a wireless local area network (WLAN) interface, a cellular network interface, or a combination thereof.
In some embodiments, the processor 701 includes one or more CPUs, such as CPU0 and CPU1 shown in fig. 11.
In some embodiments, the network device 700 optionally includes multiple processors, such as the processor 701 and the processor 705 shown in fig. 11. Each of these processors is, for example, a single-core processor (single-CPU) or a multi-core processor (multi-CPU). A processor here may refer to one or more devices, circuits, and/or processing cores for processing data (for example, computer program instructions).
In some embodiments, the network device 700 further includes an internal connection 704. The processor 701, the memory 702, and the at least one network interface 703 are connected through the internal connection 704. The internal connection 704 includes a path that transfers information between the foregoing components. Optionally, the internal connection 704 is a board or a bus. Optionally, the internal connection 704 is divided into an address bus, a data bus, a control bus, and the like.
In some embodiments, the network device 700 further includes an input/output interface 706. The input/output interface 706 is connected to the internal connection 704.
Optionally, the processor 701 implements the method in the foregoing embodiments by reading the program code 710 stored in the memory 702, or the processor 701 implements the method by using internally stored program code. When the processor 701 implements the method in the foregoing embodiments by reading the program code 710 stored in the memory 702, the program code that implements the method provided in the embodiments of the present application is stored in the memory 702.
For more details about how the processor 701 implements the foregoing functions, refer to the descriptions in the foregoing method embodiments; details are not repeated here.
Fig. 12 is a schematic structural diagram of a network device according to an embodiment of the present application. Optionally, the network device 700 in fig. 11 takes the structure of the network device 800 shown in fig. 12.
The network device 800 includes: a main control board 810 and an interface board 830.
The main control board 810 is also called a main processing unit (MPU) or a route processor card. The main control board 810 is configured to control and manage each component in the network device 800, including routing computation, device management, device maintenance, and protocol processing functions. The main control board 810 includes: a central processor 811 and a memory 812.
The interface board 830 is also called a line processing unit (LPU), a line card, or a service board. The interface board 830 is configured to provide various service interfaces and to forward packets. The service interfaces include, but are not limited to, an Ethernet interface, such as a flexible Ethernet client interface (FlexE client), a packet over SONET/SDH (POS) interface, and the like. The interface board 830 includes: a central processor 831, a network processor 832, a forwarding table entry memory 834, and a physical interface card (PIC) 833.
The central processor 831 on the interface board 830 is used for controlling and managing the interface board 830 and communicating with the central processor 811 on the main control board 810.
The network processor 832 is configured to forward packets. The network processor 832 takes the form of, for example, a forwarding chip. Specifically, the network processor 832 forwards a received packet based on the forwarding table stored in the forwarding table entry memory 834: if the destination address of the packet is the address of the network device 800, the packet is sent to a CPU (for example, the central processing unit 811) for processing; if the destination address of the packet is not the address of the network device 800, the next hop and the outbound interface corresponding to the destination address are found in the forwarding table, and the packet is forwarded through that outbound interface. Uplink packet processing includes inbound-interface processing and forwarding table lookup; downlink packet processing includes forwarding table lookup and the like.
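As a rough, assumption-based illustration of the forwarding decision just described (not the actual logic of the network processor 832), the table lookup can be sketched as follows; all identifiers are hypothetical.

```go
// Illustrative sketch of the forwarding decision: punt to the CPU when the
// packet is addressed to the device, otherwise forward via the table entry.
package main

import "fmt"

type forwardingEntry struct{ nextHop, outInterface string }

type forwardingPlane struct {
	ownAddress string
	table      map[string]forwardingEntry // stands in for forwarding table entry memory 834
}

func (f forwardingPlane) handle(destination string) string {
	if destination == f.ownAddress {
		return "send to CPU for processing"
	}
	if e, ok := f.table[destination]; ok {
		return fmt.Sprintf("forward via %s to next hop %s", e.outInterface, e.nextHop)
	}
	return "drop: no forwarding entry"
}

func main() {
	fp := forwardingPlane{
		ownAddress: "10.0.0.1",
		table:      map[string]forwardingEntry{"10.0.0.9": {nextHop: "10.0.0.2", outInterface: "eth1"}},
	}
	fmt.Println(fp.handle("10.0.0.1")) // addressed to the device itself
	fmt.Println(fp.handle("10.0.0.9")) // matches a forwarding entry
	fmt.Println(fp.handle("10.0.0.7")) // no entry
}
```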
The physical interface card 833 implements the interworking function of the physical layer: original traffic enters the interface board 830 through the physical interface card 833, and processed packets are sent out through the physical interface card 833. The physical interface card 833, also called a daughter card, can be installed on the interface board 830, and is responsible for converting optical signals into packets, checking the validity of the packets, and forwarding them to the network processor 832 for processing. In some embodiments, the central processor may also perform the functions of the network processor 832, for example by implementing software forwarding on a general-purpose CPU, so that the network processor 832 is not required.
Optionally, the network device 800 includes a plurality of interface boards. For example, the network device 800 further includes an interface board 840, and the interface board 840 includes: a central processor 841, a network processor 842, a forwarding table entry memory 844, and a physical interface card 843.
Optionally, the network device 800 further includes a switch board 820. The switch board 820 is also called a switch fabric unit (SFU). When the network device has a plurality of interface boards 830, the switch board 820 is configured to complete data exchange between the interface boards. For example, the interface board 830 communicates with the interface board 840 through the switch board 820.
The main control board 810 is coupled to the interface board 830. For example, the main control board 810, the interface board 830, the interface board 840, and the switch board 820 are connected to a system backplane through a system bus to implement interworking. In a possible implementation, an inter-process communication (IPC) channel is established between the main control board 810 and the interface board 830, and the main control board 810 and the interface board 830 communicate with each other through the IPC channel.
Logically, the network device 800 includes a control plane and a forwarding plane. The control plane includes the main control board 810 and the central processor 831; the forwarding plane includes the components that perform forwarding, such as the forwarding table entry memory 834, the physical interface card 833, and the network processor 832. The control plane performs functions such as routing, generating the forwarding table, processing signaling and protocol packets, and configuring and maintaining the device state, and issues the generated forwarding table to the forwarding plane. In the forwarding plane, the network processor 832 looks up the forwarding table issued by the control plane to forward the packets received by the physical interface card 833. The forwarding table issued by the control plane is stored, for example, in the forwarding table entry memory 834. In some embodiments, the control plane and the forwarding plane are completely separate and are not on the same device.
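The control-plane/forwarding-plane split described above can be sketched as follows. This is an illustrative example under assumed names, not the device's actual implementation: the control plane computes a forwarding table and issues it to the forwarding table entry memory of each interface board.

```go
// Illustrative sketch: the control plane pushes its computed forwarding table
// down to every interface board, which then forwards based only on that table.
package main

import "fmt"

type route struct{ prefix, nextHop string }

// controlPlane stands in for the main control board plus control software.
type controlPlane struct{ routes []route }

// interfaceBoard stands in for an interface board; its table plays the role
// of the forwarding table entry memory.
type interfaceBoard struct {
	name  string
	table map[string]string // prefix -> next hop
}

func (c controlPlane) issueForwardingTable(boards []*interfaceBoard) {
	for _, b := range boards {
		b.table = make(map[string]string)
		for _, r := range c.routes {
			b.table[r.prefix] = r.nextHop
		}
	}
}

func main() {
	cp := controlPlane{routes: []route{{prefix: "192.0.2.0/24", nextHop: "10.0.0.2"}}}
	boards := []*interfaceBoard{{name: "interface-board-830"}, {name: "interface-board-840"}}
	cp.issueForwardingTable(boards)
	for _, b := range boards {
		fmt.Println(b.name, "forwarding table:", b.table)
	}
}
```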
In the embodiment of the present application, operations on the interface board 840 are the same as those of the interface board 830, and for brevity, are not described again.
There may be one or more main control boards; when there are multiple main control boards, they include, for example, an active main control board and a standby main control board. There may be one or more interface boards; the stronger the data processing capability of the network device, the more interface boards it provides. There may also be one or more physical interface cards on an interface board. There may be no switching network board, or there may be one or more switching network boards; when there are multiple switching network boards, they can jointly implement load sharing and redundancy backup. Under a centralized forwarding architecture, the network device does not need a switching network board, and the interface board handles the service data processing of the entire system. Under a distributed forwarding architecture, the network device can have at least one switching network board, and data exchange among a plurality of interface boards is implemented through the switching network board, providing large-capacity data exchange and processing capability. Therefore, the data access and processing capability of a network device with a distributed architecture is greater than that of a device with a centralized architecture. Optionally, the network device may also take the form of a single board, that is, there is no switching network board and the functions of the interface board and the main control board are integrated on the single board; in this case, the central processing unit on the interface board and the central processing unit on the main control board can be combined into one central processing unit on the single board to perform the functions of both. A device in this form has relatively low data switching and processing capability (for example, a low-end switch or router). Which architecture is adopted depends on the specific networking deployment scenario, and is not limited here.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments can be referred to each other, and each embodiment focuses on differences from other embodiments.
That A refers to B means that A is the same as B or that A is a simple variation of B.
The terms "first" and "second," and the like, in the description and in the claims of the embodiments of the present application, are used for distinguishing between different objects, and not for describing a particular order of the objects, nor are they to be construed as indicating or implying relative importance. For example, the first fault notification message and the second fault notification message are used to distinguish different fault notification messages, but not to describe a specific sequence of the fault notification messages, and it cannot be understood that the first fault notification message is more important than the second fault notification message.
In the embodiments of the present application, unless otherwise specified, "at least one" means one or more, and "a plurality of" means two or more. For example, a plurality of pseudo wires refers to two or more pseudo wires.
The foregoing embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When software is used, the embodiments may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner. The computer-readable storage medium may be any usable medium accessible by a computer, or a data storage device such as a server or a data center that integrates one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), a semiconductor medium (for example, a solid state disk (SSD)), or the like.
The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some technical features, and such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (28)

1. A protection switching method is characterized in that the method comprises the following steps:
when a first tunnel fails, a first network device generates a first fault notification message, the first fault notification message indicates that the first tunnel fails and triggers a second network device to switch a first service flow from a first pseudo wire to a second pseudo wire, the first network device is an intermediate node through which the first tunnel passes, the second network device is an endpoint of the first tunnel, the first tunnel bears the first pseudo wire, each node through which the first tunnel passes does not enable Operation Administration and Maintenance (OAM) detection of the tunnel, and the second pseudo wire is a pseudo wire configured between the second network device and a third network device;
and the first network device sends the first fault notification message to the second network device.
2. The method of claim 1, wherein before the first network device generates the first fault notification packet, the method further comprises:
and the first network device determines that the first tunnel fails according to a failure of a first link, wherein the first tunnel passes through the first link.
3. The method of claim 2, wherein the failure comprises at least one of a bit error, a link down, a frame error, and congestion.
4. The method of claim 2 or 3,
the first link is an upstream link of the first network device, and the second network device is an egress point of the first tunnel; or
the first link is a downstream link of the first network device, and the second network device is an ingress point of the first tunnel.
5. The method according to any one of claims 1 to 4,
the first fault notification message is further configured to trigger the second network device to send a second fault notification message to a third network device, where the second fault notification message indicates that the first tunnel fails, and triggers the third network device to switch the first service traffic from the first pseudo wire to the second pseudo wire, where the third network device is another endpoint of the first tunnel.
6. The method according to any of claims 1-4, wherein upon failure of the first tunnel, the method further comprises:
the first network device generates a second fault notification message, the second fault notification message indicates that the first tunnel fails, and triggers the third network device to switch the first service flow from the first pseudo wire to the second pseudo wire;
and the first network device sends the second fault notification message to a third network device, wherein the third network device is the other end point of the first tunnel.
7. The method according to any one of claims 1 to 6, wherein the first tunnel further carries a third pseudo wire, wherein the first fault notification message is further used to trigger the second network device to switch a second service flow from the third pseudo wire to a fourth pseudo wire, and wherein the fourth pseudo wire is a pseudo wire configured between the second network device and the third network device.
8. The method according to any one of claims 2 to 7, further comprising:
the first network device determines that a second tunnel fails according to the failure of the first link, wherein the second tunnel passes through the first link and bears a fifth pseudo wire;
the first network device generates a third fault notification message, the third fault notification message indicates that the second tunnel fails and triggers a fourth network device to switch a third service flow from the fifth pseudo wire to a sixth pseudo wire, and the fourth network device is an endpoint of the second tunnel;
and the first network device sends the third fault notification message to the fourth network device.
9. The method according to any one of claims 1 to 8, wherein the first fault notification message is a forward fault indication (FDI) message or an alarm indication signal (AIS) message.
10. The method according to any one of claims 1 to 9, further comprising:
and within the fault duration of the first tunnel, the first network equipment periodically sends the first fault notification message to the second network equipment.
11. The method according to any one of claims 1 to 10, wherein the first fault notification message further includes at least one of a type of the fault, a label of the first tunnel, an identification of the first tunnel, and a timestamp at which the first network device detects the fault, and wherein the label of the first tunnel indicates a next hop used by the first network device to forward the first fault notification message to the second network device.
12. A protection switching method is characterized in that the method comprises the following steps:
a second network device receives a first fault notification message from a first network device, wherein the first fault notification message indicates that a first tunnel fails, the first network device is an intermediate node through which the first tunnel passes, the first tunnel carries a first pseudo wire, each node through which the first tunnel passes does not enable tunnel operation, administration and maintenance (OAM) detection, and the second network device is an endpoint of the first tunnel;
the second network device switches the first service flow from the first pseudo wire to a second pseudo wire, wherein the second pseudo wire is a pseudo wire configured between the second network device and a third network device.
13. The method of claim 12, wherein after the second network device receives the first fault notification message, the method further comprises:
the second network device generates a second fault notification message, wherein the second fault notification message is used for indicating that the first tunnel fails;
the second network device sends the second fault notification message to the third network device to trigger the third network device to switch the first service flow from the first pseudo wire to the second pseudo wire, where the third network device is another endpoint of the first tunnel.
14. The method of claim 12 or 13,
the second network device is an egress point of the first tunnel.
15. The method according to any one of claims 12 to 14, wherein the first fault notification message is a forward fault indication (FDI) message or an alarm indication signal (AIS) message.
16. The method according to any one of claims 12 to 15, wherein the first tunnel further carries a third pseudo wire, and wherein after the second network device receives the first fault notification message from the first network device, the method further comprises:
the second network device switches a second service flow from the third pseudo wire to a fourth pseudo wire, wherein the fourth pseudo wire is a pseudo wire configured between the second network device and the third network device.
17. The method according to any of claims 12 to 16, wherein after the second network device receives the first fault notification message, the method further comprises:
the second network device sets the OAM state of each pseudo wire borne by the first tunnel to a fault state.
18. A network device, wherein the network device is a first network device, the network device comprising:
a processing unit, configured to generate a first fault notification message when a first tunnel fails, where the first fault notification message indicates that the first tunnel fails and triggers a second network device to switch a first service traffic from a first pseudo wire to a second pseudo wire, the first network device is an intermediate node through which the first tunnel passes, the second network device is an endpoint of the first tunnel, the first tunnel carries the first pseudo wire, each node through which the first tunnel passes does not enable operation, administration and maintenance (OAM) detection of the tunnel, and the second pseudo wire is a pseudo wire configured between the second network device and a third network device;
a sending unit, configured to send the first fault notification message to the second network device.
19. The network device of claim 18, wherein the processing unit is further configured to determine that the first tunnel fails based on a failure of a first link, the first tunnel traversing the first link.
20. Network device of claim 18 or 19,
the first fault notification message is further configured to trigger the second network device to send a second fault notification message to a third network device, where the second fault notification message indicates that the first tunnel fails, and triggers the third network device to switch the first service traffic from the first pseudo wire to the second pseudo wire, where the third network device is another endpoint of the first tunnel.
21. The network device according to any one of claims 18 to 20, wherein when the first tunnel fails, the processing unit is further configured to generate a second fault notification message, where the second fault notification message indicates that the first tunnel fails, and triggers the third network device to switch the first service traffic from the first pseudo wire to the second pseudo wire;
the sending unit is further configured to send the second fault notification message to the third network device, where the third network device is another endpoint of the first tunnel.
22. The network device according to any one of claims 19 to 21, wherein the processing unit is further configured to: determine that a second tunnel fails according to the failure of the first link, where the second tunnel passes through the first link and carries a fifth pseudo wire; and generate a third fault notification message, where the third fault notification message indicates that the second tunnel fails and triggers a fourth network device to switch a third service traffic from the fifth pseudo wire to a sixth pseudo wire, and the fourth network device is an endpoint of the second tunnel;
the sending unit is further configured to send the third fault notification message to the fourth network device.
23. A network device, wherein the network device is a second network device, the network device comprising:
a receiving unit, configured to receive a first fault notification message from a first network device, where the first fault notification message indicates that a first tunnel has a fault, the first network device is an intermediate node through which the first tunnel passes, the first tunnel carries a first pseudo wire, each node through which the first tunnel passes does not enable operation, administration and maintenance (OAM) detection of the tunnel, and the second network device is an endpoint of the first tunnel;
a processing unit, configured to switch a first service traffic from the first pseudo wire to a second pseudo wire, where the second pseudo wire is a pseudo wire configured between the second network device and a third network device.
24. The network device according to claim 23, wherein the processing unit is further configured to generate a second fault notification message, where the second fault notification message is used to indicate that the first tunnel fails;
the network device further comprises: a sending unit, configured to send the second fault notification message to the third network device, so as to trigger the third network device to switch the first service traffic from the first pseudo wire to the second pseudo wire, where the third network device is another endpoint of the first tunnel.
25. The network device of claim 23 or 24,
the second network device is an egress end point of the first tunnel.
26. The network device according to any one of claims 23 to 25, wherein the first tunnel further carries a third pseudo wire, and wherein the processing unit is further configured to switch a second service traffic from the third pseudo wire to a fourth pseudo wire, and wherein the fourth pseudo wire is a pseudo wire configured between the second network device and the third network device.
27. A network system, characterized in that the network system comprises a network device according to any of claims 18 to 22 and a network device according to any of claims 23 to 26.
28. A computer-readable storage medium having stored therein at least one instruction which, when executed on a computer, causes the computer to perform the method of any one of claims 1 to 17.
CN202111060965.6A 2021-09-10 2021-09-10 Protection switching method and network equipment Pending CN115801552A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111060965.6A CN115801552A (en) 2021-09-10 2021-09-10 Protection switching method and network equipment

Publications (1)

Publication Number Publication Date
CN115801552A true CN115801552A (en) 2023-03-14

Family

ID=85417071

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111060965.6A Pending CN115801552A (en) 2021-09-10 2021-09-10 Protection switching method and network equipment

Country Status (1)

Country Link
CN (1) CN115801552A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination