WO2021089004A1 - Message transmission method, proxy node and storage medium - Google Patents

Message transmission method, proxy node and storage medium

Info

Publication number
WO2021089004A1
WO2021089004A1 (PCT/CN2020/127209)
Authority
WO
WIPO (PCT)
Prior art keywords
message
node
srh
proxy
sid
Prior art date
Application number
PCT/CN2020/127209
Other languages
English (en)
French (fr)
Inventor
张永康
储伯森
郑奎利
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Priority to EP20884732.7A (EP4040743B1)
Priority to BR112022007852A (BR112022007852A2)
Priority to JP2022523379A (JP7327889B2)
Publication of WO2021089004A1
Priority to US17/737,637 (US20220263758A1)

Classifications

    • H04L45/742: Route cache; Operation thereof
    • H04L12/12: Arrangements for remote connection or disconnection of substations or of equipment thereof
    • H04L12/4641: Virtual LANs, VLANs, e.g. virtual private networks [VPN]
    • H04L45/02: Topology update or discovery
    • H04L45/28: Routing or path finding of packets in data switching networks using route fault recovery
    • H04L45/34: Source routing
    • H04L45/50: Routing or path finding of packets in data switching networks using label swapping, e.g. multi-protocol label switch [MPLS]
    • H04L45/645: Splitting route computation layer and forwarding layer, e.g. routing according to path computational element [PCE] or based on OpenFlow functionality
    • H04L45/745: Address table lookup; Address filtering
    • H04L63/0272: Virtual private networks
    • H04L61/2528: Translation at a proxy

Definitions

  • This application relates to the field of communication technology, and in particular to a message transmission method, proxy node and storage medium.
  • Segment routing (SR) is a technology that forwards messages based on the concept of source routing. Segment routing over Internet Protocol version 6 (IPv6), abbreviated SRv6, combines SR technology with the IPv6 protocol. Specifically, an SRv6 message carries an IPv6 header and also a segment routing header (SRH).
  • The SRH includes a segment identifier (segment ID, SID, also known as a segment) list (SID list, also known as a segment list), the number of remaining segments (segments left, SL), and other information.
  • The segment list contains one or more SIDs arranged in sequence. Each SID takes the form of a 128-bit IPv6 address, and each SID can essentially represent a topology, an instruction, or a service.
  • SL is equivalent to a pointer whose value is not less than 0; it points to the active SID in the segment list, and the active SID is the destination address in the IPv6 header.
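As a concrete illustration only (not part of the claimed method), the relationship between the segment list, SL, and the active SID can be sketched as follows; the class and field names are simplified assumptions.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SRH:
    """Simplified segment routing header: a segment list plus the segments-left pointer."""
    segment_list: List[str]   # SIDs; in SRv6 they are encoded in reverse order of traversal
    segments_left: int        # SL: index of the active SID, a value not less than 0

    def active_sid(self) -> str:
        # The active SID is the entry SL points to; it is what appears as the
        # IPv6 destination address while this segment is being executed.
        return self.segment_list[self.segments_left]

# Example: three segments, SL = 2 means the last entry in the list is currently active.
srh = SRH(segment_list=["2001:db8::3", "2001:db8::2", "2001:db8::1"], segments_left=2)
assert srh.active_sid() == "2001:db8::1"
```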
  • When a node supporting SRv6 receives a message, it reads the destination address of the message and queries the local SID table according to the destination address. When the destination address is a SID in the local SID table, the message is recognized as an SRv6 message, and the node performs the corresponding operation based on the topology, instruction, or service corresponding to that SID.
  • A service function chain (SFC, also known as a service chain) is an ordered collection of service functions (SF) that allows traffic to pass through multiple service functions in a specified order.
  • When the message is an SRv6 message and the SF node does not support SRv6, a proxy node is introduced so that the SF node can still process the message normally.
  • the proxy node is used to process the SRH on behalf of the SF node.
  • Proxy nodes can be classified into dynamic proxies, static proxies, shared-memory proxies, and masquerading proxies.
  • The processing flow of the dynamic proxy is as follows: the proxy node receives an SRv6 message and queries the local SID table according to the destination address of the SRv6 message. When the destination address is the endpoint dynamic proxy SID (End.AD.SID, where "End" stands for endpoint, "D" stands for dynamic, and SID means segment identifier) in the local SID table, the proxy node executes the dynamic proxy operation corresponding to End.AD.SID: it strips the SRH from the SRv6 message to obtain a message that does not contain the SRH, sends that message to the SF node, and stores the SRH in a cache entry using the 5-tuple of the message as the index. The SF node receives the message that does not contain the SRH, processes it, and returns the processed message to the proxy node. The proxy node then uses the 5-tuple of the processed message as an index to query the cache entry and restore the SRH.
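A minimal sketch of this dynamic proxy flow, reusing the SRH class from the earlier sketch and assuming a simple in-memory cache keyed by the 5-tuple; the names are illustrative, not the patent's implementation.

```python
from typing import Dict, Tuple

FiveTuple = Tuple[str, str, int, int, int]   # (src IP, dst IP, src port, dst port, protocol)

class DynamicProxy:
    def __init__(self, end_ad_sid: str):
        self.end_ad_sid = end_ad_sid
        self.srh_cache: Dict[FiveTuple, SRH] = {}   # SRH cache indexed by the 5-tuple

    def to_sf(self, five_tuple: FiveTuple, srh: SRH, payload: bytes) -> bytes:
        """Strip the SRH, cache it under the 5-tuple, and hand the inner message to the SF."""
        self.srh_cache[five_tuple] = srh
        return payload                      # message without the SRH, sent to the SF node

    def from_sf(self, five_tuple: FiveTuple, payload: bytes):
        """Restore the SRH for a processed message returned by the SF."""
        srh = self.srh_cache.get(five_tuple)
        if srh is None:
            # The dual-homing problem described below: if the SF returns the message to
            # the peer proxy, that proxy has no cached SRH and restoration fails.
            raise LookupError("no cached SRH for this flow")
        return srh, payload
```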
  • An SF node can be dual-homed, that is, the same SF node can be attached to two proxy nodes at the same time. Taking these two proxy nodes as the first proxy node and the second proxy node, dual-homing allows SRv6 messages to be forwarded to the SF node either from the first proxy node or from the second proxy node; even if one proxy node fails, SRv6 messages can still be forwarded to the SF node from the other proxy node, which greatly improves network reliability.
  • In addition, the work of forwarding SRv6 messages and processing SRHs can be shared between the two proxy nodes, realizing load sharing and reducing the load on a single proxy node.
  • However, consider the case where the first proxy node receives a message whose destination address is End.AD.SID, executes the dynamic proxy process described above, and sends the message without the SRH to the SF node, and the SF node, after processing the message, sends the processed message to the second proxy node.
  • When the second proxy node receives the message, it cannot find the SRH because it has not stored it. As a result, SRH restoration fails, and the message processed by the SF node cannot be transmitted normally.
  • the embodiments of the present application provide a message transmission method, a proxy node, and a storage medium, which can solve the technical problem of SRH recovery failure in a dual-homing access scenario in related technologies.
  • the technical solution is as follows:
  • In a first aspect, a message transmission method is provided. In this method, the first proxy node and the second proxy node are connected to the same service function node. The first proxy node receives a first message that carries an SRH and whose destination address is End.AD.SID. According to End.AD.SID, the first message, and a first bypass SID corresponding to the second proxy node, the first proxy node generates a second message that includes the SRH of the first message, the first bypass SID, and control information, and sends the second message to the second proxy node.
  • The control information instructs the second proxy node to store the SRH of the first message.
  • This method extends End.AD.SID with a new SID that has a bypass function.
  • When the local proxy node receives a message carrying End.AD.SID, it uses this new SID to bypass the SRH of the message to the peer proxy node, so that the peer proxy node also obtains the SRH of the message and can store it.
  • In this way, the two proxy nodes dual-homed to the same SF node synchronize the SRH of the message, so that the SRH of the message is redundantly backed up.
  • After the SF node processes the message, it may return the processed message to the peer proxy node. Since the peer proxy node has obtained the SRH of the message in advance, it can restore the SRH of the message from the previously stored copy, so that the message processed by the SF node can be transmitted normally using the restored SRH.
  • In one possible implementation, the first proxy node may update the remaining-segments value SL in the SRH of the first message and update the destination address of the first message to obtain the second message, where the SL of the second message is greater than the SL of the first message.
  • This processing operation is very simple, so the message can be generated by the data plane without relying on the control plane.
  • If the above message-generation logic is configured in the microcode of a forwarding chip, the forwarding chip can perform the step of generating the second message; the method therefore supports microcode architectures and is highly practical.
  • Specifically, the first proxy node may update the SL by adding one to the SL of the first message, and may update the destination address of the first message to the first bypass SID.
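A sketch of this per-message transformation, reusing the SRH class above. Recording the bypass SID at the new SL position is one consistent reading of the SL-plus-one update together with the peer-side restoration described later (which removes the bypass SID from the segment list); it is an assumption of the sketch, not a statement of the claimed encapsulation.

```python
from typing import Optional, Tuple

def build_second_message(first_srh: SRH, first_dst: str, end_ad_sid: str,
                         bypass_sid: str) -> Optional[Tuple[SRH, str]]:
    """Derive the second message's SRH and destination address from the first message."""
    if first_dst != end_ad_sid:
        return None                       # not a message destined for the dynamic proxy SID
    new_list = list(first_srh.segment_list)
    new_sl = first_srh.segments_left + 1  # SL of the second message is greater by one
    # Record the first bypass SID at the new SL position so the peer can later remove it
    # and decrement SL to recover the original SRH (see the peer-side sketch further below).
    new_list.insert(new_sl, bypass_sid)
    second_srh = SRH(segment_list=new_list, segments_left=new_sl)
    second_dst = bypass_sid               # IPv6 destination address of the second message
    return second_srh, second_dst
```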
  • In another possible implementation, the first proxy node may also insert one or more target SIDs into the segment list of the SRH of the first message, where the one or more target SIDs indicate a target forwarding path, and the target forwarding path is a path between the first proxy node and the second proxy node.
  • In this case, the first proxy node may update the SL by adding a positive integer to it, so that the SL of the second message is greater than the SL of the first message, and may update the destination address of the first message to the SID corresponding to the next SR node.
  • the second message may include an extension header, and the control information may be carried in the extension header of the second message.
  • In this way, the second message has good extensibility, which makes it convenient to combine the SRH backup function provided in this embodiment with other functions.
  • Alternatively, the control information can be carried directly in the first bypass SID.
  • Alternatively, the second message may include an Internet Protocol (IP) header, and the control information may be carried in the IP header of the second message.
  • The IP header of the second message can be re-encapsulated from the original IP header of the first message. By carrying the control information directly in the IP header, the IP header performs its routing function and carries the control information at the same time; the extra bytes that the control information would otherwise occupy in the message are eliminated, which reduces the size of the message and the overhead of transmitting it.
  • Alternatively, the SRH of the second message may include a type-length-value (TLV) field, and the control information may be carried in that TLV.
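For illustration only, control information carried as a TLV might be encoded as below; the type value and flag layout are invented for this sketch and are not defined by the patent.

```python
import struct

CONTROL_INFO_TLV_TYPE = 124      # hypothetical TLV type value, an assumption of this sketch

def encode_control_tlv(store_srh: bool, cache_entry_id: int = 0) -> bytes:
    """Pack a Type-Length-Value carrying the 'store the SRH' instruction and, optionally,
    the sender's cache-entry identifier."""
    flags = 0x01 if store_srh else 0x00
    value = struct.pack("!BI", flags, cache_entry_id)          # 1-byte flags + 4-byte entry id
    return struct.pack("!BB", CONTROL_INFO_TLV_TYPE, len(value)) + value

def decode_control_tlv(tlv: bytes) -> dict:
    tlv_type, length = struct.unpack("!BB", tlv[:2])
    flags, cache_entry_id = struct.unpack("!BI", tlv[2:2 + length])
    return {"type": tlv_type, "store_srh": bool(flags & 0x01), "cache_entry_id": cache_entry_id}
```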
  • In one possible implementation, the first proxy node may also store the SRH of the first message in a first cache entry, generate a third message according to the first message, and send the third message to the service function node, where the third message includes the payload of the first message and does not include the SRH of the first message.
  • In this way, the first proxy node provides the dynamic proxy service for the SF node and delivers the payload of the first message to the SF node, so that the SF node can perform its service function on the payload.
  • Since the SRH is not sent to the SF node, the situation in which the SF node cannot recognize the SRH and processing fails is avoided.
  • the first proxy node may use the identifier of the first cache entry as an index to store the SRH of the first message in the first cache entry.
  • The cache entry that stores the SRH is usually fixed on the proxy node and does not change when the flow identifier is modified, so the identifier of the cache entry used when storing the SRH and the identifier used when querying the SRH remain the same. By using the identifier of the cache entry as the index, the SRH can still be found for a message whose flow identifier has been modified, and the SRH can be restored for that message.
  • Therefore, the proxy node can support attaching an SF that performs network address translation (NAT), providing the dynamic proxy function for an SF that implements NAT.
  • Before using the identifier of the first cache entry as the index to store the SRH of the first message in the first cache entry, the first proxy node may determine that the service function of the service function node includes modifying the flow identifier.
  • In that case, the SRH is stored with the identifier of the cache entry as the index, avoiding the situation in which the flow identifier changes and the SRH cannot be found during the query.
  • In one possible implementation, the second message further includes the identifier of the first cache entry, and the control information also instructs the second proxy node to store the mapping relationship between the identifier of the first cache entry and the identifier of the second cache entry, where the second cache entry is the cache entry in which the second proxy node stores the SRH of the first message.
  • In this way, the control information can instruct the second proxy node to maintain the mapping between the cache entry in which the SRH is stored locally and the cache entry in which the peer stores the SRH. Then, if the SF node returns a message with a modified flow identifier to the second proxy node, the second proxy node can locate its local cache entry according to the identifier of the cache entry carried in the message and the mapping relationship, find the SRH in that local entry, and restore the SRH. This supports the dual-homing scenario when the SF is a NAT-type SF.
  • In one possible implementation, the third message includes the identifier of the first cache entry.
  • After sending the third message, the first proxy node may receive a fourth message from the service function node. The first proxy node can use the identifier of the first cache entry carried in the fourth message as an index to query and obtain the SRH of the first message, generate a fifth message according to the fourth message and the queried SRH of the first message, and send the fifth message, where the fifth message includes the payload of the fourth message and the SRH of the first message.
  • In this way, the first proxy node realizes the dynamic proxy function: the message can continue to be forwarded on the SR network using the SRH, and the payload of the message has been processed by the service function of the SF node, so other nodes can obtain the processed payload and, on that basis, continue to perform the other service functions of the SFC.
  • Because the proxy node can use the identifier of the cache entry carried in the message as an index to find and restore the SRH, the proxy node can support attaching an SF with the NAT function, thereby providing the dynamic proxy function for such an SF.
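A sketch of indexing the SRH cache by a locally allocated cache-entry identifier rather than by the flow identifier, so a NAT-style SF that rewrites addresses or ports does not break restoration; it reuses the SRH class above, and all names are illustrative.

```python
from itertools import count
from typing import Dict

class CacheIdIndexedProxy:
    def __init__(self):
        self._next_id = count(1)
        self.cache: Dict[int, SRH] = {}

    def store(self, srh: SRH) -> int:
        """Store the SRH and return the identifier of the first cache entry,
        which is carried in the third message sent to the SF node."""
        entry_id = next(self._next_id)
        self.cache[entry_id] = srh
        return entry_id

    def restore(self, entry_id_from_fourth_message: int) -> SRH:
        # The SF echoes the identifier back in the fourth message, so the lookup key is
        # unchanged even though the SF may have rewritten the message's flow identifier.
        return self.cache[entry_id_from_fourth_message]
```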
  • In one possible implementation, the first proxy node may be triggered to generate the second message when it detects that the first cache entry has been created, or when it detects that the first cache entry has been updated.
  • In this way, a new SRH can be transmitted to the second proxy node in time by forwarding the second message, ensuring that the new SRH is synchronized to the second proxy node in real time.
  • A previously stored SRH does not trigger creation or update of the cache entry and therefore does not trigger forwarding of a second message, so there is no need to generate and transmit a second message for it; this avoids the processing overhead of generating the second message and the network overhead of transmitting it.
  • In one possible implementation, the first proxy node uses the flow identifier corresponding to the first message and the endpoint dynamic proxy SID as the index, and stores the SRH of the first message in the first cache entry.
  • In this way, the SRH indexes of packets belonging to different VPNs can be distinguished by their different End.AD.SIDs, ensuring that the indexes of the SRHs of messages from different VPNs differ, so that the SRHs of messages from different VPNs are stored separately and information isolation between VPNs is achieved.
  • In one possible implementation, the first proxy node receives a configuration instruction, where the configuration instruction includes the endpoint dynamic proxy SID corresponding to the service function node in each virtual private network (VPN), and the first proxy node may store, according to the configuration instruction, the mapping relationship between the endpoint dynamic proxy SID and the VPN identifier.
  • In this way, End.AD.SID can be used as the identifier of the VPN: the End.AD.SIDs carried in messages sent to SF nodes of different VPNs are different, so packets of different VPNs can be distinguished by their End.AD.SIDs.
  • In one possible implementation, the flow identifier corresponding to the third message is the flow identifier corresponding to the first message, and the third message includes the VPN identifier.
  • After the first proxy node sends the third message to the service function node, it may receive a fourth message from the service function node, where the flow identifier corresponding to the fourth message is the flow identifier corresponding to the first message and the fourth message includes the VPN identifier and the payload. The first proxy node queries the SRH of the first message according to the VPN identifier, the endpoint dynamic proxy SID, and the flow identifier corresponding to the fourth message; generates a fifth message according to the fourth message and the SRH of the first message, where the fifth message includes the payload of the fourth message and the SRH of the first message; and sends the fifth message.
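A sketch of this per-VPN indexing, reusing the SRH class and FiveTuple alias above and assuming the flow identifier is the 5-tuple; the End.AD.SID-per-VPN configuration shown here is illustrative.

```python
from typing import Dict, Tuple

class VpnAwareSrhCache:
    """SRH cache keyed by (End.AD.SID, flow identifier), so messages of different VPNs,
    which carry different End.AD.SIDs, are stored and looked up separately."""

    def __init__(self):
        self.cache: Dict[Tuple[str, FiveTuple], SRH] = {}
        self.sid_by_vpn: Dict[str, str] = {}        # VPN identifier -> End.AD.SID (from configuration)

    def configure_vpn(self, vpn_id: str, end_ad_sid: str) -> None:
        self.sid_by_vpn[vpn_id] = end_ad_sid

    def store(self, end_ad_sid: str, flow_id: FiveTuple, srh: SRH) -> None:
        self.cache[(end_ad_sid, flow_id)] = srh

    def restore(self, vpn_id: str, flow_id: FiveTuple) -> SRH:
        # The fourth message carries the VPN identifier; map it back to the End.AD.SID
        # configured for that VPN and look up the cached SRH for this flow.
        return self.cache[(self.sid_by_vpn[vpn_id], flow_id)]
```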
  • In one possible implementation, the first proxy node may detect a failure event, where the failure event includes at least one of the following: an event in which the link between the first proxy node and the service function node fails, or an event in which the first proxy node fails.
  • In this case, the second message forwarded by the first proxy node may also include the payload of the first message, and the control information also instructs the second proxy node to send the payload of the first message to the service function node.
  • When a failure event is detected, the local proxy node re-encapsulates the packets that were to be processed by the dynamic proxy and forwards them to the peer proxy node, instructing the peer proxy node to perform the dynamic proxy operation in its place, that is, to strip the SRH from the re-encapsulated message and deliver its payload to the SF node. In this way, when a device failure or link failure occurs at the local proxy node, local traffic is bypassed to the peer to continue forwarding, so that even in a failure scenario the message can still be processed by the SF node and transmitted normally, which greatly improves the stability, reliability, and robustness of the network.
  • In one possible implementation, the first proxy node may be connected to the second proxy node through a first link. Based on this topology, when sending the second message, the first proxy node may send the second message to the second proxy node through the first outbound interface corresponding to the first link.
  • In this way, the first proxy node can select the peer link as the forwarding path of the second message, so that the second message is transmitted to the peer over the peer link.
  • In another possible implementation, the first proxy node may be connected to a routing node through a second link, and the routing node is connected to the second proxy node through a third link.
  • When sending the second message, the first proxy node may send it to the routing node through the second outbound interface corresponding to the second link, and the routing node forwards the second message to the second proxy node through the third link.
  • In this way, the first proxy node can select the path through the routing node as the forwarding path of the second message, so that the second message reaches the peer after being forwarded by the routing node.
  • In one possible implementation, the first proxy node may detect the state of the first link. If the first link is available, the first proxy node sends the second message to the second proxy node through the first outbound interface corresponding to the first link; if the first link is unavailable, the first proxy node sends the second message to the routing node through the second outbound interface corresponding to the second link.
  • In this way, the first proxy node preferentially uses the peer link, so that the second message is preferentially transmitted to the second proxy node over the peer link.
  • Compared with the path forwarded by the routing node, the peer link is shorter and passes through fewer nodes, which can reduce the delay in transmitting the second message.
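A small sketch of this preference order; the interface names are placeholders for the first and second outbound interfaces described above.

```python
def choose_outgoing_interface(peer_link_available: bool) -> str:
    """Prefer the direct peer link; fall back to the path through the routing node."""
    if peer_link_available:
        return "if-peer-link"        # first outbound interface: fewer hops, lower delay
    return "if-to-routing-node"      # second outbound interface: forwarded by the routing node
```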
  • In one possible implementation, the first proxy node may send multiple copies of the second message to the second proxy node in succession.
  • In this way, transmission reliability of the second message is achieved through repeated transmission: because the SRH of the first message can reach the second proxy node through any one of the copies, the probability of losing the SRH in transit is reduced, and the implementation is simple and highly feasible.
  • In one possible implementation, if the first proxy node does not receive a sixth message from the second proxy node, the first proxy node may also retransmit the second message to the second proxy node, where the source address of the sixth message is the first bypass SID, the sixth message includes a second bypass SID corresponding to the first proxy node, the second bypass SID identifies that the destination node of the sixth message is the first proxy node, and the sixth message indicates that the second message has been received.
  • In this way, an acknowledgement (ACK) mechanism is used to achieve transmission reliability of the second message.
  • The first proxy node can confirm whether the second proxy node has received the SRH by checking whether the sixth message is received. If the second proxy node has not obtained the SRH, the first proxy node ensures through retransmission that the SRH is delivered to the second proxy node, thereby guaranteeing reliable transmission of the SRH.
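A schematic, blocking model of this acknowledgement loop; the retry count, wait interval, and callback names are assumptions of the sketch, not values taken from the patent.

```python
import time

def send_with_ack(send_second_message, ack_received, max_retries: int = 3,
                  retry_interval_s: float = 0.2) -> bool:
    """Retransmit the second message until the sixth message (the ACK addressed to the
    second bypass SID) is observed, or the retry budget is exhausted."""
    for _ in range(max_retries + 1):
        send_second_message()              # forward the second message to the peer proxy
        time.sleep(retry_interval_s)       # wait for the peer's sixth message
        if ack_received():
            return True                    # peer confirmed it has received and stored the SRH
    return False                           # SRH synchronization was not confirmed
```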
  • In a second aspect, a message transmission method is provided. In this method, the first proxy node and the second proxy node are connected to the same service function node. The second proxy node receives, from the first proxy node, a second message that includes the SRH of the first message, the first bypass SID, and control information, and stores the SRH of the first message according to the control information.
  • This method extends End.AD.SID with a new SID that has a bypass function.
  • When the local proxy node receives a message carrying End.AD.SID, it uses this new SID to bypass the SRH of the message to the peer proxy node, so that the peer proxy node also obtains the SRH of the message and can store it.
  • In this way, the two proxy nodes dual-homed to the same SF node synchronize the SRH of the message, so that the SRH of the message is redundantly backed up.
  • After the SF node processes the message, it may return the processed message to the peer proxy node. Since the peer proxy node has obtained the SRH of the message in advance, it can restore the SRH of the message from the previously stored copy, so that the message processed by the SF node can be transmitted normally using the restored SRH.
  • In one possible implementation, the second proxy node may delete the first bypass SID from the segment list in the SRH of the second message and subtract one from the remaining-segments value SL in the SRH of the second message, to obtain the SRH of the first message.
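The peer-side restoration sketched below is the inverse of the earlier build_second_message sketch and reuses the same SRH class; it is illustrative only.

```python
def recover_first_srh(second_srh: SRH, bypass_sid: str) -> SRH:
    """Remove the first bypass SID from the segment list and decrement SL by one,
    recovering the SRH of the first message."""
    segments = [sid for sid in second_srh.segment_list if sid != bypass_sid]
    return SRH(segment_list=segments, segments_left=second_srh.segments_left - 1)
```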
  • the second message may include an extension header, and the control information may be carried in the extension header of the second message.
  • In this way, the second message has good extensibility, which makes it convenient to combine the SRH backup function provided in this embodiment with other functions.
  • Alternatively, the control information can be carried directly in the first bypass SID.
  • the second packet may include an IP header, and the control information may be carried in the IP header of the second packet.
  • The IP header of the second message can be re-encapsulated from the original IP header of the first message. By carrying the control information directly in the IP header, the IP header performs its routing function and carries the control information at the same time; the extra bytes that the control information would otherwise occupy in the message are eliminated, which reduces the size of the message and the overhead of transmitting it.
  • the SRH of the second message may include the TLV, and the control information may be carried in the TLV in the SRH.
  • In one possible implementation, when the second proxy node stores the SRH of the first message, the second proxy node uses the identifier of the second cache entry as the index and stores the SRH of the first message in the second cache entry.
  • the second proxy node may determine that the service function of the service function node includes modifying the flow identifier.
  • In one possible implementation, the second message may also include the identifier of the first cache entry used by the first proxy node to store the SRH of the first message.
  • If the second proxy node determines that the control information also instructs it to store the mapping relationship between the identifier of the first cache entry and the identifier of the second cache entry, the second proxy node may also store that mapping relationship.
  • In this way, the control information can instruct the second proxy node to maintain the mapping between the cache entry in which the SRH is stored locally and the cache entry in which the peer stores the SRH. Then, if the SF node returns a message with a modified flow identifier to the second proxy node, the second proxy node can locate its local cache entry according to the identifier of the cache entry carried in the message and the mapping relationship, find the SRH in that local entry, and restore the SRH. This supports the dual-homing scenario when the SF is a NAT-type SF.
  • In one possible implementation, the second proxy node may also receive a fourth message from the service function node. According to the identifier of the first cache entry carried in the fourth message, the second proxy node queries the mapping relationship between the identifier of the first cache entry and the identifier of the second cache entry to obtain the identifier of the second cache entry corresponding to the identifier of the first cache entry, and then uses the identifier of the second cache entry as an index to query the second cache entry and obtain the SRH of the first message.
  • the second proxy node may generate a fifth message according to the fourth message and the SRH of the first message, and the second proxy node sends the fifth message.
  • the fifth message includes the payload of the fourth message and the SRH of the first message.
  • In this way, the second proxy node realizes the dynamic proxy function: the message can continue to be forwarded on the SR network using the SRH, and the payload of the message has been processed by the service function of the SF node, so other nodes can obtain the processed payload and, on that basis, continue to perform the other service functions of the SFC.
  • Moreover, based on the maintained mapping between cache entries, the second proxy node can use the identifier of the cache entry carried in the message as an index, find the cache entry in which the SRH is stored locally, retrieve the SRH, and restore it. The second proxy node can therefore support attaching an SF with the NAT function, providing the dynamic proxy function for such an SF.
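A sketch of the second proxy node's state for this case: its own SRH cache plus the mapping from the first proxy node's cache-entry identifier to the local one. It reuses the SRH class above; the names are illustrative.

```python
from itertools import count
from typing import Dict

class PeerProxyCache:
    def __init__(self):
        self.local_cache: Dict[int, SRH] = {}       # second cache entries, keyed by local id
        self.peer_to_local: Dict[int, int] = {}     # first cache-entry id -> second cache-entry id
        self._next_id = count(1)

    def store_from_second_message(self, peer_entry_id: int, srh: SRH) -> int:
        """Store the SRH received in the second message and record the id mapping."""
        local_id = next(self._next_id)
        self.local_cache[local_id] = srh
        self.peer_to_local[peer_entry_id] = local_id
        return local_id

    def restore_from_fourth_message(self, peer_entry_id: int) -> SRH:
        # The fourth message carries the identifier of the first cache entry; translate it
        # to the local (second) cache entry and fetch the stored SRH.
        return self.local_cache[self.peer_to_local[peer_entry_id]]
```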
  • In one possible implementation, the flow identifier corresponding to the second message is the flow identifier corresponding to the first message, and the SRH of the first message includes the endpoint dynamic proxy SID. When storing the SRH of the first message, the second proxy node can use the flow identifier corresponding to the second message and the endpoint dynamic proxy SID in the SRH as the index and store the SRH of the first message in the second cache entry.
  • In this way, the SRH indexes of packets belonging to different VPNs can be distinguished by their different End.AD.SIDs, ensuring that the indexes of the SRHs of messages from different VPNs differ, so that the SRHs of messages from different VPNs are stored separately and information isolation between VPNs is achieved.
  • In one possible implementation, the second proxy node receives a configuration instruction, where the configuration instruction includes the endpoint dynamic proxy SID corresponding to the service function node in each VPN, and the second proxy node can store, according to the configuration instruction, the mapping relationship between the endpoint dynamic proxy SID and the VPN identifier.
  • In this way, End.AD.SID can be used as the identifier of the VPN: the End.AD.SIDs carried in the packets associated with SF nodes of different VPNs are different, so packets of different VPNs can be distinguished by their End.AD.SIDs.
  • In one possible implementation, after the second proxy node has used the flow identifier corresponding to the first message and the endpoint dynamic proxy SID as the index and stored the SRH of the first message in the second cache entry, the second proxy node may also receive a fourth message from the service function node. The second proxy node queries the second cache entry according to the VPN identifier carried in the fourth message, the endpoint dynamic proxy SID, and the flow identifier corresponding to the fourth message, to obtain the SRH of the first message.
  • The second proxy node can then generate a fifth message according to the fourth message and the SRH of the first message, and send the fifth message.
  • the fifth message includes the payload of the fourth message and the SRH of the first message.
  • In one possible implementation, the second proxy node may also generate, according to the second message sent by the first proxy node, a third message carrying the payload of the first message, and send the third message to the service function node.
  • When a failure event is detected, the local proxy node re-encapsulates the packets that were to be processed by the dynamic proxy and forwards them to the peer proxy node, instructing the peer proxy node to perform the dynamic proxy operation in its place, that is, to strip the SRH from the re-encapsulated message and deliver its payload to the SF node. In this way, when a device failure or link failure occurs at the local proxy node, local traffic is bypassed to the peer to continue forwarding, so that even in a failure scenario the message can still be processed by the SF node and transmitted normally, which greatly improves the stability, reliability, and robustness of the network.
  • In one possible implementation, the second proxy node is connected to the first proxy node through the first link, and when receiving the second message, the second proxy node can receive the second message from the first proxy node through the first inbound interface corresponding to the first link.
  • In this way, the peer link is used as the forwarding path of the second message, so that the second message is transmitted to the local second proxy node over the peer link.
  • the second proxy node is connected to the routing node through a third link, and the routing node is connected to the first proxy node through a second link.
  • In this case, the second message may be sent by the first proxy node to the routing node through the second link, and the second proxy node may receive the second message from the routing node through the second inbound interface corresponding to the third link.
  • In this way, the path through the routing node is used as the forwarding path of the second message, so that the second message reaches the local second proxy node after being forwarded by the routing node.
  • In one possible implementation, the second proxy node may receive multiple copies of the second message from the first proxy node in succession.
  • In this way, transmission reliability of the second message is achieved through repeated transmission: because the SRH of the first message can reach the second proxy node through any one of the copies, the probability of losing the SRH in transit is reduced, and the implementation is simple and highly feasible.
  • In one possible implementation, after the second proxy node receives the second message from the first proxy node, the second proxy node generates a sixth message according to the first bypass SID, the second bypass SID corresponding to the first proxy node, and the second message, and sends the sixth message to the first proxy node. The source address of the sixth message is the first bypass SID, the sixth message includes the second bypass SID corresponding to the first proxy node, the second bypass SID identifies that the destination node of the sixth message is the first proxy node, and the sixth message indicates that the second message has been received.
  • the ACK mechanism is used to achieve the transmission reliability of the second message.
  • The first proxy node can confirm whether the second proxy node has received the SRH by checking whether the sixth message is received. If the second proxy node has not obtained the SRH, the first proxy node ensures through retransmission that the SRH is delivered to the second proxy node, thereby guaranteeing reliable transmission of the SRH.
  • In a third aspect, a first proxy node is provided, where the first proxy node has the function of implementing the message transmission behavior in the first aspect or any one of the optional manners of the first aspect.
  • The first proxy node includes at least one module, and the at least one module is configured to implement the message transmission method provided in the first aspect or any one of the optional manners of the first aspect.
  • In a fourth aspect, a second proxy node is provided, where the second proxy node has the function of implementing the message transmission behavior in the second aspect or any one of the optional manners of the second aspect.
  • the second proxy node includes at least one module, and the at least one module is configured to implement the message transmission method provided in the foregoing second aspect or any one of the optional manners of the second aspect.
  • In a fifth aspect, a first proxy node is provided. The first proxy node includes a processor, and the processor is configured to execute instructions so that the first proxy node performs the message transmission method provided in the first aspect or any optional manner of the first aspect.
  • In a sixth aspect, a second proxy node is provided. The second proxy node includes a processor, and the processor is configured to execute instructions so that the second proxy node performs the message transmission method provided in the second aspect or any optional manner of the second aspect.
  • In a seventh aspect, a computer-readable storage medium is provided. The storage medium stores at least one instruction, and the instruction is read by a processor to enable a first proxy node to perform the message transmission method provided in the first aspect or any one of the optional manners of the first aspect.
  • In an eighth aspect, a computer-readable storage medium is provided. The storage medium stores at least one instruction, and the instruction is read by a processor to enable a second proxy node to perform the message transmission method provided in the second aspect or any one of the optional manners of the second aspect.
  • In a ninth aspect, a computer program product is provided. When the computer program product runs on a first proxy node, the first proxy node performs the message transmission method provided in the first aspect or any one of the optional manners of the first aspect.
  • In a tenth aspect, a computer program product is provided. When the computer program product runs on a second proxy node, the second proxy node performs the message transmission method provided in the second aspect or any one of the optional manners of the second aspect.
  • In an eleventh aspect, a chip is provided. When the chip runs on a first proxy node, the first proxy node performs the message transmission method provided in the first aspect or any one of the optional manners of the first aspect.
  • In a twelfth aspect, a chip is provided. When the chip runs on a second proxy node, the second proxy node performs the message transmission method provided in the second aspect or any one of the optional manners of the second aspect.
  • In a thirteenth aspect, a message transmission system is provided. The system includes a first proxy node and a second proxy node, where the first proxy node is configured to perform the method described in the first aspect or any one of the optional manners of the first aspect, and the second proxy node is configured to perform the method described in the second aspect or any one of the optional manners of the second aspect.
  • FIG. 1 is a system architecture diagram of an SFC provided by an embodiment of the present application
  • FIG. 2 is a schematic diagram of performing proxy operations in SRv6 SFC provided by an embodiment of the present application
  • FIG. 3 is a system architecture diagram of dual-homing access provided by an embodiment of the present application.
  • FIG. 4 is a system architecture diagram of dual-homing access provided by an embodiment of the present application.
  • FIG. 5 is a flowchart of a message transmission method provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a first message provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of an extension header provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a bypass SID provided by an embodiment of the present application.
  • FIG. 9 is a network topology diagram of a first proxy node and a second proxy node provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of a second message provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a second message provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a second message provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of a second message provided by an embodiment of the present application.
  • FIG. 14 is a network topology diagram of a first proxy node and a second proxy node provided by an embodiment of the present application.
  • FIG. 15 is a schematic diagram of a second message provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of a second message provided by an embodiment of the present application.
  • FIG. 17 is a schematic diagram of a second message provided by an embodiment of the present application.
  • FIG. 18 is a schematic diagram of a second message provided by an embodiment of the present application.
  • FIG. 19 is a schematic diagram of updating a segment list provided by an embodiment of the present application.
  • FIG. 20 is a schematic diagram of updating a segment list provided by an embodiment of the present application.
  • FIG. 21 is a schematic diagram of a traffic plan provided by an embodiment of the present application.
  • FIG. 22 is a schematic diagram of converting a first message into a second message according to an embodiment of the present application.
  • FIG. 23 is a schematic diagram of converting a first message into a second message according to an embodiment of the present application.
  • FIG. 24 is a schematic diagram of converting a first message into a second message according to an embodiment of the present application.
  • FIG. 25 is a schematic diagram of converting a first message into a second message according to an embodiment of the present application.
  • FIG. 26 is a schematic diagram of a third message provided by an embodiment of the present application.
  • FIG. 27 is a schematic diagram of a third message provided by an embodiment of the present application.
  • FIG. 28 is a schematic diagram of a third message provided by an embodiment of the present application.
  • FIG. 29 is a schematic diagram of evolving a first message to a second message according to an embodiment of the present application.
  • FIG. 30 is a schematic diagram of evolving a first message to a second message according to an embodiment of the present application.
  • FIG. 31 is a schematic diagram of a first state machine provided by an embodiment of the present application.
  • FIG. 32 is a schematic diagram of a second state machine provided by an embodiment of the present application.
  • FIG. 33 is a schematic diagram of packet transmission in a scenario of spontaneously sending and receiving according to an embodiment of the present application.
  • FIG. 34 is a schematic diagram of packet transmission in a spontaneous sending and receiving scenario provided by an embodiment of the present application.
  • FIG. 35 is a flowchart of a message transmission method provided by an embodiment of the present application.
  • FIG. 36 is a flowchart of a message transmission method provided by an embodiment of the present application.
  • FIG. 37 is a flowchart of a message transmission method provided by an embodiment of the present application.
  • FIG. 38 is a flowchart of a message transmission method provided by an embodiment of the present application.
  • FIG. 39 is a flowchart of a message transmission method provided by an embodiment of the present application.
  • FIG. 40 is a schematic diagram of packet transmission in a failure scenario provided by an embodiment of the present application.
  • FIG. 41 is a flowchart of a message transmission method provided by an embodiment of the present application.
  • FIG. 42 is a schematic diagram of packet transmission in a failure scenario provided by an embodiment of the present application.
  • FIG. 43 is a flowchart of a message transmission method provided by an embodiment of the present application.
  • FIG. 44 is a schematic diagram of packet transmission in a failure scenario provided by an embodiment of the present application.
  • FIG. 45 is a schematic structural diagram of a first proxy node provided by an embodiment of the present application.
  • FIG. 46 is a schematic structural diagram of a second proxy node provided by an embodiment of the present application.
  • FIG. 47 is a schematic structural diagram of a network device provided by an embodiment of the present application.
  • FIG. 48 is a schematic structural diagram of a computing device provided by an embodiment of the present application.
  • It should be understood that the sequence numbers of the processes do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and the sequence numbers do not constitute any limitation on the implementation of the embodiments of the present application.
  • In addition, determining B based on A does not mean that B is determined based only on A; B can also be determined based on A and/or other information.
  • Reference throughout the specification to "one embodiment" or "an embodiment" means that a specific feature, structure, or characteristic related to the embodiment is included in at least one embodiment of the present application. Therefore, the appearances of "in one embodiment" or "in an embodiment" in various places throughout the specification do not necessarily refer to the same embodiment. In addition, these specific features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
  • the business functions of traditional telecommunications networks are usually tightly coupled with hardware resources, and the product form of each business node is a dedicated device, which leads to complex deployment and difficult expansion, migration, and upgrade.
  • An SFC virtualizes each service function as an SF node and, by combining multiple SF nodes, obtains an ordered set of service functions, that is, a service function chain.
  • Traffic flows through the SF nodes of a service chain in a specified order, and the message is processed by each SF node in turn to complete the service processing flow.
  • With SFC technology, on the one hand, service functions can be decoupled from hardware resources, facilitating flexible provisioning and rapid deployment of services.
  • On the other hand, SFC can be combined with network function virtualization (NFV); an NFV management node, such as the network functions virtualization manager (VNFM), can then act as the control plane of the SFC to control each node in the SFC.
  • Alternatively, SFC can be combined with a software-defined network (SDN), and the SDN controller in the SDN can act as the control plane of the SFC to control each node in the SFC.
  • The SFC system architecture may include multiple nodes, each of which has a corresponding function, and the different nodes cooperate with each other to realize the overall function of the service chain.
  • The system architecture of an SFC may include a flow classifier (CF), SF nodes, proxy nodes, service function forwarder (SFF) nodes, and routing nodes.
  • FIG. 1 is a system architecture diagram of an SFC provided by an embodiment of the present application. In FIG. 1, the SF nodes are SF node 1, SF node 2, SF node 3, and SF node 4; the SFF nodes are SFF node 1 and SFF node 2; the proxy nodes are proxy node 1 and proxy node 2; and the routing nodes are routing node 1 and routing node 2.
  • The service function chain domain (SFC domain) in FIG. 1 refers to a collection of nodes on which SFC is enabled.
  • the SFF node and the proxy node can be integrated in the same device, that is, the hardware device where the SFF node is located simultaneously implements the function of the proxy node.
  • For example, SFF node 1 and proxy node 1 are located in the same hardware device, and SFF node 2 and proxy node 2 are located in the same hardware device.
  • the flow classifier may be an ingress router of the SFC.
  • The flow classifier classifies received traffic according to classification rules. If the traffic meets the classification criteria, the flow classifier forwards it so that it enters the service chain provided by the SFC. Normally, after determining that a flow meets the classification criteria, the flow classifier adds SFC information to each packet of the flow, so that, based on the SFC information, the packet reaches each SF node in the network in the order of the service function chain and is processed by each SF node in turn.
  • the flow classifier can be a router, switch, or other network device.
  • the SF node is used to perform business processing on messages.
  • the SF node can be connected to one or more SFF nodes, and the SF node can receive messages to be processed from one or more SFF nodes.
  • the business function corresponding to the SF node can be set according to the business scenario.
  • For example, the service functions corresponding to an SF node can be user authentication, firewall, network address translation (NAT), bandwidth control, virus detection, cloud storage, deep packet inspection (DPI), intrusion detection, intrusion prevention, and so on.
  • the SF node can be a server, a host, a personal computer, a network device, or a terminal device.
  • The SFF node forwards the received message to the SF node according to the SFC information. Normally, after the SF node finishes processing the message, it returns the message to the same SFF node, that is, the SFF node that previously sent the unprocessed message to the SF node, and the SFF node then returns the processed message to the network.
  • SFF nodes can be routers, switches, or other network devices.
  • the SFF node and SF node can be used as the data plane of SFC.
  • the SFF node and the SF node may establish a tunnel, and the SFF node and the SF node may transmit packets through the tunnel.
  • For example, the tunnel can be a virtual extensible local area network (VXLAN) tunnel, a generic routing encapsulation (GRE) tunnel, an IP-in-IP tunnel, or the like.
  • Alternatively, the SFF node and the SF node may not transmit packets through a tunnel, but directly transmit IP packets or Ethernet packets.
  • The data plane of the SFC can be controlled by the control plane of the SFC.
  • The control plane of the SFC can include one or more SFC controllers.
  • For example, the SFC controller may be an SDN controller, an NFV management node, or the like.
  • Before messages are transmitted, the SFC client can send information such as the attributes, configuration, and policies of the service function chain to the SFC controller. According to the information sent by the client and the topology of the SFC, the SFC controller can generate information related to the service function chain, such as the classification rules for messages entering the SFC and the identifiers of the SF nodes corresponding to the service function chain, and send this information to the flow classifier and other nodes in the SFC, so that the flow classifier classifies received messages and adds SFC information based on the information sent by the SFC controller.
  • The types of nodes in the SFC system architecture shown in FIG. 1 are merely an example. In the embodiments of the present application, there may be more node types than shown in FIG. 1, in which case the SFC system architecture may also include other types of nodes; alternatively, there may be fewer node types than shown in FIG. 1. The embodiments of the present application do not limit the types of nodes in the SFC system architecture.
  • Similarly, the number of nodes shown in FIG. 1 is merely an example; in the embodiments of the present application, the number of nodes in the SFC system architecture may be greater or smaller than that shown in FIG. 1. The embodiments of the present application do not limit the number of nodes in the SFC system architecture.
  • SFC can be combined with SRv6 technology.
  • The combination of these two technologies is referred to as SRv6 SFC. The system architecture of SRv6 SFC is described below by way of example.
  • Segment routing (SR) is a protocol designed to forward data packets in a network based on the concept of source routing. SR divides a forwarding path into segments and assigns segment identifiers (SIDs) to these segments and to nodes in the network. By arranging the SIDs in order, a list containing the SIDs is obtained; in segment routing over Internet Protocol version 6 (SRv6) this list is usually called a segment list (SID list, also called a segment identifier list), while in segment routing multi-protocol label switching (SR-MPLS) it is usually called a label stack.
  • the segment list can indicate a forwarding path.
  • you can specify the nodes and paths through which the packets carrying the SID List pass, so as to meet the requirements of traffic tuning.
  • a message can be compared to luggage, and SR can be compared to a label affixed to the luggage.
  • SRv6 refers to the application of SR technology in an IPv6 network, using a 128-bit IPv6 address as a form of SID.
  • a node that supports SRv6 will query the local SID table (English: local SID table, also known as the local segment identification table or my SID table) according to the destination address (DA) in the message.
  • If the destination address of the message matches any SID in the local SID table, it is confirmed that the destination address hits the local SID table, and the corresponding operation is performed based on the topology, instruction or service corresponding to the SID; for example, the message can be forwarded from the outbound interface corresponding to the SID.
  • If the destination address of the message does not match any SID in the local SID table, the IPv6 routing FIB is queried according to the destination address, and the message is forwarded according to the entry that the destination address hits in the routing FIB.
  • the local SID table refers to a list containing SIDs stored locally by the node. In order to distinguish the local SID table from the segment list carried in the message, the name of the local SID table usually contains a "local" prefix.
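  • As a minimal illustration of this lookup order (not part of the original embodiment), the forwarding decision can be sketched in Python as follows; the table structures and helper names are assumptions made only for this sketch.
```python
import ipaddress

local_sid_table = {}   # SID (IPv6Address) -> operation bound to that SID
routing_fib = {}       # IPv6Network prefix -> next-hop identifier

def forward(packet):
    da = ipaddress.IPv6Address(packet["destination_address"])
    # A node that supports SRv6 first queries its local SID table with the DA.
    operation = local_sid_table.get(da)
    if operation is not None:
        # DA hits the local SID table: perform the operation corresponding to
        # the SID (topology, instruction or service), e.g. an End.AD behavior.
        return operation(packet)
    # DA misses the local SID table: fall back to the IPv6 routing FIB and
    # forward according to the longest-prefix-match entry.
    matches = [prefix for prefix in routing_fib if da in prefix]
    if matches:
        best = max(matches, key=lambda p: p.prefixlen)
        return ("forward_via", routing_fib[best])
    return ("drop", None)
```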
  • the head node of SRv6 can be the flow classifier in the SFC.
  • the other SR nodes of SRv6 can be the routing nodes, SFF nodes, proxy nodes or SF nodes in the SFC.
  • the SFC information can use the encapsulation format of SRv6, and the SFC information can be a segment list. By arranging the SIDs of the segment list in the message, the forwarding path of the message in the SFC or the various service processing operations to be performed on the message can be specified.
  • the segment list contains one or more SIDs arranged in sequence. Each SID is a 128-bit IPv6 address in form.
  • Each SID can essentially represent a topology, an instruction or a service, and the destination address of the message is the SID in the segment list currently pointed to by the SL.
  • After receiving a message, a node in the SFC reads the destination address of the message and performs the forwarding operation according to the destination address. If the node that receives the message supports SRv6, the node queries the local SID table according to the destination address; when the destination address hits a SID in the local SID table, the corresponding operation is performed based on the topology, instruction or service corresponding to that SID.
  • When the destination address misses the SIDs in the local SID table, the node queries the IPv6 routing FIB according to the destination address and forwards the message according to the entry that the destination address hits in the routing FIB. If the node that receives the message does not support SRv6, the node directly queries the IPv6 routing FIB based on the destination address to forward the message.
  • the SF node may be a node that does not support SRv6, that is, a node that does not perceive SRv6 (English: SRv6-unaware).
  • the SF node itself may not be able to recognize SRH.
  • a proxy node can be used to ensure that an SF node that does not recognize the SRH can still process the message normally.
  • the proxy node is used to recognize the SRH on behalf of the SF node, strip the SRH from the message, and forward the message that does not contain the SRH to the SF node. Because the message received by the SF node does not contain the SRH, an SF node that does not support SRv6 can still process the message.
  • the proxy node can communicate with the SFF node and the SF node.
  • the proxy node receives, on behalf of the SF node, the message containing the SRH from the SFF node; the proxy node strips the SRH from the message to obtain a message that does not contain the SRH, and sends it through the outgoing interface.
  • the SF node receives the message that does not contain the SRH through the inbound interface and, after processing the message, sends the message that does not contain the SRH through the outgoing interface.
  • the proxy node receives the message that does not contain the SRH, encapsulates the message with the SRH to obtain a message containing the SRH, and sends the message containing the SRH through the outgoing interface, so that the message is returned to the SFF node.
  • the proxy node can be a network device, such as a router or a switch, and the proxy node can also be a server, a host, a personal computer, or a terminal device.
  • Proxy nodes can be divided into dynamic proxies, static proxies, shared memory proxies, and pseudo proxies. The difference between a dynamic proxy and a static proxy is that the stripped SRH is cached during the dynamic proxy operation, while the static proxy operation discards the stripped SRH.
  • the integrated setting of the SFF node and the proxy node described above is only an optional manner.
  • the SFF node and the proxy node may also be set in different hardware devices, and the hardware device where the SFF node is located may communicate with the hardware device where the proxy node is located through a network.
  • Dual-homing access may mean that the SF node is dual-homed to the proxy nodes, that is, the same SF node is simultaneously connected to two proxy nodes.
  • A message that needs to be sent to the SF node for processing starts from the routing node and can reach the SF node either through one proxy node or through the other proxy node. Such redundant paths can greatly improve the reliability of the network.
  • Figure 3 is a system architecture diagram of a dual-homing access provided by an embodiment of the present application.
  • the system includes a first proxy node 101, a second proxy node 102, an SF node 103, and a routing node 104.
  • the first proxy node 101 and the second proxy node 102 are connected to the same SF node 103.
  • Each node in the system can be a node in the SFC.
  • the first proxy node 101 may be the proxy node 1 in FIG. 1.
  • the second proxy node 102 may be the proxy node 2 in FIG. 1.
  • the SF node 103 may be the SF node 1 or the SF node 2 in FIG. 1.
  • the routing node 104 may be the routing node 2 in FIG. 1.
  • links can be established between different nodes and connected through links.
  • the first proxy node 101 can be connected to the second proxy node 102 through a first link
  • the first proxy node 101 can be connected to the routing node 104 through a second link
  • the routing node 104 can be connected to the second proxy node 102 through a third link
  • the first proxy node 101 can be connected to the SF node 103 through a fourth link
  • the second proxy node 102 can be connected to the SF node 103 through a fifth link.
  • the first proxy node 101 and the second proxy node 102 may be peer nodes, that is, the first proxy node 101 and the second proxy node 102 have the same function or play the same role during the message transmission process.
  • the first proxy node 101 may be recorded as a local proxy (English: local proxy)
  • the second proxy node 102 may be recorded as a peer proxy (English: peer proxy).
  • the second proxy node 102 can be recorded as a local proxy
  • the first proxy node 101 can be recorded as a peer proxy.
  • the link between the first agent node 101 and the second agent node 102 may be recorded as a peer link (English: peer link).
  • the routing node 104 can choose to send the received message to the first proxy node 101 or to the second proxy node 102; therefore, the message can be forwarded to the SF node 103 either from the first proxy node 101 or from the second proxy node 102.
  • After the SF node 103 processes the message, it can choose to send the processed message to the first proxy node 101 or to the second proxy node 102.
  • Therefore, the processed message can be returned to the routing node 104 either from the first proxy node 101 or from the second proxy node 102. It can be seen that the SF node 103 realizes the function of dual-homing access by simultaneously accessing the first proxy node 101 and the second proxy node 102.
  • the function of load sharing can be realized.
  • For the messages to be processed, the first proxy node 101 can perform dynamic proxy operations on some of the messages, and the second proxy node 102 can perform dynamic proxy operations on the others, so that the two proxy nodes perform dynamic proxy operations together, reducing the pressure on a single proxy node to process messages.
  • After reaching the SF node 103 and being processed, the message can be returned to the routing node 104 either through the first proxy node 101 or through the second proxy node 102.
  • This achieves the effect of splitting the processed traffic, reducing the pressure on a single proxy node to forward messages.
  • In addition, the messages are transmitted over two links, thereby increasing the transmission bandwidth and thus the transmission speed.
  • the transmission reliability can be improved. If the first proxy node 101 or the fourth link fails, the message can still be returned to the routing node 104 through the second proxy node 102. If the second proxy node 102 or the fifth link fails In the event of a failure, the message can still be returned to the routing node 104 through the first proxy node 101, thereby avoiding the problem of interruption of message transmission after a single proxy node fails.
  • the networking manner shown in FIG. 3 is only an optional manner.
  • the first proxy node 101 and the second proxy node 102 deploy peer-to-peer links to improve the reliability of the network.
  • traffic destined for the first proxy node 101 can detour to the second proxy node 102 through the peer link, avoiding occupying the bandwidth between the routing node and the proxy nodes.
  • the dual-homing access system can also be networked in other ways.
  • the peer-to-peer link may not be deployed between the first agent node 101 and the second agent node 102.
  • the deployment of this networking is relatively simple, and the outbound interface occupied by the peer link is eliminated, thereby saving the interface resources of the first proxy node 101 and the interface resources of the second proxy node 102.
  • each link shown in FIG. 3 or FIG. 4 may be a one-hop IP link or a combination of multi-hop IP links. If two nodes are connected by a multi-hop IP link, the forwarding path between the two nodes can pass through one or more intermediate nodes, and the intermediate node can forward packets between the two nodes. This embodiment does not limit the number of hops of the IP link included in each link.
  • If two nodes are located on a one-hop IP link, they can be called two on-link nodes. From the perspective of IP routing, the two nodes are reachable in one hop. For example, in the case of forwarding IPv4 packets between the two nodes, when one node sends an IPv4 packet, the time to live (TTL) value in the IPv4 packet is reduced by one and the packet can reach the other node. For another example, in the case of forwarding IPv6 packets between the two nodes, when one node sends an IPv6 packet, the hop limit in the IPv6 packet is reduced by one and the packet can reach the other node.
  • Two nodes located on a one-hop IP link do not mean that the two nodes must be directly connected physically.
  • Two nodes located on a one-hop IP link include the case where two nodes are directly connected physically, and of course, it also includes the case where two nodes are not directly connected physically. For example, when two nodes are connected through one or more Layer 2 switches, the two nodes can also be said to be located on a one-hop IP link.
  • two nodes are located on a multi-hop IP link, they can be called two nodes off-link. From the perspective of IP routing, the two nodes are reachable via multi-hop routes.
  • the dual-homing access system described in FIG. 3 or FIG. 4 is only an optional manner, and having exactly two proxy nodes in a peer relationship in the SFC is also only an optional manner.
  • In other embodiments, there may be more than two proxy nodes with a peer relationship in the SFC; the peer proxy of any proxy node may not be only one but multiple; the peer links accessed by any proxy node may not be only one but multiple; and the proxy node can connect to the corresponding peer proxy separately through each peer link.
  • the above introduces the system architecture of SRv6 SFC dual-homing access.
  • the following exemplarily introduces the method and process of message transmission based on the system architecture provided above.
  • FIG. 5 is a flowchart of a message transmission method provided by an embodiment of the present application.
  • the interaction body of the method includes a first agent node, a second agent node, an SF node, and a routing node.
  • the term "node" in this document may refer to a device.
  • the routing node may be a network device, such as a router or a switch, for example, the routing node may be the routing node 104 in FIG. 3 or FIG. 4;
  • the first proxy node may be a network device, such as a router, a switch, or a firewall
  • the first proxy node may be the first proxy node 101 in FIG. 3 or FIG. 4;
  • the first agent node may also be a computing device, such as a host, a server, a personal computer or other equipment;
  • the second proxy node may be a network device, such as a router, a switch, or a firewall.
  • the second proxy node may be the second proxy node 102 in FIG. 3 or FIG. 4, and the second proxy node may also be a computing device, such as a host, a server, a personal computer or other equipment;
  • the SF node can be a computing device, for example, a host, a server, a personal computer or other equipment; for example, the SF node can be the SF node 103 in FIG. 3 or FIG. 4. This method can include the following steps:
  • Step 501 The routing node sends the first message to the first proxy node.
  • the first message refers to the message whose payload is to be processed by the SF node.
  • FIG. 6 is a schematic diagram of a first message provided by an embodiment of the present application.
  • the first message may be an SRv6 message.
  • the first message may include IPv6 header, SRH, and payload.
  • the IPv6 header may include a source address and a destination address
  • the SRH may include a segment list, SL, and one or more TLVs
  • the segment list may include one or more SIDs.
  • the payload of the first message may be an IPv4 message, an IPv6 message, or an Ethernet (English: Ethernet) frame.
  • the destination address of the first message can be used to carry the active SID (English: active SID) in the segment list.
  • the active SID is also called the active segment, and it is the SID to be processed by the SR node that receives the message.
  • In SR-MPLS, the active SID is the outermost label of the label stack; in SRv6, the active SID is the destination address of the message.
  • For example, if the segment list includes 5 SIDs, namely SID0, SID1, SID2, SID3 and SID4, and the value of SL is 2, it means that there are 2 unexecuted SIDs in the segment list, namely SID0 and SID1, the currently active SID in the segment list is SID2, and there are 2 executed SIDs in the segment list, namely SID3 and SID4.
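  • The relationship between the SL and the active SID in this example can be sketched as follows (an illustrative Python fragment, not part of the original embodiment): the active SID is the segment-list entry pointed to by the SL, entries below the SL are not yet executed, and entries above it have been executed.
```python
segment_list = ["SID0", "SID1", "SID2", "SID3", "SID4"]
sl = 2                                # Segments Left

active_sid   = segment_list[sl]       # "SID2": carried as the IPv6 destination address
pending_sids = segment_list[:sl]      # ["SID0", "SID1"]: not yet executed
done_sids    = segment_list[sl + 1:]  # ["SID3", "SID4"]: already executed

assert active_sid == "SID2"
assert pending_sids == ["SID0", "SID1"] and done_sids == ["SID3", "SID4"]
```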
  • the destination address of the first message may be the endpoint dynamic agent SID (English: End.AD.SID, where AD stands for dynamic agent), that is, the active SID of the first message may be End.AD.SID .
  • the source address in the IPv6 header of the first packet may be used to identify the device encapsulating the SRH, for example, the source address may be the address of the flow classifier, or the source address may be the address of the head node.
  • the IPv6 header of the first packet may not carry the source address.
  • FIG. 6 is only a schematic diagram of the first message.
  • the number and types of fields in the first message may be more than those shown in FIG. 6 or less than those shown in FIG. 6.
  • the position and length of each field in the first message shown in FIG. 6 can be adaptively changed according to actual needs. This embodiment does not limit the position of each field in the message, nor does it limit each field. The length is limited.
  • End.AD.SID is used to instruct the first proxy node or the second proxy node to perform the dynamic proxy operation.
  • the dynamic proxy operation is also called End.AD operation.
  • the dynamic proxy operation can specifically include operations such as stripping the SRH in the message, caching the SRH in the message, and sending out the message with the SRH stripped.
  • End.AD.SID can be pre-configured on the first agent node or the second agent node, and End.AD.SID can be pre-stored in the local SID table of the first agent node or the second agent node.
  • End.AD.SID can be published to the network by the first agent node or the second agent node.
  • the flow classifier can receive the published End.AD.SID and add End.AD.SID to the message, so that after the message carrying End.AD.SID is forwarded to the first proxy node or the second proxy node, Trigger the first proxy node or the second proxy node to perform dynamic proxy operations on the message.
  • the routing node may send the first packet in a flow-based manner or a packet-based manner. This embodiment does not limit the manner in which the first packet is sent.
  • Step 502 The first proxy node receives the first message from the routing node, and the first proxy node stores the SRH of the first message in the first cache entry.
  • Specifically, the first proxy node can read the destination address of the first message, and query the local SID table according to the destination address of the first message to determine whether the destination address matches a SID in the local SID table.
  • If the destination address of the first message matches End.AD.SID in the local SID table, the first proxy node can determine that the first message is an SRv6 message and execute the dynamic proxy operation corresponding to End.AD.SID.
  • the first cache entry refers to the SRH cache entry in which the first proxy node stores the first message.
  • the first proxy node may include an interface board, the interface board may include a forwarding entry memory, and the first cache entry can be stored in the forwarding entry memory.
  • the first proxy node can detect whether the SRH of the first message has already been stored in the cache. If the SRH of the first message is not stored in the cache, the first proxy node allocates a cache entry for the SRH of the first message to obtain the first cache entry, and stores the SRH of the first message in the first cache entry. If it is detected that the SRH of the first message has already been stored in the cache, the step of storing the SRH of the first message can be omitted.
  • the method of detecting whether the SRH of the first message has been stored in the cache may include: comparing the SRH in each cache entry with the SRH of the first message; if the SRH in any cache entry is consistent with the SRH of the first message, it is confirmed that the cache has stored the SRH of the first message; if the SRH in every cache entry is inconsistent with the SRH of the first message, it is confirmed that the cache does not store the SRH of the first message, and the cache needs to be updated to store the SRH of the first message.
  • During the comparison, the entire content of the SRH in the cache entry can be compared with the entire content of the SRH of the first message, part of the content of the SRH in the cache entry can be compared with part of the content of the SRH of the first message, or the hash value of the SRH in the cache entry can be compared with the hash value of the SRH of the first message. This embodiment does not limit the comparison manner.
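  • As a minimal sketch of this detection step (assuming hash-based comparison, one of the manners mentioned above; the cache layout and helper names are illustrative assumptions):
```python
import hashlib

def srh_digest(srh_bytes: bytes) -> str:
    # Hash-based comparison, one of the comparison manners mentioned above.
    return hashlib.sha256(srh_bytes).hexdigest()

def store_if_absent(cache: dict, entry_id: int, srh_bytes: bytes) -> bool:
    """Store the SRH in a new cache entry only if no existing entry holds it."""
    digest = srh_digest(srh_bytes)
    if any(entry["digest"] == digest for entry in cache.values()):
        return False                      # already cached: skip the storing step
    cache[entry_id] = {"digest": digest, "srh": srh_bytes}
    return True
```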
  • the storage mode of the SRH of the first message may include any one of the following storage mode 1 to storage mode 3:
  • Storage manner 1: the first proxy node may obtain the flow identifier corresponding to the first message and use the flow identifier corresponding to the first message as an index to store the SRH of the first message.
  • For example, the first proxy node may store the SRH of the first message in a key-value storage manner, where the flow identifier is the key and the SRH of the first message is the value.
  • the flow identifier is used to identify the data flow to which the message belongs.
  • the flow identifier corresponding to each message in the same data flow is the same, and the flow identifier can be the value of one or more fields in the message.
  • the flow identifier corresponding to the first packet may be the quintuple of the first packet, MAC header information, or application layer information.
  • the five-tuple can include the source IP address, source port number, destination IP address, destination port number, and transport layer protocol number of the message.
  • the MAC frame header information may include the source MAC address, destination MAC address, Ethernet type (Ether Type), and VLAN tag (VLAN tag) of the message.
  • the application layer information can be the value of one or more fields in the payload of the packet.
  • Storage manner 2: the first proxy node can obtain the identifier of the first cache entry and use the identifier of the first cache entry as an index to store the SRH of the first message. Here, the first proxy node can adopt a key-value storage manner to store the SRH of the first message, where the identifier of the first cache entry is the key and the SRH of the first message is the value.
  • If storage manner 2 is adopted, the flow of message transmission can refer to the embodiment shown in FIG. 36 and the embodiment shown in FIG. 37 below.
  • Storage manner 3: the first proxy node can store the SRH of the first message by using the flow identifier corresponding to the first message together with End.AD.SID as an index. Here, the first proxy node can store the SRH of the first message in a key-value storage manner, where the combination of the flow identifier and End.AD.SID is the key and the SRH of the first message is the value.
  • If storage manner 3 is adopted, the flow of message transmission can refer to the embodiment shown in FIG. 38 and the embodiment shown in FIG. 39 below; a minimal sketch of the three index choices is given after this list.
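  • The sketch below models the three index choices as key constructors over a key-value cache; all names are illustrative assumptions, and the End.AD.SID value is only a placeholder.
```python
def key_storage_manner_1(flow_id):
    # storage manner 1: the flow identifier is the key
    return ("flow", flow_id)

def key_storage_manner_2(cache_entry_id):
    # storage manner 2: the identifier of the first cache entry is the key
    return ("entry", cache_entry_id)

def key_storage_manner_3(flow_id, end_ad_sid):
    # storage manner 3: flow identifier + End.AD.SID together form the key
    return ("flow+sid", flow_id, end_ad_sid)

srh_cache = {}
five_tuple = ("10.0.0.1", 1234, "10.0.0.2", 80, "tcp")     # one possible flow identifier
srh_cache[key_storage_manner_3(five_tuple, "<End.AD.SID>")] = b"<SRH of the first message>"
```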
  • Through step 502, the SRH of the first message can be stored in the first proxy node. Then, if in the subsequent process the SF node or the second proxy node sends a message that does not contain the SRH to the first proxy node, the first proxy node can use the pre-stored index to find the SRH of the first message, thereby restoring the SRH.
  • Step 503 The first proxy node generates a second message according to the endpoint dynamic proxy SID, the first message, and the first bypass SID corresponding to the second proxy node.
  • the second message is a message used to transmit the SRH of the first message to the peer proxy node.
  • the second message may be a control message.
  • the second message may include the first bypass SID, control information, and the SRH of the first message.
  • the source address of the second message may be the second bypass SID corresponding to the first proxy node, or other information that can identify the first proxy node.
  • the second message may be an SRv6 message, and the second message may include an IPv6 header, an SRH, and a payload.
  • the IPv6 header may include the source address of the second packet and the destination address of the second packet.
  • the SRH of the second message may include each SID in the SRH of the first message and the first bypass SID.
  • the payload of the second packet may be the payload of the first packet. It should be noted that it is an optional manner that the second packet includes the payload of the first packet. In another embodiment, the second packet may not include the payload of the first packet.
  • the source address of the IPv6 header of the second packet may be a second bypass SID corresponding to the first proxy node, and the second bypass SID is used to identify that the source node of the second packet is the first proxy node.
  • The processing strategy corresponding to End.AD.SID may be configured on the first proxy node in advance, and the processing strategy may include performing the dynamic proxy operation and generating and sending the second message. Therefore, after the first proxy node receives the first message, according to End.AD.SID in the first message, it not only performs step 502, step 508 and step 509 to provide the dynamic proxy service, but also performs step 503 to step 504 to control the second proxy node to store the SRH. In this way, the ability of SRv6 to program through SIDs is fully utilized, and the operations corresponding to End.AD.SID are expanded.
  • Regarding the configuration of the first proxy node, End.AD.SID may be configured in advance: the first proxy node may receive a configuration instruction and obtain End.AD.SID from the configuration instruction; the first proxy node can then store End.AD.SID, for example, in the local SID table of the first proxy node.
  • the second agent node can be configured in advance, the second agent node can receive the configuration instruction, and obtain the End.AD.SID from the configuration instruction; the second agent node can store the End.AD.SID, for example, End.AD.SID is stored in the local SID table of the second agent node.
  • the configuration operation can be performed by the user on the first agent node or the second agent node, or can be performed by the control plane of the SFC, such as the SDN controller or the NFVM, which is not limited in this embodiment.
  • the first agent node and the second agent node may be in the relationship of anycast endpoint dynamic agent SID (English: anycast End.AD.SID).
  • Anycast is a communication method of IPv6 in which a group of nodes that provide the same or corresponding services are identified by the same address, so that a message whose destination address is that address can be routed to any node in this group of nodes.
  • the same End.AD.SID can be assigned to the first proxy node and the second proxy node, so that the End.AD.SID configured on the first proxy node is the same as the End.AD.SID configured on the second proxy node.
  • In this way, the routing node can send the message either to the first proxy node or to the second proxy node, so the dynamic proxy operation can be performed on the message either by the first proxy node or by the second proxy node, thereby facilitating the realization of the dual-homing access function.
  • the corresponding anycast index (English: anycast-index) can be configured for the anycast End.AD.SID.
  • the anycast index is used to identify the End.AD.SID of the same group of nodes with an anycast relationship. For the same End.AD.SID, the anycast index configured for End.AD.SID on the first proxy node and the anycast index configured for End.AD.SID on the second proxy node may be the same.
  • the first agent node may configure End.AD.SID as anycast End.AD.SID and configure the anycast index corresponding to End.AD.SID by receiving the following configuration instruction.
  • proxy1 represents the first proxy node.
  • the meaning of the first line of the configuration instruction is to configure the SR of IPv6; the meaning of the second line is that the Locator (location information) is t1, the IPv6 prefix is A::1 with a length of 64, and the static route is 32.
  • the meaning of the third line of the configuration instruction is to configure, under the Locator of the IPv6 SR, the opcode ::2, with the anycast index corresponding to End.AD.SID being 10.
  • the meaning of the fourth line of the configuration instruction is to configure the End.AD.SID of the IPv6 SR to encapsulate IPv4 packets, to use Ethernet port 0 of the sub-board in slot 1 as the outgoing interface for sending the IPv4 packets, with the next-hop address being 1.1.1.1, and to use Ethernet port 1 of sub-board 0 in slot 1 as the incoming interface.
  • the scenario embodied in the above configuration instruction is that the payload of the first message is an IPv4 packet, and the proxy node obtains the IPv4 packet carried by the first message as the third message, thereby generating the third message.
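  • The configuration text itself appears as a listing in the original document and is not reproduced here. Based only on the line-by-line meanings described above, the End.AD.SID entry installed on proxy1 could be modeled roughly as the following Python dictionary; all field names and the interface descriptions are assumptions made for illustration.
```python
# Rough, assumed model of the End.AD.SID configuration on the first proxy node (proxy1).
end_ad_sid_entry = {
    "locator": "t1",                  # line 2: locator name
    "ipv6_prefix": "A::1",            # line 2: IPv6 prefix
    "prefix_length": 64,
    "static": 32,                     # line 2: static route value
    "opcode": "::2",                  # line 3: opcode under the locator
    "behavior": "End.AD",
    "anycast_index": 10,              # line 3: anycast index shared with the peer proxy
    "inner_packet": "ipv4",           # line 4: payload type encapsulated toward the SF node
    "out_interface": "slot 1 / sub-board / Ethernet port 0",   # line 4 (description only)
    "next_hop": "1.1.1.1",            # line 4: next-hop address
    "in_interface": "slot 1 / sub-board 0 / Ethernet port 1",  # line 4 (description only)
}
```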
  • The detour SID (bypass SID) is a new SID provided in this embodiment.
  • the bypass SID is used to identify that the destination node of the message is the opposite proxy node.
  • the local proxy node can send the message to the opposite proxy node by carrying the bypass SID in the message, that is, the message of the local proxy is sent to the peer proxy.
  • the local proxy node can transmit the message to the opposite proxy node by carrying the bypass SID of the opposite proxy node in the message, thereby realizing the function of bypassing the message from the local end to the opposite proxy node.
  • the bypass SID can be called End.ADB SID, and the meaning of End.ADB SID is bypass SID for End.AD.SID (a SID with bypass function extended for dynamic proxy operation).
  • the name of End.ADB SID can be regarded as the combination of End.AD.SID and B, where the "B" in "ADB” stands for bypass, which means bypass, and can also be translated as bypass.
  • the bypass SID may be in the form of an IPv6 address, and the bypass SID may include 128 bits.
  • the detour SID may include positioning information (Locator) and function information (Function), and the format of the detour SID is Locator: Function.
  • Locator occupies the high bits of the bypass SID
  • Function occupies the low bits of the bypass SID
  • the detour SID may also include parameter information (Arguments), and the format of the detour SID is Locator:Function:Arguments.
  • the detour SID may be a node SID, and each detour SID can uniquely correspond to a proxy node. If the system architecture of the SFC includes multiple proxy nodes, there may be multiple detour SIDs. The row SID corresponds to one agent node among the plurality of agent nodes. In this way, for a detour SID, the message carrying the detour SID can be uniquely routed to a proxy node, so as to be sent to the proxy node corresponding to the detour SID.
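  • As a hedged sketch of the Locator:Function layout, the following helper composes a 128-bit bypass SID from a locator prefix (high bits) and a function value (low bits); the 64-bit split chosen here is an assumption, since the embodiment only states that the Locator occupies the high bits and the Function occupies the low bits.
```python
import ipaddress

def build_bypass_sid(locator_prefix: str, function: int, locator_bits: int = 64) -> str:
    """Compose a 128-bit bypass SID as Locator (high bits) : Function (low bits)."""
    locator = int(ipaddress.IPv6Address(locator_prefix))
    function_mask = (1 << (128 - locator_bits)) - 1
    return str(ipaddress.IPv6Address(locator | (function & function_mask)))

# e.g. a bypass SID under locator C:: with function value 2 -> "c::2"
print(build_bypass_sid("C::", 0x2))
```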
  • a dual-homing access system architecture is taken as an example for description. The system includes a first proxy node and a second proxy node.
  • the detour SID corresponding to the second proxy node is called the first detour SID.
  • the first bypass SID is used to send the message to the second proxy node.
  • the first detour SID may be issued to the network by the first proxy node, and other nodes in the SFC may receive the issued first detour SID.
  • the detour SID corresponding to the first proxy node is called the second detour SID.
  • the second bypass SID is used to send the message to the first proxy node.
  • the second detour SID can be issued to the network by the second proxy node, and other nodes in the SFC can receive the issued second detour SID.
  • the first agent node may be configured in advance, and the first agent node may receive a configuration instruction, and obtain the first detour SID from the configuration instruction; the first agent node may Store the first detour SID, for example, store the first detour SID in the local SID table of the first agent node; when the first agent node receives the first message, it can read the pre-stored first detour SID to get the first detour SID.
  • the configuration operation may be performed by the user on the first agent node, or may be performed by a controller of the network, such as an SDN controller or NFVM, which is not limited in this embodiment.
  • the first agent node may receive the following configuration instructions:
  • proxy1 represents the first proxy node.
  • the meaning of the first line of the configuration instruction is to configure the SR of IPv6; the meaning of the second line is that the Locator (location information) is t2, the IPv6 prefix is B:: with a length of 64, and the static route is 32.
  • the meaning of the configuration command in the third line is that the operation code is ::1, and the first detour SID corresponding to the second agent node is C::2.
  • Similarly, the second proxy node can be configured in advance: the second proxy node can receive a configuration instruction, obtain the second detour SID from the configuration instruction, and store the second detour SID, for example, in the local SID table of the second proxy node.
  • the second agent node may receive the following configuration instructions:
  • proxy2 represents the second proxy node.
  • the meaning of the first line of the configuration instruction is to configure the SR of IPv6; the meaning of the second line is that the Locator (location information) is t2, the IPv6 prefix is C:: with a length of 64, and the static route is 10.
  • the meaning of the configuration command in the third line is that the operation code is ::2, and the second detour SID corresponding to the first agent node is B::1.
  • the detour SIDs configured on the first proxy node and the second proxy node may have a mutual reference relationship, that is, the first detour SID configured on the first proxy node refers to the second proxy node, to indicate that the first detour SID is used to transmit the message to the second proxy node, and the second detour SID configured on the second proxy node refers to the first proxy node, to indicate that the second detour SID is used to transmit the message to the first proxy node.
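  • Using the values from the configuration meanings above, this mutual reference can be summarized as a small table (illustrative only; the table layout is an assumption):
```python
peer_bypass_sid = {
    "proxy1": "C::2",   # first bypass SID: referenced on proxy1, used to reach the second proxy node
    "proxy2": "B::1",   # second bypass SID: referenced on proxy2, used to reach the first proxy node
}
```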
  • The above correspondence between the bypass SID and the proxy node is only an exemplary manner.
  • the bypass SID and the proxy node may also have other correspondences, which is not limited in this embodiment.
  • The control information is used to instruct the second proxy node to store the SRH of the first message.
  • the control information may include a first flag and a second flag.
  • the first flag is used to identify whether the message is a copied message.
  • the first flag can be recorded as a copy flag (English: Copy flag, C-flag for short).
  • the second message may include a copy flag field, and the first flag may be carried in the copy flag field.
  • the length of the first flag may be 1 bit.
  • the second message may be a message obtained by encapsulating a copy of the first message; therefore, the first flag may be set to 1, and the first flag can identify that the second message is a copied message.
  • the second flag is used to identify whether the message is an acknowledgement message.
  • the second flag can be recorded as an acknowledgement flag (English: copy acknowledge flag, abbreviated as A-flag).
  • the second message may include a confirmation flag field, and the second flag may be carried in the confirmation flag field.
  • the length of the second flag may be 1 bit.
  • the second flag may be set to 0, and the second flag may identify that the second message is not a confirmation message.
  • the control information can be carried anywhere in the second message.
  • the location of the control information may include any of the following (1) to (4).
  • (1) The control information is carried in the extension header of the second message.
  • Extension headers can include two types, one is an extension header (English: hop-by-hop option header) that is parsed hop-by-hop, and the other is an extension header (English: destination option header) that is parsed by a destination node.
  • the extension header parsed by the destination node can appear in two positions. In one case, the extension header parsed by the destination node is located before the routing header; in this case, the extension header is parsed by the node specified in the routing header. In the other case, the extension header parsed by the destination node is located after the routing header; in this case, the extension header is parsed by the final destination node of the message.
  • the extension header in the second message may be an extension header parsed by the destination node, the extension header parsed by the destination node may be located before the SRH, and the control information may be carried in the extension header parsed by the destination node.
  • In this way, the second proxy node specified in the SRH can parse the extension header, so that the second proxy node reads the control information in the extension header and stores the SRH according to the control information.
  • the extension header may include one or more of the following fields:
  • Next header type (English: next header): after this extension header, the message may further include one or more extension headers or one or more upper-layer headers. The next header field is used to indicate the type of the extension header or upper-layer header that follows this extension header in the message. The length of the next header field can be 1 byte.
  • the length of the extension header (English: header Extended Length, abbreviated as Hdr Ext Len): used to indicate the length of the extension header.
  • the length indicated by Hdr Ext Len does not include the first 8 bytes of the extension header, that is, the length indicated by Hdr Ext Len is the length from the 9th byte to the last byte of the extension header.
  • Hdr Ext Len can use 8 bytes as a unit.
  • the value of Hdr Ext Len can be set to 2.
  • the option type (English: option Type, abbreviated as: OptType) is used to indicate the type of the option in the extension header.
  • the length of the option type can be 1 byte.
  • the option type can be a standardized experimental type to facilitate intercommunication between different manufacturers, and the option type can also be an experimental option type.
  • the value of the experimental option type can include 0x1E, 0x3E, 0x5E, 0x7E, 0x9E, 0xBE, 0xDE, or 0xFE.
  • the highest 2 bits of the option type define the action to be taken by the receiving node when it does not recognize the option.
  • the third high digit in the option type indicates whether the option data is allowed to be modified during the forwarding process.
  • the value of the option type of the extension header can be set to 0x1E; the highest two bits of the option type are then 00, which means that if the option is received and not recognized, the option is ignored and the rest of the message continues to be processed.
  • the third high bit in the option type indicates that the option data is allowed to be modified during the forwarding process.
  • Option length (English: option length, abbreviated as: OptLen): used to indicate the length of the option in the extension header. For example, in FIG. 7, the length indicated by OptLen can be the length from the first byte after OptLen to the last byte of the option. The value of OptLen can be set to 20.
  • Suboption type (English: suboption type, abbreviation: SubOptType): Options can have a nested structure, and one option can include one or more suboptions. The value of the sub-option type can be set to 1, which indicates that the sub-option is used for information synchronization between two nodes that support SRv6 SFC dynamic proxy operation, that is, SRH synchronization.
  • Suboption length (English: suboption length, abbreviation: SubOptLen): used to indicate the length of the data part of the suboption.
  • the length of SubOptLen is 1 byte, and SubOptLen can be in bytes.
  • the value of the sub-option length can be set to 20.
  • the reserved field (reserved1) is 1 byte in length.
  • the sender can set the value of the reserved field to 0, and the receiver can ignore the reserved field.
  • Target board index (English: Target Board, TB for short): The hardware index of the board that generates the cache entry, the length is 1 byte. TB can be set when N-flag is set to 1, if N-flag is default, TB can be set to 0.
  • Flags field: reserved flag bits that are currently undefined.
  • the length of the flag field can be 28 bits. The sender can set the value of the reserved field to 0, and the receiver can ignore the reserved field.
  • The first flag: used to identify whether the message is a copied message, and can be recorded as copy flag, C, or C-flag.
  • the length of the first flag may be 1 bit. If the value of the first flag is set to 1, it can indicate that the message is a copied message, usually a control message, for example, the message is the second message or the sixth message forwarded in a normal scenario. If the value of the first flag is set to 0, it can indicate that the message is not a copied message, but is usually a data message. For example, the message is the second message forwarded in a failure scenario.
  • the process of transmitting the second message in a normal scenario can refer to the embodiment in FIG. 5, and the process of transmitting the second message in a fault scenario can refer to the embodiment of FIG. 41 below.
  • The second flag: used to identify whether the message is an acknowledgement message, and can be recorded as acknowledge flag, A, or A-flag.
  • the length of the second flag may be 1 bit. If the value of the second flag is set to 1, it can indicate that the message is a confirmation message, for example, the message is the sixth message. If the value of the second flag is set to 0, it may indicate that the message is not an acknowledgement message, for example, the message is a second message forwarded in a normal scenario.
  • The third flag: used to identify whether the SF node associated with End.AD.SID is an SF node of the NAT type, that is, whether the SF node modifies the flow identifier corresponding to the message in the process of processing the message. It can be recorded as NAT flag (English: network address translation flag), N, or N-flag. The length of the third flag may be 1 bit. If the SF node associated with End.AD.SID is an SF node of the NAT type, the value of the third flag can be set to 1; if the third flag is default, the third flag can be set to 0. If the value of the third flag is set to 1, the TB and Cache index fields can be set.
  • Cache index (Cache-Index): used to carry the identifier of the cache entry.
  • the length of the cache index is 4 bytes.
  • the cache index can be set when the third flag is set to 1, and if the third flag is default, the cache index can be set to 0.
  • the anycast index (anycast index) is an index uniquely corresponding to the anycast End.AD.SID, and the length is 4 bytes.
  • the receiving end can verify the anycast index in the message.
  • Reserved field (Reserved2): 4 bytes in length. The sender sets the value of the reserved field to 0, and the receiver ignores the reserved field.
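  • A hedged sketch of packing this extension header is given below. The field widths and example values follow the descriptions above; the exact bit positions of the C, A and N flags inside the 32-bit flags word, and the use of 43 (the routing header, since the extension header precedes the SRH) as the next header, are assumptions made for illustration.
```python
import struct

def build_srh_sync_option(next_header: int, tb: int, c_flag: int, a_flag: int,
                          n_flag: int, cache_index: int, anycast_index: int) -> bytes:
    # Assumed bit positions: C, A and N occupy the three most significant bits
    # of the 32-bit flags word; the remaining bits are reserved and set to 0.
    flags = (c_flag << 31) | (a_flag << 30) | (n_flag << 29)
    return struct.pack(
        "!BBBBBBBBIIII",
        next_header,    # Next Header: type of the header following this extension header
        2,              # Hdr Ext Len: length beyond the first 8 bytes, in 8-byte units
        0x1E,           # OptType: experimental option type
        20,             # OptLen
        1,              # SubOptType: SRH synchronization
        20,             # SubOptLen
        0,              # Reserved1
        tb,             # Target Board index
        flags,          # C-flag / A-flag / N-flag + reserved flag bits
        cache_index,    # Cache index
        anycast_index,  # Anycast index
        0,              # Reserved2
    )

option = build_srh_sync_option(next_header=43, tb=0, c_flag=1, a_flag=0,
                               n_flag=0, cache_index=0, anycast_index=10)
assert len(option) == 24
```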
  • the detour SID can include one or more of Locator, Function, or Arguments.
  • (2) The control information can be carried in the function information (Function) of the first bypass SID, or in the parameter information (Arguments) of the first bypass SID.
  • In this way, the bypass SID itself realizes the function of controlling the peer proxy node to store the SRH, thereby reducing the extra bytes occupied by the control information in the message, which saves the data volume of the message and reduces the overhead of transmitting the message.
  • (3) The control information is carried in the IP header of the second message.
  • control information can be carried in the IPv6 header of the second packet.
  • the control information can be carried in other parts than the first bypass SID in the IPv6 header.
  • For example, if the payload of the second message is an IPv6 packet, that is, the form of the second message is similar to that of an IP-in-IP packet and includes an outer IPv6 header and an inner IPv6 header, the control information can be carried in the outer IPv6 header.
  • (4) The control information is carried in a TLV in the SRH of the second message.
  • the SRH of the second message may include one or more TLVs, and the control information may be carried in the TLVs.
  • the function of controlling the storage of SRH by the second proxy node can be realized through the SRH itself, thereby reducing the additional bytes occupied by the control information in the message, thus saving the data volume of the message and reducing the transmission The overhead of the message.
  • the format of the second message can differ due to the network topology and/or the path planning requirements of the first proxy node, and can also differ due to the different ways of carrying the control information.
  • implementation manner 1 and implementation manner 2 are used as examples for description.
  • In implementation manner 1, the destination address of the second message can be the first bypass SID. For example, referring to FIG. 10, FIG. 11, FIG. 12 and FIG. 13, the destination address (Destination Address, DA) field of the IPv6 header of the second message can carry the first bypass SID.
  • Scenario 1 The first proxy node and the second proxy node are located on a one-hop IP link (on-link). Please refer to the above for the specific concept of on-link, so I won't repeat it here.
  • FIG. 9 is a schematic diagram of the first agent node and the second agent node located on a one-hop IP link.
  • the destination address in the IPv6 header of an SRv6 message is usually used to identify the next SR node to forward the message. Therefore, if the first proxy node and the second proxy node are located on a one-hop IP link, the first proxy node can indicate that the next SR node of the second message is the second by carrying the first bypass SID in the destination address. Proxy node.
  • Scenario 2 The first proxy node and the second proxy node are located on a multi-hop IP link (off-link), and the intermediate node between the first proxy node and the second proxy node does not enable the SR function.
  • In this case, when the intermediate node receives the second message, it can query the routing FIB according to the destination address of the second message (that is, the first bypass SID), and forward the message according to the entry that the destination address hits in the routing FIB.
  • Scenario 3 The first proxy node and the second proxy node are located on a multi-hop IP link, and the service requirements of the intermediate node for forwarding the second message are not specified.
  • Scenario 3 may include multiple situations, including the situation where the intermediate node has the SR function enabled and the situation where it does not. For example, if the multi-hop IP link forms multiple forwarding paths and the first proxy node does not specify which forwarding path should forward the second message, then regardless of whether the intermediate node has the SR function enabled, the first bypass SID can be carried in the destination address.
  • If the forwarding path formed by the multi-hop IP link is unique, for example, there is only one intermediate node between the first proxy node and the second proxy node, then regardless of whether the intermediate node has the SR function enabled, the first proxy node does not need to designate an intermediate node, and the second message can still be forwarded to the second proxy node through the intermediate node.
  • If the intermediate node has the SR function enabled, when the intermediate node receives the second message, it can query the local SID table according to the destination address of the second message, that is, match the first bypass SID with each SID in the local SID table.
  • Since the first bypass SID is not a SID of the intermediate node, the intermediate node will determine that the first bypass SID misses the local SID table, and the intermediate node will forward the second message in the traditional IP routing manner; specifically, the intermediate node will query the routing FIB according to the first bypass SID and forward the second message according to the entry that the destination address hits in the routing FIB.
  • the format of the second message realized through the first implementation manner can be specifically divided into multiple types.
  • the following is an example to illustrate through implementation method 1.1 to implementation method 1.4 with reference to Figure 10 to Figure 13:
  • Implementation manner 1.1: Referring to FIG. 10, if an extension header is used to carry the control information, the format of the second message can be as shown in FIG. 10.
  • In this case, the second message includes an IPv6 header, an extension header, an SRH and a payload, where the extension header can be located between the IPv6 header and the SRH.
  • Implementation 1.2 Refer to Figure 11. If the first bypass SID (ie End.ADB.SID in the figure) is used to carry control information, the format of the second message can be as shown in Figure 11, and the second message includes IPv6 Header, SRH, and payload. By comparing Figure 10 and Figure 11, it can be seen that implementation 1.2 reduces the length of the second message and can achieve the function of compressing the second message.
  • Implementation manner 1.3: Referring to FIG. 12, if a TLV in the SRH is used to carry the control information, the format of the second message may be as shown in FIG. 12, and the second message includes an IPv6 header, an SRH and a payload.
  • Implementation manner 1.4: Referring to FIG. 13, if the IPv6 header is used to carry the control information, the format of the second message may be as shown in FIG. 13, and the second message includes an IPv6 header, an SRH and a payload.
  • In implementation manner 2, the destination address of the second message can be the SID corresponding to the next SR node. For example, referring to FIG. 15, FIG. 16, FIG. 17 and FIG. 18, the destination address (DA) field of the IPv6 header of the second message can carry the SID corresponding to the next SR node.
  • the SID corresponding to the next SR node can be published by the next SR node to the SR network, and the SID corresponding to the next SR node can be stored in the local SID table of the next SR node.
  • This method can be adapted to the service requirement that the first proxy node and the second proxy node are located on a multi-hop IP link, and have designated intermediate nodes for forwarding the second message.
  • the first proxy node can indicate that the second message is to be forwarded from the local end to the SR node by carrying the SID corresponding to the next SR node in the destination address, so that the second message is forwarded by the SR node and arrives The second agent node.
  • Figure 14 is a schematic diagram of the first proxy node and the second proxy node located on a multi-hop IP link.
  • In FIG. 14, the intermediate nodes between the first proxy node and the second proxy node include intermediate node 1, intermediate node 2, and intermediate node 3.
  • Intermediate node 1, intermediate node 2, and intermediate node 3 are all enabled with the SR function.
  • the destination address of the second message may carry the SID corresponding to the intermediate node 3.
  • When intermediate node 3 receives the second message, it can query the local SID table according to the destination address of the second message, and if the destination address of the second message hits a SID in the local SID table, the operation corresponding to that SID is executed, for example, forwarding the second message to the second proxy node.
  • the number of intermediate nodes between the first agent node and the second agent node can be set according to the requirements of networking, and the number of intermediate nodes can be more or less, for example, it can be only one, or can be more. Number, this embodiment does not limit the number of intermediate nodes located between two proxy nodes in a multi-hop IP link scenario.
  • the format of the second message implemented through the second implementation manner can be specifically divided into multiple types.
  • the following is an example for description with reference to Fig. 15, Fig. 16, Fig. 17 and Fig. 18 through the implementation mode 2.1 to the implementation mode 2.4:
  • Implementation manner 2.1: Referring to FIG. 15, if an extension header is used to carry the control information, the format of the second message can be as shown in FIG. 15, and the second message includes an IPv6 header, an extension header, an SRH and a payload.
  • Implementation 2.2 Refer to Figure 16. If the first bypass SID (ie End.ADB.SID in Figure 16) is used to carry control information, the format of the second message can be as shown in Figure 16, and the second message includes IPv6 header, SRH and payload.
  • Implementation manner 2.3: Referring to FIG. 17, if a TLV in the SRH is used to carry the control information, the format of the second message may be as shown in FIG. 17, and the second message includes an IPv6 header, an SRH and a payload.
  • Implementation manner 2.4: Referring to FIG. 18, if the IPv6 header is used to carry the control information, the format of the second message may be as shown in FIG. 18, and the second message includes an IPv6 header, an SRH and a payload.
  • the process of generating the second message may include the following steps one to three.
  • Step 1 The first proxy node inserts the first detour SID into the segment list of the SRH of the first message.
  • the first proxy node can insert the first detour SID into the segment list by performing a push operation of SR, so that after the first detour SID is inserted, the segment list contains the first bypass SID in addition to each original SID of the first message. Regarding the insertion position of the first detour SID in the segment list, the first proxy node can insert the first detour SID after the active SID in the SRH, that is, after End.AD.SID.
  • FIG. 19 is a schematic diagram of inserting the first detour SID provided by an embodiment of the present application.
  • the left side of FIG. 19 shows the segment list of the first message, and the right side of FIG. 19 shows the segment list of the second message; in FIG. 19, End.AD.SID is SID2 and the first detour SID is SID5, so the first proxy node inserts SID5 after SID2.
  • the SRv6 node parses the SRH, it usually reads each SID in the SRH in reverse order.
  • For the segment list of the first message, the first SID to be read and executed in the SRH is SID4, the second SID to be read and executed is SID3, followed by SID2, and so on; the last SID to be read and executed is SID0.
  • Step 2 The first proxy node updates the SL of the SRH of the first message.
  • Step 3 The first proxy node updates the destination address of the first message to obtain the second message.
  • the SL of the second message generated by the first proxy node will be greater than the SL of the first message received.
  • the SL in the SRH of an SRv6 message is usually used to identify the number of SIDs to be processed. Therefore, by modifying the value of the SL, it can be indicated that, relative to the first message received by the first proxy node, the SRH of the second message sent by the first proxy node contains more SIDs to be processed, so that the second proxy node and/or the intermediate node between the first proxy node and the second proxy node can, under the indication of the SL, process these additional SIDs.
  • the next to-be-processed SID of the second message can be updated from End.AD.SID to another SID, so that the second proxy node and/or intermediate node can use the destination address to query the local SID table When, the operation corresponding to the other SID will be executed.
  • each SID in the SRH is inserted into the SRv6 message by the head node.
  • For each SRv6 node other than the head node, whenever the SRv6 node receives an SRv6 message, it only executes the operation corresponding to the active SID in the SRH; it neither inserts a SID into the SRH nor deletes a SID from the SRH, which means that each intermediate SRv6 node does not change the SIDs in the SRH.
  • After executing the operation, the SRv6 node reduces the SL in the SRH by one, that is, executes SL-- to indicate that the local end has executed one SID in the SRH, so the number of remaining unexecuted SIDs in the SRH is reduced by one.
  • In this embodiment, since the first detour SID in the SRH is inserted into the first message by the first proxy node, the first proxy node adds one to the SL in the SRH, that is, executes SL++ to indicate that the number of remaining unexecuted SIDs in the SRH is increased by one; the newly added unexecuted SID is the first detour SID.
  • the first detour SID must be transmitted to the second agent node so that the second agent node can execute it.
  • The description above takes the case where step one is performed first, then step two, and finally step three as an example.
  • step three is performed first, then step one, and finally step two.
  • step two is performed first, then step one is performed, and step three is performed last.
  • steps 1, step 2, and step 3 can be executed at the same time.
  • the first proxy node may not only perform step 1, step 2, and step 3, but also perform the following step 4.
  • Step 4 The first proxy node generates an extension header, and encapsulates the extension header in the first message.
  • The numbering of step four does not limit the time sequence between step four and step one, step two, and step three.
  • step four can also be performed before step one, step two, and step three.
  • the first proxy node may copy the first message, thereby obtaining two copies of the first message, one is the first message itself, and the other is a copy of the first message.
  • the first proxy node may use one of the first messages to generate the second message, and use the other first message to generate the third message.
  • the first proxy node may use a copy of the first message to generate the second message, and use the first message itself to generate the third message.
  • the second message may carry the first flag (C-flag) to indicate that the second message is a message copied from the first message.
  • the manner of generating the second message described above may include multiple implementation manners according to different specific scenarios.
  • the following is an example of implementation manner 1 to implementation manner 2.
  • Implementation manner 1: in step two above, the first proxy node adds one to the SL of the first message.
  • In this implementation manner, step three above may be: updating the destination address of the first message to the first bypass SID.
  • the left side of Figure 19 shows the SL of the first message.
  • the value of the SL of the first message is 2, which indicates that there are two SIDs that are not executed in the segment list, namely SID0 and SID1.
  • the current active SID in the segment list is SID2, and there are 2 executed SIDs in the segment list, namely SID3 and SID4.
  • the first agent node can increase SL by one, so that the value of SL is updated from 2 to 3.
  • the value of SL of the second message is 3, which indicates that there are 3 SIDs in the segment list that have not been executed, namely SID0, SID1, and SID2.
  • the current active SID in the segment list is SID5, and there are 2 executed SIDs in the segment list, SID3 and SID4.
  • the first proxy node may modify the destination address of the IPv6 header in the first message, so that the destination address of the first message is updated from End.AD.SID to the first bypass SID.
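  • The three steps of implementation manner 1 can be summarized with a minimal Python sketch using the Figure 19 values; the dictionary-based packet model and field names are assumptions made for illustration only:

```python
# Illustrative packet model; field names are not taken from the patent.
first_message = {
    "dst": "SID2",  # End.AD.SID in Figure 19
    "srh": {"segment_list": ["SID0", "SID1", "SID2", "SID3", "SID4"], "sl": 2},
}

def to_second_message(msg, bypass_sid):
    """Insert the first bypass SID above the active End.AD.SID, add one to SL,
    and update the destination address to the bypass SID."""
    srh = msg["srh"]
    sl = srh["sl"]
    srh["segment_list"].insert(sl + 1, bypass_sid)  # step one: push SID5 after SID2
    srh["sl"] = sl + 1                              # step two: SL 2 -> 3
    msg["dst"] = bypass_sid                         # step three: DA = first bypass SID
    return msg

second_message = to_second_message(first_message, "SID5")
print(second_message["srh"]["sl"])            # 3
print(second_message["srh"]["segment_list"])  # ['SID0','SID1','SID2','SID5','SID3','SID4']
```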
  • Implementation mode one can be applied to multiple scenarios, for example: the first proxy node and the second proxy node are located on a one-hop IP link; or the first proxy node and the second proxy node are located on a multi-hop IP link (off-link), but the intermediate node between the first proxy node and the second proxy node does not enable the SR function.
  • Another example is that the first proxy node and the second proxy node are located on a multi-hop IP link, and the intermediate node for forwarding the second message is not specified.
  • this embodiment does not limit the scenario in which the first proxy node executes implementation mode one.
  • the second message obtained through the first implementation manner may be as shown in FIG. 10, FIG. 11, FIG. 12, or FIG.
  • the first proxy node may also insert one or more target SIDs into the segment list of the SRH of the first message.
  • In implementation manner 2, step two above may be: the first proxy node adds N to the SL of the first message, where N is a positive integer greater than 1.
  • Step three above may be: the first proxy node updates the destination address of the first message to the SID corresponding to the next SR node among the one or more target SIDs.
  • the one or more target SIDs are used to indicate the target forwarding path.
  • the target SID can be a node SID or a link SID (also called an adjacent SID).
  • the target SID can identify an intermediate node, a link, one or more instructions, a service, or a VPN.
  • the target forwarding path is the path between the first proxy node and the second proxy node.
  • the target forwarding path can be an explicit path (strict explicit), that is, the path of each link or each intermediate node is specified; the target forwarding path can also be a loose explicit path, that is, part of the link or part of the intermediate node is specified.
  • the target forwarding path can be a link or a combination of multiple links.
  • the target forwarding path may be a path at the data link layer, a path at the physical layer, or a path constructed through tunneling technology.
  • the target forwarding path can be a path for transmitting optical signals or a path for transmitting electrical signals.
  • For example, the one or more target SIDs inserted by the first proxy node can be three SIDs, namely SID6 corresponding to intermediate node 1, SID7 corresponding to intermediate node 2, and SID8 corresponding to intermediate node 3.
  • the left side of Figure 20 shows the SL of the first message.
  • the value of the SL of the first message is 2, which means that there are two SIDs that are not executed in the segment list, namely SID0 and SID1.
  • the current active SID in the segment list is SID2, and there are 2 executed SIDs in the segment list, namely SID3 and SID4.
  • the first proxy node can update the value of SL from 2 to 6. As shown on the right side of Figure 20, the value of SL of the second message is 6, indicating that there are 6 SIDs in the segment list that have not been executed, namely SID0, SID1, SID2, SID5, SID8, and SID7; the current active SID in the segment list is SID6, and there are 2 executed SIDs in the segment list, namely SID3 and SID4.
  • the target forwarding path can be specified as: first proxy node -> intermediate node 1 -> intermediate node 2 -> intermediate node 3. It should be noted that using the SID corresponding to each node as the target SID is only an example; if the target forwarding path is a loose path, the one or more target SIDs inserted by the first proxy node may be SIDs corresponding to only some of the intermediate nodes.
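  • The transformation of implementation manner 2 can be sketched with the Figure 20 values; again, the packet model and field names are illustrative assumptions only:

```python
# Figure 20 example: insert the first bypass SID (SID5) plus the target SIDs of
# the specified path (SID8, SID7, SID6 for intermediate nodes 3, 2, 1), so that
# SID6 is executed first and SID5 is executed last, at the second proxy node.
srh = {"segment_list": ["SID0", "SID1", "SID2", "SID3", "SID4"], "sl": 2}
destination_address = "SID2"  # End.AD.SID

inserted_sids = ["SID5", "SID8", "SID7", "SID6"]
sl = srh["sl"]
srh["segment_list"][sl + 1:sl + 1] = inserted_sids    # insert above End.AD.SID
srh["sl"] = sl + len(inserted_sids)                   # SL: 2 -> 6 (add N, N = 4 here)
destination_address = srh["segment_list"][srh["sl"]]  # SID6, the next SR node

print(srh["sl"], destination_address)
print(srh["segment_list"])
# ['SID0','SID1','SID2','SID5','SID8','SID7','SID6','SID3','SID4']
```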
  • the second message obtained through the second implementation manner may be as shown in FIG. 15, FIG. 16, FIG. 17, or FIG. 18.
  • Implementation mode 2 can be applied to a variety of scenarios.
  • For example, when the first proxy node and the second proxy node are located on a multi-hop IP link and there is a business requirement to specify the intermediate nodes that forward the second message, implementation mode 2 is executed.
  • the first proxy node may perform traffic planning by inserting the target SID.
  • FIG. 21 is a schematic diagram of traffic planning through target SID according to an embodiment of the present application.
  • As shown in FIG. 21, the target SID can be used to control the second message so that it is transmitted to the second proxy node over a designated one of the three candidate paths.
  • For example, the SID advertised by node N1 for the link (N1, N2) is used as the target SID, and the target SID is carried in the segment list of the second message.
  • When node N1 receives the message, it queries the local SID table according to the target SID, finds outbound interface 1 bound to the link (N1, N2), and selects outbound interface 1 to send the second message, so that after being sent out through outbound interface 1 the message reaches node N2 and then reaches the second proxy node through the link (N2, N3).
  • In this way, the achieved effect can at least include: the forwarding path of the second message can be selected according to business requirements, and the second message can be controlled to be transmitted over the specified target forwarding path, thereby facilitating path planning and achieving a good traffic engineering effect.
  • the first proxy node may modify the destination address of the IPv6 header in the first message, so that the destination address of the first message is updated from End.AD.SID to the SID corresponding to the next SR node.
  • the manner of generating the second message may also be other implementation manners. For example, a binding SID (English: Binding SID, abbreviated BSID) may be used to identify the target forwarding path: the first proxy node can insert the first bypass SID and the BSID into the segment list of the SRH of the first message, update the destination address of the IPv6 header of the first message to the BSID, and increase the SL of the first message by 2.
  • When the next SR node receives the second message, after using the BSID to query the local SID table, it can perform the End.B6.Insert operation (an insert operation in SRv6) to insert a new SRH into the second message, or perform the End.B6.Encaps operation (an encapsulation operation in SRv6) to insert an outer IPv6 header containing an SRH into the second message.
  • Figure 22 is a schematic diagram of converting a first message into a second message provided by an embodiment of the present application.
  • Figure 22 is suitable for describing the case where the extension header is used to carry control information. It can be seen from Figure 22 that two pieces of information are added on the basis of the first message: one is the first bypass SID used to transmit the message to the second proxy node, and the other is the extension header used to carry the control information; through this simple operation, the second message can be obtained.
  • a target SID can be added to indicate the target forwarding path.
  • Figure 23 is suitable for describing the case where the bypass SID is used to carry control information.
  • the second message can be obtained by adding the first bypass SID on the basis of the first message.
  • Figure 24 is suitable for describing the use of TLVs to carry control information. It can be seen from Figure 24 that the second message can be obtained by adding the first bypass SID on the basis of the first message and writing control information to the TLV in the SRH.
  • Figure 25 is suitable for describing the case where the IP header is used to carry control information. It can be seen from Figure 25 that the second packet can be obtained by adding the first bypass SID on the basis of the first packet and writing control information to the IPv6 header.
  • the achieved effects can at least include:
  • each SRH cached by the proxy node is dynamically learned from the message by the data plane (such as the forwarding chip) of the proxy node during the process of forwarding the message.
  • In the forwarding process, the data plane usually does not send the SRH to the CPU of the proxy node.
  • Because the CPU does not obtain the SRH, it cannot generate a message containing the SRH, and therefore the CPU cannot back up the SRH to the second proxy node through such a message.
  • As for the data plane of the proxy node, since the data plane can usually only perform simple processing operations, generating an entire message from scratch on the data plane according to the SRH is very complicated, difficult, and hardly feasible.
  • the trigger condition for generating the second message may include one or more of the following conditions (1) to (3).
  • Condition (1) detects that the first cache entry is generated.
  • the first proxy node can detect the cache: if the first cache entry is not included in the cache and, after the first message is received, the first cache entry is added to the cache, it is detected that the first cache entry is generated, and the first proxy node will generate the second message and send the second message.
  • Condition (2) detects that the first cache entry is updated.
  • the first proxy node can detect the first cache entry in the cache: if the first cache entry is written to and thus updated, the first proxy node will generate the second message and send the second message.
  • Otherwise, if the first cache entry is neither generated nor updated, the first proxy node may not perform the steps of generating the second message and sending the second message.
  • the effect achieved can at least include the following. On the one hand, whenever the first proxy node receives a new SRH, the new SRH will trigger the generation of a cache entry and thereby trigger the forwarding of the second message; therefore, the new SRH can be transmitted to the second proxy node in time through the forwarding of the second message, ensuring that the new SRH is synchronized to the second proxy node in real time. On the other hand, for a previously stored SRH, since the stored SRH will not trigger the generation of a cache entry, it will not trigger the forwarding of the second message, which avoids the processing overhead of generating a second message for this SRH and the network overhead of transmitting that second message.
  • For example, suppose a data flow with N data packets arrives at the first proxy node, where N is a positive integer. When the first proxy node receives the first packet of the data flow, if the SRH of that first packet is a new SRH for the first proxy node, the first proxy node will generate the first cache entry to store the SRH of that first packet. At this time, condition (1) is met, so the first proxy node is triggered to generate the second message and send the second message, and the SRH of the first packet is transmitted to the second proxy node through the second message and is also stored on the second proxy node.
  • For the subsequent (N-1) packets of the data flow, the SRH is usually the same as the SRH of the first packet, so the SRH is one that has already been stored, and the first proxy node will not generate the first cache entry again.
  • Accordingly, the first proxy node will not generate and send the second message for these packets, thereby avoiding the overhead caused by forwarding (N-1) second messages.
  • The stored SRH of the first packet can be encapsulated into the packet when the packet is to be recovered.
  • Condition (3) detects that the current time point reaches the sending cycle of the second message.
  • the first proxy node may periodically send the second message. Specifically, the first proxy node may set a sending cycle for the second message, and every time a sending cycle elapses, it generates the second message and sends the second message.
  • the first proxy node can periodically synchronize the SRH of the first message to the second proxy node through the second message, so that the second proxy node can periodically refresh the cached SRH, thereby improving the reliability of synchronized SRH .
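  • The three trigger conditions could be combined roughly as in the following sketch; the cache structure, timer value, and function names are assumptions for illustration, not the patent's data structures:

```python
import time

cache = {}                    # flow identifier -> SRH (the first cache entries)
SEND_CYCLE_SECONDS = 30.0     # assumed sending cycle for condition (3)
last_periodic_send = 0.0

def send_second_message(srh):
    print("generate and send a second message carrying:", srh)

def on_first_message(flow_id, srh):
    """Conditions (1) and (2): the first cache entry is generated or updated."""
    if cache.get(flow_id) != srh:
        cache[flow_id] = srh
        send_second_message(srh)      # new or updated SRH: synchronize immediately
    # an already-stored, unchanged SRH does not trigger a second message

def on_timer_tick(now=None):
    """Condition (3): the current time reaches the sending cycle."""
    global last_periodic_send
    now = time.monotonic() if now is None else now
    if now - last_periodic_send >= SEND_CYCLE_SECONDS:
        for srh in cache.values():
            send_second_message(srh)  # periodic refresh of the peer's cache
        last_periodic_send = now
```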
  • Step 504 The first proxy node sends a second message to the second proxy node.
  • the forwarding path through which the first proxy node sends the second packet may include one or more of the following forwarding path 1 to forwarding path 2:
  • Forwarding path 1 Send the second message through a peer link.
  • the peer-to-peer link between the first agent node and the second agent node may be the first link
  • the first agent node may send the second message through the first link
  • and the second proxy node may receive the second message through the first link.
  • the process of transmitting the second message through the peer-to-peer link may include the following steps 1.1 to 1.2:
  • Step 1.1 The first proxy node sends a second message to the second proxy node through the first outgoing interface corresponding to the first link.
  • the first outbound interface refers to the outbound interface corresponding to the first link in the first proxy node.
  • the first proxy node may receive a configuration instruction in advance, and the configuration instruction is used to indicate the correspondence between the first link and the first outbound interface, and the first proxy node may obtain the relationship between the first link and the first outbound interface from the configuration instruction.
  • the corresponding relationship between the first link and the first outbound interface is stored in the routing table entry.
  • After the first proxy node generates the second message, it can select the first outgoing interface corresponding to the first link and send the second message through the first outgoing interface, so that the second message is transmitted to the second proxy node through the first link.
  • Exemplarily, the first proxy node may receive the following configuration instructions, in which proxy1 represents the first proxy node:
  • The meaning of the first line of the configuration instructions is to configure IPv6 SR; the meaning of the second line is that the outgoing interface corresponding to the peer link in the service function chain is Ethernet port 0 on daughter board 0 in slot 2, and the next hop is FE80::1.
  • Step 1.2 The second agent node receives the second packet from the first agent node through the first ingress interface corresponding to the first link.
  • the first inbound interface refers to the inbound interface corresponding to the first link in the second proxy node.
  • the second packet can pass through the first link to reach the first inbound interface of the second proxy node, and enter from the first inbound interface to The second agent node.
  • the first proxy node may implement forwarding path one by specifying the outgoing interface.
  • For example, the outgoing interface of the second message can be designated as the first outgoing interface, so that when a second message is to be sent, the first proxy node sends it through the first outgoing interface by default; alternatively, forwarding path one can be implemented by specifying the next hop.
  • For example, the next hop of the first proxy node can be designated as the second proxy node, so that when a second message is to be sent, the first proxy node takes the second proxy node as the next hop by default and sends the second message through the first outgoing interface.
  • the second forwarding path is forwarding through the routing node.
  • the process of forwarding through the routing node may include the following steps 2.1 to 2.3:
  • Step 2.1 The first proxy node sends a second message to the routing node through the second outgoing interface corresponding to the second link,
  • the second outbound interface refers to the outbound interface corresponding to the second link in the first proxy node.
  • the first agent node may receive a configuration instruction in advance, and the configuration instruction is used to indicate the correspondence between the second link and the second outbound interface.
  • the first agent node may obtain the relationship between the second link and the second outbound interface from the configuration instruction.
  • the corresponding relationship between the second link and the second outbound interface is stored in the routing table entry.
  • After the first proxy node generates the second message, it can select the second outgoing interface corresponding to the second link and send the second message through the second outgoing interface, so that the second message is transmitted to the routing node through the second link.
  • Step 2.2 The routing node forwards the second message to the second proxy node through the third link.
  • Step 2.3 The second proxy node receives the second packet from the routing node through the second inbound interface corresponding to the third link.
  • the second inbound interface refers to the inbound interface corresponding to the third link in the second agent node.
  • the second packet can pass through the third link to reach the second inbound interface of the second agent node, and enter from the second inbound interface to The second agent node.
  • the first proxy node may preferentially use forwarding path one to transmit the second message.
  • the following steps (1) to (3) may be performed to implement preferential use of forwarding path one.
  • Step (1): The first proxy node detects the state of the first link. If the first link is in an available state, the first proxy node executes step (2); if the first link is in an unavailable state, the first proxy node executes step (3).
  • For example, the first proxy node can detect whether the first link has been deployed and whether the state of the first outbound interface is up; if the first link has been deployed and the state of the first outbound interface is up, it confirms that the first link is in an available state, and if the first link is not deployed or the state of the first outbound interface is down, it confirms that the first link is in an unavailable state.
  • Step (2) The first proxy node sends a second message to the second proxy node through the first outgoing interface corresponding to the first link.
  • Step (3) The first proxy node sends the second message to the routing node through the second outgoing interface corresponding to the second link.
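  • Steps (1) to (3) amount to a simple preference rule, sketched below; the boolean inputs stand in for the link-deployment and interface-state checks described above:

```python
def select_outgoing_interface(first_link_deployed: bool, first_outbound_if_up: bool) -> str:
    """Prefer forwarding path one (the peer link); fall back to forwarding
    path two (via the routing node) when the first link is unavailable."""
    if first_link_deployed and first_outbound_if_up:
        return "first outgoing interface (peer link, forwarding path one)"
    return "second outgoing interface (via the routing node, forwarding path two)"

print(select_outgoing_interface(True, True))    # peer link available -> path one
print(select_outgoing_interface(True, False))   # interface down -> path two
```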
  • In this way, the second message is preferentially transmitted to the second proxy node through the peer link. Since forwarding path one is shorter than forwarding path two and passes through fewer nodes, this can reduce the delay of transmitting the second message.
  • the transmission reliability of the second packet may be achieved by any one of the following manner 1 to manner 2.
  • the first proxy node may generate multiple second messages, and the multiple second messages may be the same.
  • the first proxy node may continuously send multiple second messages to the second proxy node.
  • a proxy node continuously receives multiple second messages.
  • In this way, the SRH of the first message can be transmitted to the second proxy node through any one of the second messages, which reduces the probability that the SRH is lost during transmission, and the implementation is relatively simple and feasible.
  • a corresponding Acknowledge (ACK) message can be designed for the second message, so that the receiving end of the second message returns an ACK message to notify the sender of the second message that it has received the second message.
  • the message transmission process based on the confirmation mechanism may include the following steps 1 to 4:
  • Step 1 The second proxy node generates a sixth message according to the first bypass SID, the second bypass SID corresponding to the second proxy node, and the second message.
  • the sixth message is the ACK message corresponding to the second message.
  • the sixth message indicates that the second message has been received.
  • In this way, the second proxy node can notify the first proxy node of the event that the second message has been received.
  • the encapsulation format of the sixth message may be similar to that of the second message.
  • the sixth message includes the second bypass SID corresponding to the first proxy node, and the destination address of the sixth message may be the second detour SID.
  • the sixth message may carry the second flag, and the second flag in the sixth message may indicate that the sixth message is an acknowledgement message.
  • the sixth message may include an extension header, which carries C-flag and A-flag, C-flag is set to 1, and A-flag is set to 1.
  • the second proxy node may re-encapsulate the second message to obtain the sixth message.
  • the process of generating the sixth message may include: the second proxy node updates the source address in the second message to the first bypass SID; the second proxy node changes the destination address of the second message from the first The detour SID is updated to the second detour SID; the second proxy node modifies the value of the second flag in the second message to obtain the sixth message.
  • the three steps of updating the source address, updating the destination address, and modifying the second flag can be combined in any order, and this embodiment does not limit the timing of these three steps.
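  • A data-plane-friendly sketch of this re-encapsulation is given below; the field names and SID values are assumptions used only to show that the sixth message differs from the second message in the source address, the destination address, and the second flag:

```python
def build_sixth_message(second_message, first_bypass_sid, second_bypass_sid):
    """Turn a received second message into the corresponding ACK (sixth) message."""
    sixth = dict(second_message)
    sixth["src"] = first_bypass_sid      # source address updated to the first bypass SID
    sixth["dst"] = second_bypass_sid     # destination: first bypass SID -> second bypass SID
    sixth["flags"] = {"C": 1, "A": 1}    # C-flag kept at 1, A-flag set to 1 (acknowledgement)
    return sixth

second_message = {"src": "2001:db8::1", "dst": "FIRST_BYPASS_SID",
                  "flags": {"C": 1, "A": 0}, "srh": "cached SRH ..."}
print(build_sixth_message(second_message, "FIRST_BYPASS_SID", "SECOND_BYPASS_SID"))
```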
  • the achieved effect can at least include: the second proxy node can obtain the sixth message by modifying the source address, the destination address, and the second flag of the received second message. Compared with generating the entire message from scratch, this processing operation is very simple, so the process of generating the message can be executed by the data plane instead of relying on the control plane. This provides an ACK message generation method that does not depend on the CPU or other control planes but can be realized through the data plane such as the forwarding engine. In experiments, the microcode of the forwarding chip was configured with the above message generation logic, and the forwarding chip was able to execute the step of generating the sixth message, so the method supports a microcode architecture and has strong practicability.
  • Step 2 The second proxy node sends a sixth message to the first proxy node.
  • Step 3 When the sixth message is received, the first proxy node determines that the second proxy node has received the second message.
  • the first proxy node may obtain control information from the sixth message and determine that the control information indicates that the second message has been received. For example, the first proxy node can identify the value of the first flag and the value of the second flag in the sixth message; if the value of the first flag is set to 1 and the value of the second flag is set to 1, it is determined that the control information indicates that the second message has been received.
  • Step 4 When the sixth message is not received, the first proxy node retransmits the second message to the second proxy node.
  • the first proxy node can confirm, based on whether the sixth message is received, whether the second proxy node has received the SRH. In the case that the second proxy node has not obtained the SRH, retransmission ensures that the SRH is transmitted to the second proxy node, thereby ensuring the reliability of SRH transmission.
  • Step 505 The second proxy node receives the second message from the first proxy node, and determines that the control information indicates that the second proxy node stores the SRH of the first message.
  • the specific process of transmitting the second message between the first agent node and the second agent node may include multiple implementation manners according to different networking.
  • the following is an example of implementation manner 1 to implementation manner 2.
  • Implementation mode 1 If the first proxy node and the second proxy node are located on a one-hop IP link, after the first proxy node sends the second message through the outgoing interface, the second message can pass through the first proxy node and The link between the second proxy node reaches the inbound interface of the second proxy node, and the second proxy node can receive the second message through the inbound interface.
  • Implementation mode 2 If the first proxy node and the second proxy node are located on a multi-hop IP link, after the first proxy node sends the second message from the outgoing interface, the second message can pass through the first proxy node and The link between the next node reaches the inbound interface of the next node. The next node can receive the second packet through the inbound interface and send the second packet out through the outbound interface until the second packet reaches the second packet. The inbound interface of the proxy node, the second proxy node receives the second message through the inbound interface.
  • the intermediate node can identify the destination address of the IPv6 header of the second message, and query the local SID table according to the destination address.
  • the target SID and the local When the target SID in the SID table matches, the operation corresponding to the target SID is performed, for example, the outgoing interface corresponding to the target SID is selected, and the second packet is forwarded to the next intermediate node through the outgoing interface.
  • the intermediate node may subtract one from the SL in the SRH of the second packet, and update the destination address of the IPv6 header of the second packet to the SID corresponding to the subtracted SL.
  • For example, after intermediate node 1 receives the second message, it updates the destination address from SID6 to SID7 and the SL from 6 to 5; after intermediate node 2 receives the second message, it updates the destination address from SID7 to SID8 and the SL from 5 to 4; after intermediate node 3 receives the second message, it updates the destination address from SID8 to SID5 and the SL from 4 to 3.
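  • The per-hop behaviour described above can be reproduced with a short sketch using the Figure 20 values; the packet model is an illustrative assumption:

```python
packet = {"dst": "SID6",
          "srh": {"segment_list": ["SID0", "SID1", "SID2", "SID5",
                                   "SID8", "SID7", "SID6", "SID3", "SID4"],
                  "sl": 6}}

def intermediate_node_forward(pkt):
    """On a local SID table hit: SL-- and update the destination address to SRH[SL]."""
    srh = pkt["srh"]
    srh["sl"] -= 1
    pkt["dst"] = srh["segment_list"][srh["sl"]]
    return pkt

for hop in ("intermediate node 1", "intermediate node 2", "intermediate node 3"):
    intermediate_node_forward(packet)
    print(hop, "-> destination:", packet["dst"], "SL:", packet["srh"]["sl"])
# intermediate node 1 -> destination: SID7 SL: 5
# intermediate node 2 -> destination: SID8 SL: 4
# intermediate node 3 -> destination: SID5 SL: 3
```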
  • the second proxy node can read the destination address of the second message and query the local SID table according to the destination address of the second message to determine whether the destination address matches a SID in the local SID table.
  • When the destination address matches the first bypass SID, the second proxy node can determine that the second message is an SRv6 message and execute the operation corresponding to the first bypass SID.
  • the operation corresponding to the first bypass SID may include: identifying the control information in the second message, and if the control information indicates that the SRH of the first message is to be stored, obtaining the SRH of the first message according to the second message and storing the SRH of the first message.
  • the operation corresponding to the first bypass SID can be configured in advance on the second proxy node; for example, the operation corresponding to the first detour SID can be stored in the entry corresponding to the first detour SID in the local SID table.
  • When the destination address of the second message is used to query the local SID table and it is found that the destination address matches the first detour SID, that is, the entry corresponding to the first detour SID is hit, the operation corresponding to the first detour SID contained in that entry is executed.
  • the second proxy node can identify the value of the first flag and the value of the second flag in the second message, if the value of the first flag is set to 1 and the second flag is set to 0, it is determined that the control information indicates to store the SRH of the first message.
  • the operation corresponding to the control information and the operation corresponding to End.AD.SID both include the step of storing the SRH of the first message; the difference between them is that the operation corresponding to End.AD.SID includes the step of forwarding the payload of the first message to the SF node, whereas the operation corresponding to the control information may not include the step of forwarding the payload of the first message to the SF node. This avoids the situation in which, after both the first proxy node and the second proxy node forward the payload of the first message to the SF node, the SF node repeatedly processes the payload of the same message.
  • Step 506 The second proxy node parses the second message to obtain the SRH of the first message.
  • the SRH of the second message may include each SID of the SRH of the first message, and may also include information other than the SIDs in the SRH of the first message; the second proxy node may obtain the SRH of the first message based on the SRH of the second message by performing the following steps 1 to 2.
  • Step 1 The second proxy node deletes the first bypass SID from the segment list in the SRH of the second message.
  • This step can be regarded as the inverse process of step 1 in the process of generating the second message in step 503.
  • the second proxy node can delete the first detour SID from the segment list by performing a pop operation in SR, so that after the first detour SID is deleted, the segment list is restored to the segment list originally carried in the first message.
  • Specifically, the second proxy node may delete the active SID in the SRH, that is, delete the first bypass SID located after End.AD.SID.
  • the left side of FIG. 19 shows the segment list of the first message
  • the right side of FIG. 19 shows the segment list of the second message.
  • End.AD.SID is SID2.
  • the second proxy node can delete SID5, which is located after SID2, thereby restoring the segment list shown on the right side of FIG. 19 to the segment list shown on the left side.
  • the second proxy node may also delete one or more target SIDs from the segment list in the SRH of the second message.
  • the left side of FIG. 20 shows the segment list of the first message
  • the right side of FIG. 20 shows the segment list of the second message.
  • Step 2 The second proxy node reduces the SL in the SRH of the second message by one to obtain the SRH of the first message.
  • This step can be regarded as the inverse process of step 2 in the process of generating the second message in step 503.
  • the right side of Figure 19 shows the SL of the second message.
  • the value of the SL of the second message is 3, which means that there are 3 SIDs in the segment list that have not been executed, namely SID0, SID1, and SID2; the current active SID in the segment list is SID5, and there are 2 executed SIDs in the segment list, namely SID3 and SID4.
  • the second agent node can decrement SL by one, so that the value of SL is updated from 3 to 2.
  • the value of SL of the first message is 2, which indicates that there are 2 SIDs in the segment list that have not been executed, namely SID0 and SID1, and the current active SID in the segment list is SID2.
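  • Steps 1 and 2 of the restoration are the inverse of the generation steps and can be sketched as follows with the Figure 19 values (illustrative model only):

```python
srh_of_second_message = {
    "segment_list": ["SID0", "SID1", "SID2", "SID5", "SID3", "SID4"],
    "sl": 3,  # the active SID is SID5, the first bypass SID
}

def restore_srh_of_first_message(srh, bypass_sid="SID5"):
    """Step 1: pop the first bypass SID; step 2: SL - 1."""
    restored = {"segment_list": list(srh["segment_list"]), "sl": srh["sl"]}
    restored["segment_list"].remove(bypass_sid)
    restored["sl"] -= 1
    return restored

print(restore_srh_of_first_message(srh_of_second_message))
# {'segment_list': ['SID0', 'SID1', 'SID2', 'SID3', 'SID4'], 'sl': 2}
```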
  • It should be noted that, relative to the second message sent by the first proxy node, the second message received by the second proxy node may differ because the SL has been updated along the way, and the SL of the second message received by the second proxy node may be smaller than the SL of the second message sent by the first proxy node.
  • For example, the SL of the second message sent by the first proxy node is 6; because intermediate node 1 updates the SL from 6 to 5, intermediate node 2 updates the SL from 5 to 4, and intermediate node 3 updates the SL from 4 to 3, the SL of the second message received by the second proxy node is 3.
  • each SID in the SRH is inserted into the SRv6 message by the head node.
  • For each SRv6 node other than the head node, whenever the SRv6 node receives an SRv6 message, it only executes the operation corresponding to the active SID in the SRH; it will neither push a SID into the SRH nor pop a SID from the SRH, which means that each intermediate SRv6 node will not change the SIDs in the SRH.
  • Since the first detour SID in the SRH of the second message was inserted by the first proxy node and is not a SID originally included in the SRH of the first message, the second proxy node deletes the first bypass SID from the SRH of the second message, thereby stripping the encapsulation of the first bypass SID and restoring the SRH of the second message to the SRH of the first message.
  • In this way, the SRH of the message obtained by the second proxy node is basically the same as the SRH of the message previously sent to the first proxy node by the routing node. If the second proxy node later receives the processed message, the SRH recovered by the second proxy node can be basically the same as the SRH previously sent by the routing node; then, when the second proxy node returns the SRH-restored message to the routing node, the SRH of the message obtained by the routing node is basically the same as the SRH of the message it previously sent. This avoids the situation in which the SRH is lost because it is not recognized by the SF node, and realizes the proxy function of the second proxy node.
  • Since the SRHs stored by the first proxy node and the second proxy node are basically the same, after either of the first proxy node and the second proxy node receives the message, the SRH restored into the message is basically the same, which ensures the equivalence of the functions of the first proxy node and the second proxy node and satisfies the requirement of implementing a dual-homing architecture based on two proxy nodes.
  • the SRHs of the two messages mentioned here are basically the same, which means that the main contents of the SRHs of the two messages are the same, and it does not limit that the SRHs of the two messages must be exactly the same.
  • the SRHs of the two messages may be slightly different.
  • For example, the segment lists in the SRHs of the two messages are the same, but the SLs of the SRHs of the two messages are different, for example, because End.AD.SID being executed by the first proxy node reduces the SL by 1.
  • Step 507 The second proxy node stores the SRH of the first packet in the second cache entry.
  • the second cache entry refers to the SRH cache entry in which the second proxy node stores the first message.
  • the second proxy node may include an interface board, the interface board may include a forwarding entry memory, and the second cache entry may be located in the forwarding entry memory of the second proxy node.
  • the second proxy node can detect whether the SRH of the first message has already been stored in the cache. If the SRH of the first message is not stored in the cache, the second proxy node allocates a cache entry for the SRH of the first message to obtain the second cache entry, and stores the SRH of the first message in the second cache entry. If it is detected that the SRH of the first message has already been stored in the cache, the second proxy node can omit the step of storing the SRH of the first message.
  • the operations performed according to the first detour SID can be as follows:
  • When N receives a packet destined to S and S is a local End.ADB SID, N does the following. /* When N receives a data packet whose destination address is S, and S is a bypass SID advertised by N, N performs the following steps. */
  • N performs the forwarding operation according to the matching entry in the local SID table.
  • Specifically, after N subtracts one from SL, SRH[SL] corresponds to End.AD.SID; after the destination address is updated with SRH[SL], the destination address becomes End.AD.SID, and when End.AD.SID is used to query the local SID table, the matching entry corresponding to End.AD.SID is found.
  • the matching entry corresponding to End.AD.SID in the local SID table can be used to indicate the operation corresponding to End.AD.SID, and this matching entry can be configured in advance so that the operation corresponding to End.AD.SID includes: detecting whether the message contains control information, and if the message contains control information, storing the SRH of the message without forwarding the payload of the message to the SF node.
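  • Read together, the behaviour triggered by the first bypass SID and the pre-configured End.AD.SID entry might look like the following sketch; this is an interpretation of the description above, not the patent's pseudocode, and the helper names are assumptions:

```python
def on_end_adb_sid(node, packet):
    """Operation for a packet whose destination address matches the local End.ADB SID."""
    srh = packet["srh"]
    srh["sl"] -= 1                                   # SL--
    packet["dst"] = srh["segment_list"][srh["sl"]]   # destination address becomes End.AD.SID
    matching_entry = node["local_sid_table"][packet["dst"]]
    matching_entry(node, packet)                     # forward per the local SID table

def on_end_ad_sid(node, packet):
    """Pre-configured End.AD.SID entry: store the SRH; skip the SF node for control traffic."""
    node["cache"][packet["flow_id"]] = packet["srh"]
    if not packet.get("control_info"):               # only ordinary traffic goes to the SF node
        print("strip the SRH and send the payload of flow", packet["flow_id"], "to the SF node")

node = {"local_sid_table": {"End.AD.SID": on_end_ad_sid}, "cache": {}}
packet = {"flow_id": "flow-1", "control_info": True, "dst": "End.ADB.SID",
          "srh": {"segment_list": ["SID0", "End.AD.SID", "End.ADB.SID"], "sl": 2}}
on_end_adb_sid(node, packet)   # stores the SRH without forwarding the payload to the SF node
```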
  • In addition, the forwarding rules of the second proxy node may be configured in advance: the second proxy node may be configured so that, after receiving the second message, it must not use the body of the second message to respond to the first proxy node. This avoids the first proxy node, upon receiving such a response, repeating the steps of generating the second message and sending it to the second proxy node, and thus prevents a forwarding loop from being formed between the first proxy node and the second proxy node.
  • Step 508 The first proxy node generates a third message according to the first message.
  • The third message is a message used to transmit the payload of the first message to the SF node.
  • the third message may include the payload of the first message and not include the SRH of the first message.
  • the third message may be the payload of the first message; in other embodiments, the third message may consist of the payload of the first message and the tunnel header.
  • the first proxy node may strip the SRH from the first message to obtain the third message.
  • the third message may be a data message.
  • the third message may also include a tunnel header (tunnel header).
  • the first proxy node can establish a transmission tunnel with the SF node, and the first proxy node can not only strip the SRH of the first message, but also generate a tunnel header corresponding to the transmission tunnel, and encapsulate the tunnel header so that the third message Including the tunnel head.
  • the tunnel header of the third message may be a VXLAN tunnel header, a GRE tunnel header, or an IP-in-IP tunnel header.
  • the first proxy node may first strip the SRH from the first message and then encapsulate the tunnel header in the first message from which the SRH has been stripped, or it may first encapsulate the tunnel header in the first message and then strip the SRH from the first message; this embodiment does not limit the timing of stripping the SRH and encapsulating the tunnel header in the generation process of the third message. It should be understood that including the tunnel header in the third message is optional.
  • In other words, the first proxy node may directly use the first message with the SRH stripped as the third message, and the third message may not include the tunnel header.
  • the third message may include a source address and a destination address. If the first proxy node and the SF node have established a transmission tunnel, the tunnel header of the third message may include the source address and the destination address: the source address in the tunnel header of the third message is used to identify the first proxy node, and the destination address in the tunnel header of the third message is used to identify the SF node. The address of the first proxy node and the address of the SF node can be pre-configured on the first proxy node.
  • When generating the third message, the first proxy node can query the configuration to obtain the address of the first proxy node and write it into the source address of the tunnel header, and obtain the address of the SF node and write it into the destination address of the tunnel header.
  • the addresses configured on the first proxy node and on the SF node may correspond to each other.
  • the address of the first proxy node configured on the first proxy node may be the same as the address of the first proxy node configured on the SF node.
  • the address of the SF node configured on the proxy node may be the same as the address of the SF node configured on the SF node.
  • the form of the source address and the destination address in the tunnel header can be determined according to the configuration operation. For example, it can be in the form of an IPv4 address or an IPv6 address. Of course, it can also be configured in other forms according to requirements.
  • If the first proxy node and the SF node have not established a transmission tunnel, the source address of the third message may be the address of the source device of the payload itself, for example, the address of a terminal device, and the destination address of the third message may be the address of the destination device that the payload is to finally reach after processing, for example, the address of another terminal device.
  • the third message sent by the first agent node to the SF node may have multiple formats according to different actual business scenarios.
  • the third message may be as shown in FIG. 26, and the third message may be an IPv4 message.
  • the third message may include the IPv4 message and also include a tunnel header, and the tunnel header may precede the IPv4 header of the IPv4 message.
  • the third message may be as shown in FIG. 27, and the third message may be an IPv6 message.
  • the third message may include the IPv6 message and further include a tunnel header, and the tunnel header may be before the IPv6 header of the IPv6 message.
  • the third message may be as shown in FIG. 28, and the third message may be an Ethernet frame.
  • the third message may also include a tunnel header in addition to the Ethernet frame, and the tunnel header may be before the frame header of the Ethernet frame.
  • Step 509 The first proxy node sends a third message to the SF node.
  • the effects brought by the above steps 508 to 509 may at least include: the first proxy node can transmit the payload of the first message to the SF node, so that the SF node can process the payload and the processing corresponding to the message is carried out.
  • step 503 to step 504 and step 508 to step 509 can be executed sequentially. For example, steps 508 to 509 can be performed first, and then steps 503 to 504 are performed; or step 503 to step 504 can be performed first, and then steps 508 to 509 are performed. In another embodiment, step 503 to step 504 and step 508 to step 509 can be performed at the same time.
  • Step 510 The SF node receives the third message from the first proxy node, processes the third message, and obtains the fourth message.
  • the fourth message refers to a message obtained by processing by the SF node.
  • the SF node may perform service function processing on the third message according to the configured service strategy, so as to obtain the fourth message.
  • this embodiment takes as an example the scenario where the message is sent by one proxy node and received back by the other: one proxy node sends the message to the SF node, and the other proxy node receives the message returned by the SF node.
  • Specifically, this embodiment is described by taking as an example the case where the SF node receives the message from the first proxy node and sends the processed message to the second proxy node.
  • The SF node may choose to send the processed message (i.e., the fourth message) to the second proxy node in various application scenarios. For example, if the SF node determines that the state of the outbound interface connected to the first proxy node is down, the SF node can choose to send the processed message to the second proxy node. For another example, if the link between the SF node and the first proxy node is congested, the SF node can choose to send the processed message to the second proxy node, so that the message is transmitted over the link between the SF node and the second proxy node, thereby achieving a load-sharing effect.
  • the SF node can perform intrusion detection on the third message, so as to realize the service function of the firewall.
  • the SF node can modify the quintuple, MAC frame information or application layer information of the third message, so as to realize the service function of NAT.
  • the SF node can perform load balancing, user authentication, etc. on the third message, and this embodiment does not limit the processing mode of the SF node.
  • the fourth message may include a source address and a destination address. If the first proxy node and the SF node have not established a transmission tunnel, the source address of the fourth packet can be the same as the source address of the third packet, and the source address of the fourth packet can be the source address of the payload itself, that is, the source of the payload The address of the device, for example, the address of the terminal device.
  • the destination address of the fourth packet can be the same as the destination address of the third packet, namely the address of the destination device that the payload is to finally reach after processing, for example, the address of another terminal device.
  • If a transmission tunnel has been established, the tunnel header of the fourth message can include the source address and the destination address: the source address in the tunnel header of the fourth message is used to identify the SF node, and the destination address in the tunnel header is used to identify the first proxy node.
  • the address of the first proxy node and the address of the SF node can be configured on the SF node in advance.
  • When the SF node generates the fourth message, it can query the configuration, write the address of the SF node into the source address of the tunnel header, and write the address of the first proxy node into the destination address of the tunnel header.
  • Step 511 The SF node sends a fourth message to the second proxy node.
  • the SF node can pre-store the binding relationship between the inbound interface corresponding to the first proxy node and the outbound interface corresponding to the second proxy node; then, when the SF node receives the third message from the inbound interface and generates the fourth message, it selects, based on the binding relationship, the outbound interface bound to that inbound interface to send the fourth message, so that the message reaches the second proxy node after being sent out through the outbound interface.
  • the binding relationship can be determined according to the configuration operation.
  • For example, if the SF node is connected to proxy1 through inbound interface 1 and connected to proxy2 through outbound interface 2, a forwarding rule can be configured for the SF node in advance: all messages obtained by processing the packets received on inbound interface 1 are sent through outbound interface 2. Then, when proxy1 sends the third message, the SF node receives the third message through inbound interface 1, queries the forwarding rule in the forwarding table, and sends the processed message to proxy2 through outbound interface 2.
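  • The binding relationship on the SF node is essentially a small mapping from inbound interfaces to outbound interfaces, for example (interface names assumed for illustration):

```python
# Inbound interface (toward proxy1) bound to outbound interface (toward proxy2).
interface_binding = {"inbound_interface_1": "outbound_interface_2"}

def forward_processed_message(inbound_interface, fourth_message):
    outbound_interface = interface_binding[inbound_interface]
    print("send", fourth_message, "through", outbound_interface, "toward proxy2")

forward_processed_message("inbound_interface_1", "fourth message")
```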
  • Step 512 The second proxy node receives the fourth message from the SF node, and queries the second cache entry according to the fourth message to obtain the SRH of the first message.
  • the second proxy node may obtain the index corresponding to the SRH according to the fourth message, query the cache of the second proxy node according to the index, find the second cache entry, and obtain the SRH of the first message from the second cache entry.
  • the query mode of the SRH of the first message may include any one of the following query mode 1 to query mode 3:
  • the second proxy node can obtain the flow identifier corresponding to the first message according to the fourth message, use the flow identifier corresponding to the first message as an index, query the cache, and obtain the SRH of the first message.
  • the flow identifier corresponding to the fourth packet may be the flow identifier corresponding to the first packet.
  • the second proxy node can obtain the identifier of the first cache entry according to the fourth message, use the identifier of the first cache entry as the index, query the cache, and obtain the SRH of the first message, where the fourth The message may include the identifier of the first cache entry.
  • For details of this query method, refer to the embodiment shown in FIG. 36 and the embodiment shown in FIG. 37.
  • the second proxy node can obtain the flow ID and VPN ID corresponding to the fourth message according to the fourth message, and query the cache according to the flow ID, VPN ID, and End.AD.SID corresponding to the fourth message.
  • For details of this query method, refer to the embodiment shown in FIG. 38 and the embodiment shown in FIG. 39 below.
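  • The three query modes differ only in the key used to index the cache on the second proxy node, as the following sketch suggests; the keys and field names are illustrative assumptions:

```python
cache_by_flow_id = {}      # query mode 1: flow identifier -> SRH
cache_by_entry_id = {}     # query mode 2: identifier of the first cache entry -> SRH
cache_by_vpn_tuple = {}    # query mode 3: (flow ID, VPN ID, End.AD.SID) -> SRH

def lookup_srh(fourth_message):
    if "cache_entry_id" in fourth_message:           # mode 2: the message carries the entry ID
        return cache_by_entry_id.get(fourth_message["cache_entry_id"])
    if "vpn_id" in fourth_message:                   # mode 3: flow ID + VPN ID + End.AD.SID
        key = (fourth_message["flow_id"], fourth_message["vpn_id"], "End.AD.SID")
        return cache_by_vpn_tuple.get(key)
    return cache_by_flow_id.get(fourth_message["flow_id"])   # mode 1: flow ID only
```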
  • Step 513 The second proxy node generates a fifth message according to the fourth message and the SRH of the first message.
  • the fifth message refers to the message obtained by restoring the SRH on the basis of the fourth message.
  • the fifth message includes the payload of the fourth message and the SRH of the first message.
  • the second proxy node may encapsulate the SRH of the first message into the fourth message to obtain the fifth message.
  • the second proxy node may directly use the fourth message of the SRH encapsulating the first message as the fifth message.
  • the second proxy node can establish a transmission tunnel with the SF node, and the fourth message includes the tunnel header corresponding to the transmission tunnel. In that case the second proxy node can not only encapsulate the SRH of the first message but also strip the tunnel header of the fourth message, so that the fifth message does not include the tunnel header of the fourth message.
  • the tunnel header of the fourth packet may be a VXLAN tunnel header, a GRE tunnel header, or an IP-in-IP tunnel header.
  • the second proxy node can first encapsulate the SRH of the first message into the fourth message and then strip the tunnel header from the fourth message encapsulated with the SRH, or it can first strip the tunnel header from the fourth message and then encapsulate the SRH into the fourth message from which the tunnel header has been stripped; this embodiment does not limit the timing of stripping the tunnel header and encapsulating the SRH.
  • Step 514 The second proxy node sends a fifth message to the routing node.
  • Step 515 The routing node receives the fifth message from the second proxy node, and forwards the fifth message.
  • The method provided in this embodiment extends, for End.AD.SID, a new SID with a bypass function. When the local proxy node receives a message carrying End.AD.SID, it uses this new SID to bypass-transmit the SRH of the message to the proxy node at the opposite end, so that the opposite proxy node can also obtain the SRH of the message and the SRH of the message can be stored on the opposite proxy node.
  • two proxy nodes dual-homed to the same SF node can realize the synchronization of the SRH of the message, so that the SRH of the message is redundantly backed up.
  • After the SF node processes the message, it can choose to return the processed message to the opposite proxy node. Since the opposite proxy node has obtained the SRH of the message in advance, it can use the previously stored SRH to recover the SRH of the message, so that the message processed by the SF node can be transmitted normally through the recovered SRH.
  • the first agent node or the second agent node can maintain the synchronization of cache entries between the two agent nodes through a state machine.
  • the state machine will be exemplarily described below.
  • the state machine can be recorded as the Peer Cache state machine, and the state machine represents the state of the peer proxy node storing the SRH.
  • the state machine of the first proxy node represents the state of the second proxy node storing SRH
  • the state machine of the second proxy node represents the state of storing SRH of the first proxy node.
  • the proxy node can add a target field to each cache entry in the cache, write the current state of the state machine into the target field, and, when a first message is received, determine the action to be executed according to the value of the target field in the cache entry.
  • the target field may be recorded as a PeerCacheState (peer cache state) field, and the target field is used to record the current state of the state machine.
  • the state machine can include the current state, the next state, the event, and the target action.
  • According to the detected event and the current state of the state machine, the first proxy node can execute the target action corresponding to that event and current state in the state machine, and switch the state machine to the next state of the current state.
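  • A transition-table view of such a state machine is sketched below; the states, events, and actions shown are placeholders chosen from the description, not the complete Peer Cache state machine of Figures 31 and 32:

```python
transitions = {
    # (current state, event): (target action, next state)
    ("Cache_Equal", "LocalCache_Upd"): ("send a second message to the peer", "Cache_Older"),
    ("Cache_Older", "CacheSync_Recv"): ("store the SRH carried by the peer", "Cache_Equal"),
}

def handle_event(state, event):
    action, next_state = transitions.get((state, event), ("no action", state))
    print(f"{state} + {event}: {action} -> {next_state}")
    return next_state

state = "Cache_Equal"
state = handle_event(state, "LocalCache_Upd")
state = handle_event(state, "CacheSync_Recv")
```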
  • the state machine can be either the first state machine or the second state machine:
  • Figure 31 is a schematic diagram of a first state machine provided by an embodiment of the present application.
  • the first state machine is suitable for the case where transmission reliability is ensured by repeated transmission.
  • the processing strategy provided by the first state machine may include one or more of the following (1) to (7). It should be understood that the following first state machine can be applied to either the first proxy node or the second proxy node.
  • for the first proxy node, the opposite proxy node refers to the second proxy node; for the second proxy node, the opposite proxy node refers to the first proxy node.
  • the first event is that the first cache entry is generated, and the first event can be recorded as a LocalCache_New (a new cache entry is added locally) event in the program.
  • the first state indicates that the second cache entry is consistent with the first cache entry, and the first state can be recorded as a Cache_Equal (cache equal) state in the program.
  • the second event is that the first cache entry is updated, and the second event can be recorded as a LocalCache_Upd (local cache update) event in the program.
  • the third event is that the first message is received, and the third event can be recorded as a LocalCache_Touch (local cache read) event in the program.
  • the second state indicates that it is determined that the cache entry at the opposite end is older than the local cache entry, or the cache entry at the opposite end is suspected to be older than the local cache entry.
  • the local agent node may send a second message to the opposite agent node to update the SRH in the cache entry of the opposite agent node.
  • the second state can be recorded as the cache older (Cache_Older) state in the program.
  • the fourth event is the timeout of the aging timer, and the fourth event can be recorded as the AgingTimer_Expired (aging timer timeout) event in the program.
  • the expiration of the aging timer indicates that the first cache entry is in a stale state, and the first cache entry can be cleared to save cache space.
  • the first proxy node may start an aging timer and set a corresponding aging flag for the first cache entry; the value of the aging flag may be 1 or 0.
  • the first proxy node can periodically scan the first cache entry through the aging timer. If the aging flag corresponding to the first cache entry is 1, the aging flag is modified to 0; if the aging flag corresponding to the first cache entry is 0 when it is scanned, the first cache entry is deleted.
  • in the process of forwarding messages, if the first proxy node hits the first cache entry when querying the cache, the first proxy node sets the aging flag corresponding to the first cache entry to 1, so that the first cache entry remains resident in the cache.
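  • the following is a minimal sketch of this aging mechanism, assuming a per-entry aging flag, a dictionary cache, and a hypothetical scan interval; it is illustrative only, not the device implementation.

```python
import threading

AGING_INTERVAL_SECONDS = 60  # hypothetical scan period ("preset time")

cache = {}  # key -> {"srh": ..., "aging_flag": 1}

def on_cache_hit(key):
    # A hit while forwarding keeps the entry resident: set its aging flag back to 1.
    entry = cache.get(key)
    if entry is not None:
        entry["aging_flag"] = 1

def aging_scan():
    # Periodic scan: flag 1 -> 0; flag 0 -> delete the stale entry to save cache space.
    for key in list(cache):
        if cache[key]["aging_flag"] == 1:
            cache[key]["aging_flag"] = 0
        else:
            del cache[key]
    threading.Timer(AGING_INTERVAL_SECONDS, aging_scan).start()

# Call aging_scan() once to start the periodic scan.
```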
  • the fifth event is the event of receiving the second message.
  • the fifth event can be recorded in the program as a CacheSync_Recv (cache synchronization receive) event, where Sync stands for Synchronize, meaning to synchronize the SRH to the cache of the opposite proxy node, and Recv stands for receive.
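  • as an illustration only, the first state machine could be encoded as a transition table mapping (current state, event) to (target action, next state); the state names, events, actions, and the concrete transitions below are simplified assumptions rather than the exact processing strategies (1) to (7).

```python
from enum import Enum, auto

class State(Enum):
    CACHE_EQUAL = auto()  # peer cache entry is consistent with the local one
    CACHE_OLDER = auto()  # peer cache entry is (suspected to be) older than the local one

class Event(Enum):
    LOCAL_CACHE_NEW = auto()      # first event: local cache entry generated
    LOCAL_CACHE_UPD = auto()      # second event: local cache entry updated
    LOCAL_CACHE_TOUCH = auto()    # third event: first message received, cache entry hit
    AGING_TIMER_EXPIRED = auto()  # fourth event: aging timer timed out
    CACHE_SYNC_RECV = auto()      # fifth event: second message received from the peer

def send_second_message(entry):
    """Placeholder: bypass the SRH in a second message to the peer proxy node."""
    print(f"sync SRH of entry {entry} to peer")

def delete_entry(entry):
    """Placeholder: clear the stale cache entry."""
    print(f"delete entry {entry}")

# Transition table: (current state, event) -> (target action, next state).
TRANSITIONS = {
    # a new or updated local entry means the peer copy is (suspected) older: sync it
    (State.CACHE_EQUAL, Event.LOCAL_CACHE_NEW): (send_second_message, State.CACHE_OLDER),
    (State.CACHE_EQUAL, Event.LOCAL_CACHE_UPD): (send_second_message, State.CACHE_OLDER),
    # the local entry is read while the peer copy is still older: repeat the sync
    (State.CACHE_OLDER, Event.LOCAL_CACHE_TOUCH): (send_second_message, State.CACHE_OLDER),
    # a second message arrived from the peer: the two copies are consistent again
    (State.CACHE_OLDER, Event.CACHE_SYNC_RECV): (None, State.CACHE_EQUAL),
    # the aging timer expired: the stale local entry can be cleared
    (State.CACHE_EQUAL, Event.AGING_TIMER_EXPIRED): (delete_entry, State.CACHE_EQUAL),
}

def handle_event(entry, peer_cache_state, event):
    """Run the target action for (current state, event) and return the next state."""
    action, next_state = TRANSITIONS.get((peer_cache_state, event), (None, peer_cache_state))
    if action is not None:
        action(entry)
    return next_state
```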
  • Figure 32 is a schematic diagram of a second state machine provided by an embodiment of the present application.
  • the second state machine is suitable for the case where transmission reliability is ensured through an acknowledgement mechanism.
  • the processing strategy provided by the second state machine may include one or more of the following (1) to (10). It should be noted that the following description focuses on the differences between the second state machine and the first state machine; for characteristics similar to those of the first state machine, please refer to the first state machine, and they will not be described in detail below.
  • the third state means that the cache entry is generated or updated locally, and the second message has been sent to the opposite agent node, waiting for the opposite agent node to confirm the operation result of the second message.
  • the third state can be recorded as a Sync_Start (synchronization start) state.
  • the action of sending the second message may not be executed, so as to avoid that the two proxy nodes are in the third state at the same time and lock each other.
  • the sixth event is the receipt of the sixth message, and the sixth event can be recorded in the program as a CacheSync_Ack (cache synchronization acknowledgement) event, where Sync stands for Synchronize and Ack stands for Acknowledge.
  • the effect achieved can at least include: through the state machine, the current action can be determined in real time according to the state of the SRH cached by the opposite proxy node, so that the SRH can be transmitted to the opposite proxy node in time, and the cache entries of the local proxy node and the opposite proxy node can be kept synchronized, thereby realizing consistency of the SRH between the proxy nodes that are dual-homed to the same SF node.
  • FIG. 12 is a schematic diagram of message transmission in the scenario where one proxy node sends the message and the other proxy node receives it back, provided by an embodiment of the present application: after receiving the message from proxy node 1, the SF node returns the processed message to proxy node 2.
  • that scenario is only an optional way; the message transmission method provided in the embodiments of the present application can also be applied to the scenario in which a proxy node sends and receives the message by itself.
  • the self-send and self-receive scenario means that after a proxy node sends a message to the SF node, the same proxy node receives the message returned by the SF node; that is, after receiving a message from a proxy node, the SF node returns the processed message to that same proxy node.
  • FIG. 34 is a schematic diagram of message transmission in the self-send and self-receive scenario provided by an embodiment of the present application: the SF node receives the message from proxy node 1 and returns the processed message to proxy node 1.
  • FIG. 35 is used to exemplarily introduce the message transmission process in the self-send and self-receive scenario. It should be noted that the embodiment in FIG. 35 focuses on the differences from the embodiment in FIG. 5; for steps similar to the embodiment in FIG. 5, please refer to the embodiment in FIG. 5, and details are not described in the embodiment in FIG. 35.
  • FIG. 35 is a flowchart of a message transmission method provided by an embodiment of the present application. The method may include the following steps:
  • Step 3501. The routing node sends a first message to the first proxy node.
  • Step 3502 the first proxy node receives the first message from the routing node, and the first proxy node stores the SRH of the first message in the first cache entry.
  • Step 3503 The first proxy node generates a second message according to the endpoint dynamic proxy SID, the first message, and the first bypass SID corresponding to the second proxy node.
  • Step 3504 The first proxy node sends a second message to the second proxy node.
  • Step 3505 The second proxy node receives the second message from the first proxy node, and determines that the control information indicates that the second proxy node stores the SRH of the first message.
  • Step 3506 The second proxy node parses the second message to obtain the SRH of the first message.
  • Step 3507 The second proxy node stores the SRH of the first packet in the second cache entry.
  • Step 3508 The first proxy node generates a third message according to the first message.
  • Step 3509 The first proxy node sends a third message to the SF node.
  • Step 3510 The SF node receives the third message from the first proxy node, processes the third message, and obtains the fourth message.
  • Step 3511. The SF node sends a fourth message to the first proxy node.
  • the SF node can return the fourth message to the first proxy node.
  • the outbound interface corresponding to the first proxy node may be selected, and the fourth packet may be sent through the outbound interface.
  • the SF node may pre-store the binding relationship between the inbound interface corresponding to the first proxy node and the outbound interface corresponding to the first proxy node. Then, after receiving the third message from that inbound interface and generating the fourth message, the SF node selects, based on the binding relationship, the outbound interface bound to the inbound interface to send the fourth message, so that the message reaches the first proxy node after being sent out through that outbound interface.
  • the binding relationship can be determined according to a configuration operation. For example, if the SF node is connected to proxy1 through interface 3, a forwarding rule can be configured for the SF node in advance: after processing a message received on interface 3, return the resulting message through interface 3.
  • the SF node stores the forwarding rule in its forwarding table. After proxy1 sends the third message, the SF node receives the third message through interface 3, queries the forwarding rule in the forwarding table, and returns the processed message to the first proxy node through interface 3.
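  • a minimal sketch of such an interface binding on the SF node is shown below; the interface names and the send callback are assumptions for illustration.

```python
# Forwarding rule configured in advance: a message received on a given inbound
# interface is returned on the bound outbound interface after processing.
binding_table = {
    "interface3": "interface3",  # e.g. traffic from proxy1 goes back to proxy1
}

def forward_processed_message(inbound_interface, processed_message, send):
    # Select the outbound interface bound to the inbound interface and send the
    # processed (fourth) message through it, so it reaches the same proxy node.
    outbound_interface = binding_table[inbound_interface]
    send(outbound_interface, processed_message)
```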
  • Step 3512 the first proxy node receives the fourth message from the SF node, and according to the fourth message, queries the first cache entry to obtain the SRH of the first message.
  • Step 3513 The first proxy node generates a fifth message according to the fourth message and the SRH of the first message.
  • Step 3514 The first proxy node sends a fifth message to the routing node.
  • Step 3515 The routing node receives the fifth message from the first proxy node, and forwards the fifth message.
  • in the method provided in this embodiment, a new SID with a bypass function is extended for End.AD.SID. When the local proxy node receives a message carrying the End.AD.SID, it uses this new SID to bypass the SRH of the message to the proxy node at the opposite end, so that the opposite proxy node can also obtain the SRH of the message and store it.
  • two proxy nodes dual-homed to the same SF node can synchronize the SRH of the message, so that the SRH of the message is redundantly backed up, and the security of the SRH of the message is improved.
  • the foregoing method embodiments describe the basic flow of packet transmission, and the method provided in the embodiments of the present application may also support the NAT scenario.
  • the NAT scenario refers to the case where the service function of the SF node includes modifying the flow identifier, so that the flow identifier corresponding to the processed message is inconsistent with the flow identifier corresponding to the message before processing.
  • for example, if the service function of the SF node includes performing NAT, the SF node will modify the five-tuple of the message, and the five-tuple of the message after processing will be inconsistent with the five-tuple of the message before processing.
  • for example, the SF usually includes the service function of Destination Network Address Translation (DNAT).
  • the SF stores an address pool and replaces the destination address in the message with an address from the address pool, then returns the message with the replaced destination address to the proxy node, so that the destination address of the message received by the proxy node differs from the destination address of the message sent to the SF node by the local proxy node or the opposite proxy node.
  • in view of this, the method provided in the embodiments of this application can be executed to ensure that, when the SF node modifies the flow identifier corresponding to the message, the proxy node can still find the SRH of the message according to the message with the modified flow identifier, so as to ensure that the SRH of the message is restored.
  • the embodiment in FIG. 36 focuses on the differences from the embodiment in FIG. 5; for steps and other features that are the same as those in the embodiment in FIG. 5, please refer to the embodiment in FIG. 5, and details are not repeated in the embodiment in FIG. 36.
  • FIG. 36 is a flowchart of a message transmission method provided by an embodiment of the present application. As shown in FIG. 36, the method may include the following steps:
  • Step 3601. The routing node sends a first message to the first proxy node.
  • Step 3602 the first proxy node receives the first message from the routing node, and the first proxy node determines that the service function of the SF node includes modifying the flow identifier.
  • the first proxy node may be configured in advance: the first proxy node receives a configuration instruction and obtains the service function information of the SF node from the configuration instruction, where the information indicates that the service function of the SF node includes modifying the flow identifier. The first proxy node may store the service function information of the SF node, thereby determining that the service function of the SF node includes modifying the flow identifier.
  • alternatively, the first proxy node may automatically discover that the service function of the SF node includes modifying the flow identifier.
  • for example, the first proxy node may send a capability query request to the SF node; the SF node receives the capability query request and responds by sending its service function information to the first proxy node, and the first proxy node determines, according to the received service function information, that the service function of the SF node includes modifying the flow identifier.
  • alternatively, the traffic classifier or the network controller may send the service function information of the SF node to the first proxy node. This embodiment does not limit how the first proxy node determines the service function of the SF node.
  • by any of the foregoing means, the first proxy node can determine that the service function of the SF node includes modifying the flow identifier, and execute the method provided in the subsequent steps, so as to ensure that when the SF node modifies the flow identifier corresponding to the message, the proxy node can still find the SRH of the message according to the message with the modified flow identifier, thereby ensuring that the SRH of the message is restored.
  • Step 3603 The first proxy node uses the identifier of the first cache entry as an index, and stores the SRH of the first message in the first cache entry.
  • the identifier of the first cache entry is used to uniquely identify the first cache entry in the cache of the first proxy node.
  • the identifier of the first cache entry may include a cache index (cache Index) of the first cache entry, and the cache index is used to identify the first cache entry on the interface board of the first proxy node.
  • the first proxy node may include multiple interface boards, and the identifier of the first cache entry may also include the identifier of the target interface board (Target Board, TB). The target interface board is the interface board where the first cache entry is located, and the identifier of the target interface board is used to identify the target interface board among the interface boards of the first proxy node.
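  • as a sketch only (the type and field names are hypothetical), a cache indexed by the cache entry identifier could look as follows.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CacheEntryId:
    tb: int           # identifier of the target interface board holding the entry
    cache_index: int  # identifies the entry on that interface board

# Cache on the first proxy node: the entry identifier itself is the index.
srh_cache: dict[CacheEntryId, bytes] = {}

def store_srh(entry_id: CacheEntryId, srh: bytes) -> None:
    # Store the SRH of the first message using the cache entry identifier as the index,
    # so the lookup key does not depend on the (possibly NAT-modified) flow identifier.
    srh_cache[entry_id] = srh

def lookup_srh(entry_id: CacheEntryId) -> bytes | None:
    return srh_cache.get(entry_id)
```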
  • Step 3604 The first proxy node generates a second message according to the endpoint dynamic proxy SID, the first message, and the first bypass SID corresponding to the second proxy node.
  • Step 3605 The first proxy node sends a second message to the second proxy node.
  • the second message may include the identifier of the first cache entry, and the control information in the second message may instruct the second proxy node to store the SRH of the first message and also instruct the second proxy node to store the mapping relationship between the identifier of the first cache entry and the identifier of the second cache entry, where the second cache entry is the cache entry in which the second proxy node stores the SRH of the first message.
  • in this way, the first proxy node can use the second message to transmit to the second proxy node the identifier of the local cache entry used to store the SRH, and instruct the second proxy node to maintain the mapping relationship between the cache entry storing the SRH on the first proxy node and the cache entry storing the SRH on the second proxy node.
  • the control information may include a third flag, which is used to identify that the service function of the SF node includes modifying the flow identifier.
  • the second proxy node may be configured in advance so that, if it determines that the control information includes the third flag, it stores the mapping relationship between the identifier of the first cache entry and the identifier of the second cache entry.
  • the third flag may be recorded as a NAT flag, and the NAT flag is used to identify that the service function of the SF node includes NAT.
  • the second message may include an extension header as shown in FIG.
  • the cache index field and the TB field are used to carry the identifier of the first cache entry: the value of the cache index field may be the cache index in the identifier of the first cache entry, and the value of the TB field may be the identifier of the target interface board in the identifier of the first cache entry. The N field is used to carry the third flag.
  • if the value of the N field is 1, it means that the SF node is a NAT-type SF, that is, the service function of the SF node includes modifying the flow identifier. Therefore, when the value of the N field is recognized as 1, the mapping relationship between the identifiers of the cache entries of the two proxy nodes needs to be maintained.
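  • purely for illustration, and with arbitrarily chosen field widths rather than the actual extension header encoding, the TB identifier, the cache index, and the N flag could be packed and parsed as follows.

```python
import struct

def pack_cache_fields(tb: int, cache_index: int, nat_flag: bool) -> bytes:
    # Illustrative layout only: 16-bit TB field, 32-bit cache index field, and an
    # 8-bit field whose lowest bit carries the N flag (1 = the SF is a NAT-type SF).
    return struct.pack("!HIB", tb, cache_index, 1 if nat_flag else 0)

def unpack_cache_fields(data: bytes) -> tuple[int, int, bool]:
    tb, cache_index, flags = struct.unpack("!HIB", data[:7])
    return tb, cache_index, bool(flags & 0x01)
```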
  • Step 3606 The second proxy node receives the second message from the first proxy node, and determines that the control information indicates that the second proxy node stores the SRH of the first message.
  • Step 3607 The second proxy node parses the second message to obtain the SRH of the first message.
  • Step 3608 The second proxy node determines that the service function of the SF node includes modifying the flow identifier.
  • the determining step of the second proxy node is the same as the determining step of the first proxy node, and can refer to step 3602, which will not be repeated here.
  • Step 3609 The second proxy node uses the identifier of the second cache entry as an index, and stores the SRH of the first packet in the second cache entry.
  • the identifier of the second cache entry is used to uniquely identify the second cache entry in the cache of the second proxy node.
  • the identifier of the second cache entry may include a cache index of the second cache entry, and the cache index is used to identify the corresponding second cache entry in the interface board of the second proxy node.
  • the second proxy node may include multiple interface boards, and the identifier of the second cache entry may also include the identifier of the target interface board. The target interface board is the interface board where the second cache entry is located, and the identifier of the target interface board is used to identify the target interface board among the interface boards of the second proxy node.
  • the storage step of the second proxy node is the same as the storage step of the first proxy node, and can refer to step 3603, which will not be repeated here.
  • according to the control information in the second message, the second proxy node may determine that the control information instructs it to store the SRH of the first message and to store the mapping relationship between the identifier of the first cache entry and the identifier of the second cache entry. The second proxy node then obtains the identifier of the first cache entry from the second message and stores the mapping relationship between the identifier of the first cache entry and the identifier of the second cache entry.
  • for example, if the first proxy node stores the SRH in TB1 and the cache index of the SRH in TB1 is cache index1, and the second proxy node stores the SRH in TB2 and the cache index of the SRH in TB2 is cache index2, then the identifier of the first cache entry is (TB1, cache index1), the identifier of the second cache entry is (TB2, cache index2), and the mapping relationship stored by the second proxy node may include (TB1, cache index1, TB2, cache index2).
  • the second proxy node can identify the values of the first flag, the second flag, and the third flag in the second message. If the value of the first flag is set to 1, the value of the second flag is set to 0, and the value of the third flag is set to 1, it is determined that the control information indicates storing the SRH of the first message and storing the mapping relationship between the identifier of the first cache entry and the identifier of the second cache entry.
  • Step 3610 The first proxy node generates a third message according to the first message.
  • Step 3611 the first proxy node sends a third message to the SF node.
  • the third message may include the identifier of the first cache entry.
  • the identifier of the first cache entry may be carried in the payload of the third message.
  • in this way, the message received by the SF node carries the identifier of the first cache entry, and the message obtained after processing also carries the identifier of the first cache entry, so that the message returned by the SF node to the proxy node carries the identifier of the first cache entry. The proxy node can therefore use the identifier of the first cache entry in the returned message to find the cache entry in which the SRH was previously stored.
  • Step 3612 the SF node receives the third message from the first proxy node, processes the third message, and obtains the fourth message.
  • Step 3613 The SF node sends a fourth message to the second proxy node.
  • Step 3614 The second proxy node receives the fourth message from the SF node, and uses the identifier of the second cache entry as an index to query the second cache entry to obtain the SRH of the first message.
  • the fourth message may include the identifier of the first cache entry and the payload.
  • the second proxy node may query, according to the identifier of the first cache entry, the mapping relationship between the identifier of the first cache entry and the identifier of the second cache entry to obtain the identifier of the second cache entry corresponding to the identifier of the first cache entry. For example, if the identifier of the first cache entry in the fourth message is (TB1, cache index1) and the mapping relationship stored by the second proxy node includes (TB1, cache index1, TB2, cache index2), the second proxy node queries the mapping relationship according to (TB1, cache index1) to obtain (TB2, cache index2), looks up the cache entry corresponding to cache index2 on TB2, and obtains the SRH of the first message from that cache entry.
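  • under the same illustrative naming, the second proxy node could maintain and query the identifier mapping as sketched below; this is an assumption-laden sketch, not the actual forwarding-plane implementation.

```python
# Illustrative caches on the second proxy node, keyed by (TB, cache index) tuples.
srh_cache: dict[tuple[int, int], bytes] = {}
# Mapping: identifier of the first cache entry (peer) -> identifier of the second cache entry (local).
peer_to_local: dict[tuple[int, int], tuple[int, int]] = {}

def on_second_message(peer_entry_id, local_entry_id, srh, nat_flag):
    # Store the SRH of the first message locally; if the N flag is set, also store the
    # mapping between the peer's cache entry identifier and the local one.
    srh_cache[local_entry_id] = srh
    if nat_flag:
        peer_to_local[peer_entry_id] = local_entry_id

def on_fourth_message(peer_entry_id):
    # The fourth message carries the peer's identifier, e.g. (TB1, cache index1); map it
    # to the local identifier, e.g. (TB2, cache index2), and fetch the stored SRH.
    local_entry_id = peer_to_local.get(peer_entry_id)
    return srh_cache.get(local_entry_id) if local_entry_id is not None else None
```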
  • Step 3615 The second proxy node generates a fifth message according to the fourth message and the SRH of the first message, where the fifth message includes the payload of the fourth message and the SRH of the first message.
  • Step 3616 The second proxy node sends the fifth message to the routing node.
  • Step 3617 The routing node receives the fifth message from the second proxy node, and forwards the fifth message.
  • a dynamic proxy node in an SRv6 SFC usually uses the five-tuple of the message as the key to store the SRH of the message in a cache entry, and after receiving the message returned by the SF node, it still uses the five-tuple of the message as the key to query the corresponding cache entry from the cache. If the service function of the SF node is NAT, the SF node modifies the five-tuple of the message while processing it, so the five-tuple of the message returned by the SF node is inconsistent with the five-tuple of the message previously received; that is, the five-tuple of the message the proxy node receives from the SF node is inconsistent with the five-tuple of the message it previously sent to the SF node. When the proxy node queries the cache according to the received message, the key used to query the cache is inconsistent with the key used when storing into the cache, so the SRH cannot be found and cannot be restored to the message, which causes the message transmission to fail.
  • for example, if the proxy node previously stored the SRH with five-tuple 1 as the key, and the five-tuple of the message was modified by the SF node from five-tuple 1 to five-tuple 2, then when the proxy node queries the cache with five-tuple 2 as the key, it cannot obtain the SRH.
  • with the method of this embodiment, the proxy node determines that the service function of the SF node includes modifying the flow identifier, and stores the SRH of the message using the identifier of the cache entry that stores the SRH as the key. If the SF node modifies the flow identifier and returns the message to the proxy node, then, since the cache entry storing the SRH is usually fixed relative to the proxy node and does not change when the flow identifier is modified, the identifier of the cache entry used when storing the SRH and the identifier of the cache entry used when querying the SRH remain consistent.
  • in this way, the proxy node can restore the SRH even when the flow identifier of the message is changed during transmission, so the proxy node can be allowed to access an SF with the NAT function and provide dynamic proxy services for the SF that implements the NAT function.
  • the foregoing method embodiment provides a scenario in which NAT is supported based on the process where one proxy node sends the message to the SF node and the other proxy node receives the returned message.
  • the embodiment of the present application may also support the NAT scenario based on the self-send and self-receive process, which is described in detail in the method embodiment shown in FIG. 37 below.
  • FIG. 37 focuses on the differences from the embodiment in FIG. 36, and for steps similar to the embodiment in FIG. 36, please refer to the embodiment in FIG. 36, and details are not described in the embodiment in FIG. 37.
  • FIG. 37 is a flowchart of a message transmission method provided by an embodiment of the present application.
  • the interacting entities of the method include a first proxy node, a second proxy node, an SF node, and a routing node.
  • the method can include the following steps:
  • Step 3701. The routing node sends the first message to the first proxy node.
  • Step 3702 the first proxy node receives the first message from the routing node, and the first proxy node determines that the service function of the SF node includes modifying the flow identifier.
  • Step 3703 The first proxy node uses the identifier of the first cache entry as an index, and stores the SRH of the first packet in the first cache entry.
  • Step 3704 The first proxy node generates a second message according to the endpoint dynamic proxy SID, the first message, and the first bypass SID corresponding to the second proxy node.
  • Step 3705 The first proxy node sends a second message to the second proxy node.
  • Step 3706 The second proxy node receives the second message from the first proxy node, and determines that the control information indicates that the second proxy node stores the SRH of the first message.
  • Step 3707 The second proxy node parses the second message to obtain the SRH of the first message.
  • Step 3708 The second proxy node determines that the service function of the SF node includes modifying the flow identifier.
  • Step 3709 The second proxy node uses the identifier of the second cache entry as an index, and stores the SRH of the first packet in the second cache entry.
  • Step 3710 The first proxy node generates a third message according to the first message.
  • Step 3711 The first proxy node sends a third message to the SF node.
  • the second message also includes the identifier of the first cache entry.
  • the control information is also used to instruct the second proxy node to store the mapping relationship between the identifier of the first cache entry and the identifier of the second cache entry.
  • the second cache entry is the cache entry in which the second proxy node stores the SRH of the first message.
  • Step 3712 the SF node receives the third message from the first proxy node, processes the third message, and obtains the fourth message.
  • Step 3713 The SF node sends a fourth message to the first proxy node, where the fourth message includes the identifier of the first cache entry and the payload.
  • Step 3714 The first proxy node receives the fourth message from the SF node, uses the identifier of the first cache entry as an index, and queries the first cache entry to obtain the SRH of the first message.
  • this step differs from step 3614: because the identifier of the cache entry in the fourth message corresponds to the local cache, the first proxy node can dispense with the step of querying the mapping relationship between the local cache entry and the peer cache entry, and instead obtain the identifier of the first cache entry from the fourth message and query the cache with the identifier of the first cache entry as the key to find the first cache entry.
  • Step 3715 The first proxy node generates a fifth message according to the fourth message and the SRH of the first message.
  • Step 3716 The first proxy node sends a fifth message to the routing node.
  • Step 3717 The routing node receives the fifth message from the first proxy node, and forwards the fifth message.
  • with the method of this embodiment, the proxy node determines that the service function of the SF node includes modifying the flow identifier, and stores the SRH of the message using the identifier of the cache entry that stores the SRH as the key. If the SF node modifies the flow identifier and returns the message to the proxy node, then, since the cache entry storing the SRH is usually fixed relative to the proxy node and does not change when the flow identifier is modified, the identifier of the cache entry used when storing the SRH and the identifier of the cache entry used when querying the SRH remain consistent.
  • in this way, the proxy node can restore the SRH even when the flow identifier of the message is changed during transmission, so the proxy node can be allowed to access an SF with the NAT function and provide dynamic proxy services for the SF that implements the NAT function.
  • the foregoing method embodiment describes the message transmission process supporting the NAT scenario.
  • the method provided in the embodiments of this application can also support the SRv6 VPN scenario.
  • SRv6 VPN refers to the transmission of VPN data based on the SRv6 tunnel.
  • the SF node in the service chain needs to process the VPN data to realize the VPN service .
  • the proxy node can support the SF node that implements the VPN service, and by executing the following method embodiments, it provides dynamic proxy service for the SF node that implements the VPN service.
  • the following describes the message transmission process in the VPN scenario through the method embodiment shown in FIG. 38. It should be noted that the embodiment in FIG. 38 focuses on the differences from the embodiment in FIG. 5; for steps or other features that are the same as in the embodiment in FIG. 5, refer to the embodiment in FIG. 5, and details are not repeated in the embodiment in FIG. 38.
  • FIG. 38 is a flowchart of a message transmission method provided by an embodiment of the present application. The method may include the following steps:
  • Step 3801 the routing node sends the first message to the first proxy node.
  • Step 3802 The first proxy node receives the first message from the routing node, and the first proxy node stores the SRH of the first message in the first cache entry using the flow identifier corresponding to the first message and the endpoint dynamic proxy SID as an index.
  • the first proxy node can access multiple SF nodes, and these SF nodes can belong to different VPNs. Therefore, in the process of providing dynamic proxy services for the multiple SF nodes, the first proxy node may receive packets to be sent to different VPNs.
  • packets of different VPNs may correspond to the same flow identifier. If the SRH is stored with only the flow identifier as an index, the SRH indexes of packets of different VPNs will be the same, and the SRHs of packets of different VPNs will be stored in the same location. Then, on the one hand, information isolation between different VPNs cannot be achieved, which affects security; on the other hand, because the same index hits multiple SRHs in the cache at the same time, it cannot be determined which SRH should be used to restore the packet, which in turn causes the SRH restoration to fail.
  • the corresponding End.AD.SID can be assigned to the SF nodes of each VPN in advance, and the End.AD.SIDs corresponding to the SF nodes of different VPNs are different.
  • End.AD.SID can be used as the identifier of the VPN.
  • the End.AD.SID carried in messages sent to SF nodes of different VPNs will be different, and packets of different VPNs can be distinguished by their different End.AD.SIDs.
  • for example, suppose the first proxy node accesses 10 SF nodes, namely SF node 0 to SF node 9, and these 10 SF nodes belong to 2 different VPNs: SF node 0 to SF node 5 belong to VPN1, and SF node 6 to SF node 9 belong to VPN2.
  • in this case, 2 End.AD.SIDs can be allocated to the 10 SF nodes: one End.AD.SID is allocated to SF node 0 to SF node 5, and the other End.AD.SID is allocated to SF node 6 to SF node 9. Identifying which of the two End.AD.SIDs a message carries identifies which VPN the message is sent to.
  • for example, if the End.AD.SID assigned to SF node 0 to SF node 5 is A::1 and the End.AD.SID assigned to SF node 6 to SF node 9 is B::2, then a received message 1 whose End.AD.SID is A::1 corresponds to VPN1, and a received message 2 whose End.AD.SID is B::2 corresponds to VPN2.
  • therefore, by storing the SRH using the flow identifier and the End.AD.SID as indexes, the indexes of packets of different VPNs can be different: even when the flow identifiers are the same, the different End.AD.SIDs ensure that the SRHs of packets of different VPNs are stored separately, information isolation between different VPNs is achieved, and the situation where the same index hits packets of multiple VPNs is avoided, thereby solving the above technical problem.
  • similarly, the SF nodes accessed by the first proxy node may belong to the public network and to a VPN respectively. Corresponding End.AD.SIDs can be assigned to the SF nodes of the public network and the SF nodes of the VPN, where the End.AD.SID corresponding to the SF nodes of the public network is different from the End.AD.SID corresponding to the SF nodes of the VPN.
  • for example, if among the 10 SF nodes accessed by the first proxy node, SF node 0 to SF node 4 belong to the public network and SF node 5 to SF node 9 belong to VPN1, then 2 End.AD.SIDs can be allocated to the 10 SF nodes: one End.AD.SID is allocated to SF node 0 to SF node 4, and the other End.AD.SID is allocated to SF node 5 to SF node 9. Through this allocation, storing the SRH with the flow identifier and the End.AD.SID as indexes ensures that the SRH of a public network message and the SRH of a VPN message are stored separately, and avoids the situation where the same index hits both a public network message and a VPN message. A minimal sketch of this indexing scheme is given below.
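  • the sketch below follows the SID values and groupings of the examples above, while the function names and key types are assumptions.

```python
# One End.AD.SID per VPN (or per the public network), as in the examples above.
END_AD_SID_BY_GROUP = {
    "VPN1": "A::1",  # SF node 0 .. SF node 5
    "VPN2": "B::2",  # SF node 6 .. SF node 9
}

# SRH cache keyed by (flow identifier, End.AD.SID): packets of different VPNs that
# happen to share a flow identifier still map to different cache locations.
srh_cache: dict[tuple[tuple, str], bytes] = {}

def store_srh(five_tuple: tuple, end_ad_sid: str, srh: bytes) -> None:
    srh_cache[(five_tuple, end_ad_sid)] = srh

def lookup_srh(five_tuple: tuple, end_ad_sid: str) -> bytes | None:
    return srh_cache.get((five_tuple, end_ad_sid))
```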
  • Step 3803 The first proxy node generates a second message according to the endpoint dynamic proxy SID, the first message, and the first bypass SID corresponding to the second proxy node.
  • Step 3804 The first proxy node sends a second message to the second proxy node.
  • Step 3805 The second proxy node receives the second message from the first proxy node, and determines that the control information indicates that the second proxy node stores the SRH of the first message.
  • Step 3806 The second proxy node parses the second message to obtain the SRH of the first message.
  • Step 3807 The second proxy node stores the SRH of the first message in the second cache entry according to the flow identifier corresponding to the second message and the endpoint dynamic proxy SID, where the flow identifier corresponding to the second message may be the flow identifier corresponding to the first message.
  • the second message may include a quintuple, and the quintuple in the second message is the same as the quintuple in the first message.
  • the second message may include the End.AD.SID. For example, the second message may include the SRH of the first message, and the SRH of the first message may include the End.AD.SID, so the second proxy node can obtain, from the second message, the flow identifier and the End.AD.SID corresponding to the first message.
  • the VPN identifier is used to identify the VPN to which the service function node belongs. Different VPNs have different VPN identifiers. Therefore, through the VPN identifier, different VPNs can be distinguished.
  • the VPN identifier can be a VLAN ID.
  • the VPN identifier may be stored in the second agent node in advance.
  • Manner 1 The second proxy node uses the flow identifier and End.AD.SID corresponding to the first message as indexes, and stores the SRH of the first message in the second cache entry.
  • considering that packets of different VPNs may have the same flow identifier, if only the flow identifier were used as the index for storage, the same index could cause the SRHs of packets of different VPNs to be stored in the same location, so that information isolation between different VPNs could not be achieved.
  • even if packets of different VPNs correspond to the same flow identifier, their corresponding End.AD.SIDs are different, so their indexes are distinguished by the End.AD.SID, which ensures that the SRHs of packets of different VPNs are stored in different locations and realizes information isolation between different VPNs.
  • Manner 2 The second proxy node uses the flow identifier, End.AD.SID, and VPN identifier corresponding to the first message as indexes, and stores the SRH of the first message in the second cache entry.
  • Step 3808 The first proxy node generates a third message according to the first message.
  • Step 3809 The first proxy node sends a third message to the SF node.
  • the flow identifier corresponding to the third packet may be the same as the flow identifier corresponding to the first packet, and the third packet may include the VPN identifier.
  • the first proxy node may establish a transmission tunnel with the SF node; the first proxy node may generate a tunnel header corresponding to the transmission tunnel and encapsulate the tunnel header so that the third message includes the tunnel header, and the tunnel header may include the VPN identifier.
  • Step 3810 The service function node receives the third message from the first agent node, processes the third message, and obtains the fourth message.
  • Step 3811. The service function node sends a fourth message to the second agent node.
  • Step 3812 the second proxy node receives the fourth message from the SF node, and queries the second cache entry according to the VPN identifier, End.AD.SID and the flow identifier corresponding to the fourth message to obtain the SRH of the first message.
  • the manner of obtaining the SRH by querying may include the following manner one or manner two:
  • manner one may specifically include the following step one and step two:
  • Step 1 The second agent node queries the mapping relationship between the endpoint dynamic agent SID and the VPN identification according to the VPN identification, and obtains the endpoint dynamic agent SID corresponding to the VPN identification.
  • the flow identifier corresponding to the fourth message may be the flow identifier corresponding to the first message. The fourth message includes the VPN identifier and the payload, so the second proxy node can obtain the VPN identifier from the fourth message and, according to the VPN identifier, query the mapping relationship between the End.AD.SID and the VPN identifier to find which End.AD.SID corresponds to the VPN identifier. For example, if End.AD.SID A::1 has a mapping relationship with VLAN ID 100 and End.AD.SID A::2 has a mapping relationship with VLAN ID 200, then when the VPN identifier carried in the fourth message is 100, the second proxy node queries the mapping relationship according to 100 and obtains A::1.
  • for example, a configuration operation can be performed on the second proxy node to configure the End.AD.SID assigned to the SF nodes of each VPN.
  • the second agent node may receive the configuration instruction, and the second agent node may store the mapping relationship between the endpoint dynamic agent SID and the VPN identifier according to the configuration instruction.
  • the configuration instruction includes End.AD.SID corresponding to the SF node in each VPN.
  • the second agent node can parse the configuration instruction to obtain the End.AD.SID corresponding to each SF node in the VPN carried by the configuration instruction.
  • for example, two End.AD.SIDs can be configured on the second proxy node in advance: A::1 is assigned to the SFs in the VLAN with VLAN ID 100, and A::2 is assigned to the SFs in the VLAN with VLAN ID 200, so the second proxy node can store the mapping relationship between A::1 and 100 and the mapping relationship between A::2 and 200.
  • storing the mapping relationship between the End.AD.SID and the VPN identifier in the above-mentioned manner is only an example; the second proxy node can also store this mapping relationship in other ways, such as automatic learning.
  • Step 2 The second proxy node uses the flow identifier corresponding to the fourth message and the endpoint dynamic proxy SID as an index to query the second cache entry to obtain the SRH of the first message.
  • the second proxy node uses the End.AD.SID, the VPN identifier, and the flow identifier corresponding to the fourth message as an index to query the second cache entry to obtain the SRH of the first message.
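  • the two query manners could be sketched as follows; the VLAN IDs and SIDs follow the example above, and the assumption that manner two also derives the End.AD.SID from the configured mapping is illustrative.

```python
# Configured mapping between the endpoint dynamic proxy SID and the VPN identifier,
# e.g. A::1 for VLAN ID 100 and A::2 for VLAN ID 200.
end_ad_sid_by_vpn_id = {100: "A::1", 200: "A::2"}

srh_cache: dict[tuple, bytes] = {}  # filled when the SRH was stored (manner one or two)

def query_srh_manner_one(vpn_id: int, flow_id: tuple) -> bytes | None:
    # Step 1: map the VPN identifier carried by the fourth message to its End.AD.SID.
    end_ad_sid = end_ad_sid_by_vpn_id[vpn_id]
    # Step 2: use (flow identifier, End.AD.SID) as the index into the cache.
    return srh_cache.get((flow_id, end_ad_sid))

def query_srh_manner_two(vpn_id: int, flow_id: tuple) -> bytes | None:
    # Manner two: the VPN identifier itself is also part of the index.
    end_ad_sid = end_ad_sid_by_vpn_id[vpn_id]
    return srh_cache.get((flow_id, end_ad_sid, vpn_id))
```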
  • Step 3813 The second proxy node generates a fifth message according to the fourth message and the SRH of the first message, where the fifth message includes the payload of the fourth message and the SRH of the first message.
  • Step 3814 The second proxy node sends the fifth message to the routing node.
  • Step 3815 The routing node receives the fifth message from the second proxy node, and forwards the fifth message.
  • This embodiment provides a method for supporting dynamic proxy services in the SRv6 VPN scenario.
  • since the End.AD.SID can be used as the identifier of the VPN, the proxy node stores the SRH according to the flow identifier and the End.AD.SID, so that the SRH indexes of packets of different VPNs are distinguished by different End.AD.SIDs. This ensures that the SRH indexes of packets of different VPNs differ, allows the SRHs of packets of different VPNs to be stored separately, achieves information isolation between different VPNs, avoids hitting the SRHs of packets of multiple VPNs through the same index, and ensures the accuracy of the query.
  • the foregoing method embodiment supports the VPN scenario based on the process where one proxy node sends the message to the SF node and the other proxy node receives the returned message.
  • the embodiment of the present application may also support the VPN scenario based on the self-send and self-receive process, which is described in detail in the method embodiment shown in FIG. 39 below.
  • FIG. 39 focuses on the differences from the embodiment in FIG. 38, and the steps that are the same as those in the embodiment in FIG. 38 also refer to the embodiment in FIG. 38, and details are not repeated in the embodiment in FIG. 39.
  • FIG. 39 is a flowchart of a message transmission method provided by an embodiment of the present application. The method may include the following steps:
  • Step 3901. The routing node sends the first message to the first proxy node.
  • Step 3902 the first proxy node receives the first message from the routing node, and the first proxy node stores the SRH of the first message in the first cache entry according to the flow identifier corresponding to the first message and the endpoint dynamic proxy SID .
  • the first agent node may store the SRH, and the following is an example of way one or way two.
  • the first proxy node uses the flow identifier corresponding to the first message and the endpoint dynamic proxy SID as an index, and stores the SRH of the first message in the second cache entry.
  • the first proxy node uses the flow identifier, the endpoint dynamic proxy SID, and the VPN identifier corresponding to the first message as indexes, and stores the SRH of the first message in the second cache entry.
  • Step 3903 The first proxy node generates a second message according to the endpoint dynamic proxy SID, the first message, and the first detour SID corresponding to the second proxy node.
  • Step 3904 The first proxy node sends a second message to the second proxy node.
  • Step 3905 The second proxy node receives the second message from the first proxy node, and determines that the control information indicates that the second proxy node stores the SRH of the first message.
  • Step 3906 The second proxy node parses the second message to obtain the SRH of the first message.
  • Step 3907 The second proxy node stores the SRH of the first message in the second cache entry according to the flow identifier corresponding to the second message and the endpoint dynamic proxy SID, where the flow identifier corresponding to the second message is the flow identifier corresponding to the first message.
  • the second agent node may store the SRH, and the following is an example of way one or way two.
  • the second proxy node uses the flow identifier corresponding to the first message and the endpoint dynamic proxy SID as an index, and stores the SRH of the first message in the second cache entry.
  • the second proxy node uses the flow identifier, the endpoint dynamic proxy SID, and the VPN identifier corresponding to the first message as indexes, and stores the SRH of the first message in the second cache entry.
  • Step 3908 The first proxy node generates a third message according to the first message.
  • Step 3909 The first proxy node sends a third message to the SF node.
  • Step 3910 The SF node receives the third message from the first proxy node, processes the third message, and obtains the fourth message.
  • Step 3911 the SF node sends a fourth message to the first proxy node.
  • Step 3912 the first proxy node receives the fourth message from the SF node, and queries the first cache entry according to the VPN identifier, the endpoint dynamic proxy SID and the flow identifier corresponding to the fourth message to obtain the SRH of the first message.
  • the flow identifier corresponding to the fourth packet is the flow identifier corresponding to the first packet.
  • the first agent node may query the SRH, and the following is an example of way one or way two.
  • Method one can include the following steps one to two:
  • Step 1 The first agent node queries the mapping relationship between the endpoint dynamic agent SID and the VPN identification according to the VPN identification, and obtains the endpoint dynamic agent SID corresponding to the VPN identification.
  • Step 2 The first proxy node uses the flow identifier corresponding to the fourth message and the endpoint dynamic proxy SID as an index to query the first cache entry to obtain the SRH of the first message.
  • the first proxy node uses the VPN identifier, the endpoint dynamic proxy SID, and the flow identifier corresponding to the fourth message as an index to query the first cache entry to obtain the SRH of the first message.
  • manner two of step 3912 applies to the case where manner two is used when the SRH is stored in step 3902.
  • Step 3913 The first proxy node generates a fifth message according to the fourth message and the SRH of the first message, where the fifth message includes the payload of the fourth message and the SRH of the first message.
  • Step 3914 The first proxy node sends a fifth message to the routing node.
  • Step 3915 The routing node receives the fifth message from the first proxy node, and forwards the fifth message.
  • This embodiment provides a method for supporting dynamic proxy services in SRv6 VPN scenarios.
  • since the End.AD.SID can be used as the identifier of the VPN, the proxy node stores the SRH with the flow identifier and the End.AD.SID as the index, so that the SRH indexes of packets of different VPNs are distinguished by different End.AD.SIDs. This ensures that the SRH indexes of packets of different VPNs differ, allows the SRHs of packets of different VPNs to be stored separately, achieves information isolation between different VPNs, avoids hitting the SRHs of packets of multiple VPNs through the same index, and ensures the accuracy of the query.
  • the foregoing method embodiments provide a method for message transmission in a normal state.
  • the method provided in the embodiments of the present application can also support failure scenarios.
  • dynamic proxy services can also be provided to ensure normal transmission of messages.
  • the link between proxy node 1 and the SF node and the link between proxy node 2 and the SF node can form redundant links. When the link between proxy node 1 and the SF node fails, the traffic can be bypassed to proxy node 2, proxy node 2 performs the dynamic proxy operation, and proxy node 2 continues to transmit the traffic to the SF node.
  • FIG. 41 is a flowchart of a message transmission method provided by an embodiment of the present application. The method may include the following steps:
  • Step 4101 The routing node sends the first message to the first proxy node.
  • Step 4102 the first proxy node receives the first message from the routing node, and detects a failure event.
  • Failure events can include two types, one type is link failure, and the other type is node failure.
  • the method provided here re-encapsulates the message to be processed by the dynamic proxy and forwards it to the second proxy node, thereby instructing the second proxy node to perform the dynamic proxy operation in place of the first proxy node, strip the SRH from the re-encapsulated message, and transmit the payload of the re-encapsulated message to the SF node.
  • the failure event includes at least one of an event in which the link between the first agent node and the SF node fails or an event in which the first agent node fails.
  • the event that the link between the first agent node and the SF node fails may be the event that the fourth link fails.
  • the first proxy node may be connected to the SF node through the third outbound interface and transmit packets onto the fourth link through the third outbound interface. The first proxy node may detect the status of the third outbound interface; if the status of the third outbound interface is down, the first proxy node detects the event that the fourth link has failed.
  • a bidirectional forwarding detection (BFD) mechanism may be pre-deployed for the first agent node.
  • BFD is used to quickly detect and monitor the forwarding connection status of links or IP routes in the network.
  • the process of deploying the BFD mechanism may include: performing a configuration operation on the first proxy node, where the first proxy node receives the configuration instruction and, according to the configuration instruction, enables BFD on the third outbound interface and establishes a BFD session with the SF node.
  • after that, the first proxy node sends a BFD packet through the third outbound interface every preset time period; if the returned BFD response packet is not received within the preset time period, it can be determined that the fourth link has failed.
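  • a greatly simplified sketch of the probe-and-timeout idea is shown below; it is not an actual BFD implementation, and the probe interval, interface name, and callbacks are assumptions.

```python
import time

PROBE_INTERVAL_SECONDS = 1.0  # hypothetical "preset time period"

def detect_fourth_link_failure(send_probe, wait_for_reply) -> bool:
    # Send a detection packet on the third outbound interface every preset period;
    # if no reply comes back within the period, treat the fourth link as failed.
    while True:
        send_probe("third_outbound_interface")
        if not wait_for_reply(timeout=PROBE_INTERVAL_SECONDS):
            return True  # failure event detected; trigger the bypass to the second proxy node
        time.sleep(PROBE_INTERVAL_SECONDS)
```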
  • this embodiment is described by first describing receiving the first message, and then describing the detection of the failure event as an example.
  • the first agent node may also first detect the failure event and then receive the first message.
  • alternatively, the first proxy node may detect the failure event while receiving the first message. This embodiment does not limit the order of the two steps of receiving the first message and detecting the failure event.
  • Step 4103 The first proxy node stores the SRH of the first packet in the first cache entry.
  • Step 4104 The first proxy node generates a second message according to the endpoint dynamic proxy SID, the first message, and the first bypass SID corresponding to the second proxy node.
  • the second message may be regarded as a bypass message of the first message, and the second message may be a data message.
  • the second message includes the payload of the first message.
  • the control information in the second message may instruct the second proxy node to store the SRH of the first message, and also instruct the second proxy node to send the load of the first message to the service function node.
  • the control information can include a first flag and a second flag.
  • the value of the first flag can be set to 0, indicating that the message is not a copied message but an ordinary data message; after receiving the message, the receiving end forwards the payload in the message.
  • the value of the second flag in the control information can be set to 0, indicating that the message is not an acknowledgement message.
  • the trigger condition for generating the second message in this embodiment may be different from the trigger condition for generating the second message in the embodiment of FIG. 5.
  • in the embodiment of FIG. 5, the trigger condition for generating the second message may be detecting that the first cache entry is generated or updated, or detecting that the current point in time reaches the sending period of the second message, so the number of second messages in that embodiment may differ significantly from the number of second messages in this embodiment.
  • for example, a data stream includes N first messages, and the SRHs of these N first messages are usually the same with high probability. Then, in the embodiment of FIG. 5, the first proxy node can generate one cache entry for the data flow to store one SRH, so this data flow triggers one second message in total, and the load of each first message is transmitted by the first proxy node itself to the SF node.
  • in this embodiment, however, the load of each first message can be carried in a corresponding second message, so that the load of each first message is transmitted to the second proxy node and then transmitted to the SF node through the second proxy node. Therefore, the trigger condition for generating the second message can be that each time a first message is received, a corresponding second message is generated, and the number of second messages may be approximately the same as the number of first messages.
  • Step 4105 The first proxy node sends a second message to the second proxy node.
  • Step 4106 The second proxy node receives the second message from the first proxy node and determines that the control information instructs the second proxy node to store the SRH of the first message and to send the payload of the second message to the SF node, where the payload of the second message is the payload of the first message.
  • the second proxy node can identify the value of the first flag and the value of the second flag in the second message; if the value of the first flag is set to 0 and the value of the second flag is set to 0, the second proxy node determines that the control information instructs it to store the SRH of the first message and to send the payload of the second message.
  • Step 4107 The second proxy node parses the second message to obtain the SRH of the first message.
  • Step 4108 The second proxy node stores the SRH of the first packet in the second cache entry.
  • Step 4109 The second proxy node generates a third message according to the second message.
  • step 4109 may include the following steps one to two:
  • Step 1 The second proxy node restores the second message to the first message.
  • the second proxy node may delete the first bypass SID from the SRH of the second message, decrease the SL of the SRH of the second message by one, and update the destination address of the second message to the endpoint dynamic proxy SID, thereby obtaining the first message (illustrated by the sketch after step two below).
  • Step 2 The second proxy node generates a third message according to the first message.
  • the second step is the same as the above step 508 and will not be repeated here.
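  • The following is a minimal sketch of step one above, restoring the first message from the second message (the dictionary-based packet/SRH model is an assumed simplification used only for this sketch; real SRv6 processing operates on IPv6 headers):

```python
def restore_first_message(second_msg: dict, bypass_sid: str, end_ad_sid: str) -> dict:
    """Sketch of step one: remove the first bypass SID from the segment list,
    decrease Segments Left (SL) by one, and set the destination address back
    to the endpoint dynamic proxy SID, recovering the first message."""
    srh = second_msg["srh"]
    restored_srh = {
        "segment_list": [sid for sid in srh["segment_list"] if sid != bypass_sid],
        "segments_left": srh["segments_left"] - 1,
    }
    return {"dst": end_ad_sid, "srh": restored_srh, "payload": second_msg["payload"]}

# Example with hypothetical SIDs: 2001:db8::b1 plays the role of the first
# bypass SID and 2001:db8::ad the endpoint dynamic proxy SID.
second = {
    "dst": "2001:db8::b1",
    "srh": {"segment_list": ["2001:db8::e2", "2001:db8::ad", "2001:db8::b1"],
            "segments_left": 2},
    "payload": b"inner packet",
}
first = restore_first_message(second, "2001:db8::b1", "2001:db8::ad")
assert first["dst"] == "2001:db8::ad" and first["srh"]["segments_left"] == 1
```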
  • this embodiment is described by taking, as an example, the case in which step 4107 is performed first and step 4108 is performed later. The second proxy node may also perform step 4108 first and then perform step 4107, or perform step 4107 and step 4108 at the same time. This embodiment does not limit the order of step 4107 and step 4108.
  • Step 4110 The second proxy node sends a third message to the SF node.
  • the process of forwarding the third message by the second proxy node is the same as the process of forwarding the third message by the first proxy node in the embodiment of FIG. 5, and will not be repeated here.
  • Step 4111 the SF node receives the third message from the second proxy node, processes the third message, and obtains the fourth message.
  • Step 4112 SF sends a fourth message to the second proxy node.
  • Step 4113 The second proxy node receives the fourth message from the SF node, and according to the fourth message, queries the second cache entry to obtain the SRH of the first message.
  • Step 4114 The second proxy node generates a fifth message according to the fourth message and the SRH of the first message.
  • Step 4115 The second proxy node sends the fifth message to the routing node.
  • Step 4116 The routing node receives the fifth message from the second proxy node, and forwards the fifth message.
  • the fault can be eliminated, and the link or node can recover from the fault state to the normal state.
  • if the first proxy node subsequently receives the first message from the routing node again and detects the failure recovery event, it can stop executing the method process provided in this embodiment: the load of the first message is no longer transmitted to the SF node through the second proxy node; instead, the method flow provided in the embodiment of FIG. 4 is executed, and the local end transmits the load of the first message to the SF node. This avoids detouring traffic in normal forwarding scenarios.
  • the first proxy node can detect the state of the third outgoing interface; if the state of the third outgoing interface is up, the first proxy node detects that the fourth link has been restored to the connected state.
  • This embodiment provides a method for supporting dynamic proxy services in failure scenarios.
  • the local proxy node re-encapsulates the message that is to be processed by the dynamic proxy, forwards it to the peer proxy node, and instructs the peer proxy node to perform the dynamic proxy operation in place of the local proxy node: the peer strips the SRH from the re-encapsulated message and transmits the load of the re-encapsulated message to the SF node.
  • in this way the local traffic is detoured to the peer to continue forwarding, so that even in the case of a failure the message can still be processed by the SF node and transmitted normally. Therefore, the stability, reliability and robustness of the network are greatly improved.
  • the foregoing method embodiment provides a message transmission process triggered by the first agent node detecting a fault event.
  • a message transmission process triggered after the routing node detects a fault event is also provided.
  • when the routing node senses that the proxy node 1 is faulty, or senses that the link between the routing node and the proxy node 1 is faulty, the method provided in the embodiments of the present application can be executed to ensure that the message still receives the dynamic proxy operation and can be processed by the SF node.
  • FIG. 43 is a flowchart of a message transmission method provided by an embodiment of the present application. The method may include the following steps:
  • Step 4301 the routing node detects a failure event, and sends a first message to the second proxy node.
  • the failure event includes at least one of an event in which the second link fails, an event in which the fourth link fails, or an event in which the first agent node fails.
  • the routing node accesses the second link through the fourth outgoing interface; the routing node can detect the status of the fourth outgoing interface, and if the status of the fourth outgoing interface is down, the routing node detects the event that the second link fails.
  • a monitoring link group (monitor link) may be deployed on the first proxy node. The monitoring link group includes the second link and the fourth link; the second link may be a monitored link in the monitoring link group, and the fourth link may be a monitor link in the monitoring link group.
  • the first proxy node can detect the status of the second outgoing interface. If the status of the second outgoing interface is down, it can be determined that the second link has failed; the first proxy node then sets the status of the third outgoing interface to down, so that the fourth link is shut down synchronously with the failure of the second link, that is, the fourth link and the second link are linked.
  • the proxy node 1 can detect the state of the outgoing interface connected to the SF node; the proxy node 1 sets the state of the outgoing interface connected to the routing node to down, so that the link between the proxy node 1 and the routing node is shut down.
  • the dotted line in FIG. 44 indicates that the closing of the second link causes the closing of the fourth link.
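  • A minimal sketch of the interface linkage provided by such a monitoring link group, assuming only that a down event on the watched member interface triggers an administrative shutdown of the associated member interface (the interface names are placeholders of this sketch):

```python
class MonitorLinkGroup:
    """Sketch: tie the administrative state of one member interface to the
    observed state of another, so that a failure on one link is propagated to
    the other link and becomes visible to the neighbor attached to it."""

    def __init__(self, watched_interface: str, linked_interface: str) -> None:
        self.watched_interface = watched_interface
        self.linked_interface = linked_interface
        self.state = {watched_interface: "up", linked_interface: "up"}

    def on_interface_event(self, interface: str, new_state: str) -> None:
        self.state[interface] = new_state
        if interface == self.watched_interface:
            # Shut down (or restore) the linked interface together with the watched one.
            self.state[self.linked_interface] = new_state

# Example matching the description above: the second outgoing interface going
# down causes the third outgoing interface (and thus the fourth link) to close.
group = MonitorLinkGroup("second_outgoing_interface", "third_outgoing_interface")
group.on_interface_event("second_outgoing_interface", "down")
assert group.state["third_outgoing_interface"] == "down"
```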
  • after detecting the failure event, the routing node can determine that the load of the first packet temporarily cannot be forwarded to the SF node by the first proxy node, and therefore uses the dual-homing access networking architecture to send the first packet to the second proxy node. In this way the load of the first message is transmitted to the second proxy node and is forwarded from the second proxy node to the SF node.
  • the routing node accesses the third link through the fifth outgoing interface; the routing node can select the fifth outgoing interface as the outgoing interface of the first packet and send the first packet through the fifth outgoing interface, so that the first message is transmitted to the second proxy node over the third link.
  • Step 4302 the second proxy node receives the first message from the routing node, and the second proxy node stores the SRH of the first message in the second cache entry.
  • Step 4303 The second proxy node generates a third message according to the first message.
  • Step 4304 The second proxy node sends a third message to the SF node.
  • Step 4305 The SF node receives the third message from the second proxy node, processes the third message, and obtains the fourth message.
  • Step 4306 The SF node sends a fourth message to the second proxy node.
  • Step 4307 The second proxy node receives the fourth message from the SF node, and according to the fourth message, queries the second cache entry to obtain the SRH of the first message.
  • Step 4308 The second proxy node generates a fifth message according to the fourth message and the SRH of the first message.
  • Step 4309 The second proxy node sends the fifth message to the routing node.
  • Step 4310 The routing node receives the fifth message from the second proxy node, and forwards the fifth message.
  • This embodiment provides a method for supporting dynamic proxy services in failure scenarios.
  • when a routing node detects a failure of a link related to a proxy node, it forwards the message to another proxy node, so that the other proxy node performs the dynamic proxy operation in place of the affected proxy node. Even in a fault scenario, the message can therefore be processed by the SF node and transmitted normally, and the stability, reliability and robustness of the network are greatly improved.
  • the foregoing embodiments in FIG. 41 to FIG. 44 show the flow of the method for transmitting packets in failure scenarios. It should be understood that the embodiments of FIG. 41 and FIG. 44 are only examples of failure scenarios, and the method provided in the embodiments of the present application can be applied to other failure scenarios in the same way.
  • the method provided by the embodiments of the present application is self-adaptive, that is, a single set of forwarding procedures can correctly handle the traffic forwarding requirements of various scenarios. For the sake of brevity, only some failure scenarios are described above; the processing of the remaining failure scenarios is similar and is not repeated here.
  • FIG. 45 is a schematic structural diagram of a first proxy node provided by an embodiment of the present application.
  • the first proxy node includes: a receiving module 4501, configured to perform the step of receiving the first message, for example, step 502, step 3502, step 3602, step 3702, step 3802, step 3902, step 4102, or step 4302 in the above method embodiments;
  • a generating module 4502, configured to perform the step of generating the second message, for example, step 503, step 3503, step 3604, step 3704, step 3803, step 3903, or step 4104 in the above method embodiments;
  • a sending module 4503, configured to perform the step of sending the second message, for example, step 504, step 3504, step 3605, step 3705, step 3804, step 3904, or step 4105 in the above method embodiments.
  • the generating module 4502 is configured to insert the first bypass SID into the segment list of the SRH of the first message, increase the number of remaining segments SL of the SRH of the first message by one, and update the destination address of the first message to the first bypass SID, to obtain the second message.
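  • As an illustrative sketch of this encapsulation (the dictionary-based packet model and the insertion position of the bypass SID, directly at the new active-segment index, are assumptions made for the sketch):

```python
import copy

def generate_second_message(first_msg: dict, first_bypass_sid: str) -> dict:
    """Sketch: insert the first bypass SID into the segment list, increase SL
    by one so that the bypass SID becomes the active segment, and rewrite the
    destination address of the message to the first bypass SID."""
    second_msg = copy.deepcopy(first_msg)
    srh = second_msg["srh"]
    new_sl = srh["segments_left"] + 1
    srh["segment_list"].insert(new_sl, first_bypass_sid)   # assumed position
    srh["segments_left"] = new_sl
    second_msg["dst"] = first_bypass_sid
    return second_msg

# Example with hypothetical SIDs (2001:db8::ad = endpoint dynamic proxy SID,
# 2001:db8::b1 = first bypass SID corresponding to the second proxy node).
first = {
    "dst": "2001:db8::ad",
    "srh": {"segment_list": ["2001:db8::e2", "2001:db8::ad"], "segments_left": 1},
    "payload": b"inner packet",
}
second = generate_second_message(first, "2001:db8::b1")
assert second["dst"] == "2001:db8::b1" and second["srh"]["segments_left"] == 2
```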
  • control information is carried in the extension header of the second packet; or, the control information is carried in the first bypass SID; or, the control information is carried in the IP header of the second packet; or, the control information is carried In the TLV in the SRH of the second message.
  • the first proxy node further includes: a storage module, configured to perform the step of storing the SRH of the first message, for example, step 502, step 3502, step 3603, step 3703, or step 3802 in the above method embodiments.
  • the generating module 4502 is also used to perform the step of generating the third message, for example, step 508, step 3508, step 3610, step 3710, step 3808, or step 3908 in the above method embodiment can be performed;
  • the sending module 4503 is also used to perform the step of sending the third message, for example, step 509, step 3509, step 3611, step 3711, step 3809, or step 3909 in the above method embodiments.
  • the storage module is configured to perform the step of storing the SRH with the identifier of the first cache entry as an index. For example, step 3602 and step 3603, or step 3702 and step 3703 in the above method embodiment can be performed.
  • the first proxy node further includes: a determining module, further configured to determine that the service function of the service function node includes modifying the flow identifier; the second message further includes the identifier of the first cache entry, and the control information is further used to instruct the second proxy node to store the mapping relationship between the identifier of the first cache entry and the identifier of the second cache entry, where the second cache entry is the cache entry in which the second proxy node stores the SRH of the first message.
  • the receiving module 4501 is further configured to perform the step of receiving the fourth message, for example, step 3714 in the above method embodiment;
  • the first proxy node further includes: a query module, configured to perform the step of querying to obtain the SRH of the first message, for example, step 3714 in the above method embodiment;
  • the generating module 4502 is also used to perform the step of generating the fifth message, for example, step 3715 in the above method embodiment;
  • the sending module 4503 is also used to perform the step of sending the fifth message, for example, step 3716 in the above method embodiment.
  • the trigger condition for generating the second message includes one or more of the following: detecting that the first cache entry is generated; detecting that the first cache entry is updated.
  • the storage module is configured to perform the step of storing the SRH using the flow identifier corresponding to the first message and the endpoint dynamic proxy SID as an index, for example, step 3802 or step 1802 in the above method embodiment.
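  • A minimal sketch of such an SRH cache indexed by the flow identifier and the endpoint dynamic proxy SID (a plain in-memory dictionary stands in for the cache entry storage of this sketch):

```python
from typing import Dict, Optional, Tuple

class SrhCache:
    """Sketch: cache entries indexed by (flow identifier, End.AD SID), so that
    the SRH stripped from the first message can be looked up again when the
    corresponding message comes back from the SF node."""

    def __init__(self) -> None:
        self._entries: Dict[Tuple[str, str], dict] = {}

    def store(self, flow_id: str, end_ad_sid: str, srh: dict) -> None:
        self._entries[(flow_id, end_ad_sid)] = srh

    def lookup(self, flow_id: str, end_ad_sid: str) -> Optional[dict]:
        return self._entries.get((flow_id, end_ad_sid))

# Example with a hypothetical five-tuple style flow identifier and End.AD SID.
cache = SrhCache()
cache.store("10.0.0.1->10.0.0.2:80/tcp", "2001:db8::ad",
            {"segment_list": ["2001:db8::e2", "2001:db8::ad"], "segments_left": 1})
assert cache.lookup("10.0.0.1->10.0.0.2:80/tcp", "2001:db8::ad") is not None
```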
  • the receiving module 4501 is used to perform the step of receiving configuration instructions; the storage module is used to perform the step of storing the mapping relationship between the endpoint dynamic proxy SID and the VPN identifier.
  • the flow identifier corresponding to the third message is the flow identifier corresponding to the first message, and the third message includes the VPN identifier.
  • the receiving module 4501 is also used to perform the step of receiving the fourth message, as in the above method embodiments;
  • the first proxy node further includes: a query module, configured to perform the step of querying the mapping relationship between the endpoint dynamic proxy SID and the VPN identifier, for example, step 3912 in the above method embodiment, and further configured to perform the step of querying to obtain the SRH of the first packet, for example, step 3913 in the above method embodiment;
  • the generating module 4502 is also used to perform the step of generating the fifth packet, for example, step 3914 in the above method embodiment;
  • the sending module 4503 is also used to perform the step of sending the fifth message, for example, step 3915 in the above method embodiment.
  • the second message further includes the load of the first message, and the control information is further used to instruct the second proxy node to send the load of the first message to the service function node;
  • the first proxy node further includes: a detection module, configured to perform the step of detecting the failure event, for example, step 4102 in the above method embodiment.
  • the first proxy node is connected to the second proxy node through the first link, and the sending module 4503 is configured to send the second message to the second proxy node through the first outbound interface corresponding to the first link;
  • the first proxy node is connected to the routing node through the second link, and the routing node is connected to the second proxy node through the third link.
  • the sending module 4503 is configured to send the second message to the routing node through the second outbound interface corresponding to the second link, and the second message is forwarded by the routing node to the second proxy node through the third link.
  • the detection module is also used to detect the status of the first link; the sending module 4503 is configured to: if the first link is in an available state, send the second message to the second proxy node through the first outgoing interface corresponding to the first link; or, if the first link is in an unavailable state, send the second message to the routing node through the second outgoing interface corresponding to the second link.
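  • A sketch of this selection logic, assuming only a boolean view of the first link's state (the interface names are placeholders of the sketch):

```python
def select_outgoing_interface_for_second_message(first_link_available: bool) -> str:
    """Sketch: prefer the direct first link to the second proxy node; if it is
    unavailable, fall back to the second link so that the routing node relays
    the second message over the third link."""
    if first_link_available:
        return "first_outgoing_interface"     # direct path to the second proxy node
    return "second_outgoing_interface"        # detour via the routing node

assert select_outgoing_interface_for_second_message(True) == "first_outgoing_interface"
assert select_outgoing_interface_for_second_message(False) == "second_outgoing_interface"
```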
  • the sending module 4503 is configured to continuously send multiple second messages to the second proxy node; or, when the sixth message is not received, retransmit the second message to the second proxy node.
  • the source address of the sixth message is the first bypass SID, the destination address of the sixth message is the second bypass SID, the sixth message may include the second bypass SID corresponding to the first proxy node, and the sixth message is used to indicate that the second message has been received.
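  • As a hedged sketch of this acknowledgement-driven retransmission (send_second_message and wait_for_sixth_message are hypothetical callables, and the retry bound is an assumption, since the embodiments do not specify one):

```python
def send_until_acknowledged(send_second_message, wait_for_sixth_message,
                            max_retries: int = 3) -> bool:
    """Sketch: (re)send the second message until a sixth message confirms that
    it was received, or until the assumed retry budget is exhausted."""
    for _ in range(max_retries + 1):
        send_second_message()
        if wait_for_sixth_message():          # True once the sixth message arrives
            return True
    return False
```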
  • the first proxy node provided in the embodiment of FIG. 45 corresponds to the first proxy node in the foregoing method embodiments; each module in the first proxy node and the other operations and/or functions described above are respectively used to implement the steps and methods performed by the first proxy node in the method embodiments. For specific details, please refer to the foregoing method embodiments; for brevity, they are not repeated here.
  • when the first proxy node provided in the embodiment of FIG. 45 transmits messages, the division into the above-mentioned functional modules is used only as an example for illustration. In practical applications, the above-mentioned functions can be allocated to different functional modules as needed, that is, the internal structure of the first proxy node can be divided into different functional modules to complete all or part of the functions described above.
  • the first proxy node provided in the foregoing embodiment belongs to the same concept as the foregoing message transmission method embodiment, and its specific implementation process is detailed in the method embodiment, and will not be repeated here.
  • FIG. 46 is a schematic structural diagram of a second proxy node provided by an embodiment of the present application. As shown in FIG. 46, the second proxy node includes:
  • the receiving module 4601 is configured to perform the step of receiving the second message, for example, step 505, step 3505, step 3606, step 3709, step 3805, step 3905, or step 4106 in the above method embodiment can be performed;
  • the determining module 4602 is configured to perform the step of determining that the control information instructs the second proxy node to store the SRH of the first message, for example, step 505, step 3505, step 3606, step 3706, step 3805, step 3905, or step 4106 in the above method embodiment;
  • the parsing module 4603 is configured to perform the step of parsing the second message, for example, step 506, step 3506, step 3607, step 3707, step 3806, step 3906, or step 4107 in the above method embodiment can be performed;
  • the storage module 4604 is configured to perform the step of storing the SRH of the first message. For example, step 507, step 3507, step 3609, step 3709, step 3807, step 3907, step 4108, or step 4302 in the foregoing method embodiment can be performed.
  • the parsing module 4603 is configured to: delete the first detour SID from the segment list in the SRH of the second message; subtract one from the SL in the SRH of the second message to obtain the SRH of the first message .
  • control information is carried in the extension header of the second packet; or, the control information is carried in the first bypass SID; or, the control information is carried in the IP header of the second packet; or, the control information is carried In the TLV in the SRH of the second message.
  • the storage module 4604 is configured to perform the step of storing the SRH with the identifier of the second cache entry as an index, for example, step 3608 and step 3609, or step 3708 and step 3709 in the foregoing method embodiment.
  • the determining module 4602 is further configured to determine that the service function of the service function node includes modifying the flow identifier;
  • the second message further includes the identifier of the first cache entry, where the first cache entry is the cache entry in which the first proxy node stores the SRH of the first message;
  • the determining module 4602 is also configured to determine that the control information further instructs the second proxy node to store the mapping relationship between the identifier of the first cache entry and the identifier of the second cache entry;
  • the storage module 4604 is also configured to store the mapping relationship between the identifier of the first cache entry and the identifier of the second cache entry.
  • the receiving module 4601 is further configured to perform the step of receiving the fourth message carrying the identifier of the first cache entry;
  • the second proxy node further includes: a query module, configured to query, according to the identifier of the first cache entry, the mapping relationship between the identifier of the first cache entry and the identifier of the second cache entry to obtain the identifier of the second cache entry corresponding to the identifier of the first cache entry; the query module is also configured to query the second cache entry, using the identifier of the second cache entry as an index, to obtain the SRH of the first message;
  • the generating module is configured to generate the fifth message according to the fourth message and the SRH of the first message, where the fifth message includes the payload of the fourth message and the SRH of the first message; the sending module is configured to send the fifth message.
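  • A minimal sketch of this two-step lookup performed on receipt of the fourth message (dictionaries stand in for the cache entries; the identifiers are hypothetical):

```python
def build_fifth_message(fourth_msg: dict, first_to_second_id: dict,
                        second_cache: dict) -> dict:
    """Sketch: map the identifier of the first cache entry carried in the
    fourth message to the local second cache entry identifier, fetch the
    stored SRH, and re-encapsulate the payload of the fourth message with it."""
    second_id = first_to_second_id[fourth_msg["first_cache_entry_id"]]
    srh = second_cache[second_id]
    return {"srh": srh, "payload": fourth_msg["payload"]}

first_to_second_id = {"entry-17": "entry-3"}            # hypothetical identifiers
second_cache = {"entry-3": {"segment_list": ["2001:db8::e2", "2001:db8::ad"],
                            "segments_left": 1}}
fourth = {"first_cache_entry_id": "entry-17", "payload": b"processed packet"}
fifth = build_fifth_message(fourth, first_to_second_id, second_cache)
assert fifth["srh"]["segments_left"] == 1
```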
  • the flow identifier corresponding to the second message is the flow identifier corresponding to the first message, and the SRH of the first message includes the endpoint dynamic proxy SID;
  • the storage module 4604 is configured to perform the step of storing the SRH using the flow identifier corresponding to the first message and the endpoint dynamic proxy SID as an index, for example, step 3807 or step 3907 in the above method embodiment.
  • the receiving module 4601 is also configured to receive a configuration instruction, where the configuration instruction includes the endpoint dynamic proxy SID corresponding to the service function node in each virtual private network VPN; the storage module 4604 is also configured to store the mapping relationship between the endpoint dynamic proxy SID and the VPN identifier according to the configuration instruction.
  • the receiving module 4601 is further configured to perform the step of receiving the fourth message, for example, in step 3812;
  • the second proxy node further includes: a query module, configured to perform the step of querying the SRH according to the VPN identifier, the endpoint dynamic proxy SID, and the flow identifier corresponding to the fourth message, for example, step 3812;
  • the generating module is also configured to execute step 3813, and the sending module is also configured to execute step 3814.
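  • A sketch of the lookup chain described above, in which the VPN identifier first resolves the endpoint dynamic proxy SID, and that SID together with the flow identifier of the fourth message then indexes the SRH cache (the identifiers and the dictionary-based storage are assumptions of the sketch):

```python
def lookup_srh_by_vpn(vpn_id: str, flow_id_of_fourth_msg: str,
                      vpn_to_end_ad_sid: dict, srh_cache: dict):
    """Sketch: VPN identifier -> endpoint dynamic proxy SID, then
    (flow identifier, End.AD SID) -> cached SRH of the first message."""
    end_ad_sid = vpn_to_end_ad_sid[vpn_id]
    return srh_cache.get((flow_id_of_fourth_msg, end_ad_sid))

vpn_to_end_ad_sid = {"vpn-a": "2001:db8::ad"}   # stored from the configuration instruction
srh_cache = {("192.0.2.1->198.51.100.7:443/tcp", "2001:db8::ad"):
             {"segment_list": ["2001:db8::e2", "2001:db8::ad"], "segments_left": 1}}
srh = lookup_srh_by_vpn("vpn-a", "192.0.2.1->198.51.100.7:443/tcp",
                        vpn_to_end_ad_sid, srh_cache)
assert srh is not None
```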
  • the second proxy node further includes: a determining module 4602, further configured to determine that the control information further instructs the second proxy node to send the load of the first packet to the service function node, for example, by performing step 4106; the generating module is further configured to perform the step of generating the third message, for example, step 4109; the sending module is also configured to perform step 4110.
  • the second proxy node is connected to the first proxy node through the first link, and the receiving module 4601 is configured to receive the second packet from the first proxy node through the first inbound interface corresponding to the first link;
  • the second proxy node is connected to the routing node through the third link, and the routing node is connected to the first proxy node through the second link; the receiving module 4601 is configured to receive the second message from the routing node through the second inbound interface corresponding to the third link, where the second message is sent by the first proxy node to the routing node through the second link.
  • the receiving module 4601 is configured to continuously receive multiple second messages from the first proxy node; or, the second proxy node further includes: a generating module, configured to generate the sixth message according to the first bypass SID, the second bypass SID, and the second message; the sending module is also configured to send the sixth message to the first proxy node, where the source address of the sixth message is the first bypass SID, the sixth message includes the second bypass SID corresponding to the first proxy node, and the sixth message is used to indicate that the second message has been received.
  • the second proxy node provided in the embodiment of FIG. 46 corresponds to the second proxy node in the foregoing method embodiments; each module in the second proxy node and the other operations and/or functions described above are respectively intended to implement the various steps and methods performed by the second proxy node in the method embodiments. For specific details, please refer to the above method embodiments; for the sake of brevity, they are not repeated here.
  • when the second proxy node provided in the embodiment of FIG. 46 transmits messages, the division into the above-mentioned functional modules is used only as an example for illustration. In practical applications, the above-mentioned functions can be allocated to different functional modules as needed, that is, the internal structure of the second proxy node can be divided into different functional modules to complete all or part of the functions described above.
  • the second proxy node provided in the foregoing embodiment belongs to the same concept as the foregoing message transmission method embodiment, and its specific implementation process is detailed in the method embodiment, which will not be repeated here.
  • the first proxy node and the second proxy node provided in the embodiments of the present application are described above, and the possible product forms of the first proxy node and the second proxy node are described below. It should be understood that all products of any form that have the characteristics of the above-mentioned first agent node and all products of any form that have the characteristics of the second agent node fall within the scope of protection of this application. It should also be understood that the following introduction is only an example, and does not limit the product form of the first agent node and the second agent node in the embodiment of the present application.
  • the embodiment of the present application provides a proxy node, and the proxy node may be a first proxy node or a second proxy node.
  • the proxy node includes a processor, and the processor is configured to execute instructions so that the proxy node executes the message transmission method provided by the foregoing method embodiments.
  • the processor may be a network processor (Network Processor, NP for short), a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits used to control the execution of the program of the present application.
  • the processor can be a single-CPU processor or a multi-CPU processor. The number of processors can be one or more.
  • the proxy node may also include a memory.
  • the memory may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory and the processor can be set separately, and the memory and the processor can also be integrated.
  • the proxy node may also include a transceiver.
  • the transceiver is used to communicate with other devices or communication networks, and the way of network communication can be but not limited to Ethernet, wireless access network (RAN), wireless local area networks (WLAN), etc.
  • the communication interface can be used to receive messages sent by other nodes, and can also send messages to other nodes.
  • the foregoing proxy node may be implemented as a network device, and a network processor in the network device may execute each step of the foregoing method embodiment.
  • the network device may be a router, a switch, or a firewall.
  • FIG. 47 shows a schematic structural diagram of a network device provided by an exemplary embodiment of the present application.
  • the network device may be configured as a first proxy node or a second proxy node.
  • the network device 4700 includes: a main control board 4710, an interface board 4730, a switching network board 4720, and an interface board 4740.
  • the main control board 4710 is used to complete functions such as system management, equipment maintenance, and protocol processing.
  • the switching network board 4720 is used to complete data exchange between various interface boards (interface boards are also called line cards or service boards).
  • the interface boards 4730 and 4740 are used to provide various service interfaces (for example, Ethernet interface, POS interface, etc.), and implement data packet forwarding.
  • the main control board 4710, the interface boards 4730 and 4740, and the switching network board 4720 are connected to the system backplane through the system bus to achieve intercommunication.
  • the central processing unit 4731 on the interface board 4730 is used to control and manage the interface board and communicate with the central processing unit 4711 on the main control board 4710.
  • the physical interface card 4733 receives the first message and sends it to the network processor 4732.
  • the network processor 4732 generates the second message according to the endpoint dynamic proxy SID, the first message, and the first bypass SID corresponding to the second proxy node; then, according to information such as the outgoing interface, after the link layer encapsulation is completed, the second message is sent out from the physical interface card 4733, so that the second message is transmitted to the second proxy node.
  • for example, the network processor 4732 may update the SL of the SRH of the first message and update the destination address of the first message to obtain the second message, where the SL of the second message is greater than the SL of the first message.
  • the network processor 4732 may increase the SL of the first message by one; update the destination address of the first message to the first bypass SID.
  • the network processor 4732 may insert one or more target SIDs into the segment list of the SRH of the first message, where the one or more target SIDs are used to indicate a target forwarding path, and the target forwarding path is the path between the first proxy node and the second proxy node; the network processor 4732 adds N to the SL of the first message, where N is a positive integer greater than 1, and updates the destination address of the first message to the SID, among the one or more target SIDs, that corresponds to the next SR node.
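  • An illustrative sketch of this variant, in which N target SIDs describing an explicit path to the second proxy node are pushed into the segment list and SL is increased by N (the reverse-order segment list convention, the insertion position, and the SIDs are assumptions of the sketch):

```python
import copy

def generate_second_message_via_path(first_msg: dict, target_sids: list) -> dict:
    """Sketch: insert N target SIDs indicating the target forwarding path to
    the second proxy node, add N to SL, and set the destination address to the
    SID of the next SR node on that path (the newly active segment)."""
    msg = copy.deepcopy(first_msg)
    srh = msg["srh"]
    insert_at = srh["segments_left"] + 1                 # assumed insertion position
    srh["segment_list"][insert_at:insert_at] = target_sids
    srh["segments_left"] += len(target_sids)
    msg["dst"] = srh["segment_list"][srh["segments_left"]]
    return msg

first = {
    "dst": "2001:db8::ad",
    "srh": {"segment_list": ["2001:db8::e2", "2001:db8::ad"], "segments_left": 1},
    "payload": b"inner packet",
}
# Hypothetical target SIDs in reverse order: 2001:db8::c1 (the second proxy
# node, visited last) and 2001:db8::b1 (an intermediate SR node, visited
# first and therefore becoming the new destination address), i.e. N = 2.
second = generate_second_message_via_path(first, ["2001:db8::c1", "2001:db8::b1"])
assert second["srh"]["segments_left"] == 3 and second["dst"] == "2001:db8::b1"
```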
  • control information is carried in the extension header of the second packet; or, the control information is carried in the first bypass SID; or, the control information is carried in the IP of the second packet Header; or, the control information is carried in the TLV in the SRH of the second message.
  • the network processor 4732 may store the SRH of the first message in the forwarding entry memory 4734.
  • the network processor 4732 may generate a third message according to the first message, and according to information such as the outbound interface, after the link layer encapsulation is completed, the third message is sent from the physical interface card 4733, so that the third message is transmitted To the SF node.
  • the network processor 4732 may use the identifier of the first cache entry as an index to store the SRH of the first message in the forwarding entry memory 4734.
  • the second message further includes the identifier of the first cache entry, and the control information is also used to instruct the second proxy node to store the mapping relationship between the identifier of the first cache entry and the identifier of the second cache entry, where the second cache entry is the cache entry in which the second proxy node stores the SRH of the first message.
  • the physical interface card 4733 receives the fourth message and sends the fourth message to the network processor 4732; the network processor 4732 queries, using the identifier of the first cache entry as an index, to obtain the SRH of the first message; the network processor 4732 generates the fifth message according to the fourth message and the SRH of the first message, where the fifth message includes the payload of the fourth message and the SRH of the first message; the network processor 4732 sends the fifth message out from the physical interface card 4733.
  • the network processor 4732 generates the second message when detecting that the first cache entry is generated
  • the network processor 4732 generates the second message when detecting that the first cache entry is updated.
  • the network processor 4732 uses the flow identifier corresponding to the first message and the endpoint dynamic proxy SID as indexes, and stores the SRH of the first message in the forwarding entry memory 4734.
  • the central processing unit 4711 on the main control board 4710 may receive the configuration instruction, and according to the configuration instruction, store the mapping relationship between the endpoint dynamic agent SID and the VPN identifier in the forwarding entry memory 4734.
  • the physical interface card 4733 receives the fourth message; the network processor 4732 queries the forwarding entry memory 4734 according to the VPN identifier to obtain the endpoint dynamic proxy SID, and then queries the forwarding entry memory 4734 according to the flow identifier corresponding to the first message and the endpoint dynamic proxy SID to obtain the SRH of the first message; the network processor 4732 generates the fifth message according to the fourth message and the SRH of the first message, and sends the fifth message out from the physical interface card 4733.
  • the network processor 4732 may detect a failure event, and the failure event includes at least one of an event in which the link between the first agent node and the service function node fails or an event in which the first agent node fails.
  • the network processor 4732 detects the status of the first link; if the first link is available, the network processor 4732 selects the physical interface card 4733 to pass the first output corresponding to the first link. Interface, the second packet is sent out from the first outbound interface of the physical interface card 4733; or, if the first link is in an unavailable state, the network processor 4732 selects the physical interface card 4733 to pass through the corresponding second link The second outgoing interface sends out the second packet from the second outgoing interface of the physical interface card 4733.
  • the network processor 4732 continuously generates multiple second messages and sends them to the physical interface card 4733, and the physical interface card 4733 continuously sends the multiple second messages to the second proxy node; or, when the sixth message is not received, the network processor 4732 retransmits the second message to the second proxy node.
  • the physical interface card 4733 receives the second message and sends it to the network processor 4732; the network processor 4732 determines that the control information instructs the second proxy node to store the SRH of the first message, parses the second message to obtain the SRH of the first message, and stores the SRH of the first message in the second cache entry in the forwarding entry memory 4734.
  • the network processor 4732 deletes the first bypass SID from the segment list in the SRH of the second message; subtracts one from the number of remaining segments SL in the SRH of the second message to obtain the first message SRH.
  • control information is carried in the extension header of the second packet; or, the control information is carried in the first bypass SID; or, the control information is carried in the IP of the second packet Header; or, the control information is carried in the TLV in the SRH of the second message.
  • the network processor 4732 uses the identifier of the second cache entry as an index to store the SRH of the first message in the second cache entry in the forwarding entry memory 4734.
  • the network processor 4732 determines that the service function of the SF node includes modifying the flow identifier; the network processor 4732 determines that the control information further instructs the second proxy node to store the mapping relationship between the identifier of the first cache entry and the identifier of the second cache entry, and stores the mapping relationship between the identifier of the first cache entry and the identifier of the second cache entry in the forwarding entry memory 4734.
  • the physical interface card 4733 receives the fourth message and sends it to the network processor 4732; the network processor 4732 queries, according to the identifier of the first cache entry, the mapping relationship between the identifier of the first cache entry and the identifier of the second cache entry in the forwarding entry memory 4734, to obtain the identifier of the second cache entry corresponding to the identifier of the first cache entry; the network processor 4732 then queries, using the identifier of the second cache entry as an index, the second cache entry in the forwarding entry memory 4734 to obtain the SRH of the first message; the network processor 4732 generates the fifth message according to the fourth message and the SRH of the first message, and, according to information such as the outgoing interface, after the link layer encapsulation is completed, sends the fifth message out from the physical interface card 4733.
  • the network processor 4732 uses the flow identifier corresponding to the second packet and the endpoint dynamic proxy SID as indexes, and stores the SRH of the first packet in the second cache entry.
  • the central processing unit 4711 on the main control board 4710 may receive the configuration instruction, and according to the configuration instruction, store the mapping relationship between the endpoint dynamic agent SID and the VPN identifier in the forwarding entry memory 4734.
  • the physical interface card 4733 receives the fourth message and sends it to the network processor 4732; the network processor 4732 queries the second cache entry in the forwarding entry memory 4734 according to the VPN identifier, the endpoint dynamic proxy SID, and the flow identifier corresponding to the fourth message, to obtain the SRH of the first message; the network processor 4732 generates the fifth message according to the fourth message and the SRH of the first message, and, according to information such as the outgoing interface, after the link layer encapsulation is completed, sends the fifth message out from the physical interface card 4733.
  • the network processor 4732 determines that the control information further instructs the second proxy node to send the load of the first packet to the service function node; the network processor 4732 generates a third packet according to the second packet, and the network processor The 4732 sends the third message from the physical interface card 4733, so that the third message is transmitted to the SF node.
  • the physical interface card 4733 receives the second packet through the first inbound interface corresponding to the first link; or, the physical interface card 4733 receives the second packet through the second inbound interface corresponding to the third link. Two messages.
  • the physical interface card 4733 continuously receives a plurality of the second messages and sends them to the network processor 4732.
  • the network processor 4732 generates a sixth message according to the first detour SID, the second detour SID corresponding to the second proxy node, and the second message, and the network processor 4732 generates a sixth message according to the outbound interface, etc. Information, after the link layer encapsulation is completed, the sixth message will be sent out through the physical interface card 4733.
  • the operations on the interface board 4740 in the embodiment of the present application are consistent with the operations on the interface board 4730, and will not be repeated for the sake of brevity.
  • the network device 4700 of this embodiment may correspond to the first proxy node or the second proxy node in the foregoing method embodiments, and the main control board 4710 and the interface boards 4730 and/or 4740 in the network device 4700 may implement the functions and/or the various steps implemented by the first proxy node or the second proxy node in the foregoing method embodiments, which are not repeated here.
  • there may be one or more main control boards; when there are more than one, they may include an active main control board and a standby main control board.
  • the switching network board may not exist, or there may be one or more. When there are more than one, the load sharing and redundant backup can be realized together. Under the centralized forwarding architecture, the network equipment does not need to switch the network board, and the interface board undertakes the processing function of the business data of the entire system.
  • the network device can have at least one switching network board, and data exchange between multiple interface boards is realized through the switching network board, providing large-capacity data exchange and processing capabilities. Therefore, the data access and processing capabilities of network equipment with a distributed architecture are greater than those with a centralized architecture.
  • the network device may also have only one board, that is, there is no switching network board, and the functions of the interface board and the main control board are integrated on the single board. In this case, the central processing unit on the interface board and the central processing unit on the main control board can be combined into one central processing unit on the single board to perform the functions obtained by superposing the two. The data exchange and processing capability of this form of device is low (for example, network devices such as low-end switches or routers).
  • the specific architecture used depends on the specific networking deployment scenario, and there is no restriction here.
  • the foregoing proxy node may be implemented as a computing device, and the central processing unit in the computing device may execute each step of the foregoing method embodiment.
  • the computing device may be a host, a server, or a personal computer.
  • the computing device can be implemented by a general bus architecture.
  • FIG. 48 shows a schematic structural diagram of a computing device provided by an exemplary embodiment of the present application.
  • the computing device may be configured as a first agent node or a second agent node.
  • the computing device 4800 includes a processor 4810, a transceiver 4820, a random access memory 4840, a read-only memory 4850, and a bus 4860.
  • the processor 4810 is respectively coupled to the receiver/transmitter 4820, the random access memory 4840, and the read-only memory 4850 through the bus 4860.
  • the basic input output system solidified in the read-only memory 4850 or the bootloader in the embedded system is used to guide the system to start, and the computing device 4800 is guided to enter a normal operating state.
  • the application program and the operating system are run in the random access memory 4840, so that:
  • the transceiver 4820 receives the first message, and sends it to the processor 4810.
  • the processor 4810 generates the second message according to the endpoint dynamic proxy SID, the first message, and the first bypass SID corresponding to the second proxy node; then, according to information such as the outgoing interface, after the link layer encapsulation is completed, the second message is sent out from the transceiver 4820, so that the second message is transmitted to the second proxy node.
  • for example, the processor 4810 may update the remaining segment number SL of the SRH of the first message and update the destination address of the first message to obtain the second message, where the SL of the second message is greater than the SL of the first message.
  • the processor 4810 may increase the SL of the first message by one; update the destination address of the first message to the first bypass SID.
  • the processor 4810 may insert one or more target SIDs into the segment list of the SRH of the first message, where the one or more target SIDs are used to indicate a target forwarding path, and the target forwarding path is the first The path between a proxy node and the second proxy node; add N to the SL of the first message, where N is a positive integer greater than 1, update the destination address of the first message to the one or more The target SID corresponds to the SID of the next SR node.
  • control information is carried in the extension header of the second packet; or, the control information is carried in the first bypass SID; or, the control information is carried in the IP of the second packet Header; or, the control information is carried in the TLV in the SRH of the second message.
  • the processor 4810 may store the SRH of the first message in the random access memory 4840.
  • the processor 4810 may generate the third message according to the first message, and, according to information such as the outgoing interface, after the link layer encapsulation is completed, send the third message out from the transceiver 4820, so that the third message is transmitted to the SF node.
  • the processor 4810 may use the identifier of the first cache entry as an index to store the SRH of the first message in the random access memory 4840.
  • the second message further includes the identifier of the first cache entry, and the control information is also used to instruct the second proxy node to store the mapping relationship between the identifier of the first cache entry and the identifier of the second cache entry, where the second cache entry is the cache entry in which the second proxy node stores the SRH of the first message.
  • the transceiver 4820 receives the fourth message, and sends the fourth message to the processor 4810.
  • the processor 4810 uses the identifier of the first cache entry as an index to query and obtain the SRH of the first message.
  • the processor 4810 generates a fifth message according to the fourth message and the SRH of the first message, and the fifth message includes the payload of the fourth message and the SRH of the first message; the processor 4810 The fifth message is sent out from the transceiver 4820.
  • the processor 4810 generates the second message when detecting that the first cache entry is generated
  • the processor 4810 generates the second message when detecting that the first cache entry is updated.
  • the processor 4810 uses the flow identifier corresponding to the first message and the endpoint dynamic proxy SID as indexes, and stores the SRH of the first message in the random access memory 4840.
  • the processor 4810 may receive a configuration instruction, and according to the configuration instruction, store the mapping relationship between the endpoint dynamic proxy SID and the VPN identifier in the random access memory 4840.
  • the transceiver 4820 receives the fourth message; the processor 4810 queries the random access memory 4840 according to the VPN identifier to obtain the endpoint dynamic proxy SID, and then queries the random access memory 4840 according to the flow identifier corresponding to the first message and the endpoint dynamic proxy SID to obtain the SRH of the first message; the processor 4810 generates the fifth message according to the fourth message and the SRH of the first message, and sends the fifth message out from the transceiver 4820.
  • the processor 4810 may detect a failure event, and the failure event includes at least one of an event in which the link between the first agent node and the service function node fails or an event in which the first agent node fails.
  • the processor 4810 detects the state of the first link; if the first link is in an available state, the processor 4810 selects the first outgoing interface corresponding to the first link in the transceiver 4820 to transfer The second message is sent from the first outgoing interface of the transceiver 4820; or, if the first link is in an unavailable state, the processor 4810 selects the second outgoing interface corresponding to the second link in the transceiver 4820 to send The second message is sent from the second outgoing interface of the transceiver 4820.
  • the processor 4810 continuously generates a plurality of the second messages and sends them to the transceiver 4820, and the transceiver 4820 continuously sends a plurality of the second messages to the second proxy node.
  • when the transceiver 4820 does not receive the sixth message, the processor 4810 retransmits the second message to the second proxy node.
  • the transceiver 4820 receives the second message and sends it to the processor 4810, and the processor 4810 determines that the control information instructs the second proxy node to store the SRH of the first message; parse the second message To obtain the SRH of the first message; store the SRH of the first message to the second cache entry in the random access memory 4840.
  • the processor 4810 deletes the first bypass SID from the segment list in the SRH of the second message; subtracts one from the number of remaining segments SL in the SRH of the second message to obtain the first message SRH.
  • control information is carried in the extension header of the second packet; or, the control information is carried in the first bypass SID; or, the control information is carried in the IP of the second packet Header; or, the control information is carried in the TLV in the SRH of the second message.
  • the processor 4810 uses the identifier of the second cache entry as an index to store the SRH of the first message in the second cache entry in the random access memory 4840.
  • the processor 4810 determines that the service function of the SF node includes modifying the flow identifier; the processor 4810 determines that the control information further instructs the second proxy node to store the mapping relationship between the identifier of the first cache entry and the identifier of the second cache entry, and stores the mapping relationship between the identifier of the first cache entry and the identifier of the second cache entry in the random access memory 4840.
  • the transceiver 4820 receives the fourth message and sends it to the processor 4810; the processor 4810 queries, according to the identifier of the first cache entry, the mapping relationship between the identifier of the first cache entry and the identifier of the second cache entry to obtain the identifier of the second cache entry corresponding to the identifier of the first cache entry; the processor 4810 then queries, using the identifier of the second cache entry as an index, the second cache entry in the random access memory 4840 to obtain the SRH of the first message; the processor 4810 generates the fifth message according to the fourth message and the SRH of the first message, and, according to information such as the outgoing interface, after the link layer encapsulation is completed, sends the fifth message out from the transceiver 4820.
  • the processor 4810 uses the flow identifier corresponding to the second packet and the endpoint dynamic proxy SID as indexes, and stores the SRH of the first packet in the second cache entry.
  • the processor 4810 may receive a configuration instruction, and according to the configuration instruction, store the mapping relationship between the endpoint dynamic proxy SID and the VPN identifier in the random access memory 4840.
  • the transceiver 4820 receives the fourth message and sends it to the processor 4810.
  • the processor 4810 queries the second cache entry in the random access memory 4840 according to the VPN identifier, the endpoint dynamic proxy SID, and the flow identifier corresponding to the fourth message, to obtain the SRH of the first message; the processor 4810 generates the fifth message according to the fourth message and the SRH of the first message, and, according to information such as the outgoing interface, after the link layer encapsulation is completed, sends the fifth message out from the transceiver 4820.
  • the processor 4810 determines that the control information further instructs the second proxy node to send the payload of the first packet to the service function node; the processor 4810 generates the third packet according to the second packet and sends the third packet out from the transceiver 4820, so that the third packet is transmitted to the SF node.
  • the transceiver 4820 receives the second packet through the first inbound interface corresponding to the first link; or, the transceiver 4820 receives the second packet through the second inbound interface corresponding to the third link.
  • the transceiver 4820 continuously receives multiple second messages and sends them to the processor 4810.
  • the processor 4810 generates a sixth packet according to the first bypass SID, the second bypass SID corresponding to the second proxy node, and the second packet; according to information such as the outbound interface, the processor 4810 completes the link layer encapsulation and sends the sixth packet out through the transceiver 4820.
  • the computing device in the embodiment of the present application may correspond to the first agent node or the second agent node in the foregoing method embodiments, and the processor 4810, the transceiver 4820, and the like in the computing device may implement the functions and/or the various steps and methods implemented by the first agent node or the second agent node in the foregoing method embodiments; for the sake of brevity, details are not repeated here.
  • the foregoing proxy node may be implemented as a virtualized device.
  • the virtualization device may be a virtual machine (English: Virtual Machine, VM) running a program for sending messages, and the virtual machine is deployed on a hardware device (for example, a physical server).
  • a virtual machine refers to a complete computer system with complete hardware system functions that is simulated by software and runs in a completely isolated environment.
  • the virtual machine can be configured as the first agent node or the second agent node.
  • the first proxy node or the second proxy node can be implemented based on a general physical server combined with NFV technology.
  • the first agent node or the second agent node is a virtual host, a virtual router, or a virtual switch.
  • by reading this application, those skilled in the art can use NFV technology to virtualize the first agent node or the second agent node with the above-mentioned functions on a general-purpose physical server; details are not repeated here.
  • the virtualization device may be a container, and the container is an entity used to provide an isolated virtualization environment.
  • the container may be a docker container.
  • the container can be configured as the first agent node or the second agent node.
  • the proxy node can be created through the corresponding image.
  • two container instances can be created through the image of proxy-container (a container providing proxy services), namely the container instance proxy-container1 and the container instance proxy-container2; the container instance proxy-container1 is provided as the first proxy node, and the container instance proxy-container2 is provided as the second proxy node.
  • when implemented using container technology, the agent node can run using the kernel of the physical machine, and multiple agent nodes can share the operating system of the physical machine; different agent nodes can be isolated through container technology.
  • the containerized proxy node can run in a virtualized environment, for example, it can run in a virtual machine, and the containerized proxy node can also run directly in a physical machine.
  • the virtualization device may be a Pod; a Pod is the basic unit used by Kubernetes (a container orchestration engine open-sourced by Google, abbreviated as K8s) for deploying, managing, and orchestrating containerized applications.
  • Pod can include one or more containers. Each container in the same Pod is usually deployed on the same host, so each container in the same Pod can communicate through the host, and can share storage resources and network resources of the host.
  • Pod can be configured as the first proxy node or the second proxy node.
  • container as a service (English full name: container as a service, English abbreviation: CaaS) is a PaaS service based on containers.
  • proxy node can also be other virtualized devices, which are not listed here.
  • the above-mentioned proxy node may also be implemented by a general-purpose processor.
  • the general-purpose processor may be in the form of a chip.
  • the general-purpose processor that implements the first agent node or the second agent node includes a processing circuit, and an input interface and an output interface that are internally connected to and communicate with the processing circuit; the processing circuit is used to execute the receiving steps in the foregoing method embodiments through the input interface, and to execute the sending steps in the foregoing method embodiments through the output interface.
  • the general-purpose processor may further include a storage medium, and the processing circuit is configured to execute the storage steps in each of the foregoing method embodiments through the storage medium.
  • the first agent node or the second agent node in the embodiment of this application can also be implemented as follows: one or more field-programmable gate arrays (English full name: field-programmable gate array, English abbreviation: FPGA), programmable logic devices (English full name: programmable logic device, English abbreviation: PLD), controllers, state machines, gate logic, discrete hardware components, any other suitable circuit, or any combination of circuits capable of executing the various functions described throughout this application.
  • the above-mentioned proxy node may also be implemented using a computer program product.
  • an embodiment of the present application provides a computer program product, which when the computer program product runs on a first agent node, causes the first agent node to execute the message transmission method in the foregoing method embodiment.
  • the embodiment of the present application also provides a computer program product, which when the computer program product runs on a second agent node, causes the second agent node to execute the message transmission method in the foregoing method embodiment.
  • first agent node or the second agent node of the above-mentioned various product forms respectively have any function of the first agent node or the second agent node in the above method embodiment, and will not be repeated here.
  • the disclosed system, device, and method may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may also be electrical, mechanical or other forms of connection.
  • the unit described as a separate component may or may not be physically separated, and the component displayed as a unit may or may not be a physical unit, that is, it may be located in one place, or may also be distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present application.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • the technical solution of this application, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product; the computer software product is stored in a storage medium and includes several instructions to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods in the embodiments of the present application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (read-only memory, ROM), a random access memory (random access memory, RAM), a magnetic disk, an optical disc, or other media that can store program code.
  • the computer program product includes one or more computer program instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
  • the computer instructions can be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium.
  • the computer program instructions can be transmitted from one website, computer, server, or data center to another website, computer, server, or data center.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital video disc (DVD)), or a semiconductor medium (for example, a solid-state drive).
  • the program can be stored in a computer-readable storage medium.
  • the storage medium can be read-only memory, magnetic disk or optical disk, etc.
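  • As a purely illustrative aid to the cache-entry lookup and SRH restoration described in the list above, the following minimal Python sketch shows one possible realization; the class and function names (CacheEntryStore, lookup_srh, build_fifth_packet) are hypothetical and are not part of this application or of any actual device implementation.
# Minimal sketch (hypothetical names): map a first cache entry identifier to a
# second cache entry identifier, retrieve the cached SRH, and rebuild a packet.
class CacheEntryStore:
    def __init__(self):
        self.id_mapping = {}     # first cache entry id -> second cache entry id
        self.srh_by_entry = {}   # second cache entry id -> cached SRH

    def store_mapping(self, first_id, second_id):
        self.id_mapping[first_id] = second_id

    def store_srh(self, second_id, srh):
        self.srh_by_entry[second_id] = srh

    def lookup_srh(self, first_id):
        # Query the mapping relationship, then use the identifier of the
        # second cache entry as the index into the SRH cache.
        second_id = self.id_mapping[first_id]
        return self.srh_by_entry[second_id]

def build_fifth_packet(fourth_packet_payload, first_packet_srh):
    # The fifth packet carries the payload of the fourth packet and the SRH of
    # the first packet; link layer encapsulation is omitted here.
    return {"srh": first_packet_srh, "payload": fourth_packet_payload}

store = CacheEntryStore()
store.store_mapping(first_id=1, second_id=7)
store.store_srh(second_id=7, srh={"sids": ["SID0", "SID1", "SID2"], "sl": 1})
fifth = build_fifth_packet(b"payload-from-sf", store.lookup_srh(first_id=1))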

Abstract

本申请公开了一种报文传输方法、代理节点及存储介质,属于通信技术领域。在该方法中,该方法为End.AD.SID扩展了一种具有绕行功能的新的SID,本地的代理节点在接收到携带End.AD.SID的报文时,利用这种新的SID,将报文的SRH绕行传输至对端的代理节点,从而让对端的代理节点也能够得到报文的SRH,使得报文的SRH能够存储至对端的代理节点。双归接入至同一个SF节点的两个代理节点之间可实现报文的SRH的同步,使得报文的SRH得到冗余备份。

Description

报文传输方法、代理节点及存储介质
本申请要求于2019年11月6日提交中国专利局、申请号为201911078562.7、发明名称为“报文传输方法、代理节点及存储介质”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及通信技术领域,特别涉及一种报文传输方法、代理节点及存储介质。
背景技术
分段路由(英文:segment routing,简称:SR)是基于源路由的理念来转发报文的技术,基于互联网协议第6版(英文:internet protocol version 6,简称:IPv6)的分段路由(简称:SRv6)是一种将SR技术与IPv6协议结合起来的技术。具体来讲,SRv6报文会在携带IPv6头(英文:IPv6 header)的基础上,还携带分段路由头(英文:segment routing header,简称:SRH)。SRH包括段标识(英文:segment ID,简称:SID,也称segment)列表(英文:SID list,也称segment list)以及剩余段数量(英文:segment left,简称:SL)等信息。段列表中包含一个或多个依次排列的SID,每个SID在形式上是一个128比特的IPv6地址,每个SID在实质上能够表示拓扑、指令或服务。SL相当于指针,是个不小于0的数值,会指向段列表中的活跃SID(英文:active SID),该活跃SID为IPv6头中的目的地址。当支持SRv6的节点接收到报文时,会读取报文的目的地址,根据目的地址查询本地SID表(英文:local SID table),当目的地址为本地SID表中的SID时,将报文识别为SRv6报文,基于该SID对应的拓扑、指令或服务,来执行相应的操作。
业务功能链(英文:service function chain,简称:SFC,也称业务链)是一个有序的业务功能集合,能够让流量按照指定的顺序经过多个业务功能(英文:service function,简称:SF)节点,通过多个SF节点对报文依次进行处理,来完成业务处理流程。
在报文为SRv6报文而SF节点不支持SRv6的情况下,为了让SF节点能够正常处理报文,引入了代理(英文:proxy)节点。代理节点用于代理SF节点处理SRH,代理节点可分为动态代理、静态代理、共享内存代理以及伪代理等类型。其中,动态代理的处理流程为:代理节点接收SRv6报文,根据SRv6报文的目的地址,查询local SID table;当目的地址为local SID table中的端点动态代理SID(End.AD.SID,End表示endpoint,意为端点;D表示dynamic,意为动态,SID意为段标识)时,代理节点执行End.AD.SID对应的动态代理操作,即,从SRv6报文剥离SRH,得到不包含SRH的报文,将不包含SRH的报文发送给SF,并且,代理节点会以报文的五元组为索引,在缓存表项中存储SRH;SF会接收到不包含SRH的报文,对报文进行处理,将处理后的报文返回给代理节点;代理节点会以处理后的报文的五元组为索引,查询缓存表项,得到SRH;代理节点会向处理后的报文封装SRH,从而恢复SRH,得到包含SRH的SRv6报文;代理节点会将SRv6报文的SL减一,将SRv6报文的目的地址更新为段列表中减一后的SL 对应的SID,即End.AD.SID的下一个SID,得到更新后的SRv6报文;代理节点会向下一跳节点发送更新后的SRv6报文。
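为便于理解上述动态代理的处理流程,下面给出一段极简的Python示意代码(仅为帮助理解的草图,并非本申请或任何实际设备的实现;其中的数据结构与函数名均为示例性假设,真实设备中的报文编解码、链路层封装等细节均被省略):
# 示意:代理节点对携带End.AD.SID的SRv6报文执行动态代理操作
def proxy_to_sf(packet, srh_cache, end_ad_sid):
    # packet: {"dst": 目的地址, "srh": {"sids": [...], "sl": n}, "flow": 五元组, "payload": ...}
    if packet["dst"] != end_ad_sid:
        return None                                # 目的地址未命中End.AD.SID,不执行动态代理(普通转发省略)
    srh_cache[packet["flow"]] = packet["srh"]      # 以报文的五元组为索引,在缓存表项中存储SRH
    return {"flow": packet["flow"], "payload": packet["payload"]}   # 剥离SRH后的报文发给SF节点

def proxy_from_sf(sf_packet, srh_cache):
    srh = dict(srh_cache[sf_packet["flow"]])       # 以处理后报文的五元组为索引查询缓存的SRH
    srh["sl"] = srh["sl"] - 1                      # SL减一
    new_dst = srh["sids"][srh["sl"]]               # 目的地址更新为减一后的SL对应的SID
    return {"dst": new_dst, "srh": srh, "payload": sf_packet["payload"]}
例如,若缓存的SRH中段列表为[SID0,SID1,SID2,SID3,SID4]且SL为2(End.AD.SID为SID2),则恢复后报文的目的地址为SID1,即End.AD.SID的下一个SID。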
时下,SF节点可以进行双归接入,即,同一个SF节点可以同时接入两个代理节点。以这两个代理节点分别称为第一代理节点和第二代理节点为例,通过双归接入,SRv6报文既可以从第一代理节点转发至SF节点,也可以从第二代理节点转发至SF节点,那么即使某一个代理节点故障,SRv6报文仍可以从另一个代理节点转发至SF节点,从而极大地提升了网络可靠性。并且,两个代理节点之间可以分担转发SRv6报文以及处理SRH的压力,因此实现了负载分担的功能,减轻了单个代理节点的负载。
在双归接入的场景下,第一代理节点接收到目的地址为End.AD.SID的报文,执行上述动态代理的流程,将不包含SRH的报文发送给SF节点后,如果SF节点对报文进行处理后,将处理后的报文发送给了第二代理节点,第二代理节点接收到报文时,由于第二代理节点没有存储SRH,第二代理节点会查询不到SRH,造成SRH恢复失败,导致SF节点处理后的报文无法正常传输。
发明内容
本申请实施例提供了一种报文传输方法、代理节点及存储介质,能够解决相关技术中双归接入场景下SRH恢复失败的技术问题。所述技术方案如下:
第一方面,提供了一种报文传输方法,在该方法中,第一代理节点与第二代理节点连接到同一个业务功能节点,第一代理节点接收包括SRH而目的地址为End.AD.SID的第一报文,则第一代理节点会根据End.AD.SID、该第一报文和对应于第二代理节点的第一绕行SID,生成包括第一报文的SRH、第一绕行SID和控制信息的第二报文,第一代理节点会向第二代理节点发送该第二报文。其中,控制信息用于指示第二代理节点存储该第一报文的SRH。
该方法通过为End.AD.SID扩展了一种具有绕行功能的新的SID,本地的代理节点在接收到携带End.AD.SID的报文时,利用这种新的SID,将报文的SRH绕行传输至对端的代理节点,从而让对端的代理节点也能够得到报文的SRH,使得报文的SRH能够存储至对端的代理节点。如此,双归接入至同一个SF节点的两个代理节点之间可以实现报文的SRH的同步,使得报文的SRH得到冗余备份。那么,SF节点对报文进行处理后,可以选择将处理后的报文返回给对端代理节点,由于对端代理节点预先均得到了报文的SRH,因此对端代理节点能够利用之前存储的SRH来恢复报文的SRH,使得SF节点处理后的报文能够通过恢复的SRH得到正常传输。
可选地,在生成第二报文的过程中,第一代理节点可以对所述第一报文的SRH的剩余段数量SL进行更新,并对所述第一报文的目的地址进行更新,从而得到所述第二报文,所述第二报文的SL大于所述第一报文的SL。
通过上述可选方式,对接收到的报文进行复制,插入SID(第一绕行SID),即可得到能够绕行传输至对端的报文,相对于从0开始生成整个报文的方式而言,处理操作十分简单,因此生成报文的过程可以由数据面执行,而不依赖于控制面。经实验,对转发芯片的微码配置上述报文生成逻辑,则转发芯片即可执行以实现生成第二报文的步骤,因此支持了微码架构,实用性强。
可选地,第一代理节点可以通过将第一报文的SL加一,来对SL进行更新;第一代理节点可以通过将第一报文的目的地址更新为第一绕行SID,来对第一报文的目的地址更新。
可选地,第一代理节点可以在插入第一绕行SID的基础上,还向第一报文的SRH的段列表插入一个或多个目标SID,其中,一个或多个目标SID用于指示目标转发路径,目标转发路径为第一代理节点至第二代理节点之间的路径。此外,第一代理节点可以通过将第一报文的SL加上大于1的正整数,来对SL进行更新;第一代理节点可以通过将第一报文的目的地址更新为一个或多个目标SID中对应于下一个SR节点的SID,来对第一报文的目的地址更新。
可选地,第二报文可以包含扩展头,控制信息可以携带在第二报文的扩展头中。
通过这种方式,如果要在本实施例提供的方法上的基础上增加新功能时,通过在扩展头中继续添加附加的信息即可,比如如果要指令第二代理节点执行某种行为或某种业务,将该其行为或业务对应的其他控制信息也放在扩展头即可,这样,第二报文具有良好的可扩展能力,便于将本实施例提供的SRH备份功能与其他功能搭配使用。
可选地,控制信息可以直接携带在第一绕行SID中。
通过这种方式,通过绕行SID本身即可实现控制对端代理节点存储SRH的功能,这样,免去了控制信息在报文中额外占用的字节,因此节省了报文的数据量,也就减少了传输报文的开销。
可选地,第二报文可以包括互联网协议(英文:internet protocol,IP)头,控制信息可以携带在第二报文的IP头中。
第二报文的IP头可以由第一报文原有的IP头重新封装得到,那么通过将控制信息直接携带在IP头中,可以让IP头发挥路由功能的基础上,同时顺便发挥承载控制信息的功能,这样同样也可以免去控制信息在报文中额外占用的字节,因此节省了报文的数据量,也就减少了传输报文的开销。
可选地,第二报文的SRH可以包括类型长度值(Type Length Value,TLV),控制信息可以携带在SRH中的TLV中。
通过这种方式,利用了SRH通过TLV来携带信息的能力,同样也可以免去控制信息在报文中额外占用的字节,因此节省了报文的数据量,也就减少了传输报文的开销。
可选地,第一代理节点还可以在第一缓存表项中存储该第一报文的SRH;根据该第一报文,生成第三报文,向该业务功能节点发送该第三报文,其中,第三报文包括该第一报文的载荷且不包括该第一报文的SRH。
通过这种方式,第一代理节点为SF节点提供了动态代理的服务,将第一报文的载荷传输给了SF节点,使得SF节点可以根据载荷来进行业务功能处理,同时由于没有将SRH发给SF节点,也就避开了SF节点由于无法识别SRH而导致处理失败的情况。
可选地,在存储第一报文的SRH的过程中,第一代理节点可以以第一缓存表项的标识为索引,在第一缓存表项中存储该第一报文的SRH。
通过这种方式,如果SF节点将修改了流标识的报文返回给代理节点,由于存储SRH的缓存表项相对于代理节点来说通常是固定的,不会由于流标识的修改而改变,因此存储SRH时缓存表项的标识和查询SRH时缓存表项的标识能够保持一致,因此以缓存表 项的标识为索引,能够查询到修改了流标识的报文的SRH,从而向修改了流标识的报文恢复SRH。那么,即使SF是网络地址转换(Network Address Translation,NAT)功能的SF,使得报文在传输过程中流标识被改变,代理节点也能够恢复SRH,因此可以让代理节点支持接入NAT功能的SF,从而为实现了NAT功能的SF提供动态代理功能。
可选地,第一代理节点可以以第一缓存表项的标识为索引,在第一缓存表项中存储该第一报文的SRH之前,第一代理节点可以确定业务功能节点的业务功能包括对流标识进行修改。
通过这种方式,当发现SF节点的业务功能可能会导致报文对应的流标识发生改变时,即按照以缓存表项的标识为索引的方式来存储SRH,从而避免查询过程中由于流标识改变而无法寻找到SRH的情况。
可选地,第二报文还包括该第一缓存表项的标识,控制信息还用于指示该第二代理节点存储该第一缓存表项的标识与第二缓存表项的标识之间的映射关系,该第二缓存表项为该第二代理节点存储该第一报文的SRH的缓存表项。
通过这种方式,能够利用控制信息来指示第二代理节点去维护本地存储SRH的缓存表项与对端存储SRH的缓存表项之间的映射关系。那么,如果SF节点将修改了流标识的报文返回给了第二代理节点,第二代理节点能够根据报文中携带的缓存表项的标识,结合映射关系,找到第二代理节点本地缓存的SRH的缓存表项,从而从本地的缓存表项中找到SRH,进而恢复SRH。如此,可以支持SF为NAT类型的SF时进行双归接入的场景。
可选地,第三报文包括第一缓存表项的标识,第一代理节点向业务功能节点发送第三报文之后,第一代理节点还可以从业务功能节点接收第四报文,第一代理节点可以将第一报文中的第一缓存表项的标识为索引,查询得到第一报文的SRH,第一代理节点可以根据接收的第四报文以及查询到的第一报文的SRH,生成第五报文,发送第五报文。其中,第五报文包括第四报文的载荷以及第一报文的SRH;
通过这种方式,第一代理节点实现了动态代理功能,一方面,通过将报文的SRH恢复出来,使得报文可以利用SRH继续在SR网络转发,另一方面,由于报文的载荷得到了SF节点的业务功能处理,使得其他节点可以得到处理后的载荷,在此基础上继续执行SFC的其他业务功能。尤其是,即使SF是NAT功能的SF,使得报文在传输过程中流标识被改变,代理节点能够利用在报文中携带缓存表项的标识,以其为索引来找到SRH,进而恢复SRH,因此可以让代理节点支持接入NAT功能的SF,从而为实现了NAT功能的SF提供动态代理功能。
可选地,第一代理节点可以在检测到第一缓存表项生成这一情况的触发下,生成第二报文,也可以在检测到第一缓存表项更新这一情况触发下,生成第二报文。
通过采用这种方式来触发第二报文的转发流程,一方面,每当第一代理节点接收到一个新的SRH时,由于新的SRH会触发缓存表项的生成,从而触发第二报文的转发,因此能够通过第二报文的转发,将新的SRH及时传输给第二代理节点,从而保证新的SRH被实时同步至第二代理节点。另一方面,对于之前已经存储过的SRH而言,由于已存储的SRH不会触发缓存表项的生成,也就不会触发第二报文的转发,因此免去了为传输这种SRH生成第二报文的处理开销以及传输第二报文的网络开销。
可选地,在存储第一报文的SRH的过程中,第一代理节点以第一报文对应的流标识和端点动态代理SID为索引,在第一缓存表项中存储第一报文的SRH。
通过这种方式,通过以流标识和End.AD.SID为索引,来存储SRH,能够让不同VPN的报文的SRH的索引能够通过不同的End.AD.SID区别开,从而保证不同VPN的报文的SRH的索引不同,使得不同VPN的报文的SRH得以分开存储,实现不同VPN的信息隔离。
可选地,第一代理节点接收配置指令,配置指令包括每个虚拟专用网络(Virtual Private Network,VPN)中的业务功能节点对应的端点动态代理SID,第一代理节点可以根据该配置指令,存储该端点动态代理SID与VPN标识之间的映射关系。
通过这种分配End.AD.SID的方式,End.AD.SID能够作为VPN的标识,发往不同VPN的SF节点的报文中携带的End.AD.SID会不同,通过不同的End.AD.SID,能够区分不同VPN的报文。
可选地,第三报文对应的流标识为第一报文对应的流标识,第三报文包括VPN标识,第一代理节点向业务功能节点发送第三报文之后,第一代理节点还可以从业务功能节点接收第四报文,第四报文对应的流标识为第一报文对应的流标识,第四报文包括VPN标识和载荷;第一代理节点根据所述VPN标识、端点动态代理SID和所述第四报文对应的流标识,查询得到所述第一报文的SRH;第一代理节点根据第四报文以及第一报文的SRH,生成第五报文,第五报文包括第四报文的载荷以及第一报文的SRH;第一代理节点发送第五报文。
通过这种方式,提供了一种支持SRv6 VPN场景下提供动态代理服务的方法,通过建立End.AD.SID和报文携带的VPN标识之间的映射关系,对于不同VPN的报文而言,即使它们对应的流标识相同,由于它们的VPN标识对应于不同的End.AD.SID,因此它们在代理节点缓存中的索引能够通过End.AD.SID区别开,因此可以避免通过同一个索引命中多个VPN的报文的SRH的情况,保证查询的准确性。
可选地,第一代理节点向第二代理节点发送第二报文之前,第一代理节点可以检测到故障事件,故障事件包括第一代理节点与业务功能节点之间的链路产生故障的事件或者第一代理节点产生故障的事件中的至少一项。在这一情况下,第一代理节点转发的第二报文还可以包括第一报文的载荷,控制信息还用于指示第二代理节点向业务功能节点发送第一报文的载荷。
通过这种方式,提供了一种支持故障场景下提供动态代理服务的方法,本地代理节点通过检测到故障事件时,对待进行动态代理处理的报文进行重新封装,转发至对端代理节点,指示对端代理节点代替本地代理节点来执行动态代理操作,从重新封装的报文剥离SRH,将重新封装的报文的载荷传输至SF节点,如此,能够在本地的代理节点发生设备故障或链路故障时,将本地的流量绕行至对端继续转发,从而在故障场景下,也能保证报文得以被SF节点处理,从而得到正常传输,因此,极大地提高了网络的稳定性、可靠性和健壮性。
可选地,第一代理节点可以通过第一链路与第二代理节点相连,在这种拓扑的基础上,第一代理节点在发送第二报文的过程中,可以通过对应于第一链路的第一出接口,向第二代理节点发送第二报文。
通过这种方式,第一代理节点可以选择对等链路(peer link)来作为第二报文的转发路径,使得第二报文能够通过对等链路传输至对端。
可选地,第一代理节点可以通过第二链路与路由节点相连,路由节点通过第三链路与第二代理节点相连,在这种拓扑的基础上,第一代理节点在发送第二报文的过程中,可以通过对应于第二链路的第二出接口,向路由节点发送第二报文,第二报文由路由节点通过第三链路转发至第二代理节点。
通过这种方式,第一代理节点可以选择路由节点转发的路径来作为第二报文的转发路径,使得第二报文经过路由节点的转发后能够传输至对端。
可选地,第一代理节点在发送第二报文的过程中,可以对第一链路的状态进行检测,如果第一链路处于可用状态,则第一代理节点可以通过对应于第一链路的第一出接口,向第二代理节点发送第二报文,如果第一链路处于不可用状态,则第一代理节点可以通过对应于第二链路的第二出接口,向路由节点发送第二报文。
通过这种方式,如果对等链路处于可用状态,第一代理节点会优先使用对等链路,使得第二报文优先通过对等链路传输至第二代理节点,那么由于对等链路比路由节点转发的路径更短,对等链路经过的节点比路由节点转发的路径经过的节点少,能够减少传输第二报文的时延。
可选地,第一代理节点在发送第二报文的过程中,第一代理节点可以向第二代理节点连续发送多个第二报文。
通过这种方式,利用重复发送来实现了第二报文的传输可靠性。由于第一报文的SRH通过多个第二报文中的任一个第二报文均可以传输至第二代理节点,因此降低了SRH在传输过程中发生丢失的概率,并且实现起来较为简单,可行性高。
可选地,第一代理节点向第二代理节点发送第二报文之后,当第一代理节点未接收到第六报文时,第一代理节点还可以向第二代理节点重传第二报文,第六报文的源地址为第一绕行SID,第六报文包括对应于第一代理节点的第二绕行SID,所述第二绕行SID用于标识所述第六报文的目的节点为所述第一代理节点,第六报文用于表示接收到了第二报文。
通过这种方式,利用确认(Acknowledge,ACK)机制来实现了第二报文的传输可靠性。第一代理节点能够通过是否接收到第六报文,即可确认第二代理节点是否接收到了SRH,在第二代理节点未得到SRH的情况下,第一代理节点通过重传来保证将SRH传输给第二代理节点,从而保证了传输SRH的可靠性。
第二方面,提供了一种报文传输方法,在该方法中,第一代理节点与第二代理节点连接到同一个业务功能节点,第二代理节点从第一代理节点接收包括第一报文的SRH、对应于第二代理节点的第一绕行SID的第二报文,其中第二报文包括控制信息,第二代理节点确定控制信息指示第二代理节点存储第一报文的SRH,因而,第二代理节点会解析第二报文,得到第一报文的SRH,第二代理节点在第二缓存表项中存储第一报文的SRH。
该方法通过为End.AD.SID扩展了一种具有绕行功能的新的SID,本地的代理节点在接收到携带End.AD.SID的报文时,利用这种新的SID,将报文的SRH绕行传输至对 端的代理节点,从而让对端的代理节点也能够得到报文的SRH,使得报文的SRH能够存储至对端的代理节点。如此,双归接入至同一个SF节点的两个代理节点之间可以实现报文的SRH的同步,使得报文的SRH得到冗余备份。那么,SF节点对报文进行处理后,可以选择将处理后的报文返回给对端代理节点,由于对端代理节点预先均得到了报文的SRH,因此对端代理节点能够利用之前存储的SRH来恢复报文的SRH,使得SF节点处理后的报文能够通过恢复的SRH得到正常传输。
可选地,在解析第二报文的过程中,第二代理节点可以从第二报文的SRH中的段列表,删除第一绕行SID;将第二报文的SRH中的剩余段数量SL减一,得到第一报文的SRH。
通过上述可选方式,将对端发来的报文的SRH中删除一个SID(第一绕行SID),修改其中SL字段,即可还原出对端之前接收到的SRH,处理操作十分简单,因此获取SRH的过程可以由数据面执行,而不依赖于控制面。经实验,对转发芯片的微码配置上述报文生成逻辑,则转发芯片即可执行以实现获取SRH的过程,因此支持了微码架构,实用性强。
可选地,第二报文可以包含扩展头,控制信息可以携带在第二报文的扩展头中。
通过这种方式,如果要在本实施例提供的方法上的基础上增加新功能时,通过在扩展头中继续添加附加的信息即可,比如如果要指令第二代理节点执行某种行为或某种业务,将该其行为或业务对应的其他控制信息也放在扩展头即可,这样,第二报文具有良好的可扩展能力,便于将本实施例提供的SRH备份功能与其他功能搭配使用。
可选地,控制信息可以直接携带在第一绕行SID中。
通过这种方式,通过绕行SID本身即可实现控制对端代理节点存储SRH的功能,这样,免去了控制信息在报文中额外占用的字节,因此节省了报文的数据量,也就减少了传输报文的开销。
可选地,第二报文可以包括IP头,控制信息可以携带在第二报文的IP头中。
第二报文的IP头可以由第一报文原有的IP头重新封装得到,那么通过将控制信息直接携带在IP头中,可以让IP头发挥路由功能的基础上,同时顺便发挥承载控制信息的功能,这样同样也可以免去控制信息在报文中额外占用的字节,因此节省了报文的数据量,也就减少了传输报文的开销。
可选地,第二报文的SRH可以包括TLV,控制信息可以携带在SRH中的TLV中。
通过这种方式,利用了SRH通过TLV来携带信息的能力,同样也可以免去控制信息在报文中额外占用的字节,因此节省了报文的数据量,也就减少了传输报文的开销。
可选地,第二代理节点在存储第一报文的SRH的过程中,第二代理节点以第二缓存表项的标识为索引,在第二缓存表项中存储第一报文的SRH。
可选地,第二代理节点可以在存储SRH之前,第二代理节点可以确定业务功能节点的业务功能包括对流标识进行修改。
可选地,第二报文还可以包括第一代理节点存储第一报文的SRH所使用的第一缓存表项
的标识,第二代理节点确定控制信息还指示第二代理节点存储第一缓存表项的标识与第二缓存表项的标识之间的映射关系,第二代理节点还可以存储第一缓存表项的标识 与第二缓存表项的标识之间的映射关系。
通过这种方式,能够利用控制信息来指示第二代理节点去维护本地存储SRH的缓存表项与对端存储SRH的缓存表项之间的映射关系。那么,如果SF节点将修改了流标识的报文返回给了第二代理节点,第二代理节点能够根据报文中携带的缓存表项的标识,结合映射关系,找到第二代理节点本地缓存的SRH的缓存表项,从而从本地的缓存表项中找到SRH,进而恢复SRH。如此,可以支持SF为NAT类型的SF时进行双归接入的场景。
可选地,第二代理节点存储第一缓存表项的标识与第二缓存表项的标识之间的映射关系之后,第二代理节点还可以从业务功能节点接收第四报文,第二代理节点根据第四报文携带的第一缓存表项的标识,查询第一缓存表项的标识与第二缓存表项的标识之间的映射关系,可以得到第一缓存表项的标识对应的第二缓存表项的标识,第二代理节点以第二缓存表项的标识为索引,查询第二缓存表项,可以得到第一报文的SRH。第二代理节点根据第四报文以及第一报文的SRH,可以生成第五报文,第二代理节点发送第五报文。其中,第五报文包括第四报文的载荷以及第一报文的SRH。
通过这种方式,第二代理节点实现了动态代理功能,一方面,通过将报文的SRH恢复出来,使得报文可以利用SRH继续在SR网络转发,另一方面,由于报文的载荷得到了SF节点的业务功能处理,使得其他节点可以得到处理后的载荷,在此基础上继续执行SFC的其他业务功能。尤其是,即使SF是NAT功能的SF,使得报文在传输过程中流标识被改变,第二代理节点能够基于维护的缓存表项之间的映射关系,利用报文中携带的缓存表项的标识,以其为索引从映射关系中找到本地缓存SRH的缓存表项,从而找到SRH,进而恢复SRH,因此第二代理节点可以支持接入NAT功能的SF,从而为实现了NAT功能的SF提供动态代理功能。
可选地,第二报文对应的流标识为第一报文对应的流标识,第一报文的SRH包括端点动态代理SID,第二代理节点在存储第一报文的SRH的过程中,可以使用第二报文对应的流标识和SRH中的端点动态代理SID作为索引,在第二缓存表项中存储第一报文的SRH。
通过这种方式,通过以流标识和End.AD.SID为索引,来存储SRH,能够让不同VPN的报文的SRH的索引能够通过不同的End.AD.SID区别开,从而保证不同VPN的报文的SRH的索引不同,使得不同VPN的报文的SRH得以分开存储,实现不同VPN的信息隔离。
可选地,第二代理节点接收配置指令,第二代理节点可以根据该配置指令,存储该端点动态代理SID与VPN标识之间的映射关系,其中,配置指令包括每个VPN中的业务功能节点对应的端点动态代理SID。
通过这种分配End.AD.SID的方式,End.AD.SID能够作为VPN的标识,不同VPN的SF节点发来的报文中携带的End.AD.SID会不同,通过不同的End.AD.SID,能够区分不同VPN的报文。
可选地,第二代理节点以第一报文对应的流标识和端点动态代理SID为索引,在第二缓存表项中存储第一报文的SRH之后,第二代理节点还可以从业务功能节点接收第四报文,第二代理节点根据第四报文携带的VPN标识、端点动态代理SID、第四报文 对应的流标识,查询第二缓存表项,可以得到第一报文的SRH;第二代理节点根据第四报文以及第一报文的SRH,可以生成第五报文,发送第五报文。其中,第五报文包括第四报文的载荷以及第一报文的SRH。
通过这种方式,提供了一种支持SRv6 VPN场景下提供动态代理服务的方法,通过建立End.AD.SID和报文携带的VPN标识之间的映射关系,对于不同VPN的报文而言,即使它们对应的流标识相同,由于它们的VPN标识对应于不同的End.AD.SID,因此它们在代理节点缓存中的索引能够通过End.AD.SID区别开,因此可以避免通过同一个索引命中多个VPN的报文的SRH的情况,保证查询的准确性。
可选地,第二代理节点接收第二报文之后,如果确定控制信息还指示第二代理节点向业务功能节点发送第一报文的载荷,第二代理节点还可以根据第一代理节点发来的第二报文,生成携带了第一报文的载荷的第三报文,将第三报文发给业务功能节点。
通过这种方式,提供了一种支持故障场景下提供动态代理服务的方法,本地代理节点通过检测到故障事件时,对待进行动态代理处理的报文进行重新封装,转发至对端代理节点,指示对端代理节点代替本地代理节点来执行动态代理操作,从重新封装的报文剥离SRH,将重新封装的报文的载荷传输至SF节点,如此,能够在本地的代理节点发生设备故障或链路故障时,将本地的流量绕行至对端继续转发,从而在故障场景下,也能保证报文得以被SF节点处理,从而得到正常传输,因此,极大地提高了网络的稳定性、可靠性和健壮性。
可选地,第二代理节点通过第一链路与第一代理节点相连,第二代理节点在接收第二报文的过程中,可以通过对应于第一链路的第一入接口,从第一代理节点接收第二报文。
通过这种方式,使用对等链路来作为第二报文的转发路径,使得第二报文能够通过对等链路传输至第二代理节点本地。
可选地,第二代理节点通过第三链路与路由节点相连,路由节点通过第二链路与第一代理节点相连,第二代理节点在接收第二报文的过程中,该第二报文可以由第一代理节点通过第二链路发送至路由节点,第二代理节点可以通过对应于第三链路的第二入接口,从路由节点接收第二报文。
通过这种方式,使用了基于路由节点转发的路径来作为第二报文的转发路径,使得第二报文经过路由节点的转发后传输至第二代理节点本地。
可选地,第二代理节点可以从第一代理节点连续接收多个第二报文。
通过这种方式,利用重复发送来实现了第二报文的传输可靠性。由于第一报文的SRH通过多个第二报文中的任一个第二报文均可以传输至第二代理节点,因此降低了SRH在传输过程中发生丢失的概率,并且实现起来较为简单,可行性高。
可选地,第二代理节点从第一代理节点接收第二报文之后,第二代理节点根据第一绕行SID、第二代理节点对应的第二绕行SID和第二报文,生成第六报文,向第一代理节点发送第六报文,第六报文的源地址为第一绕行SID,第六报文包括对应于第一代理节点的第二绕行SID,所述第二绕行SID用于标识所述第六报文的目的节点为所述第一代理节点,第六报文用于表示接收到了第二报文。
通过这种方式,利用ACK机制来实现了第二报文的传输可靠性。第一代理节点能 够通过是否接收到第六报文,即可确认第二代理节点是否接收到了SRH,在第二代理节点未得到SRH的情况下,第一代理节点通过重传来保证将SRH传输给第二代理节点,从而保证了传输SRH的可靠性。
第三方面,提供了一种第一代理节点,该第一代理节点具有实现上述第一方面或第一方面任一种可选方式中报文传输的功能。该第一代理节点包括至少一个模块,所述至少一个模块用于实现上述第一方面或第一方面任一种可选方式所提供的报文传输方法。第三方面提供的第一代理节点的具体细节可参见上述第一方面或第一方面任一种可选方式,此处不再赘述。
第四方面,提供了一种第二代理节点,该第二代理节点具有实现上述第二方面或第二方面任一种可选方式中报文传输的功能。该第二代理节点包括至少一个模块,所述至少一个模块用于实现上述第二方面或第二方面任一种可选方式所提供的报文传输方法。第四方面提供的第二代理节点的具体细节可参见上述第二方面或第二方面任一种可选方式,此处不再赘述。
第五方面,提供了一种第一代理节点,该第一代理节点包括处理器,该处理器用于执行指令,使得该第一代理节点执行上述第一方面或第一方面任一种可选方式所提供的报文传输方法。第五方面提供的第一代理节点的具体细节可参见上述第一方面或第一方面任一种可选方式,此处不再赘述。
第六方面,提供了一种第二代理节点,该第二代理节点包括处理器,该处理器用于执行指令,使得该第二代理节点执行上述第二方面或第二方面任一种可选方式所提供的报文传输方法。第六方面提供的第二代理节点的具体细节可参见上述第二方面或第二方面任一种可选方式,此处不再赘述。
第七方面,提供了一种计算机可读存储介质,该存储介质中存储有至少一条指令,该指令由处理器读取以使第一代理节点执行上述第一方面或第一方面任一种可选方式所提供的报文传输方法。
第八方面,提供了一种计算机可读存储介质,该存储介质中存储有至少一条指令,该指令由处理器读取以使第二代理节点执行上述第二方面或第二方面任一种可选方式所提供的报文传输方法。
第九方面,提供了一种计算机程序产品,当该计算机程序产品在第一代理节点上运行时,使得第一代理节点执行上述第一方面或第一方面任一种可选方式所提供的报文传输方法。
第十方面,提供了一种计算机程序产品,当该计算机程序产品在第二代理节点上运 行时,使得第二代理节点执行上述第二方面或第二方面任一种可选方式所提供的报文传输方法。
第十一方面,提供了一种芯片,当该芯片在第一代理节点上运行时,使得第一代理节点执行上述第一方面或第一方面任一种可选方式所提供的报文传输方法。
第十二方面,提供了一种芯片,当该芯片在第二代理节点上运行时,使得第二代理节点执行上述第二方面或第二方面任一种可选方式所提供的报文传输方法。
第十三方面,提供了一种报文传输系统,该报文传输系统包括第一代理节点以及第二代理节点,该第一代理节点用于执行上述第一方面或第一方面任一种可选方式所述的方法,该第二代理节点用于执行上述第二方面或第二方面任一种可选方式所述的方法。
附图说明
图1是本申请实施例提供的一种SFC的系统架构图;
图2是本申请实施例提供的一种SRv6 SFC中执行代理操作的示意图;
图3是本申请实施例提供的一种双归接入的系统架构图;
图4是本申请实施例提供的一种双归接入的系统架构图;
图5是本申请实施例提供的一种报文传输方法的流程图;
图6是本申请实施例提供的一种第一报文的示意图;
图7是本申请实施例提供的一种扩展头的示意图;
图8是本申请实施例提供的一种绕行SID的示意图;
图9是本申请实施例提供的一种第一代理节点与第二代理节点的网络拓扑图;
图10是本申请实施例提供的一种第二报文的示意图;
图11是本申请实施例提供的一种第二报文的示意图;
图12是本申请实施例提供的一种第二报文的示意图;
图13是本申请实施例提供的一种第二报文的示意图;
图14是本申请实施例提供的一种第一代理节点与第二代理节点的网络拓扑图;
图15是本申请实施例提供的一种第二报文的示意图;
图16是本申请实施例提供的一种第二报文的示意图;
图17是本申请实施例提供的一种第二报文的示意图;
图18是本申请实施例提供的一种第二报文的示意图;
图19是本申请实施例提供的一种更新段列表的示意图;
图20是本申请实施例提供的一种更新段列表的示意图;
图21是本申请实施例提供的一种流量规划的示意图;
图22是本申请实施例提供的一种将第一报文演变至第二报文的示意图;
图23是本申请实施例提供的一种将第一报文演变至第二报文的示意图;
图24是本申请实施例提供的一种将第一报文演变至第二报文的示意图;
图25是本申请实施例提供的一种将第一报文演变至第二报文的示意图;
图26是本申请实施例提供的一种第三报文的示意图;
图27是本申请实施例提供的一种第三报文的示意图;
图28是本申请实施例提供的一种第三报文的示意图;
图29是本申请实施例提供的一种将第一报文演变至第二报文的示意图;
图30是本申请实施例提供的一种将第一报文演变至第二报文的示意图;
图31是本申请实施例提供的第一状态机的示意图;
图32是本申请实施例提供的第二状态机的示意图;
图33是本申请实施例提供的一种自发它收的场景下报文传输的示意图;
图34是本申请实施例提供的一种自发自收的场景下报文传输的示意图;
图35是本申请实施例提供的一种报文传输方法的流程图;
图36是本申请实施例提供的一种报文传输方法的流程图;
图37是本申请实施例提供的一种报文传输方法的流程图;
图38是本申请实施例提供的一种报文传输方法的流程图;
图39是本申请实施例提供的一种报文传输方法的流程图;
图40是本申请实施例提供的一种故障场景下传输报文的示意图;
图41是本申请实施例提供的一种报文传输方法的流程图;
图42是本申请实施例提供的一种故障场景下传输报文的示意图;
图43是本申请实施例提供的一种报文传输方法的流程图;
图44是本申请实施例提供的一种故障场景下传输报文的示意图;
图45是本申请实施例提供的一种第一代理节点的结构示意图;
图46是本申请实施例提供的一种第二代理节点的结构示意图;
图47是本申请实施例提供的一种网络设备的结构示意图;
图48是本申请实施例提供的一种计算设备的结构示意图。
具体实施方式
为使本申请的目的、技术方案和优点更加清楚,下面将结合附图对本申请实施方式作进一步地详细描述。
本申请中术语“至少一个”的含义是指一个或多个,本申请中术语“多个”的含义是指两个或两个以上,例如,多个第二报文是指两个或两个以上的第二报文。本文中术语“系统”和“网络”经常可互换使用。
本申请中术语“第一”“第二”等字样用于对作用和功能基本相同的相同项或相似项进行区分,应理解,“第一”、“第二”、“第n”之间不具有逻辑或时序上的依赖关系,也不对数量和执行顺序进行限定。
应理解,在本申请的各个实施例中,各个过程的序号的大小并不意味着执行顺序的先后,各过程的执行顺序应以其功能和内在逻辑确定,而不应对本申请实施例的实施过程构成任何限定。
应理解,根据A确定B并不意味着仅仅根据A确定B,还可以根据A和/或其它信 息确定B。
应理解,说明书通篇中提到的“一个实施例”或“一实施例”意味着与实施例有关的特定特征、结构或特性包括在本申请的至少一个实施例中。因此,在整个说明书各处出现的“在一个实施例中”或“在一实施例中”未必一定指相同的实施例。此外,这些特定的特征、结构或特性可以任意适合的方式结合在一个或多个实施例中。
以下,示例性介绍业务功能链的系统架构。
传统电信网络的业务功能(如防火墙、负载均衡等)通常与硬件资源紧密耦合,各个业务节点的产品形态为专用的设备,导致部署复杂,并且难以扩容、迁移和升级。而SFC将每种业务功能虚拟为一个SF节点,通过组合多个SF节点得到一个有序的业务功能集合,即业务功能链。在SFC中,流量会按照指定的顺序经过一条业务链中的各个SF节点,通过各个SF节点依次对报文进行处理,从而完成业务处理流程。通过SFC技术,一方面,能够将业务功能与硬件资源解耦,从而方便了业务的灵活开通和快速部署。例如,如果开通了新业务,可以免去传统电信网络中对硬件设备改造和升级的繁琐操作,通过向SFC提供的业务链增加新的SF节点,即可基于新增的SF节点来支持新业务的运行。另一方面,SFC能够灵活地和其他虚拟化技术结合起来,例如,SFC可以和网络功能虚拟化(英文:Network Function Virtualization,简称:NFV)结合,则NFV中的NFV管理节点,比如网络功能虚拟化管理器(英文:Network Functions Virtualisation Manager,简称:VNFM)等,能够充当SFC的控制面来控制SFC中的各个节点;又如,SFC可以和软件定义网络(英文:Software Defined Network,简称:SDN)结合,则SDN中的SDN控制器(英文:SDN controller)能够充当SFC的控制面来控制SFC中的各个节点。
SFC的系统架构可以包括多种节点,每种节点分别具有对应的功能,不同节点相互协同来实现业务链的整体功能。通常情况下,SFC的系统架构可以包括流分类器(英文:classifier,简称:CF)、SF节点、代理(英文:proxy)节点、业务功能转发(英文:Service Function Forwarder,简称:SFF)节点以及路由(英文:router)节点。
例如,参见图1,图1是本申请实施例提供的一种SFC的系统架构图。在图1中,SF节点为SF节点1、SF节点2、SF节点3或SF节点4,SFF节点为SFF节点1或SFF节点2,代理节点为代理节点1或代理节点2,路由节点为路由节点1或路由节点2。图1中的业务功能链域(SFC domain,SFC域)是指使能了SFC的节点的集合。
需要说明的一点是,SFF节点和代理节点可以集成在同一设备中,即,SFF节点所在的硬件设备同时实现了代理节点的功能。例如,参见图1,SFF节点1和代理节点1位于同一硬件设备中,SFF节点2和代理节点2位于同一硬件设备中。
流分类器可以是SFC的入口路由器,流分类器用于对接收的流量按照分类规则进行分类,如果流量满足分类标准,则流分类器会将流量转发,使得流量进入SFC提供的业务链。通常情况下,流分类器确定流量满足分类标准后,会向流量对应的每个报文添加SFC信息,使得报文基于SFC信息,在网络中按照业务功能链对应的顺序,依次到达各个SF节点,从而被各个SF节点依次进行业务处理。流分类器可以是路由器、交换机或者其他网络设备。
SF节点用于对报文进行业务处理。通常来讲,SF节点可以和一个或多个SFF节点连接,SF节点可以从一个或多个SFF节点接收到待处理的报文。SF节点对应的业务功能可以根据业务场景设置。例如,SF节点对应的业务功能可以是用户认证、防火墙、网络地址转换(英文:Network Address Translation,简称:NAT)、带宽控制、病毒检测、云存储、深度包检测(英文:Deep Packet Inspection,简称:DPI)、入侵检测或者入侵防御等。SF节点可以是服务器、主机、个人计算机、网络设备或者终端设备。
SFF节点用于根据SFC信息,将接收到的报文转发至SF节点。通常情况下,SF节点对报文处理结束后,会将报文返回至同一个SFF节点,即之前向SF节点发送处理前的报文的SFF节点,SFF节点会将处理后的报文返回至网络。SFF节点可以是路由器、交换机或者其他网络设备。
SFF节点以及SF节点可以作为SFC的数据面。在一个实施例中,SFF节点和SF节点可以建立隧道,SFF节点和SF节点之间可以通过隧道来传输报文。其中,该隧道可以是虚拟扩展局域网(英文:Virtual Extensible Local Area Network,简称:VXLAN)、通用路由封装(英文:generic routing encapsulation,简称:GRE)隧道(tunnel)、移动IP数据封装和隧道(英文:IP-in-IP)等。在另一个实施例中,SFF节点和SF节点之间可以不通过隧道传输报文,而是直接传输IP报文或者以太网(Ethernet)报文。
SFC的数据面可以通过SFC的控制面来控制。SFC的控制面可以包括一个或多个SFC控制器。例如,参见图1,SFC控制器可以是SDN控制器、NFV管理节点1或者NFV管理节点1。在一个实施例中,在报文传输之前,SFC的客户端可以将业务功能链的属性、配置、策略等信息发送至SFC控制器,SFC控制器可以根据客户端发送的信息以及SFC的拓扑,生成业务功能链相关的信息,例如待进入SFC的报文的分类规则、业务功能链对应的每个SF节点的标识等发送至流分类器以及SFC中的其他节点,以使流分类器基于SFC控制器发来的信息,对接收到的报文进行分类以及添加SFC信息。
应理解,图1所示的SFC的系统架构中各个节点的类型为可选方式,本申请实施例中SFC的系统架构中节点的类型可以比图1所示的更多,此时SFC的系统架构还可以包括其他类型的节点;或者,SFC的系统架构中节点的类型也可以比图1所示的更少,本申请实施例对SFC的系统架构中节点的类型不做限定。
应理解,图1所示的SFC的系统架构中各个节点的数量为可选方式,本申请实施例中SFC的系统架构中节点的数量可以比图1所示的更多,或者比图1所示的更少,例如,上述路由节点可以仅为一个,或者上述路由节点为几十个或几百个,或者更多数量;又如,上述SF节点为几十个或几百个,或者更多数量,本申请实施例对SFC的系统架构中节点的数量不做限定。
以上介绍了SFC的系统架构,SFC可以和SRv6技术结合起来,这两种技术的结合通常称为SRv6 SFC。以下,对SRv6 SFC的系统架构进行示例性描述。
分段路由(英文:Segment Routing,简称:SR)是基于源路由的理念而设计的在网络中转发数据包的一种协议。SR将转发路径划分为一个个段,为这些段以及网络中的节点分配分段标识(Segment ID,SID),通过对SID进行有序排列,可以得到一个包含SID的列表,该列表在基于互联网协议第6版的分段路由(英文:Segment Routing Internet Protocol Version 6,简称:SRv6)通常称为段列表(SID List,也称段标识列表),在分 段路由多协议标签交换(英文:Segment Routing Multi-Protocol Label Switching,简称:SR-MPLS)通常称为标签栈。段列表可以指示一条转发路径。通过SR技术,可以指定携带了SID List的报文经过的节点以及路径,从而满足流量调优的要求。做一个类比,报文可以比作行李,SR可以比作行李上贴的标签,如果要将行李从A地区发送到D地区,途径B地区和C地区,则可以在始发地A地区给行李贴上一个标签“先到B地区,再到C地区,最后到D地区”,这样一来,各个地区只需识别行李上的标签,依据行李的标签将行李从一个地区转发至另一个地区即可。在SR技术中,源节点会向报文添加标签,中间节点可以根据标签转发至下一个节点,直至报文到达目的节点。例如在报文的包头中,插入<SID1,SID2,SID3>,则报文会首先转发给SID1对应的节点,之后转发给SID2对应的节点,之后转发给SID3对应的节点。
SRv6是指将SR技术应用在IPv6网络中,使用128比特的IPv6地址来作为SID的形式。在转发报文时,支持SRv6的节点会按照报文中的目的地址(Destination Address,DA),查询本地SID表(英文:local SID table,也称本地段标识表或my SID table),当报文的目的地址与本地SID表中的任一SID匹配时,确认目的地址命中了本地SID表,则基于该SID对应的拓扑、指令或服务,来执行相应的操作,例如,可以将报文从SID对应的出接口转发出去;如果报文的目的地址与本地SID表中的每个SID均不匹配,则根据目的地址查询IPv6的路由FIB,根据目的地址在路由FIB中命中的表项转发报文。其中,本地SID表是指节点本地存储的包含SID的列表,为了将本地SID表和报文中携带的段列表区分描述,本地SID表的名称通常包含“本地(local)”的前缀。
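下面用一段极简的Python示意代码概括上述查表顺序(仅为帮助理解的草图,并非真实转发面的实现;函数名与数据结构均为示例性假设):
def srv6_forward(dst, local_sid_table, route_fib):
    # 先用目的地址查本地SID表;命中则执行该SID对应的拓扑、指令或服务,
    # 未命中则根据目的地址查询IPv6的路由FIB,并按命中的表项转发。
    if dst in local_sid_table:
        return ("execute_sid_operation", local_sid_table[dst])
    return ("forward_by_fib", route_fib.get(dst))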
在SRv6 SFC的系统架构下,SRv6的头节点可以是SFC中的流分类器,SRv6的SRv6节点可以是SFC中的路由节点、SFF节点、代理节点或者SF节点。具体地,SFC信息可以采用SRv6的封装格式,SFC信息可以是段列表,通过对报文中段列表中的SID进行编排,能够指明报文在SFC中的转发路径或者待对报文执行的各种业务处理操作。具体地,段列表中包含一个或多个依次排列的SID,每个SID在形式上是一个128比特的IPv6地址,每个SID在实质上能够表示拓扑、指令或服务,报文的目的地址为SL当前指向的段列表中的SID。SFC中的节点接收到报文后,会读取报文的目的地址,根据目的地址来执行转发操作。如果接收到报文的节点支持SRv6,节点会根据目的地址查询本地SID表,当目的地址命中了本地SID表中的SID时,基于该SID对应的拓扑、指令或服务,来执行相应的操作,当目的地址未命中本地SID表中的SID时,节点会根据目的地址查询IPv6的路由FIB,根据目的地址在路由FIB中命中的表项转发报文。如果接收到报文的节点不支持SRv6,则节点会直接根据目的地址查询IPv6的路由FIB来转发报文。
在一些场景下,SF节点可以是不支持SRv6的节点,即不感知SRv6的(英文:SRv6-unaware)节点,SF节点本身可能无法识别SRH,此时可以通过代理节点,保证不识别SRH的SF节点也能正常处理报文。
代理节点用于代理SF节点识别SRH,将报文的SRH剥离,将不包含SRH的报文转发给SF节点,那么由于SF节点接收到的报文不包含SRH,使得不支持SRv6的SF节点能够处理报文。
参见图2,图2是本申请实施例提供的一种SRv6 SFC中执行代理操作的示意图。 代理节点能够和SFF节点以及SF节点通信,代理节点会代理SF节点从SFF节点接收到包含SRH的报文;代理节点会从报文剥离SRH,得到不包含SRH的报文,通过出接口发送不包含SRH的报文。SF节点会通过入接口接收到不包含SRH的报文,对报文进行处理后,通过出接口发送不包含SRH的报文,代理节点会接收不包含SRH的报文,对报文封装SRH,得到包含SRH的报文,将包含SRH的报文通过出接口发送出去,以使报文返回至SFF节点。代理节点可以是网络设备,例如路由器或者交换机,代理节点也可以是服务器、主机、个人计算机或者终端设备。代理节点可分为动态代理、静态代理、共享内存代理以及伪代理等类型,其中,动态代理与静态代理的区别在于,动态代理操作中会对剥离的SRH进行缓存,而静态代理操作中会对剥离的SRH进行丢弃。
应理解,上文描述的SFF节点和代理节点集成设置仅是可选方式,在另一个实施例中,SFF节点和代理节点也可以分别设置在不同的硬件设备中,SFF节点所在的硬件设备可以和代理节点所在的硬件设备通过网络进行通信。
以上介绍了SRv6 SFC的整体架构,以下示例性介绍基于SRv6 SFC实现双归接入的系统架构。双归接入可以是SF节点双归接入至代理节点,即,同一个SF节点同时接入了两个代理节点。通过基于SRv6 SFC实现双归接入,需要发给SF节点处理的报文从路由节点出发,可以经过某一个代理节点到达SF节点,也可以经过另一个代理节点到达SF节点,通过冗余的路径,可以大大提升网络的可靠性。
参见图3,图3是本申请实施例提供的一种双归接入的系统架构图,该系统包括第一代理节点101、第二代理节点102、SF节点103以及路由节点104,第一代理节点101与第二代理节点102连接到同一个SF节点103。该系统中的每个节点可以是SFC中的节点。例如,第一代理节点101可以是图1中的代理节点1,第二代理节点102可以是图1中的代理节点2,SF节点103可以是图1中的SF节点1或SF节点2,路由节点104可以是图1中的路由节点2。
图3所示的系统中不同节点之间可以建立链路,通过链路相连。例如,在图3中,第一代理节点101可以通过第一链路与第二代理节点102相连,第一代理节点101可以通过第二链路与路由节点104相连,路由节点104可以通过第三链路与第二代理节点102相连,第一代理节点101可以通过第四链路与SF节点103相连,第二代理节点102可以通过第五链路与SF节点103相连。
第一代理节点101以及第二代理节点102可以是关系对等的节点,即,第一代理节点101以及第二代理节点102在报文传输过程中的功能相同,或者说扮演的角色相同。对于第一代理节点101而言,第一代理节点101可以记为本地代理(英文:local proxy),第二代理节点102可以记为对端代理(英文:peer proxy)。同理地,对于第二代理节点102而言,第二代理节点102可以记为local proxy,第一代理节点101可以记为peer proxy。第一代理节点101与第二代理节点102之间的链路可以记为对等链路(英文:peer link)。
由于第一代理节点101与第二代理节点102关系对等,通常情况下,路由节点104可以选择将接收到的报文发送至第一代理节点101,也可以选择将接收到的报文发送至第二代理节点102,因此,报文既可以从第一代理节点101转发至SF节点103,也可以从第二代理节点102转发至SF节点103。同理地,SF节点103对报文处理后,可以选 择将处理后的报文发送至第一代理节点101,也可以选择将处理后的报文发送至第二代理节点102,因此,处理后的报文既可以从第一代理节点101返回至路由节点104,也可以从第二代理节点102返回至路由节点104。由此可见,SF节点103通过同时接入至第一代理节点101以及第二代理节点102,实现了双归接入的功能。
通过进行双归接入,一方面,能够实现负载分担的功能,对于待传输至SF节点103的报文而言,既可以通过第一代理节点101对报文执行动态代理操作,也可以通过第二代理节点102对报文执行动态代理操作,从而利用两个代理节点共同进行动态代理操作,减轻单个代理节点处理报文的压力。此外,对于SF节点103处理后的报文而言,报文既可以通过第一代理节点101返回至路由节点104,也可以通过第二代理节点102返回至路由节点104,从而达到对SF节点103处理后的流量进行分流的效果,减轻单个代理节点转发报文的压力,此外,报文通过在两条链路上传输,从而加大了传输带宽,因此提高了传输速度。另一方面,能够提高传输可靠性,如果第一代理节点101或者第四链路故障,报文仍可以通过第二代理节点102返回至路由节点104,如果第二代理节点102或者第五链路故障,报文仍可以通过第一代理节点101返回至路由节点104,从而避免单个代理节点故障后导致报文传输中断的问题。
应理解,图3所示的组网方式仅是可选方式。在图3中,第一代理节点101以及第二代理节点102通过部署了对等链路,能够提高网络的可靠性,在故障场景下,到达第一代理节点101的流量可以通过对等链路绕行至第二代理节点102,从而避免占用路由节点与代理节点之间的带宽。
在另一个实施例中,双归接入系统也可以采用其他的方式组网。例如,参见图4,第一代理节点101与第二代理节点102之间可以不部署对等链路,这种组网的部署较为简单,并且免去了对等链路占用的出接口,从而节省第一代理节点101的接口资源以及第二代理节点102的接口资源。
应理解,图3或图4所示的各个链路可以是一跳IP链路,也可以是多跳IP链路的组合。如果两个节点通过多跳IP链路相连,则两个节点之间的转发路径可以经过一个或多个中间节点,该中间节点可以转发该两个节点之间的报文。本实施例对各个链路包含的IP链路的跳数并不做限定。
其中,如果两个节点位于一跳IP链路,可以称两个节点on-link,从IP路由的角度来说,两个节点是一跳可达的。例如,在两个节点之间转发IPv4报文的情况下,当一个节点发送了IPv4报文时,该IPv4报文中的生存时间值(Time To Live,TTL)减一后,即可到达另一个节点。又如,在两个节点之间转发IPv6报文的情况下,当一个节点发送了IPv6报文时,该IPv4报文中的跳数限制(hop-limit)减一后,即可到达另一个节点。
需要说明的一点是,两个节点位于一跳IP链路,并不代表两个节点在物理上必须直接连接。两个节点位于一跳IP链路包含两个节点在物理上直接连接的情况,当然也包含两个节点在物理上不是直接连接的情况。例如,当两个节点之间通过一个或多个二层交换机相连时,两个节点也可以称为位于一跳IP链路。
此外,如果两个节点位于多跳IP链路,可以称两个节点off-link,从IP路由的角度来说,两个节点经过多跳路由可达。
应理解,图3或图4描述的双归接入系统仅是可选方式,SFC中具有两个关系对等的代理节点也仅是可选方式。在另一个实施例中,SFC中可以具有两个以上的关系对等的代理节点,任一个代理节点的peer proxy可以不仅是一个,而是多个,任一个代理节点接入的peer link可以不仅是一条,而是多条,代理节点可以通过每条peer link,分别和对应的peer proxy连接。
以上介绍了SRv6 SFC双归接入的系统架构,以下示例性介绍基于上文提供的系统架构传输报文的方法流程。
参见图5,图5是本申请实施例提供的一种报文传输方法的流程图,如图5所示,该方法的交互主体包括第一代理节点、第二代理节点、SF节点以及路由节点。其中,本文中的术语“节点”可以是指设备。例如,路由节点可以是网络设备,例如是路由器或交换机,比如说,该路由节点可以为图3或图4中的路由节点104;第一代理节点可以是网络设备,例如是路由器、交换机或防火墙,比如说,该第一代理节点可以为图3或图4中的第一代理节点101;第一代理节点也可以是计算设备,例如可以是主机、服务器、个人电脑或其他设备;第二代理节点可以是网络设备,例如是路由器、交换机或防火墙,比如说,第二代理节点可以为图3或图4中的第二代理节点102,第二代理节点也可以是计算设备,例如可以是主机、服务器、个人电脑或其他设备;SF节点可以是计算设备,例如可以是主机、服务器、个人电脑或其他设备,比如说,该SF节点可以为图3或图4中的SF节点103,该方法可以包括以下步骤:
步骤501、路由节点向第一代理节点发送第一报文。
第一报文:是指载荷待被SF节点处理的报文。参见图6,图6是本申请实施例提供的一种第一报文的示意图。第一报文可以是SRv6报文。第一报文可以包括IPv6头、SRH以及载荷。该IPv6头可以包括源地址以及目的地址,该SRH可以包括段列表、SL以及一个或多个TLV,该段列表可以包括一个或多个SID。第一报文的载荷可以是IPv4报文、IPv6报文或者以太网(英文:Ethernet)帧。其中,第一报文的目的地址可以用于承载段列表中的活跃SID(英文:active SID),该活跃SID也称活跃段(Active segment),为接收到报文的SR节点要处理的SID。在SR MPLS中活跃SID为标签栈的最外层标签,在IPv6中活跃SID为报文的目的地址。例如,如果段列表包括5个SID,分别是SID0、SID1、SID2、SID3以及SID4,而SL取值为2,则表明段列表中未被执行的SID有2个,分别是SID0以及SID1,段列表中当前被执行的SID是SID2,段列表中已被执行的SID有2个,分别是SID3以及SID4。本实施例中,第一报文的目的地址可以是端点动态代理SID(英文:End.AD.SID,其中AD表示动态代理),即,第一报文的活跃SID可以是End.AD.SID。
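针对上文中段列表与SL取值的例子,下面给出一段极简的Python示意代码(仅为帮助理解的草图,并非任何实现,列表与下标的表示方式为示例性假设):
def classify_segment_list(sids, sl):
    # sids按SID0在前、SID4在后的顺序给出,SL指向段列表中当前的活跃SID
    active = sids[sl]
    not_executed = sids[:sl]       # 下标小于SL的SID尚未被执行
    executed = sids[sl + 1:]       # 下标大于SL的SID已被执行
    return active, not_executed, executed
# 例如 classify_segment_list(["SID0","SID1","SID2","SID3","SID4"], 2)
# 返回 ("SID2", ["SID0","SID1"], ["SID3","SID4"]),与上文描述一致。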
其中,第一报文的IPv6头中的源地址可以用于标识封装了SRH的设备,例如源地址可以是流分类器的地址,或者源地址是头节点器的地址。可选地,第一报文的IPv6头也可以不携带源地址。
应理解,图6仅是第一报文的示意,本申请实施例中,第一报文中字段的数量、类型可以比图6所示的更多,或者比图6所示的更少。此外,图6所示的第一报文中的每个字段的位置以及长度可以根据实际需要进行适应性的改变,本实施例不对报文中每个 字段的位置进行限定,也不对每个字段的长度进行限定。
End.AD.SID:用于指示第一代理节点或第二代理节点执行动态代理操作。动态代理操作也称End.AD操作,动态代理操作具体可以包括剥离报文中的SRH、缓存报文中的SRH以及将剥离了SRH的报文发送出去等操作。End.AD.SID可以预先在第一代理节点或第二代理节点上进行配置,End.AD.SID可以预先存储在第一代理节点或第二代理节点的本地SID表中。End.AD.SID可以由第一代理节点或第二代理节点发布至网络中。流分类器可以接收发布的End.AD.SID,通过将End.AD.SID添加至报文中,以使携带End.AD.SID的报文转发至第一代理节点或第二代理节点后,触发第一代理节点或第二代理节点对报文执行动态代理操作。
路由节点发送第一报文的方式可以是逐流(flow-based)的方式,也可以是逐包(packet-based)的方式,本实施例对发送第一报文的方式不做限定。
步骤502、第一代理节点从路由节点接收第一报文,第一代理节点在第一缓存表项中存储第一报文的SRH。
关于第一代理节点对第一报文执行的操作,在一个实施例中,第一代理节点可以读取第一报文的目的地址,第一代理节点可以根据第一报文的目的地址,查询本地SID表,判断目的地址与本地SID表中的SID是否匹配。当目的地址与本地SID表中的End.AD.SID匹配,即目的地址命中了本地SID表中的End.AD.SID时,第一代理节点可以确定第一报文是SRv6报文,执行End.AD.SID对应的动态代理操作。
第一缓存表项:是指第一代理节点存储第一报文的SRH的缓存表项。其中,第一代理节点可以包括接口板,接口板可以包括转发表项存储器,第一缓存表项可以存储在转发表项存储器中。在一个实施例中,第一代理节点可以检测缓存中是否已经存储了第一报文的SRH,如果缓存中未存储第一报文的SRH,则第一代理节点为第一报文的SRH分配缓存表项,得到第一缓存表项,在该第一缓存表项中存储第一报文的SRH。此外,如果检测到缓存中已存储第一报文的SRH,可以免去存储第一报文的SRH的步骤。
其中,检测缓存中是否已经存储第一报文的SRH的方式可以包括:对每个缓存表项中的SRH与第一报文的SRH进行比较,如果任一缓存表项中的SRH与第一报文的SRH一致,则确认该缓存表项已存储了第一报文的SRH,如果每个缓存表项中的SRH与第一报文的SRH均不一致,则确认该缓存未存储第一报文的SRH,要更新缓存,来存储第一报文的SRH。在一个实施例中,可以对缓存表项中的SRH的全部内容与第一报文的SRH的全部内容进行比较,也可以对缓存表项中的SRH的部分内容与第一报文的SRH的部分内容进行比较,可以对缓存表项中的SRH的哈希值与第一报文的SRH的哈希值进行比较,本实施例对比较方式不做限定。
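作为上述“检测缓存中是否已存储第一报文的SRH”的一种示意,下面给出一段极简的Python代码(仅为草图,哈希算法与数据结构均为示例性假设,也可以按上文所述直接比较SRH的全部或部分内容):
import hashlib

def srh_already_cached(srh_bytes, cached_hashes):
    # 以SRH的哈希值进行比较;cached_hashes为已缓存SRH的哈希集合
    digest = hashlib.sha256(srh_bytes).hexdigest()
    if digest in cached_hashes:
        return True            # 缓存中已存储该SRH,可免去存储步骤
    cached_hashes.add(digest)  # 记录新SRH的哈希,表示需要更新缓存
    return False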
在一个实施例中,第一报文的SRH的存储方式可以包括下述存储方式一至存储方式三中的任一种:
存储方式一、第一代理节点可以获取第一报文对应的流标识,以第一报文对应的流标识为索引,存储第一报文的SRH。其中,第一代理节点可以采用键值存储的方式来存储第一报文的SRH,则流标识为键(key),第一报文的SRH为值(value)。
流标识用于标识报文所属的数据流,同一条数据流中每个报文对应的流标识相同,流标识可以为报文中一个或多个字段的取值。作为示例,第一报文对应的流标识可以是 第一报文的五元组、MAC头信息或应用层信息。五元组可以包括报文的源IP地址、源端口号,目的IP地址、目的端口号和传输层协议号。MAC帧头信息可以包括报文的源MAC地址、目的MAC地址、以太网类型(Ether Type)以及VLAN标签(VLAN tag)。应用层信息可以是报文的载荷中一个或多个字段的取值。
存储方式二、第一代理节点可以获取第一缓存表项的标识,以第一缓存表项的标识为索引,存储第一报文的SRH,其中,第一代理节点可以采用键值存储的方式来存储第一报文的SRH,则第一缓存表项的标识为key,第一报文的SRH为value。此外,如果采用存储方式二,报文传输的流程可以参见下述图36所示实施例以及图37所示实施例。
存储方式三、第一代理节点可以以第一报文对应的流标识和End.AD.SID为索引,存储第一报文的SRH,其中,第一代理节点可以采用键值存储的方式来存储第一报文的SRH,则流标识和End.AD.SID的组合为key,第一报文的SRH为value。此外,如果采用存储方式三,报文传输的流程可以参见下述图38所示实施例以及图39所示实施例。
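为直观展示上述三种存储方式在索引构造上的差异,下面给出一段极简的Python示意代码(仅为帮助理解的草图,键的具体形式为示例性假设):
def make_cache_key(mode, flow_id=None, cache_entry_id=None, end_ad_sid=None):
    # 存储方式一:以流标识为索引;存储方式二:以缓存表项的标识为索引;
    # 存储方式三:以流标识和End.AD.SID的组合为索引
    if mode == 1:
        return ("flow", flow_id)
    if mode == 2:
        return ("entry", cache_entry_id)
    if mode == 3:
        return ("flow+end.ad.sid", flow_id, end_ad_sid)
    raise ValueError("unknown mode")

srh_cache = {}
five_tuple = ("10.0.0.1", "10.0.0.2", 1024, 80, 6)
srh_cache[make_cache_key(3, flow_id=five_tuple, end_ad_sid="A::2")] = {"sids": ["SID0"], "sl": 0}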
通过执行步骤502,第一报文的SRH能够存储在第一代理节点中,那么如果后续过程中,SF节点或者第二代理节点向第一代理节点发送了不包含SRH的报文,第一代理节点可以利用预先的存储,找到第一报文的SRH,从而恢复SRH。
步骤503、第一代理节点根据端点动态代理SID、第一报文和对应于第二代理节点的第一绕行SID,生成第二报文。
第二报文:用于将第一报文的SRH传输至对端代理节点的报文。本实施例中,第二报文可以为控制报文。第二报文可以包括第一绕行SID、控制信息和第一报文的SRH。第二报文的源地址可以是对应于第一代理节点的第二绕行SID,或者其他能够标识第一代理节点的信息。第二报文可以是SRv6报文,第二报文可以包括IPv6头、SRH以及载荷。该IPv6头可以包括第二报文的源地址以及第二报文的目的地址。第二报文的SRH可以包括第一报文的SRH中的每个SID以及第一绕行SID。第二报文的载荷可以为第一报文的载荷。需要说明的是,第二报文包括第一报文的载荷是可选方式,在另一个实施例中,第二报文可以不包括第一报文的载荷。其中,第二报文的IPv6头的源地址可以是对应于第一代理节点的第二绕行SID,第二绕行SID用于标识第二报文的源节点是第一代理节点。
在一个可选的实施例中,可以预先在第一代理节点上,配置End.AD.SID对应的处理策略,该处理策略可以包括执行动态代理操作以及生成和发送第二报文的操作,因此第一代理节点接收到第一报文后,根据第一报文中的End.AD.SID,不仅会执行步骤502、步骤508以及步骤509来提供动态代理服务,还会执行步骤503至步骤504以控制第二代理节点存储SRH。通过这种方式,充分利用了SRv6通过SID来进行编程的能力,扩展了End.AD.SID对应的操作。
关于配置End.AD.SID的过程,在一个实施例中,可以预先对第一代理节点进行配置操作,第一代理节点可以接收配置指令,从配置指令获取End.AD.SID;第一代理节点可以对End.AD.SID进行存储,例如将End.AD.SID存储在第一代理节点的本地SID表中。同理地,可以预先对第二代理节点进行配置操作,第二代理节点可以接收配置指令,从配置指令获取End.AD.SID;第二代理节点可以对End.AD.SID进行存储,例如将End.AD.SID存储在第二代理节点的本地SID表中。其中,配置操作可以由用户在第一 代理节点或第二代理节点上执行,也可以由SFC的控制面,例如SDN控制器或NFVM执行,本实施例对此不做限定。
在一个实施例中,第一代理节点与第二代理节点可以是任播端点动态代理SID(英文:anycast End.AD.SID)的关系。任播(Anycast)又称选播、泛播或任意播,为IPv6的一种通信方式,是指通过同一个地址来标识一组提供相同或相应服务的节点,使得以该地址为目的地址的报文能够被路由至这一组节点中的任一个节点。本实施例中,可以将同一个End.AD.SID分配给第一代理节点和第二代理节点,使得第一代理节点上配置的End.AD.SID和第二代理节点上配置的End.AD.SID相同。那么,如果路由节点接收到目的地址为End.AD.SID的报文,路由节点既可以将报文发送至第一代理节点,也可以将报文发送至第二代理节点,因此,报文既可以通过第一代理节点来被执行动态代理操作,也可以通过第二代理节点来被执行动态代理操作,从而便于实现双归接入的功能。此外,可以为任播End.AD.SID配置对应的任播索引(英文:anycast-index),任播索引用于标识同一组具有任播关系的节点的End.AD.SID,对于同一个End.AD.SID而言,第一代理节点上为End.AD.SID配置的任播索引和第二代理节点上为End.AD.SID配置的任播索引可以相同。
示例性地,第一代理节点可以通过接收以下配置指令,将End.AD.SID配置为任播End.AD.SID,并配置End.AD.SID对应的任播索引。
[proxy1]segment-routing ipv6
[proxy1-segment-routing-ipv6]locator t1 ipv6-prefix A::1 64 static 32
[proxy1-segment-routing-ipv6-locator]opcode::2 end-ad anycast-index 10
[proxy1-segment-routing-ipv6-locator-endad]encapsulation ip ipv4 outbound Ethernet1/0/0 next-hop 1.1.1.1inbound Ethernet1/0/1
上述配置指令中proxy1表示第一代理节点,第一行配置指令的含义是要配置IPv6的SR,第二行配置指令的含义是Locator(定位信息)是t1,IPv6的前缀是A::1 64,静态路由是32。第三行配置指令的含义是配置IPv6的SR的Locator,操作码是::2,End.AD.SID对应的任播索引是10。第四行配置指令的含义是配置IPv6的SR的End.AD.SID,封装出ipv4报文,以1槽位0子板0号以太网口为出接口,发送ipv4报文,下一跳的地址是1.1.1.1,入接口是1槽位0子板1号以太网口。上述配置指令体现的场景是,第一报文的载荷是IPv4报文,代理节点生成第三报文的方式是,获取第一报文承载的IPv4报文,作为第三报文。
绕行SID:是本实施例提供的一种新的SID。绕行SID用于标识报文的目的节点是对端代理节点,本端代理节点可以通过在报文中携带绕行SID,来将报文发送至对端代理节点,即,将local proxy的报文发送至peer proxy。具体地,本端代理节点通过在报文携带对端代理节点的绕行SID,能够将报文传输至对端代理节点,从而实现将报文从本端绕行至对端代理节点的功能。绕行SID可以称为End.ADB SID,End.ADB SID的含义是bypass SID for End.AD.SID(为动态代理操作扩展的一种具有绕行功能的SID)。End.ADB SID的名字可以视为End.AD.SID和B的组合,其中“ADB”中的“B”代表bypass,意为绕行,也可以翻译为旁路。绕行SID可以是IPv6地址的形式,绕行SID可以包括128个比特。绕行SID可以包括定位信息(Locator)以及功能信息(Function), 绕行SID的格式为Locator:Function。其中,Locator占据绕行SID的高比特位,Function占据绕行SID的低比特位。可选地,绕行SID还可以包括参数信息(Arguments),则绕行SID的格式为Locator:Function:Arguments。
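为便于理解绕行SID中Locator与Function(以及可选的Arguments)的位置关系,下面给出一段极简的Python示意代码(仅为帮助理解的草图,各字段的位宽划分由具体部署决定,示例中的数值均为假设):
def build_bypass_sid(locator, locator_bits, function, function_bits, arguments=0, arguments_bits=0):
    # Locator占据128比特中的高位,Function紧随其后,Arguments(如存在)再靠后,其余低位填0
    assert locator_bits + function_bits + arguments_bits <= 128
    sid = locator << (128 - locator_bits)
    sid |= function << (128 - locator_bits - function_bits)
    if arguments_bits:
        sid |= arguments << (128 - locator_bits - function_bits - arguments_bits)
    return sid
# 例如 build_bypass_sid(locator=0x000B000000000000, locator_bits=64, function=0x1, function_bits=64)
# 得到 000B:0000:0000:0000:0000:0000:0000:0001,即形如 B::1 的128比特IPv6地址形式的SID(仅为示意)。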
在一个实施例中,绕行SID可以是节点SID,每个绕行SID可以唯一对应于一个代理节点,如果SFC的系统架构包括多个代理节点,则绕行SID可以存在多个,每个绕行SID对应于多个代理节点中的一个代理节点。如此,对于一个绕行SID而言,携带该绕行SID的报文能够唯一路由至一个代理节点,从而发送至该绕行SID对应的代理节点。而本实施例是以双归接入的系统架构为例进行描述,系统包括第一代理节点以及第二代理节点,为了区分描述,将对应于第二代理节点的绕行SID称为第一绕行SID。第一绕行SID用于将报文发送至第二代理节点。第一绕行SID可以由第一代理节点发布至网络中,则SFC中的其他节点可以接收到发布的第一绕行SID。此外,同理地,将对应于第一代理节点的绕行SID称为第二绕行SID。第二绕行SID用于将报文发送至第一代理节点。第二绕行SID可以由第二代理节点发布至网络中,则SFC中的其他节点可以接收到发布的第二绕行SID。
关于第一代理节点获取第一绕行SID的方式,可以预先对第一代理节点进行配置操作,第一代理节点可以接收配置指令,从配置指令获取该第一绕行SID;第一代理节点可以对第一绕行SID进行存储,例如将第一绕行SID存储在第一代理节点的本地SID表中;第一代理节点接收到第一报文时,可以读取预先存储的第一绕行SID,从而得到第一绕行SID。其中,配置操作可以由用户在第一代理节点上执行,也可以由网络的控制器,例如SDN控制器或NFVM执行,本实施例对此不做限定。
示例性地,第一代理节点可以接收以下配置指令:
[proxy1]segment-routing ipv6
[proxy1-segment-routing-ipv6]locator t2 ipv6-prefix B::64 static 32
[proxy1-segment-routing-ipv6-locator]opcode::1 end-adb peer-sid C::2
上述配置指令中proxy1表示第一代理节点,第一行配置指令的含义是要配置IPv6的SR,第二行配置指令的含义是Locator(定位信息)是t2,IPv6的前缀是B::64,静态路由是32。第三行配置指令的含义是操作码是::1,对应于第二代理节点的第一绕行SID是C::2。
同理地,可以预先对第二代理节点进行配置操作,第二代理节点可以接收配置指令,从配置指令获取该第二绕行SID,例如将第二绕行SID存储在第二代理节点的本地SID表中。示例性地,第二代理节点可以接收以下配置指令:
[proxy2]segment-routing ipv6
[proxy2-segment-routing-ipv6]locator t2 ipv6-prefix C::64 static 10
[proxy2-segment-routing-ipv6-locator]opcode::2 end-adb peer-sid B::1
上述配置指令中proxy2表示第二代理节点,第一行配置指令的含义是要配置IPv6的SR,第二行配置指令的含义是Locator(定位信息)是t2,IPv6的前缀是C::64,静态路由是10。第三行配置指令的含义是操作码是::2,对应于第一代理节点的第二绕行SID是B::1。
需要说明的一点是,第一代理节点和第二代理节点上配置的绕行SID可以是相互引 用的关系,即,第一代理节点上配置的第一绕行SID引用第二代理节点,以指明该第一绕行SID是用来将报文传输至第二代理节点的,同理地,第二代理节点上配置的第二绕行SID引用第一代理节点,以指明该第二绕行SID是用来将报文传输至第一代理节点的。
应理解,绕行SID和代理节点一一对应仅是示例性方式,在另一个实施例中,绕行SID和代理节点也可以具有其他对应关系,本实施例对此不做限定。
控制信息:用于指示第二代理节点存储第一报文的SRH。在一个实施例中,控制信息可以包括第一标志以及第二标志。第一标志用于标识报文是否为复制出的报文,例如,第一标志可以记为复制标志(英文:Copy flag,简称:C-flag)。第二报文可以包括复制标志字段,第一标志可以在该复制标志字段承载。例如第一标志的长度可以是1比特。可选地,在图5实施例中,第二报文可以是对第一报文的副本封装得到的报文,因此,第一标志可以设置为1,第一标志可以标识第二报文为复制出的报文。第二标志用于标识报文是否为确认报文,例如,第二标志可以记为确认标志(英文:copy acknowledge flag,简称:A-flag)。第二报文可以包括确认标志字段,第二标志可以在该确认标志字段承载。例如第二标志的长度可以是1比特。可选地,在图5实施例中,第二标志可以设置为0,第二标志可以标识第二报文不为确认报文。
控制信息可以携带在第二报文中的任意位置。作为示例,控制信息的位置可以包括下述(1)至(4)中的任一种。
(1)控制信息携带在第二报文的扩展头中。
扩展头可以包括两种类型,一种是逐跳解析的扩展头(英文:hop-by-hop option header),另一种是目的节点解析的扩展头(英文:destination option header)。按照协议的规定,如果报文携带了路由头(英文:routing header),目的节点解析的扩展头可以包括两种位置,一种位置是,目的节点解析的扩展头在路由头之前,这种情况下,扩展头由路由头中指明的节点解析;另一种情况是,目的节点解析的扩展头在路由头之后,这种情况下,扩展头由报文的最终目的节点解析。
在一个实施例中,第二报文中的扩展头可以是目的节点解析的扩展头,该目的节点解析的扩展头可以位于SRH之前,控制信息可以携带在该目的节点解析的扩展头中。通过这种方式,由于第二报文中包括了第一报文的SRH,而SRH又是路由头的一种类型,可以让SRH中指明的第二代理节点来解析扩展头,从而让第二代理节点通过读取到扩展头中的控制信息,从而根据控制信息来存储SRH。
参见图7,图7是本申请实施例提供的一种扩展头的示意图,扩展头可以包括以下字段中的一种或多种:
1、下一个报文头类型(英文:next header):报文在扩展头之后还可以包括一个或多个扩展头或一个或多个高层头,next header用于指明报文中扩展头之后的扩展头或高层头的类型。next header的长度可以为1字节。
2、扩展头的长度(英文:header Extended Length,简称:Hdr Ext Len):用于指明扩展头的长度。Hdr Ext Len指示的长度不包括扩展头的前8字节,也即是,Hdr Ext Len指示的长度是扩展头从第9个字节至最后一个字节的长度。Hdr Ext Len可以采用8字节为单位。Hdr Ext Len的取值可以设置为2。
3、选项类型(英文:option Type,简称:OptType)用于指明扩展头中选项的类型。选项类型的长度可以为1字节。其中,选项类型可以是标准化的实验类型,以便不同厂商之间互通,选项类型也可以是实验性的选项类型。例如,实验性选项类型的取值可以包括0x1E、0x3E、0x5E、0x7E、0x9E、0xBE、0xDE或0xFE,按照协议的规定,选项类型中最高2个数位的2个比特定义了接收到未识别的选项时的处理动作,选项类型中的第3高数位的比特表示选项的数据在转发过程中是否允许被修改。本实施例中,扩展头的选项类型的取值可以设置为0x1E,该选项类型最高2个数位的2个比特是00,表示如果接收到该选项且不识别,则忽略该选项并继续处理报文的其余部分。选项类型中的第3高比特表示选项的数据在转发过程中允许被修改。
4、选项的长度(英文:option length,简称:OptLen):用于指明扩展头中选项的长度,例如在图7中,OptLen指示的长度可以是扩展头从OptLen之后的第1个字节至选项的最后一个字节的长度。OptLen的取值可以设置为20。
5、子选项类型(英文:suboption type,简称:SubOptType):选项可以是嵌套的结构,一个选项中可以包括一个或多个子选项。子选项类型的取值可以设置为1,该取值表示子选项用于两个支持SRv6 SFC动态代理操作的节点之间要进行信息同步,即SRH的同步。
6、子选项长度(英文:suboption length,简称:SubOptLen):用于指示子选项的数据部分的长度,SubOptLen的长度为1字节,SubOptLen可以以字节为单位。子选项长度的取值可以设置为20。
7、保留字段(reserved1),长度为1字节。发送端可以将保留字段的取值设为0,接收端可以忽略该保留字段。
8、目标板索引(英文:Target Board,简称:TB):生成缓存表项的单板的硬件索引,长度为1字节。可以在N-flag设置为1时设置TB,如果N-flag缺省,可以将TB设置为0。
9、标志字段(flags):当前未定义的保留的标志字段,标志字段的长度可以为28比特,发送端可以将保留字段的取值设为0,接收端可以忽略该保留字段。
10、第一标志:用于标识报文是否为复制出的报文,可以记为copy flag、C或C-flag。第一标志的长度可以为1比特。如果将第一标志的取值设置为1,可以表示报文是复制出的报文,通常是控制报文,例如报文是正常场景下转发的第二报文,或者是第六报文。如果将第一标志的取值设置为0,可以表示报文不是复制出的报文,通常是数据报文,例如报文是故障场景下转发的第二报文。其中,正常场景下传输第二报文的流程可以参见图5实施例,故障场景下传输第二报文的流程可以参见下述图41实施例。
11、第二标志:用于标识报文是否为确认报文,可以记为Acknowledge-flag、A或A-flag。第二标志的长度可以为1比特。如果将第二标志的取值设置为1,可以表示报文是确认报文,例如报文是第六报文。如果将第二标志的取值设置为0,可以表示报文不是确认报文,例如报文是正常场景下转发的第二报文。
12、第三标志:用于标识End.AD.SID关联的SF节点是否是NAT类型的SF节点,即SF节点在处理报文的过程中,是否会对报文对应的流标识进行修改,可以记为NAT标志(英文:network address translation flag)、N、NAT flag或N-flag。第三标志的长度可以为1比特。如果End.AD.SID关联的SF节点是NAT类型的SF节点,可以将第三 标志的取值设置为1,如果第三标志缺省,可以将第三标志设置为0。其中,如果将第三标志的取值设置为1,可以设置TB和Cache-Index字段。
13、缓存索引(cache index):用于承载缓存表项的标识。缓存索引的长度为4字节。可以在第三标志设置为1时设置缓存索引,如果第三标志缺省,可以将缓存索引设置为0。
14、任播索引(anycast index)与任播End.AD.SID唯一对应的索引,长度为4字节。接收端可以对报文中的任播索引进行校验。
15、保留字段(Reserved2):长度为4字节。发送端将保留字段的取值设为0,接收端忽略该保留字段。
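为便于对照理解上述扩展头中各字段的含义,下面用一段极简的Python示意代码罗列这些字段(仅为帮助理解的草图,并非线上报文的真实编码格式,字段的排列、位宽与取值以上文及图7为准):
from dataclasses import dataclass

@dataclass
class ProxySyncOption:
    next_header: int          # 下一个报文头类型
    hdr_ext_len: int = 2      # 扩展头的长度(不含前8字节,以8字节为单位)
    opt_type: int = 0x1E      # 选项类型(实验性取值之一)
    opt_len: int = 20         # 选项的长度
    sub_opt_type: int = 1     # 子选项类型:代理节点之间的SRH同步
    sub_opt_len: int = 20     # 子选项长度
    target_board: int = 0     # TB:生成缓存表项的单板索引(N-flag为1时设置)
    c_flag: int = 0           # 第一标志:是否为复制出的报文
    a_flag: int = 0           # 第二标志:是否为确认报文
    n_flag: int = 0           # 第三标志:SF节点是否为NAT类型
    cache_index: int = 0      # 缓存索引:缓存表项的标识(N-flag为1时设置)
    anycast_index: int = 0    # 任播索引:与任播End.AD.SID唯一对应

# 例如,正常场景下用于同步SRH的第二报文可携带 c_flag=1、a_flag=0;
# 作为确认报文的第六报文可携带 c_flag=1、a_flag=1。43为路由头的协议号,仅为示例取值。
second_packet_option = ProxySyncOption(next_header=43, c_flag=1, a_flag=0)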
(2)控制信息携带在第一绕行SID中。
参见图8,图8为本申请实施例提供的一种使用绕行SID来携带控制信息的示意图。绕行SID可以包括定位信息(Locator)、功能信息(Function)或者参数信息(Arguments)中的一项或多项,控制信息可以携带在第一绕行SID的功能信息,也可以携带在第一绕行SID的参数信息。其中,该参数信息可以包含在功能信息中。通过这种方式,通过绕行SID本身即可实现控制对端代理节点存储SRH的功能,从而减少了控制信息在报文中额外占用的字节,因此节省了报文的数据量,也就减少了传输报文的开销。
(3)控制信息携带在第二报文的IP头中。
例如,控制信息可以携带在第二报文的IPv6头中。其中,控制信息可以携带在IPv6头中第一绕行SID之外的其他部分中。此外,如果第二报文的载荷是IPv6报文,即第二报文的形式类似于IP in IP报文的形式,包含外层的IPv6头和内层的IPv6头,控制信息可以携带在外层的IPv6头中。
(4)控制信息携带在第二报文的SRH中的TLV中。
第二报文的SRH可以包括一个或多个TLV,控制信息可以携带在TLV中。通过这种方式,通过SRH本身即可实现控制第二代理节点存储SRH的功能,从而减少了控制信息在报文中额外占用的字节,因此节省了报文的数据量,也就减少了传输报文的开销。
第二报文的格式可以有多种,第二报文的格式可以因网络拓扑结构和/或第一代理节点对路径规划的需求而产生差异,还可以因携带控制信息的方式的不同而产生差异。以下结合图9至图18,通过实现方式一和实现方式二进行举例说明。
实现方式一、第二报文的目的地址可以是第一绕行SID,例如参见图10、图11、图12和图13,第二报文的IPv6头的目的地址(Destination Address,DA)字段可以携带第一绕行SID。
这种方式可以适于多种场景,以下通过场景一至场景三举例说明:
场景一、第一代理节点与第二代理节点位于一跳IP链路(on-link)。on-link的具体概念还请参见上文,在此不做赘述。参见图9,图9是第一代理节点和第二代理节点位于一跳IP链路的示意。
在SRv6网络中,SRv6报文的IPv6头中的目的地址通常用来标识下一个转发报文的SR节点。因此,如果第一代理节点与第二代理节点位于一跳IP链路,第一代理节点通过在目的地址中携带第一绕行SID,能够指明第二报文的下一个SR节点即为第二代理节点。
场景二、第一代理节点与第二代理节点位于多跳IP链路(off-link),而第一代理节点与第二代理节点之间的中间节点未使能SR功能。
对于未使能SR功能的中间节点而言,当中间节点接收到第二报文时,可以根据第二报文的目的地址(即第一绕行SID),查询路由FIB,根据目的地址在路由FIB中命中的表项来转发报文。
场景三、第一代理节点与第二代理节点位于多跳IP链路,而没有指定转发第二报文的中间节点的业务需求。
场景三可以包括多种情况,可以包含中间节点使能了SR功能的情况,也可以包含中间节点未使能SR功能的情况。例如,如果该多跳IP链路组成了多条转发路径,而第一代理节点没有指定经过哪条转发路径转发第二报文的需求,那么在该中间节点使能SR功能和未使能SR功能的情况下,均可以在目的地址中携带第一绕行SID。又如,如果该多跳IP链路组成的转发路径唯一,比如说第一代理节点与第二代理节点之间仅存在一个中间节点,那么在该中间节点使能SR功能和未使能SR功能的情况下,第一代理节点无需指定中间节点,仍可使得第二报文经过该中间节点转发至第二代理节点。
其中,对于使能SR功能的中间节点而言,当中间节点接收到第二报文时,可以根据第二报文的目的地址,即第一绕行SID来查询本地SID表,即将第一绕行SID与本地SID表中的每个SID进行匹配。那么由于第一绕行SID不是中间节点发布的SID,没有存储在中间节点的本地SID表中,中间节点会确定第一绕行SID未命中本地SID表,则中间节点会按照传统的IP路由方式来转发第二报文,具体地,中间节点会根据第一绕行SID,查询路由FIB,根据目的地址在路由FIB中命中的表项来转发第二报文。
根据携带控制信息的方式的不同,通过实现方式一来实现的第二报文的格式可以具体分为多种。以下结合图10至图13,通过实现方式1.1至实现方式1.4进行举例说明:
实现方式1.1、参见图10,如果使用扩展头来携带控制信息,第二报文的格式可以如图10所示,第二报文包括IPv6头、扩展头、SRH和载荷,其中,扩展头可以位于IPv6头和SRH之间。
实现方式1.2、参见图11,如果使用第一绕行SID(即图中的End.ADB.SID)来携带控制信息,第二报文的格式可以如图11所示,第二报文包括IPv6头、SRH和载荷,通过对比图10和图11可以看出,实现方式1.2减少了第二报文的长度,能够实现对第二报文进行压缩的功能。
实现方式1.3、参见图12,如果使用SRH中的TLV来携带控制信息,第二报文的格式可以如图12所示,第二报文包括IPv6头、SRH和载荷。
实现方式1.4、参见图13,如果使用IPv6头来携带控制信息,第二报文的格式可以如图13所示,第二报文包括IPv6头、SRH和载荷。
实现方式二、第二报文的目的地址可以是对应于下一个SR节点的SID,例如参见图15、图16、图17和图18,第二报文的IPv6头的目的地址(DA)字段可以携带对应于下一个SR节点的SID。其中,该对应于下一个SR节点的SID可以由该下一个SR节点发布至SR网络中,该对应于下一个SR节点的SID可以存储在下一个SR节点的本地SID表中。
这种方式可以适于第一代理节点与第二代理节点位于多跳IP链路,且具有指定转 发第二报文的中间节点的业务需求。第一代理节点可以通过在目的地址携带对应于下一个SR节点的SID,来指明第二报文要从本端转发至该SR节点,以使该第二报文经过该SR节点的转发,到达第二代理节点。
例如,参见图14,图14是第一代理节点和第二代理节点位于多跳IP链路的示意,第一代理节点和第二代理节点的中间节点包括中间节点1、中间节点2和中间节点3,中间节点1、中间节点2和中间节点3均使能了SR功能。当第一代理节点要指定第二报文通过中间节点3转发至第二代理节点时,可以在第二报文的目的地址中,携带对应于中间节点3的SID。那么,当中间节点3接收到第二报文时,可以根据第二报文的目的地址,来查询本地SID表,确定第二报文的目的地址命中了本地SID表中的SID,则执行SID对应的操作,例如将第二报文转发给第二代理节点。
应理解,第一代理节点和第二代理节点之间的中间节点的数量可以根据组网的需求设置,中间节点的数量可以更多或更少,例如可以仅为1个,或者可以为更多数量,本实施例对位于多跳IP链路场景下两个代理节点之间的中间节点的数量不做限定。
根据携带控制信息的方式的不同,通过实现方式二来实现的第二报文的格式可以具体分为多种。以下结合图15、图16、图17和图18,通过实现方式2.1至实现方式2.4进行举例说明:
实现方式2.1、参见图15,如果使用扩展头来携带控制信息,第二报文的格式可以如图15所示,第二报文包括IPv6头、扩展头、SRH和载荷。
实现方式2.2、参见图16,如果使用第一绕行SID(即图16中的End.ADB.SID)来携带控制信息,第二报文的格式可以如图16所示,第二报文包括IPv6头、SRH和载荷。
实现方式2.3、参见图17,如果使用SRH中的TLV来携带控制信息,第二报文的格式可以如图17所示,第二报文包括IPv6头、SRH和载荷。
实现方式2.4、参见图18,如果使用IPv6头来携带控制信息,第二报文的格式可以如图18所示,第二报文包括IPv6头、SRH和载荷。
在一个实施例中,第二报文的生成过程可以包括下述步骤一至步骤三。
步骤一、第一代理节点向第一报文的SRH的段列表插入第一绕行SID。
第一代理节点可以通过执行SR中的压入(push)操作,来将第一绕行SID插入段列表,使得插入第一绕行SID之后,段列表在包含原有的第一报文的每个SID的基础上,新增第一绕行SID。关于第一绕行SID在段列表中的插入位置,第一代理节点可以在SRH中的活跃SID之后插入第一绕行SID,也即是,在End.AD.SID之后插入第一绕行SID。参见图19,图19是本申请实施例提供的一种插入第一绕行SID的示意图,图19的左侧示出了第一报文的段列表,图19的右侧示出了第二报文的段列表,在图19中,End.AD.SID为SID2,第一绕行SID为SID5,则第一代理节点在SID2之后插入了SID5。
需要说明的一点是,上述描述是以从SID0至SID5的前后顺序进行描述的,SID0的位置称为前,SID5的位置称为后。而SRv6节点在解析SRH时,通常按照逆序的顺序读取SRH中的每个SID,例如在图19中,第一报文的SRH中第一个被读取并执行的SID是SID4,第二个被读取并执行的SID是SID3,之后是SID2,以此类推,最后一个被读取并执行的SID是SID0。
步骤二、第一代理节点对该第一报文的SRH的SL进行更新。
步骤三、第一代理节点对第一报文的目的地址进行更新,得到第二报文。
通过执行更新SL的步骤,第一代理节点生成的第二报文的SL会大于接收到的第一报文的SL。在SRv6网络中,SRv6报文的SRH中的SL通常用来标识待处理的SID的数量,那么通过修改SL的取值,能够指明第一代理节点发出的第二报文相对于第一代理节点接收到的第一报文而言,SRH包含了更多的待处理的SID,以便第二代理节点和/或第一代理节点与第二代理节点之间的中间节点在SL的指示下,对这些更多的SID进行处理。
通过执行更新目的地址的操作,能够将第二报文的下一个待处理的SID从End.AD.SID更新为其他SID,以便第二代理节点和/或中间节点使用目的地址,查询本地SID表时,会执行该其他SID对应的操作。
需要说明的是,上述步骤一至步骤三与SRv6的正常转发操作有所区别。在SRv6的通常情况下,SRH中的每个SID是由头节点插入至SRv6报文的,对于头节点之外的每个SRv6节点而言,每当SRv6节点接收到SRv6报文时,仅是执行SRH中活跃SID对应的操作,而不会向SRH插入SID,也不会从SRH删除SID,也就是说每个中间的SRv6节点均不会改变SRH中的SID。此外,SRv6节点会对SRH中的SL减一,即执行SL--,来表示本端执行了SRH中的一个SID,因此SRH中剩余的未被执行的SID减少了一个。而本实施例中,由于SRH中的第一绕行SID是由第一代理节点插入至第一报文的,因此第一代理节点会对SRH中的SL加一,即执行SL++,来表示SRH中剩余的未被执行的SID增加了一个,该新增的未被执行的SID即为第一绕行SID,第一绕行SID要传输至第二代理节点,以使第二代理节点来执行第一绕行SID对应的操作。SL加一后,SRH中的活跃SID会从End.AD.SID更新为第一绕行SID。
应理解,本实施例仅是以先描述步骤一,再描述步骤二,最后描述步骤三为例进行描述,本实施例对步骤一、步骤二以及步骤三的时间先后顺序并不做限定。在一个实施例中,也可以采用其他顺序来执行这三个步骤,例如先执行步骤三,再执行步骤一,最后执行步骤二,又如先执行步骤二,再执行步骤一,最后执行步骤三。当然,这三个步骤也可以并行执行,即,可以同时执行步骤一、步骤二以及步骤三。
在一个实施例中,如果控制信息携带在扩展头中,则第一代理节点可以不仅执行步骤一、步骤二以及步骤三,还执行以下步骤四。
步骤四、第一代理节点生成扩展头,向第一报文封装扩展头。
应理解,本实施例对步骤四和步骤一、步骤二以及步骤三的时间先后顺序也不做限定,例如,步骤四也可以在步骤一、步骤二以及步骤三之前执行。
在一个实施例中,第一代理节点可以对第一报文进行复制,从而得到两份第一报文,一份是第一报文本身,另一份是第一报文的副本。第一代理节点可以利用其中一份第一报文来生成第二报文,利用另一份第一报文来生成第三报文。可选地,第一代理节点可以利用第一报文的副本来生成第二报文,利用第一报文本身来生成第三报文,第二报文中可以通过携带第一标志(C-flag),来指明第二报文是为第一报文复制得出的报文。
上述描述的第二报文的生成方式可以根据具体场景的不同,包括多种实现方式。以下通过实现方式一至实现方式二进行举例说明。
实现方式一、第一代理节点将该第一报文的SL加一。上述步骤二可以是:将该第 一报文的目的地址更新为该第一绕行SID。
参见图19,图19的左侧示出了第一报文的SL,第一报文的SL取值为2,则表明段列表中未被执行的SID有2个,分别是SID0以及SID1,段列表中当前的活跃SID是SID2,段列表中已被执行的SID有2个,分别是SID3以及SID4。第一代理节点可以将SL加一,使得SL的取值从2更新为3,则如图19的右侧所示,第二报文的SL取值为3,则表明段列表中未被执行的SID有3个,分别是SID0、SID1以及SID2,段列表中当前的活跃SID是SID5,段列表中已被执行的SID有2个,分别是SID3以及SID4。此外,第一代理节点可以修改第一报文中IPv6头的目的地址,使得第一报文的目的地址从End.AD.SID更新为第一绕行SID。
实现方式一可以应用在多种场景,例如第一代理节点与第二代理节点位于一跳IP链路,又如第一代理节点与第二代理节点位于多跳IP链路(off-link),而第一代理节点与第二代理节点之间的中间节点未使能SR功能,再如第一代理节点与第二代理节点位于多跳IP链路,而没有指定转发第二报文的中间节点的业务需求,本实施例对第一代理节点在何种场景下执行实现方式一不做限定。
通过实现方式一得到的第二报文可以如图10、图11、图12或图13所示。
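下面用一段极简的Python示意代码概括实现方式一中第二报文的构造(仅为帮助理解的草图,并非转发芯片微码或任何实际实现;字段名均为示例性假设,扩展头等封装细节从简):
def build_second_packet_mode1(first_packet, first_bypass_sid, second_bypass_sid=None):
    # 复制第一报文,在End.AD.SID(当前活跃SID)之后插入第一绕行SID,SL加一,
    # 目的地址更新为第一绕行SID;控制信息此处以扩展头中的标志位示意。
    srh = {"sids": list(first_packet["srh"]["sids"]), "sl": first_packet["srh"]["sl"]}
    sl = srh["sl"]
    srh["sids"].insert(sl + 1, first_bypass_sid)   # 插入后,下标sl+1处即为第一绕行SID
    srh["sl"] = sl + 1                             # SL加一,活跃SID变为第一绕行SID
    return {
        "src": second_bypass_sid,                  # 源地址可为对应于第一代理节点的第二绕行SID
        "dst": first_bypass_sid,                   # 目的地址更新为第一绕行SID
        "ext_header": {"c_flag": 1, "a_flag": 0},  # 控制信息:复制出的报文、非确认报文
        "srh": srh,
        "payload": first_packet.get("payload"),
    }
# 例如,第一报文段列表为[SID0,SID1,SID2,SID3,SID4]、SL为2(活跃SID为End.AD.SID即SID2)时,
# 生成的第二报文段列表为[SID0,SID1,SID2,SID5,SID3,SID4]、SL为3,活跃SID为第一绕行SID(图19中的SID5)。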
实现方式二、第一代理节点还可以向该第一报文的SRH的段列表插入一个或多个目标SID,上述步骤一可以是:第一代理节点将该第一报文的SL加N,该N为大于1的正整数。上述步骤二可以是:第一代理节点将该第一报文的目的地址更新为该一个或多个目标SID中对应于下一个SR节点的SID。
其中,该一个或多个目标SID用于指示目标转发路径,目标SID可以是节点SID,也可以是链路SID(也称邻接SID),该目标SID可以标识一个中间节点、一条链路、一个或多个指令、一种服务或一个VPN。
目标转发路径为第一代理节点至第二代理节点之间的路径。目标转发路径可以是显式路径(strict explicit),即指定了每段链路或每个中间节点的路径;目标转发路径也可以是松散路径(loose explicit),即指定了部分链路或部分中间节点的路径。目标转发路径可以是一条链路,也可以是多条链路的组合。目标转发路径可以是数据链路层的路径、物理层的路径或者通过隧道技术构建的路径。目标转发路径可以是传输光信号的路径,也可以是传输电信号的路径。
示例性地,参见图14和图20,如果第一代理节点与第二代理节点之间存在中间节点1、中间节点2和中间节点3,则第一代理节点插入的一个或多个目标SID可以是3个SID,分别是对应于中间节点1的SID6、对应于中间节点2的SID7和对应于中间节点3的SID20。例如,图20的左侧示出了第一报文的SL,第一报文的SL取值为2,则表明段列表中未被执行的SID有2个,分别是SID0以及SID1,段列表中当前的活跃SID是SID2,段列表中已被执行的SID有2个,分别是SID3以及SID4。第一代理节点可以将SL的取值从2更新为6,则如图20的右侧所示,第二报文的SL取值为6,则表明段列表中未被执行的SID有6个,分别是SID0、SID1、SID2、SID5、SID8、SID7,段列表中当前的活跃SID是SID6,段列表中已被执行的SID有2个,分别是SID3以及SID4。
通过这种方式,可以指明目标转发路径为第一代理节点—中间节点1—中间节点2 —中间节点3。需要说明的是,将分别对应于每个节点的SID作为目标SID仅是示例,如果目标转发路径是松散路径,第一代理节点插入的一个或多个目标SID可以是对应于部分中间节点的SID。
通过实现方式二得到的第二报文可以如图15、图16、图17或图18所示。
实现方式二可以应用在多种场景,例如第一代理节点与第二代理节点位于多跳IP链路,且具有指定转发第二报文的中间节点的业务需求,本实施例对第一代理节点在何种场景下执行实现方式二不做限定。
在一些可能的实施例中,第一代理节点可以通过插入目标SID来进行流量规划。例如,参见图21,图21是本申请实施例提供的一种通过目标SID进行流量规划的示意图。在图21中,第一代理节点与第二代理节点之间存在节点N1、节点N2、节点N3至节点N9这10个转发节点,第一代理节点与第二代理节点之间包括3条路径,分别是(N1,N2,N3)、(N1,N4,N5,N3)、(N1,N6,N7,N8,N9,N3),可以通过目标SID,控制第二报文从这3条路径中的一条指定的路径传输至第二代理节点。示例性地,如果路径(N1,N2,N3)的带宽最大,要将第二报文通过路径(N1,N2,N3)传输,可以将节点N1发布的对应于链路(N1,N2)的SID作为目标SID,将该目标SID携带在第二报文的段列表中。那么,节点N1接收到报文时,根据目标SID查询本地SID表,会查询到与链路(N1,N2)绑定的出接口1,选择出接口1来发送第二报文,使得报文通过出接口1发送出去后,会到达节点N2,进而通过(N2,N3)到达第二代理节点。
通过这种方式,达到的效果至少可以包括:能够根据业务需求,来选择第二报文的转发路径,可以控制第二报文经过指定的目标转发路径传输,从而便于进行路径规划,实现流量调优的效果。
此外,第一代理节点可以修改第一报文中IPv6头的目的地址,使得第一报文的目的地址从End.AD.SID更新为对应于下一个SR节点的SID。
需要说明的一点是,上述描述的实现方式一至实现方式二仅是示例,在另一些实施例中,第二报文的生成方式可以采用其他实现方式,例如可以采用绑定的SID(英文:Binding SID,简称:BSID)来标识目标转发路径,则第一代理节点可以向第一报文的SRH的段列表插入第一绕行SID和BSID,将第一报文的IPv6头的目的地址更新为BSID,将第一报文的SL加2。下一个SR节点接收到第二报文时,使用BSID查询本地SID表后,可以执行End.B6.Insert操作(SRv6中的一种插入操作),向第二报文插入一个新的SRH,或者执行End.B6.Encaps操作(SRv6中的一种封装操作),向第二报文插入一个包含SRH的外层IPv6头。通过BSID,能够减少SRH的深度,达到压缩SRH的功能。
基于上述步骤一至步骤四,参见图22、图23、图24和图25,图22是本申请实施例提供的一种将第一报文转换为第二报文的示意图,图22适于描述使用扩展头来携带控制信息的情况。从图22可以看出,可以通过在第一报文的基础上,添加两个信息,一个是用于将报文传输至第二代理节点的第一绕行SID,另一个是用于携带控制信息的扩展头,通过这种简单的操作,即可得到第二报文。此外,可选地,可以添加目标SID来指示目标转发路径。图23适于描述使用绕行SID来携带控制信息的情况。从图23可以看出,可以通过在第一报文的基础上,添加第一绕行SID,即可得到第二报文。图24 适于描述使用TLV来携带控制信息的情况。从图24可以看出,可以通过在第一报文的基础上,添加第一绕行SID,并向SRH中的TLV写入控制信息,即可得到第二报文。图25适于描述使用IP头来携带控制信息的情况。从图25可以看出,可以通过在第一报文的基础上,添加第一绕行SID,并向IPv6头写入控制信息,即可得到第二报文。
通过这种方式来生成第二报文,达到的效果至少可以包括:
通常情况下,代理节点缓存的每个SRH是由代理节点的数据面(如转发芯片)在转发报文的过程中,从报文中动态学习得到的,为了避免影响数据面的转发性能,通常不会让数据面将SRH发送给CPU。对于代理节点的CPU而言,由于CPU未得到SRH,就无法生成包含SRH的报文,CPU也就无法将SRH通过报文备份至第二代理节点。而对于代理节点的数据面而言,由于数据面通常只能执行简单的处理操作,让数据面根据SRH从0开始生成整个报文的复杂度很高,难度很大,可行性不高。由此可见,如何能将SRH备份到第二代理节点上,并且通过数据面就能实现是一个极大的技术难点。而通过提出上述报文生成方法,对接收到的第一报文进行复制,插入一个SID(第一绕行SID),封装扩展头,即可得到第二报文,相对于从0开始生成整个报文的方式而言,处理操作十分简单,因此生成报文的过程可以由数据面执行,而不依赖于控制面,从而成功克服了这一技术难点。经实验,对转发芯片的微码配置上述报文生成逻辑,则转发芯片即可执行以实现步骤503,因此支持了微码架构,实用性强。
在一个实施例中,生成第二报文的触发条件可以包括以下条件(1)至条件(3)中的一种或多种。
条件(1)检测到第一缓存表项生成。
第一代理节点可以对缓存进行检测,如果缓存中原本不包括第一缓存表项,而接收到第一报文后,缓存中新增了第一缓存表项,则检测到第一缓存表项生成,那么第一代理节点会生成第二报文,发送第二报文。
条件(2)检测到第一缓存表项更新。
第一代理节点可以对缓存中的第一缓存表项进行检测,如果第一缓存表项被执行了写操作,使得第一缓存表项发生更新,那么第一代理节点会生成第二报文,发送第二报文。
此外,如果接收到第一报文后,未检测到第一缓存表项生成或更新,表明第一代理节点之前已经存储过第一报文的SRH,那么由于本实施例中,通常会保持第一代理节点和第二代理节点之间缓存表项的同步,则如果第一代理节点已经存储了SRH,表明第二代理节点在很大概率上也已经存储了第一报文的SRH,则在这种情况下,第一代理节点可以不执行生成第二报文以及发送第二报文的步骤。
通过以上述条件(1)或条件(2)来触发第二报文的转发流程,达到的效果至少可以包括:一方面,每当第一代理节点接收到一个新的SRH时,由于新的SRH会触发缓存表项的生成,从而触发第二报文的转发,因此能够通过第二报文的转发,将新的SRH及时传输给第二代理节点,从而保证新的SRH被实时同步至第二代理节点。另一方面,对于之前已经存储过的SRH而言,由于已存储的SRH不会触发缓存表项的生成,也就不会触发第二报文的转发,因此免去了为传输这种SRH生成第二报文的处理开销以及传输第二报文的网络开销。
在一个示例性场景中,一条具有N个数据包的数据流到达了第一代理节点,当第一代理节点接收到数据流的第一个第一报文时,该第一个第一报文的SRH对于第一代理节点而言是新的SRH,则第一代理节点会生成第一缓存表项,来存储第一个第一报文的SRH。此时满足了条件(1),会触发第一代理节点生成第二报文和发送第二报文,则第一个第一报文的SRH会通过第二报文传输至第二代理节点,使得第一个第一报文的SRH也在第二代理节点上进行存储。而当第一代理节点接收到数据流的第二个第一报文、第三个第一报文至最后一个第一报文中的任意第一报文时,由于同一条数据流的每个第一报文的SRH通常是相同的,这些第一报文的SRH对于第一代理节点而言是已经存储过的SRH,则第一代理节点不会生成第一缓存表项,则由于不满足条件(1),第一代理节点不会生成第二报文和发送第二报文,从而免去了转发(N-1)个第二报文带来的开销。此外,当第一代理节点或者第二代理节点接收到数据流的每个第一报文时,均可以向该第一报文封装第一个第一报文的SRH来恢复报文。其中,N为正整数。
条件(3)检测到当前时间点到达第二报文的发送周期。
第一代理节点可以周期性发送第二报文,具体地,第一代理节点可以为第二报文设置发送周期,每当时间经过一个发送周期,则生成第二报文,发送第二报文。
通过这种方式,第一代理节点可以定期将第一报文的SRH通过第二报文同步至第二代理节点,以使第二代理节点能够定期刷新缓存的SRH,从而提升同步SRH的可靠性。
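下面给出一段极简的Python示意代码,概括上述触发条件(仅为帮助理解的草图,参数与函数名均为示例性假设):
def should_sync_srh(cache_entry_created, cache_entry_updated, now=None, last_sent=None, period=None):
    # 条件(1)/(2):检测到第一缓存表项生成或更新时,触发生成并发送第二报文;
    # 条件(3):可选地,当前时间点到达发送周期时周期性触发。
    if cache_entry_created or cache_entry_updated:
        return True
    if period is not None and now is not None and last_sent is not None:
        return (now - last_sent) >= period
    return False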
步骤504、第一代理节点向第二代理节点发送第二报文。
第一代理节点发送第二报文的转发路径可以包括以下转发路径一至转发路径二中的一种或多种:
转发路径一、通过对等链路(peer link)发送第二报文。
参见图3,第一代理节点与第二代理节点之间的对等链路可以是第一链路,第一代理节点可以通过第一链路发送第二报文,第二代理节点可以通过第一链路接收第二报文。
通过对等链路来传输第二报文的过程可以包括以下步骤1.1至步骤1.2:
步骤1.1、第一代理节点通过对应于第一链路的第一出接口,向第二代理节点发送第二报文。
第一出接口是指第一代理节点中对应于第一链路的出接口。第一代理节点可以预先接收配置指令,该配置指令用于指示第一链路与第一出接口之间的对应关系,第一代理节点可以从配置指令获取第一链路与第一出接口之间的对应关系,将第一链路与第一出接口之间的对应关系存储在路由表项中。第一代理节点生成第二报文后,可以选择对应于第一链路的第一出接口,通过第一出接口发送第二报文,以使第二报文通过第一链路传输至第二代理节点。示例性地,第一代理节点可以接收以下配置指令:
[proxy1]segment-routing ipv6
[proxy1-segment-routing-ipv6]service-chain peer-link Ethernet2/0/0 next-hop FE80::1
上述配置指令中proxy1表示第一代理节点,第一行配置指令的含义是要配置IPv6的SR,第二行配置指令的含义是业务功能链中对等链路对应的出接口是2槽位0子板0号以太网口,下一跳是FE80::1。
步骤1.2、第二代理节点通过对应于第一链路的第一入接口,从第一代理节点接收第二报文。
第一入接口是指第二代理节点中对应于第一链路的入接口,第二报文可以通过第一链路,到达第二代理节点的第一入接口,从第一入接口进入至第二代理节点。
在一个实施例中,第一代理节点可以采用指定出接口的方式,实现转发路径一。具体地,可以将第二报文的出接口指定为第一出接口,因此当要发送第二报文时,第一代理节点会默认通过第一出接口发送第二报文;或者,可以采用指定下一跳的方式,实现转发路径一。具体地,可以将第一代理节点的下一跳指定为第二代理节点,因此当要发送第二报文时,第一代理节点会默认将第二代理节点作为下一跳,通过第一出接口发送第二报文。
转发路径二、通过路由节点转发。
参见图3,通过路由节点转发的过程可以包括以下步骤2.1至步骤2.3:
步骤2.1、第一代理节点通过对应于第二链路的第二出接口,向路由节点发送第二报文,
第二出接口是指第一代理节点中对应于第二链路的出接口。第一代理节点可以预先接收配置指令,该配置指令用于指示第二链路与第二出接口之间的对应关系,第一代理节点可以从配置指令获取第二链路与第二出接口之间的对应关系,将第二链路与第二出接口之间的对应关系存储在路由表项中。第一代理节点生成第二报文后,可以选择对应于第二链路的第二出接口,通过第二出接口发送第二报文,以使第二报文通过第二链路传输至第二代理节点。
步骤2.2、路由节点通过第三链路,将第二报文转发至第二代理节点。
步骤2.3、第二代理节点通过对应于第三链路的第二入接口,从路由节点接收第二报文。
第二入接口是指第二代理节点中对应于第三链路的入接口,第二报文可以通过第三链路,到达第二代理节点的第二入接口,从第二入接口进入至第二代理节点。
在一个实施例中,第一代理节点可以优先使用转发路径一来传输第二报文,在一个实施例中,可以通过执行以下步骤(1)至步骤(3),实现优先使用转发路径一。
步骤(1)第一代理节点对第一链路的状态进行检测,如果第一链路处于可用状态,则第一代理节点执行步骤(2),或者,如果第一链路处于不可用状态,则第一代理节点执行步骤(3)。
其中,第一代理节点可以检测第一链路是否已部署且第一出接口的状态是否为开启(up),如果第一链路已部署且第一出接口的状态为up,则确认第一链路处于可用状态。如果第一链路未部署或第一出接口的状态为关闭(down),则确认第一链路处于不可用状态。
步骤(2)第一代理节点通过对应于第一链路的第一出接口,向第二代理节点发送第二报文。
步骤(3)第一代理节点通过对应于第二链路的第二出接口,向路由节点发送第二报文。
通过优先使用转发路径一,使得第二报文优先通过对端链路传输至第二代理节点, 那么由于转发路径一比转发路径二短,转发路径一经过的节点比转发路径二经过的节点少,能够减少传输第二报文的时延。
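下面给出一段极简的Python示意代码,概括上述“优先使用对等链路”的选路逻辑(仅为帮助理解的草图,接口名与状态表示均为示例性假设):
def select_out_interface(peer_link_deployed, first_out_interface_up):
    # 第一链路已部署且第一出接口状态为up时,判定第一链路可用,走第一出接口;
    # 否则通过对应于第二链路的第二出接口,经路由节点转发第二报文。
    if peer_link_deployed and first_out_interface_up:
        return "first_out_interface"
    return "second_out_interface"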
可选地,可以通过以下方式一至方式二中的任一种,来实现第二报文的传输可靠性。
方式一、重复发送。
第一代理节点可以生成多个第二报文,该多个第二报文可以是相同的,第一代理节点可以向第二代理节点连续发送多个第二报文,第二代理节点从第一代理节点连续接收多个第二报文。
通过这种方式,第一报文的SRH通过多个第二报文中的任一个第二报文均可以传输至第二代理节点,因此降低了SRH在传输过程中发生丢失的概率,并且实现起来较为简单,可行性高。
方式二、确认机制。
可以为第二报文设计对应的确认(Acknowledge,ACK)报文,让第二报文的接收端通过返回ACK报文来通知第二报文的发送端,其已接收到了第二报文。在一个实施例中,基于确认机制的报文传输流程可以包括以下步骤一至步骤四:
步骤一、第二代理节点根据第一绕行SID、第二代理节点对应的第二绕行SID和第二报文,生成第六报文。
第六报文为第二报文对应的ACK报文,第六报文表示确认接收到了第二报文,通过将第六报文转发给第一代理节点,能够将第二代理节点接收到第二报文的事件通知给第一代理节点。第六报文的封装格式可以和第二报文类似,相区别的是,第六报文包括对应于第一代理节点的第二绕行SID,例如,第六报文的目的地址可以为该第二绕行SID。其中,第六报文可以携带第二标志,第六报文中的第二标志可以表示第六报文为确认报文。例如,第六报文可以包括扩展头,扩展头中携带C-flag和A-flag,C-flag设置为1,A-flag设置为1。
在一个实施例中,第二代理节点可以对第二报文进行重新封装,得到第六报文。具体而言,第六报文的生成过程可以包括:第二代理节点将第二报文中的源地址更新为第一绕行SID;第二代理节点将第二报文的目的地址从第一绕行SID更新为第二绕行SID;第二代理节点修改第二报文中的第二标志的取值,得到第六报文。其中,更新源地址、更新目的地址、修改第二标志这三个步骤可以采用任意顺序组合,本实施例对这三个步骤的时序不做限定。
通过这种方式生成第六报文,达到的效果至少可以包括:第二代理节点可以通过对接收到的第二报文修改源地址、目的地址和第二标志,即可得到第二报文,相对于从0开始生成整个报文的方式而言,处理操作十分简单,因此生成报文的过程可以由数据面执行,而不依赖于控制面,从而提供了一种不依赖于CPU等控制面,而是通过转发引擎等数据面即可实现的ACK报文生成方式,经实验,对转发芯片的微码配置上述报文生成逻辑,则转发芯片即可执行以实现第六报文的生成步骤,因此支持了微码架构,实用性强。
步骤二、第二代理节点向第一代理节点发送第六报文。
步骤三、当接收到第六报文时,第一代理节点确定第二代理节点接收到了第二报文。
在一些实施例中,第一代理节点可以从第二报文获取控制信息,确定控制信息表示 接收到了第二报文。例如,第一代理节点可以识别第二报文中第一标志的取值和第二标志的取值,如果第一标志的取值设置为1且第二标志的取值设置为1,则确定控制信息表示接收到了第二报文。
步骤四、当未接收到第六报文时,第一代理节点向第二代理节点重传第二报文。
通过第六报文的转发流程,第一代理节点能够通过是否接收到第六报文,确认第二代理节点是否接收到了SRH,在第二代理节点未得到SRH的情况下,通过重传来保证将SRH传输给第二代理节点,从而保证了传输SRH的可靠性。
步骤505、第二代理节点从第一代理节点接收第二报文,确定控制信息指示第二代理节点存储第一报文的SRH。
第一代理节点与第二代理节点之间传输第二报文的具体流程可以根据组网的不同,包括多种实现方式。以下通过实现方式一至实现方式二进行举例说明。
实现方式一、如果该第一代理节点与该第二代理节点位于一跳IP链路,则第一代理节点将第二报文从出接口发送后,第二报文可以通过第一代理节点与第二代理节点之间的链路,到达第二代理节点的入接口,第二代理节点可以通过入接口接收到第二报文。
实现方式二、如果该第一代理节点与该第二代理节点位于多跳IP链路,则第一代理节点将第二报文从出接口发送后,第二报文可以通过第一代理节点与下一个节点之间的链路,到达下一个节点的入接口,下一个节点可以通过入接口接收到第二报文,将第二报文通过出接口发送出去,直至第二报文到达第二代理节点的入接口,则第二代理节点通过入接口接收到第二报文。
在第一代理节点和第二代理节点之间的中间节点转发报文的过程中，中间节点可以识别第二报文的IPv6头的目的地址，根据目的地址查询本地SID表，当目的地址与本地SID表中的目标SID匹配时，执行目标SID对应的操作，例如选择目标SID对应的出接口，将第二报文通过该出接口转发至下一个中间节点。此外，中间节点可以对第二报文的SRH中的SL减一，将第二报文的IPv6头的目的地址更新为减一后的SL对应的SID。例如，参见图20，如果第二报文的SRH如图20的右侧所示，则中间节点1接收到第二报文后，会将目的地址从SID6更新为SID7，将SL从6更新为5；中间节点2接收到第二报文后，会将目的地址从SID7更新为SID8，将SL从5更新为4；中间节点3接收到第二报文后，会将目的地址从SID8更新为SID5，将SL从4更新为3。
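以下Python片段示意中间节点的上述基本处理：将SL减一，并用SRH[SL]对应的SID更新目的地址（报文结构为简化假设，仅为示意）。
def forward_at_intermediate_node(packet):
    """示意：中间节点将SL减一，并把IPv6目的地址更新为SRH[SL]对应的SID"""
    packet["sl"] -= 1
    packet["dst"] = packet["segment_list"][packet["sl"]]
    return packet

# 用法示例：SL由2减为1，目的地址更新为段列表中索引1处的SID
pkt = {"segment_list": ["SID_c", "SID_b", "SID_a"], "sl": 2, "dst": "SID_a"}
print(forward_at_intermediate_node(pkt))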
关于对第二报文执行的动作，第二代理节点可以读取第二报文的目的地址，第二代理节点可以根据第二报文的目的地址，查询本地SID表，判断目的地址与本地SID表中的SID是否匹配。当目的地址与本地SID表中的第一绕行SID匹配，即目的地址命中了本地SID表中的第一绕行SID时，第二代理节点可以确定第二报文是SRv6报文，执行第一绕行SID对应的操作。
第一绕行SID对应的操作可以包括:识别第二报文中的控制信息,如果控制信息指示存储第一报文的SRH,则根据第二报文,获取第一报文的SRH,存储第一报文的SRH。其中,第一绕行SID对应的操作可以预先在第二代理节点上进行配置,例如,可以将第一绕行SID对应的操作存储至本地SID表中与第一绕行SID对应的条目中,当接收到第二报文时,使用第二报文的目的地址查询本地SID表,发现该目的地址与第一绕行SID匹配,即命中了第一绕行SID对应的条目,则执行该条目中包含的第一绕行SID对 应的操作。
在一些可能的实施例中,第二代理节点可以识别第二报文中的第一标志的取值和第二标志的取值,如果第一标志的取值设置为1且第二标志设置为0,则确定控制信息指示存储第一报文的SRH。
需要说明的是,控制信息对应的操作与End.AD.SID对应的操作均包含存储第一报文的SRH的步骤,而控制信息对应的操作与End.AD.SID对应的操作相区别的是,End.AD.SID对应的操作包含将第一报文的载荷转发至SF节点的步骤,而控制信息对应的操作可以不包含将第一报文的载荷转发至SF节点的步骤,以避免第一代理节点和第二代理节点均将第一报文的载荷转发至SF节点后,导致SF节点对同一份报文的载荷重复进行处理的情况。
步骤506、第二代理节点解析第二报文,得到第一报文的SRH。
第二报文的SRH可以包括第一报文的SRH的每个SID,还可以包括第一报文的SRH中SID之外的其他信息,第二代理节点可以根据第二报文的SRH,获取第一报文的SRH。具体地,可以通过执行以下步骤一至步骤二,得到第一报文的SRH。
步骤一、第二代理节点从第二报文的SRH中的段列表,删除第一绕行SID。
本步骤可以视为步骤503中第二报文的生成过程中步骤一的逆过程。具体地,第二代理节点可以通过执行SR中的弹出(pop)操作,来将第一绕行SID从段列表删除,使得删除第一绕行SID之后,段列表恢复至原有的第一报文的段列表。其中,第二代理节点可以删除SRH中的活跃SID,也即是,删除End.AD.SID之后的第一绕行SID。参见图19,图19的左侧示出了第一报文的段列表,图19的右侧示出了第二报文的段列表,在图19中,End.AD.SID为SID2,第一绕行SID为SID5,则第二代理节点可以将SID2之后的SID5删除,从而将段列表从右侧恢复至左侧。
可选地,第二代理节点还可以从第二报文的SRH中的段列表,删除一个或多个目标SID。
参见图20,图20的左侧示出了第一报文的段列表,图20的右侧示出了第二报文的段列表,在图20中,目标SID共有3个,分别是SID6、SID7和SID8,则第二代理节点可以将SID6、SID7和SID8删除,从而将段列表从右侧恢复至左侧。
步骤二、第二代理节点将第二报文的SRH中的SL减一,得到第一报文的SRH。
本步骤可以视为步骤503中第二报文的生成过程中步骤二的逆过程。例如,参见图19,图19的右侧示出了第二报文的SL,第二报文的SL取值为3,则表明段列表中未被执行的SID有3个,分别是SID0、SID1以及SID2,段列表中当前的活跃SID是SID5,段列表中已被执行的SID有2个,分别是SID3以及SID4。第二代理节点可以将SL减一,使得SL的取值从3更新为2,则如图19的左侧所示,第一报文的SL取值为2,则表明段列表中未被执行的SID有2个,分别是SID0以及SID1,段列表中当前的活跃SID是SID2,段列表中已被执行的SID有2个,分别是SID3以及SID4。此外,对于第一代理节点指定转发第二报文的中间节点的场景,由于第二报文的SL在传输过程中可以由中间节点进行递减,则第二代理节点接收到的第二报文相对于第一代理节点发送的第二报文可以由于SL的更新而产生差异,第二代理节点接收到的第二报文的SL可以小于第一代理节点发送的第二报文的SL。例如参见图20,第一代理节点发送第二报 文时第二报文的SL为6,而由于中间节点1将SL从6更新为5,中间节点2将SL从5更新为4,中间节点3将SL从4更新为3,则第二代理节点接收到的第二报文的SL为3。
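以下Python片段示意第二代理节点执行上述步骤一至步骤二、由第二报文的SRH还原第一报文SRH的过程（段列表内容与函数名均为举例，仅为简化示意）。
def restore_original_srh(segment_list, sl, bypass_sid, target_sids=()):
    """示意：从段列表删除第一绕行SID（以及可选的一个或多个目标SID），并将SL减一"""
    removed = set(target_sids) | {bypass_sid}
    restored_list = [sid for sid in segment_list if sid not in removed]
    return restored_list, sl - 1

# 用法示例：删除第一绕行SID5，并将SL从3恢复为2
seg, sl = restore_original_srh(["SID0", "SID1", "SID2", "SID5", "SID3", "SID4"], 3, "SID5")
print(seg, sl)   # ['SID0', 'SID1', 'SID2', 'SID3', 'SID4'] 2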
需要说明的是,上述步骤一至步骤二与SRv6的正常转发操作有所区别。在SRv6的通常情况下,SRH中的每个SID是由头节点插入至SRv6报文的,对于头节点之外的每个SRv6节点而言,每当SRv6节点接收到SRv6报文时,仅是执行SRH中活跃SID对应的操作,而不会向SRH压入SID,也不会弹出SRH中的SID,也就是说每个中间的SRv6节点均不会改变SRH中的SID。
而本实施例中,由于第二报文的SRH中的第一绕行SID是由第一代理节点插入的,第一绕行SID不是第一报文的SRH原本就包括的SID,因此第二代理节点会从第二报文的SRH中删除第一绕行SID,从而剥离第一绕行SID的封装,将第二报文的SRH还原至第一报文的SRH。
通过这种方式,一方面,从路由节点的角度而言,由于第二代理节点得到的报文的SRH和路由节点之前发送给第一代理节点的报文的SRH基本保持一致,因此后续如果第二代理节点接收到处理后的报文,第二代理节点恢复出的SRH能和路由节点之前发送的SRH基本保持一致,那么第二代理节点将恢复SRH的报文返回给路由节点后,路由节点得到的报文的SRH与之前发送的报文的SRH基本保持一致,从而避免SRH由于不被SF节点识别而丢失的情况,实现了第二代理节点的代理功能。另一方面,从SF节点的角度而言,SF节点选择将处理后的报文转发至第一代理节点或者第二代理节点后,由于第一代理节点与第二代理节点存储的SRH基本保持一致,因此第一代理节点以及第二代理节点中的任一个节点接收到报文后,向报文恢复的SRH基本保持一致,从而保证第一代理节点与第二代理节点功能的对等性,满足基于两个代理节点实现双归属的架构。
应理解,这里所说的两个报文的SRH基本保持一致,是指两个报文的SRH的主要内容保持一致,而并不限定两个报文的SRH必须完全相同。在一个实施例中,两个报文的SRH可以存在细微的差异,例如,两个报文的SRH中段列表相同,而两个报文的SRH中SL有所区别,比如由于End.AD.SID被第一代理节点执行使得SL减少了1;又如,SRH的TLV包含的一些参数由于被两个代理节点识别处理,或者被两个代理节点之间的转发节点识别处理,使得TLV的内容发生了改变,这些存在细微差异的情况也应涵盖在本申请实施例的保护范围之内。
步骤507、第二代理节点在第二缓存表项中存储第一报文的SRH。
第二缓存表项是指第二代理节点存储第一报文的SRH的缓存表项。其中,第二代理节点可以包括接口板,接口板可以包括转发表项存储器,第二缓存表项可以位于第二代理节点的转发表项存储器中。在一个实施例中,第二代理节点可以检测缓存中是否已经存储了第一报文的SRH,如果缓存中未存储第一报文的SRH,则第二代理节点为第一报文的SRH分配缓存表项,得到第二缓存表项,在该第二缓存表项中存储第一报文的SRH。此外,如果检测到缓存中已存储第一报文的SRH,第二代理节点可以免去存储第一报文的SRH的步骤。
结合以上步骤505至步骤507，在一些可能的实施例中，以第一绕行SID记为S、第二代理节点记为N为例，根据第一绕行SID执行的操作可以如下：
When N receives a packet destined to S and S is a local End.ADB SID,N does/注释:当N接收到目的地址为S的数据包,且S为N发布的绕行SID时,N执行以下步骤/
decrement SL/注释:将SL减一/
update the IPv6 DA with SRH[SL]/注释:使用SRH[SL]更新IPv6的目的地址/
lookup the local SID table using the updated DA/注释：使用更新后的目的地址查询本地SID表/
forward accordingly to the matched entry/注释：根据本地SID表中的匹配条目执行转发操作/
其中,将SL减一后,SRH[SL]会对应End.AD.SID,则使用SRH[SL]更新目的地址后,目的地址会更新为End.AD.SID,那么使用End.AD.SID查询本地SID表,会找到End.AD.SID对应的匹配条目。本地SID表中End.AD.SID对应的匹配条目可以用于指明End.AD.SID对应的操作,可以预先对End.AD.SID对应的匹配条目进行配置,使得End.AD.SID对应的操作包括:检测报文是否包含控制信息,如果报文中包含控制信息,则存储报文的SRH,而不将报文的载荷转发给SF节点。
在一个实施例中,可以预先对第二代理节点的转发规则进行配置,配置第二代理节点在接收到第二报文后,禁止(must not)使用第二报文本身对第一代理节点进行响应,从而避免第一代理节点接收到第二报文时重复执行生成第二报文和向第二代理节点发送第二报文的步骤,也就避免了第二报文在第一代理节点与第二代理节点之间形成转发环路。
步骤508、第一代理节点根据第一报文,生成第三报文。
第三报文:用于将第一报文的载荷传输至SF节点的报文。第三报文可以包括第一报文的载荷且不包括第一报文的SRH。在一些实施例中,第三报文可以就是第一报文的载荷;在另一些实施例中,第三报文可以由第一报文的载荷和隧道头组成。关于第三报文的生成过程,第一代理节点可以从第一报文剥离SRH,得到第三报文。第三报文可以是数据报文。
可选地,第三报文还可以包括隧道头(tunnel header)。具体而言,第一代理节点可以和SF节点建立传输隧道,则第一代理节点可以不仅剥离第一报文的SRH,还生成该传输隧道对应的隧道头,封装隧道头,使得第三报文包括该隧道头。第三报文的隧道头可以是VXLAN隧道头、GRE隧道头或者IP-in-IP隧道头等。其中,第一代理节点可以先剥离第一报文的SRH,再向剥离了SRH的第一报文封装隧道头,也可以先向第一报文封装隧道头,再从封装了隧道头的第一报文剥离SRH,本实施例对第三报文的生成过程中剥离SRH与封装隧道头的时序不做限定。应理解,第三报文包括隧道头是可选方式,在另一个实施例中,第一代理节点可以将剥离了SRH的第一报文直接作为第三报文,则第三报文也可以不包括隧道头。
其中，第三报文可以包括源地址和目的地址。如果第一代理节点和SF节点建立了传输隧道，第三报文的隧道头可以包括源地址和目的地址，第三报文的隧道头中的源地址用于标识第一代理节点，第三报文的隧道头中的目的地址用于标识SF节点。其中，可以预先在第一代理节点上配置第一代理节点的地址和SF节点的地址，第一代理节点生成第三报文的过程中，可以查询配置，得到第一代理节点的地址，将其写入隧道头的源地址；并且得到SF节点的地址，将其写入隧道头的目的地址。其中，第一代理节点和SF节点上配置的地址可以相互对应，例如，第一代理节点上配置的第一代理节点的地址可以和SF节点上配置的第一代理节点的地址相同，第一代理节点上配置的SF节点的地址可以和SF节点上配置的SF节点的地址相同。隧道头中的源地址和目的地址的形式可以根据配置操作确定，例如可以是IPv4地址的形式，也可以是IPv6地址的形式，当然也可以根据需求被配置为其他形式。
另外,如果第一代理节点和SF节点未建立传输隧道,第三报文的源地址可以为载荷本身的源设备的地址,例如是终端设备的地址,第三报文的目的地址可以是该载荷处理后最终待到达的目的设备的地址,例如是另一个终端设备的地址。
第一代理节点发往SF节点的第三报文可以根据实际业务场景的不同,具有多种格式。
例如，参见图26，如果第一报文的有效载荷是IPv4报文，则第三报文可以如图26所示，第三报文可以是IPv4报文。可选地，如果第一代理节点和SF节点建立了传输隧道，第三报文可以在包含IPv4报文的基础上，还包括隧道头，该隧道头可以在IPv4报文的IPv4头之前。
参见图27，如果第一报文的有效载荷是IPv6报文，则第三报文可以如图27所示，第三报文可以是IPv6报文。可选地，如果第一代理节点和SF节点建立了传输隧道，第三报文可以在包含IPv6报文的基础上，还包括隧道头，该隧道头可以在IPv6报文的IPv6头之前。
参见图28,如果第一报文的有效载荷是以太网帧,则第三报文可以如图28所示,第三报文可以是以太网帧。可选地,如果第一代理节点和SF节点建立了传输隧道,第三报文在包含以太网帧的基础上,还可以包含隧道头,该隧道头可以在以太网帧的帧头之前。
在第一代理节点和SF节点未建立传输隧道的场景下,参见图29,其示出了将第一报文演变至第三报文的示意图,可以获取第一报文的载荷,作为第三报文。在第一代理节点和SF节点建立传输隧道的场景下,参见图30,其示出了将第一报文演变至第三报文的示意图,可以获取第一报文的载荷,并在载荷之前封装隧道头,得到第三报文。
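以下Python片段示意第一代理节点由第一报文生成第三报文的上述两种情况：未建立传输隧道时直接取载荷，已建立传输隧道时再封装隧道头。字段名为举例假设，并非本申请限定的实现。
def build_third_packet(first_packet, tunnel=None):
    """示意：第三报文携带第一报文的载荷且不携带SRH；若已建立传输隧道，再封装隧道头"""
    third = {"payload": first_packet["payload"]}   # 剥离SRH，仅保留载荷
    if tunnel is not None:                         # 可选：封装隧道头（如VXLAN、GRE、IP-in-IP）
        third["tunnel_header"] = {"src": tunnel["proxy_addr"], "dst": tunnel["sf_addr"]}
    return third

first_pkt = {"payload": "IPv4/IPv6报文或以太网帧", "srh": ["SID0", "SID1", "SID2"]}
print(build_third_packet(first_pkt))                                                        # 对应图29的思路
print(build_third_packet(first_pkt, {"proxy_addr": "192.0.2.1", "sf_addr": "192.0.2.2"}))   # 对应图30的思路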
步骤509、第一代理节点向SF节点发送第三报文。
上述步骤508至步骤509带来的效果至少可以包括：一方面，第一代理节点能够将第一报文的载荷传输至SF节点，使得SF节点能够对载荷进行处理，从而通过处理报文来实现对应的业务功能；另一方面，通过避免将报文的SRH传输至SF节点，从而避免SF节点由于不识别SRH而导致报文处理失败的情况，因此实现了动态代理的功能。
应理解,本实施例仅是以先描述第三报文的生成和发送过程,再描述第二报文的生成和发送过程为例进行描述,本实施例对生成第二报文与生成第三报文的先后顺序不做限定,对发送第二报文与发送第三报文的先后顺序也不做限定。在一个实施例中,步骤503至步骤504和步骤508至步骤509可以顺序执行。例如,可以先执行步骤508至步骤509,再执行步骤503至步骤504;也可以先执行步骤503至步骤504,再执行步骤 508至步骤509。在另一个实施例中,可以同时执行步骤503至步骤504和步骤508至步骤509。
步骤510、SF节点从第一代理节点接收第三报文,对第三报文进行处理,得到第四报文。
第四报文是指SF节点进行处理得到的报文。SF节点可以根据配置的业务策略,对第三报文进行业务功能处理,从而得到该第四报文。其中,本实施例是以代理节点自发它收的场景进行举例说明,自发它收的场景是指,由一个代理节点向SF节点发送报文,而由另外一个代理节点接收SF节点返回的报文。应用在该场景,本实施例以SF节点接收到来自第一代理节点的报文后,将处理后的报文发送给第二代理节点为例进行说明。
其中,SF节点选择将处理后的报文(即第四报文)发送给第二代理节点的应用场景可以有多种,例如,如果SF节点确定与第一代理节点相连的出接口的状态为down(关闭),SF节点可以选择将处理后的报文发送给第二代理节点。又如,如果SF节点与第一代理节点之间的链路拥塞,SF节点可以选择将处理后的报文发送给第二代理节点,以使报文经过SF节点与第二代理节点之间的链路传输,从而达到分流的效果。
关于SF节点处理报文的方式,例如,SF节点可以对第三报文进行入侵检测,从而实现防火墙的业务功能。又如,SF节点可以修改第三报文的五元组、MAC帧信息或应用层信息,从而实现NAT的业务功能。再如,SF节点可以对第三报文进行负载均衡、用户认证等,本实施例对SF节点的处理方式不做限定。
其中，第四报文可以包括源地址和目的地址。如果第一代理节点和SF节点未建立传输隧道，第四报文的源地址可以和第三报文的源地址相同，第四报文的源地址可以为载荷本身的源地址，即载荷的源设备的地址，例如是终端设备的地址。第四报文的目的地址可以和第三报文的目的地址相同，是该载荷处理后最终待到达的目的设备的地址，例如是另一个终端设备的地址。如果第一代理节点和SF节点建立了传输隧道，第四报文的隧道头可以包括源地址和目的地址，第四报文的隧道头中的源地址用于标识SF节点，第四报文的隧道头中的目的地址用于标识第一代理节点。其中，可以预先在SF节点上配置第一代理节点的地址和SF节点的地址，SF节点生成第四报文的过程中，可以查询配置，向隧道头的源地址写入SF节点的地址，向隧道头的目的地址写入第一代理节点的地址。
步骤511、SF节点向第二代理节点发送第四报文。
SF节点发送第四报文的方式可以根据SF节点的具体功能以及组网部署的不同存在多种,作为示例,应用在自发它收的场景下,SF节点可以预先存储对应于第一代理节点的入接口与对应于第二代理节点的出接口之间的绑定关系,那么,当SF节点从入接口接收到第三报文,生成第四报文后,基于该绑定关系,会选择该入接口绑定的出接口发送第四报文,使得报文通过该出接口发送出去后到达第二代理节点。其中,该绑定关系可以根据配置操作确定。例如,如果SF节点通过入接口1和proxy1相连,且通过出接口2和proxy2相连,可以预先为SF节点配置转发规则:对入接口1接收的报文处理后,得到的报文都通过出接口2发送。那么,当proxy1发送了第三报文后,SF节点会通过入接口1接收到第三报文,SF节点查询转发表中的该转发规则,会将第三报文通过出接口2发送至proxy2。
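以下Python片段示意SF节点基于入接口与出接口的绑定关系选择出接口的做法（绑定关系与接口名为举例配置，仅为简化示意）。
binding = {"入接口1": "出接口2"}   # 自发它收场景：入接口1收到的报文处理后从出接口2发出

def select_out_interface(in_interface):
    """根据预先配置的绑定关系选择出接口；未配置时返回None，按默认转发规则处理"""
    return binding.get(in_interface)

print(select_out_interface("入接口1"))   # 出接口2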
步骤512、第二代理节点从SF节点接收第四报文,根据第四报文,查询第二缓存表项,得到第一报文的SRH。
第二代理节点可以根据第四报文,获取SRH对应的索引,根据索引,查询第二代理节点的缓存,找到第二缓存表项,从第二缓存表项中得到第一报文的SRH。
在一个实施例中,第一报文的SRH的查询方式可以包括下述查询方式一至查询方式三中的任一种:
查询方式一、第二代理节点可以根据第四报文，获取第一报文对应的流标识，以第一报文对应的流标识为索引，查询缓存，得到第一报文的SRH。其中，第四报文对应的流标识可以为第一报文对应的流标识。
查询方式二、第二代理节点可以根据第四报文,获取第一缓存表项的标识,以第一缓存表项的标识为索引,查询缓存,得到第一报文的SRH,其中,第四报文可以包括第一缓存表项的标识,这一查询方式可以具体参见下述图36所示实施例以及图37所示实施例。
查询方式三、第二代理节点可以根据第四报文,获取第四报文对应的流标识和VPN标识,根据第四报文对应的流标识、VPN标识和End.AD.SID,查询缓存,得到第一报文的SRH,其中,第四报文对应的流标识可以是第一报文对应的流标识,第四报文可以包括VPN标识,该VPN标识可以映射至End.AD.SID,这一查询方式可以具体参见下述图38所示实施例以及图39所示实施例。
步骤513、第二代理节点根据第四报文以及第一报文的SRH,生成第五报文。
第五报文是指在第四报文的基础上恢复了SRH得到的报文。第五报文包括第四报文的载荷以及第一报文的SRH。第二代理节点可以向第四报文封装第一报文的SRH,得到第五报文。
与步骤508相应,在一个实施例中,第二代理节点可以将封装了第一报文的SRH的第四报文直接作为第五报文。在另一个实施例中,第二代理节点可以和SF节点建立传输隧道,第四报文包括该传输隧道对应的隧道头,则第二代理节点可以不仅封装第一报文的SRH,还剥离第四报文的隧道头,使得第五报文不包括第四报文的隧道头。第四报文的隧道头可以是VXLAN隧道头、GRE隧道头或者IP-in-IP隧道头等。其中,第二代理节点可以先向第四报文封装第一报文的SRH,再从封装了SRH的第四报文剥离隧道头,也可以先从第四报文剥离隧道头,再向剥离了隧道头的第四报文封装SRH,本实施例对剥离隧道头与封装SRH的时序不做限定。
步骤514、第二代理节点向路由节点发送第五报文。
步骤515、路由节点从第二代理节点接收第五报文,转发第五报文。
本实施例提供的方法,通过为End.AD.SID扩展了一种具有绕行功能的新的SID,本地的代理节点在接收到携带End.AD.SID的报文时,利用这种新的SID,将报文的SRH绕行传输至对端的代理节点,从而让对端的代理节点也能够得到报文的SRH,使得报文的SRH能够存储至对端的代理节点。如此,双归接入至同一个SF节点的两个代理节点之间可以实现报文的SRH的同步,使得报文的SRH得到冗余备份。那么,SF节点对报文进行处理后,可以选择将处理后的报文返回给对端代理节点,由于对端代理节点预先均得到了报文的SRH,因此对端代理节点能够利用之前存储的SRH来恢复报文的SRH, 使得SF节点处理后的报文能够通过恢复的SRH得到正常传输。
在上述方法实施例中,第一代理节点或第二代理节点可以通过状态机,来保持两个代理节点之间缓存表项的同步,以下对该状态机进行示例性描述。
状态机可以记为Peer Cache(对端缓存)状态机,状态机表示对端代理节点存储SRH的状态。其中,第一代理节点的状态机表示第二代理节点存储SRH的状态,第二代理节点的状态机表示第一代理节点存储SRH的状态。对于第一代理节点或者第二代理节点中的任一个代理节点而言,代理节点可以为缓存中的每个缓存表项增加目标字段,将状态机当前的状态写入目标字段,在接收到第一报文时,根据缓存表项中目标字段的取值,来决定执行的动作。其中,该目标字段可以记为PeerCacheState(对端缓存状态)字段,该目标字段用于记录状态机当前的状态。状态机可以包括当前状态、下一状态、事件以及目标动作,第一代理节点可以根据检测到的事件以及状态机的当前状态,执行状态机中事件与当前状态对应的目标动作,将状态机切换为当前状态的下一状态。作为示例,状态机可以为第一状态机或者第二状态机中的任一种:
参见图31,图31是本申请实施例提供的第一状态机的示意图,第一状态机适于通过重复发送来保证传输可靠性的情况,第一状态机提供的处理策略可以包括以下(1)至(7)中的一种或多种。应理解,下述第一状态机可以应用于第一代理节点,也可以应用于第二代理节点,对于第一代理节点而言,对端代理节点是指第二代理节点,对于第二代理节点而言,对端代理节点是指第一代理节点。
(1)如果检测到第一事件,向对端代理节点连续发送多个第二报文,将状态机切换为第一状态。
其中,第一事件为生成了第一缓存表项,第一事件在程序中可以记为local cache new(本地增加了新的缓存表项)事件。第一状态表示第二缓存表项与第一缓存表项保持一致,第一状态在程序中可以记为缓存一致(cache equal)状态。
(2)如果检测到第二事件,向对端代理节点连续发送多个第二报文,将状态机切换为第一状态。
其中,第二事件为更新了第一缓存表项,第二事件在程序中可以记为LocalCache_Upd(直译为本地缓存更新,Upd表示update,意为更新)事件。
(3)如果检测到第三事件且状态机处于第二状态,向对端代理节点连续发送多个第二报文,将状态机切换为第一状态。
其中,第三事件为接收了第一报文,第三事件在程序中可以记为Local Cache Touch(本地缓存被读取)事件。第二状态表示确定对端的缓存表项比本地的缓存表项更陈旧,或者对端的缓存表项比本地的缓存表项疑似更陈旧。在第二状态下,本地代理节点可以向对端代理节点发送第二报文,来更新对端代理节点的缓存表项中的SRH。第二状态在程序中可以记为缓存更老(Cache_Older)状态。
(4)如果检测到第四事件且状态机处于第一状态,删除第一缓存表项。
第四事件为老化定时器超时,第四事件在程序中可以记为AgingTimer_Expired(老化定时器超时)事件。老化定时器超时表示第一缓存表项处于陈旧状态,可以清除第一缓存表项,来节省缓存空间。具体地,第一代理节点可以启动老化定时器,为第一缓存 表项设置对应的老化标志(agingflag),该老化标志的取值可以是1或0。此外,第一代理节点可以通过老化定时器定期扫描第一缓存表项,如果扫描到第一缓存表项对应的老化标志为1,则将老化标志修改为0,如果扫描到第一缓存表项对应的老化标志为0,则删除第一缓存表项。其中,第一代理节点在转发报文的过程中,如果查询缓存时命中了第一缓存表项,则第一代理节点将第一缓存表项对应的老化标志设置为1,从而将第一缓存表项驻留在缓存中。
(5)如果检测到第五事件且状态机处于第一状态,比较第二报文携带的SRH与缓存已存储的SRH是否一致,如果第二报文中携带的SRH与已存储的SRH不一致,则将第二报文中携带的SRH存储在缓存表项中,从而更新缓存,并设置老化标志为1,保持状态机的当前状态不变。
第五事件为接收到第二报文的事件,第五事件在程序中可以记为CacheSync_Recv(直译为缓存同步接收,Sync表示Synchronize,意为同步,含义是将SRH同步至对端代理节点的缓存,Recv表示receive,意为接收)事件。
(6)如果检测到第五事件且状态机处于第二状态,比较第二报文中携带的第一报文的SRH与缓存已存储的SRH是否一致,如果第二报文中携带的第一报文的SRH与已存储的SRH不一致,则将第二报文中携带的第一报文的SRH存储在缓存表项中,从而更新缓存,并设置老化标志为1,将状态机切换为第一状态。
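以下Python片段对第一状态机的上述处理策略（1）至（6）给出一个简化示意实现，事件名与状态名沿用上文的程序记法，未覆盖图31中可能存在的其他转移，仅为示意。
def first_state_machine(state, event):
    """示意：返回（目标动作, 下一状态）"""
    if event in ("LocalCache_New", "LocalCache_Upd"):
        return "连续发送多个第二报文", "Cache_Equal"
    if event == "LocalCache_Touch" and state == "Cache_Older":
        return "连续发送多个第二报文", "Cache_Equal"
    if event == "AgingTimer_Expired" and state == "Cache_Equal":
        return "删除第一缓存表项", state
    if event == "CacheSync_Recv":
        next_state = "Cache_Equal" if state == "Cache_Older" else state
        return "若SRH不一致则更新缓存并将老化标志置1", next_state
    return "无动作", state

print(first_state_machine("Cache_Older", "LocalCache_Touch"))   # 切换回Cache_Equal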
参见图32,图32是本申请实施例提供的第二状态机的示意图,第二状态机适于通过确认机制来保证传输可靠性的情况,第二状态机提供的处理策略可以包括以下(1)至(10)中的一种或多种。需要说明的是,以下着重描述第二状态机与第一状态机的区别之处,而与第一状态机同理的特征还请参见第一状态机,以下不做赘述。
(1)如果检测到第一事件,向对端代理节点发送一个第二报文,将状态机切换为第三状态。
第三状态表示本地生成或更新了缓存表项,已经向对端代理节点发送了第二报文,等待对端代理节点对第二报文操作结果的确认,第三状态在程序中可以记为Sync_Start(直译为同步启动,Sync表示Synchronize,Start意为启动)状态。
(2)如果检测到第二事件且状态机处于第三状态,向对端代理节点发送一个第二报文,保持状态机的当前状态不变。
(3)如果检测到第二事件且状态机处于第一状态,向对端代理节点发送一个第二报文,将状态机切换为第三状态。
(4)如果检测到第三事件且状态机处于第三状态,将状态机切换为第一状态。
(5)如果检测到第四事件且状态机处于第三状态,将状态机切换为第一状态。
其中,在(4)和(5)下,可以不执行发送第二报文的动作,以避免两个代理节点同时处于第三状态,互相锁定。
(6)如果检测到第五事件且状态机处于第一状态,比较第二报文中携带的第一报文的SRH与缓存已存储的SRH是否一致,如果第二报文中携带的第一报文的SRH与已存储的SRH不一致,则将第二报文中携带的第一报文的SRH存储在缓存表项中,从而更新缓存,并设置老化标志(agingflag)为1,保持状态机的当前状态不变。
(7)如果检测到第四事件且状态机处于第一状态,将状态机切换为第二状态。
(8)如果检测到第五事件且状态机处于第二状态,比较第二报文中携带的第一报文的SRH与缓存已存储的SRH是否一致,如果第二报文中携带的第一报文的SRH与已存储的SRH不一致,则将第二报文中携带的第一报文的SRH存储在缓存表项中,从而更新缓存,并设置老化标志(agingflag)为1,将状态机切换为第一状态。
(9)如果检测到第三事件且状态机处于第二状态,向对端代理节点发送一个第二报文,将状态机切换为第三状态。
(10)如果检测到第六事件且状态机处于第三状态,将状态机切换为第一状态。
其中,第六事件为接收到第六报文,第六事件在程序中可以记为CacheSync_Ack(直译为缓存同步确认,Sync表示Synchronize,Ack表示Acknowledge,意为确认)事件。
通过状态机的机制,达到的效果至少可以包括:通过状态机,能够实时根据对端代理节点缓存SRH的状态来决定当前执行的动作,从而将SRH及时传输给对端的代理节点,能够保证本地代理节点与对端代理节点之间的缓存表项保持同步,实现双归连接到同一个SF节点的代理节点之间SRH的一致性。
上述方法实施例可以应用在代理节点自发它收的场景下,参见图12,图12是本申请实施例提供的一种自发它收的场景下报文传输的示意图,在图12中,SF节点接收到代理节点1的报文后,将处理后的报文返回给了代理节点2。而自发它收的场景是可选方式,本申请实施例提供的报文传输方法还可以应用在代理节点自发自收的场景下。自发自收的场景是指,由一个代理节点向SF节点发送报文后,仍由该代理节点接收SF节点返回的报文。在该场景中,SF节点接收到来自一个代理节点的报文后,将处理后的报文返回给该代理节点。参见图34,图34是本申请实施例提供的一种自发自收的场景下报文传输的示意图,在图34中,SF节点接收到代理节点1的报文后,将处理后的报文返回给了代理节点1。
以下,通过图35实施例,示例性介绍自发自收的场景下的报文传输流程。需要说明的是,图35实施例着重描述与图5实施例的区别之处,而与图5实施例同理的步骤还请参见图5实施例,在图35实施例中不做赘述。
参见图35,图35是本申请实施例提供的一种报文传输方法的流程图,该方法可以包括以下步骤:
步骤3501、路由节点向第一代理节点发送第一报文。
步骤3502、第一代理节点从路由节点接收该第一报文,第一代理节点在第一缓存表项中存储第一报文的SRH。
步骤3503、第一代理节点根据端点动态代理SID、第一报文和对应于第二代理节点的第一绕行SID,生成第二报文。
步骤3504、第一代理节点向第二代理节点发送第二报文。
步骤3505、第二代理节点从第一代理节点接收第二报文,确定控制信息指示第二代理节点存储第一报文的SRH。
步骤3506、第二代理节点解析第二报文,得到第一报文的SRH。
步骤3507、第二代理节点在第二缓存表项中存储第一报文的SRH。
步骤3508、第一代理节点根据第一报文,生成第三报文。
步骤3509、第一代理节点向SF节点发送第三报文。
步骤3510、SF节点从第一代理节点接收第三报文,对第三报文进行处理,得到第四报文。
步骤3511、SF节点向第一代理节点发送第四报文。
与图5实施例相区别,SF节点可以将第四报文返回给第一代理节点。具体地,可以选择对应于第一代理节点的出接口,通过该出接口发送第四报文。
作为示例,在自发自收的场景下,SF节点可以预先存储对应于第一代理节点的入接口与对应于第一代理节点的出接口之间的绑定关系,那么,当从入接口接收到第三报文,生成第四报文后,基于该绑定关系,SF节点会选择该入接口绑定的出接口发送第四报文,使得报文通过该出接口发送出去后到达第一代理节点。其中,该绑定关系可以根据配置操作确定。例如,如果SF节点通过接口3和proxy1相连,可以预先对SF节点配置转发规则:对接口3接收的报文处理后,得到的报文都通过接口3返回。SF节点会将该转发规则存储在转发表中,当proxy1发送了第三报文后,SF节点会通过接口3接收到第三报文,SF节点查询转发表中的该转发规则,会将第三报文通过接口3返回至第一代理节点。
步骤3512、第一代理节点从SF节点接收第四报文,根据第四报文,查询第一缓存表项,得到第一报文的SRH。
步骤3513、第一代理节点根据第四报文以及第一报文的SRH,生成第五报文。
步骤3514、第一代理节点向路由节点发送第五报文。
步骤3515、路由节点从第一代理节点接收第五报文,转发第五报文。
本实施例提供的方法,通过为End.AD.SID扩展了一种具有绕行功能的新的SID,本地的代理节点在接收到携带End.AD.SID的报文时,利用这种新的SID,将报文的SRH绕行传输至对端的代理节点,从而让对端的代理节点也能够得到报文的SRH,使得报文的SRH能够存储至对端的代理节点。如此,双归接入至同一个SF节点的两个代理节点之间可以实现报文的SRH的同步,使得报文的SRH得到冗余备份,提高报文的SRH的安全性。
上述方法实施例描述了报文传输的基本流程,本申请实施例提供的方法还可以支持NAT的场景。NAT的场景是指SF节点的业务功能包括对流标识进行修改,在NAT的场景下,SF节点对报文进行业务处理后,通常情况下,处理后的报文对应的流标识会与处理前的报文对应的流标识不一致。例如,如果SF节点的业务功能包括进行NAT,SF节点会对报文的五元组进行修改,则处理后的报文的五元组会与处理前的报文的五元组不一致。在一个示例性场景中,如果SF是防火墙,SF通常会包括目的地址转换(Destination Network Address Translation,DNAT)的业务功能,SF会存储有地址池,会将报文中的目的地址替换为地址池中的某一地址,将目的地址替换后的报文返回给代理节点,使得代理节点接收到的报文的目的地址会与本地代理节点或对端代理节点之前向SF节点发送的报文的目的地址不同。
在NAT的场景下,可以通过执行本申请实施例提供的方法,来保证在SF节点修改了报文对应的流标识的情况下,代理节点根据已修改流标识的报文,仍能够查找到报文 的SRH,从而保证报文的SRH得到恢复。以下,结合图36具体描述NAT的场景下的报文传输流程。需要说明的是,图36实施例着重描述与图5实施例的区别之处,而与图5实施例同理的步骤以及其他特征还请参见图5实施例,在图36实施例不做赘述。
参见图36,图36是本申请实施例提供的一种报文传输方法的流程图,如图36所示,该方法可以包括以下步骤:
步骤3601、路由节点向第一代理节点发送第一报文。
步骤3602、第一代理节点从路由节点接收该第一报文,第一代理节点确定SF节点的业务功能包括对流标识进行修改。
关于确定SF节点的业务功能的过程，在一个实施例中，可以预先对第一代理节点进行配置操作，第一代理节点可以接收配置指令，从配置指令获取SF节点的业务功能信息，该业务功能信息表示SF节点的业务功能包括对流标识进行修改，第一代理节点可以存储SF节点的业务功能信息，根据该业务功能信息，确定SF节点的业务功能包括对流标识进行修改。在另一个实施例中，也可以由第一代理节点自动发现SF节点的业务功能包括对流标识进行修改，例如第一代理节点可以向SF节点发送能力查询请求，SF节点可以接收能力查询请求，响应于能力查询请求，向第一代理节点发送业务功能信息，第一代理节点可以根据接收的业务功能信息，确定SF节点的业务功能包括对流标识进行修改。当然，还可以由流分类器或者网络的控制器将SF节点的业务功能信息发送给第一代理节点，本实施例对第一代理节点如何确定SF节点的业务功能不做限定。
本实施例中,第一代理节点可以通过确定SF节点的业务功能包括对流标识进行修改,执行后续步骤提供的方式,保证在SF节点修改了报文对应的流标识的情况下,代理节点根据已修改流标识的报文,仍能够查找到报文的SRH,从而保证报文的SRH得到恢复。
步骤3603、第一代理节点以第一缓存表项的标识为索引,在第一缓存表项中存储第一报文的SRH。
第一缓存表项的标识用于在第一代理节点的缓存中唯一标识第一缓存表项。第一缓存表项的标识可以包括第一缓存表项的缓存索引(cache Index),该缓存索引用于在第一代理节点的接口板中标识第一缓存表项。在一个实施例中,第一代理节点可以包括多个接口板,则第一缓存表项的标识还可以包括目标接口板(Target Board,TB)的标识,目标接口板为第一缓存表项所在的接口板,目标接口板的标识用于在第一代理节点的每个接口板中标识目标接口板。
步骤3604、第一代理节点根据端点动态代理SID、第一报文和对应于第二代理节点的第一绕行SID,生成第二报文。
步骤3605、第一代理节点向第二代理节点发送第二报文。
第二报文可以包括第一缓存表项的标识,第二报文中的控制信息可以在指示第二代理节点存储第一报文的SRH的基础上,还指示第二代理节点存储第一缓存表项的标识与第二缓存表项的标识之间的映射关系。其中,第二缓存表项为第二代理节点存储第一报文的SRH的缓存表项。
通过在第二报文中携带第一缓存表项的标识以及控制信息,第一代理节点能够利用第二报文,将本端用于存储SRH的缓存表项的标识传输至第二代理节点,利用控制信 息,指示第二代理节点维护第一代理节点上存储SRH的缓存表项与第二代理节点上存储SRH的缓存表项之间的映射关系。
可选地,控制信息可以包括第三标志,该第三标志用于标识SF节点的业务功能包括对流标识进行修改,可以预先对第二代理节点进行配置,令第二代理节点如果确定控制信息包括第三标志,则存储第一缓存表项的标识与第二缓存表项的标识之间的映射关系。例如,该第三标志可以记为NAT标志,该NAT标志用于标识SF节点的业务功能包括NAT。示例性地,参见图7,第二报文可以包括如图7所示的扩展头,其中,缓存索引字段以及TB字段用于携带第一缓存表项的标识,缓存索引字段的取值可以为第一缓存表项的标识中的缓存索引,TB字段的取值可以为第一缓存表项的标识中目标接口板的标识,N字段用于携带第三标志,例如,N字段的取值可以是1,表示SF节点是NAT类型的SF,SF节点的业务功能包括修改流标识,因此识别出N字段的取值为1时,要维护两个代理节点的缓存表项的标识之间的映射关系。
步骤3606、第二代理节点从第一代理节点接收第二报文,确定控制信息指示第二代理节点存储第一报文的SRH。
步骤3607、第二代理节点解析第二报文,得到第一报文的SRH。
步骤3608、第二代理节点确定SF节点的业务功能包括对流标识进行修改。
第二代理节点的确定步骤与第一代理节点的确定步骤同理,可以参见步骤3602,在此不做赘述。
步骤3609、第二代理节点以第二缓存表项的标识为索引,在第二缓存表项中存储第一报文的SRH。
第二缓存表项的标识用于在第二代理节点的缓存中唯一标识第二缓存表项。第二缓存表项的标识可以包括第二缓存表项的缓存索引(cache Index),该缓存索引用于在第二代理节点的接口板中标识对应的第二缓存表项。在一个实施例中,第二代理节点可以包括多个接口板,则第二缓存表项的标识还可以包括目标接口板的标识,目标接口板为第二缓存表项所在的接口板,目标接口板的标识用于在第二代理节点的每个接口板中标识目标接口板。另外,第二代理节点的存储步骤与第一代理节点的存储步骤同理,可以参见步骤3603,在此不做赘述。
在一个实施例中,第二代理节点根据第二报文中的控制信息,可以确定控制信息指示第二代理节点存储第一报文的SRH以及存储第一缓存表项的标识与第二缓存表项的标识之间的映射关系,则第二代理节点可以从第二报文获取到第一缓存表项的标识,存储第一缓存表项的标识与第二缓存表项的标识之间的映射关系。示例性地,如果第一代理节点将SRH存储在TB1中,SRH在TB1的缓存索引是cache index1,第二代理节点将SRH存储在TB2中,SRH在TB2的缓存索引是cache index2,则第一缓存表项的标识为(TB1,cache index1),第二缓存表项的标识为(TB2,cache index2),第二代理节点存储的映射关系可以包括(TB1,cache index1,TB2,cache index2)。
其中,第二代理节点可以识别第二报文中第一标志的取值、第二标志的取值和第三标志的取值,如果第一标志的取值设置为1、第二标志的取值设置为0,第三标志的取值设置为1,则确定控制信息指示存储第一报文的SRH,且存储第一缓存表项的标识与第二缓存表项的标识之间的映射关系。
步骤3610、第一代理节点根据第一报文,生成第三报文。
步骤3611、第一代理节点向SF节点发送第三报文。
第三报文可以包括第一缓存表项的标识,例如,第一缓存表项的标识可以携带在第三报文的载荷中。通过在第三报文中携带第一缓存表项的标识,使得SF节点接收到的报文会携带第一缓存表项的标识,那么SF节点对报文进行业务处理后,得到的报文也会携带第一缓存表项的标识,使得SF节点向代理节点返回的报文携带第一缓存表项的标识,因此代理节点可以利用返回的报文中的第一缓存表项的标识,查找到之前存储SRH的缓存表项。
步骤3612、SF节点从第一代理节点接收第三报文,对第三报文进行处理,得到第四报文。
步骤3613、SF节点向第二代理节点发送第四报文。
步骤3614、第二代理节点从SF节点接收第四报文,以第二缓存表项的标识为索引,查询第二缓存表项,得到第一报文的SRH。
在一些实施例中,第四报文可以包括第一缓存表项的标识和载荷。第二代理节点可以根据第一缓存表项的标识,查询第一缓存表项的标识与第二缓存表项的标识之间的映射关系,得到第一缓存表项的标识对应的第二缓存表项的标识。例如,如果第二报文中第一缓存表项的标识为(TB1,cache index1),第二代理节点存储的映射关系包括(TB1,cache index1,TB2,cache index2),第二代理节点根据(TB1,cache index1),查询映射关系,可以得到(TB2,cache index2),在TB2上查找cache index2对应的缓存表项,从该缓存表项中得到第一报文的SRH。
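以下Python片段示意NAT场景下第二代理节点维护并查询两端缓存表项标识之间映射关系的过程，其中缓存表项标识沿用上文（TB，cache index）的示例形式，其余名称为举例假设。
peer_to_local = {("TB1", "cache index1"): ("TB2", "cache index2")}   # 第一缓存表项标识 -> 第二缓存表项标识
local_cache = {("TB2", "cache index2"): "第一报文的SRH"}

def lookup_srh_by_peer_entry_id(peer_entry_id):
    """根据第四报文携带的第一缓存表项的标识，先查映射关系，再查本地缓存得到SRH"""
    local_entry_id = peer_to_local.get(peer_entry_id)
    return None if local_entry_id is None else local_cache.get(local_entry_id)

print(lookup_srh_by_peer_entry_id(("TB1", "cache index1")))   # 第一报文的SRH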
步骤3615、第二代理节点根据第四报文以及第一报文的SRH,生成第五报文,第五报文包括第四报文的载荷以及第一报文的SRH。
步骤3616、第二代理节点向路由节点发送第五报文。
步骤3617、路由节点从第二代理节点接收第五报文,转发第五报文。
相关技术中,SRv6 SFC的动态代理节点通常以报文的五元组为key,在缓存表项中存储报文的SRH,当接收到SF节点返回的报文后,仍以报文的五元组为key,从缓存中查询对应的缓存表项。那么,如果SF节点的业务功能是NAT,由于SF节点会在处理报文的过程中,修改报文的五元组,使得SF节点返回的报文的五元组会与之前接收的报文的五元组不一致,也就使得代理节点从SF节点接收的报文与之前向SF节点发送的报文的五元组不一致,那么代理节点根据接收到的报文查询缓存时,由于查询缓存时使用的key和在缓存中存储时使用的key不一致,导致查找不到SRH,也就无法向报文恢复SRH,进而导致报文传输失败。比如,代理节点之前以五元组1为key存储了SRH,而报文的五元组被SF节点从五元组1修改为五元组2,代理节点以五元组2为key查询缓存时,会无法得到SRH。
本实施例提供的方法,代理节点通过确定SF节点的业务功能包括修改流标识,以存储了SRH的缓存表项的标识为key,来存储报文的SRH,那么如果SF节点将修改了流标识的报文返回给代理节点,由于存储SRH的缓存表项相对于代理节点来说通常是固定的,不会由于流标识的修改而改变,因此存储SRH时缓存表项的标识和查询SRH时缓存表项的标识能够保持一致,因此以缓存表项的标识为索引,能够查询到修改了流 标识的报文的SRH,从而向修改了流标识的报文恢复SRH。基于这种方式,即使SF是NAT功能的SF,使得报文在传输过程中流标识被改变,代理节点也能够恢复SRH,因此可以让代理节点支持接入NAT功能的SF,从而为实现了NAT功能的SF提供动态代理功能。
上述方法实施例提供了一种基于自发它收的流程来支持NAT的场景。本申请实施例还可以基于自发自收的流程来支持NAT的场景,以下通过图37所示方法实施例具体描述。需要说明的是,图37实施例着重描述与图36实施例的区别之处,而与图36实施例同理的步骤还请参见图36实施例,在图37实施例中不做赘述。
参见图37,图37是本申请实施例提供的一种报文传输方法的流程图,如图37所示,该方法的交互主体包括第一代理节点、第二代理节点、SF节点以及路由节点,该方法可以包括以下步骤:
步骤3701、路由节点向第一代理节点发送第一报文。
步骤3702、第一代理节点从路由节点接收该第一报文,第一代理节点确定SF节点的业务功能包括对流标识进行修改。
步骤3703、第一代理节点以第一缓存表项的标识为索引,在第一缓存表项中存储第一报文的SRH。
步骤3704、第一代理节点根据端点动态代理SID、第一报文和对应于第二代理节点的第一绕行SID,生成第二报文。
步骤3705、第一代理节点向第二代理节点发送第二报文。
步骤3706、第二代理节点从第一代理节点接收第二报文,确定控制信息指示第二代理节点存储第一报文的SRH。
步骤3707、第二代理节点解析第二报文,得到第一报文的SRH。
步骤3708、第二代理节点确定SF节点的业务功能包括对流标识进行修改。
步骤3709、第二代理节点以第二缓存表项的标识为索引,在第二缓存表项中存储第一报文的SRH。
步骤3710、第一代理节点根据第一报文,生成第三报文。
步骤3711、第一代理节点向SF节点发送第三报文。
第二报文还包括第一缓存表项的标识。本实施例中,控制信息还用于指示第二代理节点存储第一缓存表项的标识与第二缓存表项的标识之间的映射关系,第二缓存表项为第二代理节点存储第一报文的SRH的缓存表项。
步骤3712、SF节点从第一代理节点接收第三报文,对第三报文进行处理,得到第四报文。
步骤3713、SF节点向第一代理节点发送第四报文,第四报文包括第一缓存表项的标识和载荷。
步骤3714、第一代理节点从SF节点接收第四报文,以第一缓存表项的标识为索引,查询第一缓存表项,得到第一报文的SRH。
本步骤可以与步骤3614相区别，由于第四报文中的缓存表项的标识对应于本地的缓存，第一代理节点可以免去查询本地缓存表项与对端缓存表项之间的映射关系的步骤，从第四报文获取第一缓存表项的标识，以第一缓存表项的标识为key查询缓存，即可找到第一缓存表项。
步骤3715、第一代理节点根据第四报文以及第一报文的SRH,生成第五报文。
步骤3716、第一代理节点向路由节点发送第五报文。
步骤3717、路由节点从第一代理节点接收第五报文,转发第五报文。
本实施例提供的方法,代理节点通过确定SF节点的业务功能包括修改流标识,以存储了SRH的缓存表项的标识为key,来存储报文的SRH,那么如果SF节点将修改了流标识的报文返回给代理节点,由于存储SRH的缓存表项相对于代理节点来说通常是固定的,不会由于流标识的修改而改变,因此存储SRH时缓存表项的标识和查询SRH时缓存表项的标识能够保持一致,因此以缓存表项的标识为索引,能够查询到修改了流标识的报文的SRH,从而向修改了流标识的报文恢复SRH。基于这种方式,即使SF是NAT功能的SF,使得报文在传输过程中流标识被改变,代理节点也能够恢复SRH,因此可以让代理节点支持接入NAT功能的SF,从而为实现了NAT功能的SF提供动态代理功能。
上述方法实施例描述了支持NAT场景的报文传输流程。本申请实施例提供的方法还可以支持SRv6 VPN的场景，SRv6 VPN是指基于SRv6隧道来传输VPN数据，在SRv6 VPN的场景下，业务链中的SF节点要对VPN数据进行处理，从而实现VPN业务。在这种场景下，代理节点可以支持实现VPN业务的SF节点，通过执行下述方法实施例，来为实现VPN业务的SF节点提供动态代理服务。以下通过图38所示的方法实施例，对VPN场景下的报文传输流程进行描述。需要说明的是，图38实施例着重描述与图5实施例的区别之处，而与图5实施例同理的步骤或者其他特征还请参见图5实施例，在图38实施例中不做赘述。
参见图38,图38是本申请实施例提供的一种报文传输方法的流程图,该方法可以包括以下步骤:
步骤3801、路由节点向第一代理节点发送第一报文。
步骤3802、第一代理节点从路由节点接收该第一报文,第一代理节点以第一报文对应的流标识和端点动态代理SID为索引,在第一缓存表项中存储第一报文的SRH。
第一代理节点可以接入多个SF节点,该多个SF节点可以属于不同的VPN,因此,在第一代理节点为该多个SF节点共同提供动态代理服务的过程中,第一代理节点可以接收到待发往不同VPN的报文。而不同VPN的报文可能出现对应于相同的流标识的情况,这就导致如果以流标识为索引来存储SRH,不同VPN的报文的SRH的索引会相同,导致不同VPN的报文的SRH会被存储在相同的位置,那么,一方面,无法达到不同VPN之间信息隔离的目的,影响安全性,另一方面,由于同一个索引在缓存中同时命中多个SRH,导致无法确定向报文恢复哪一个SRH,进而导致恢复SRH失败。
而本实施例中,可以预先为每个VPN的SF节点分配对应的End.AD.SID,不同VPN的SF节点对应的End.AD.SID不同。通过这种分配End.AD.SID的方式,End.AD.SID能够作为VPN的标识,发往不同VPN的SF节点的报文中携带的End.AD.SID会不同, 通过不同的End.AD.SID,能够区分不同VPN的报文。例如,如果第一代理节点接入了10个SF节点,分别是SF节点0、SF节点1至SF节点9,这10个SF节点分别属于2个不同的VPN,其中SF节点0至SF节点5属于VPN1,SF节点6至SF节点9属于VPN2。在这一例子中,可以为10个SF节点分配2个End.AD.SID,其中,为SF节点0至SF节点5分配1个End.AD.SID,为SF节点6至SF节点9分配另1个End.AD.SID,通过报文中的End.AD.SID是这两个End.AD.SID中的哪一个,能够标识报文是发往哪个VPN的报文。例如,为SF节点0至SF节点5分配的End.AD.SID可以是A::1,为SF节点6至SF节点9分配的End.AD.SID可以是B::2,那么如果接收到的报文1和报文2,报文1携带的End.AD.SID是A::1,报文2携带的End.AD.SID是B::2,则表明报文1是VPN1对应的报文,报文2是VPN2对应的报文。
由于发往不同VPN的SF节点的报文中携带的End.AD.SID不同,因此,通过以流标识和End.AD.SID为索引,来存储SRH,不同VPN的报文的索引能够通过不同的End.AD.SID区别开,从而保证不同VPN的报文的SRH得以分开存储,实现不同VPN的信息隔离,此外避免了同一个索引命中多个VPN的报文的情况,从而解决了上述技术问题。
此外,第一代理节点接入的SF节点可以分别属于公网和VPN,可以为公网的SF节点和VPN的SF节点分别分配对应的End.AD.SID,公网的SF节点对应的End.AD.SID和VPN的SF节点对应的End.AD.SID不同。例如,如果第一代理节点接入的10个SF节点中,SF节点0至SF节点4属于公网,SF节点5至SF节点9属于VPN1。在这一例子中,可以为10个SF节点分配2个End.AD.SID,其中,为SF节点0至SF节点4分配1个End.AD.SID,为SF节点5至SF节点9分配另1个End.AD.SID,通过这种分配方式,以流标识和End.AD.SID为索引,来存储SRH,能够保证公网的报文的SRH与VPN的报文的SRH得以分开存储,此外避免了同一个索引命中公网的报文以及VPN的报文的情况。
步骤3803、第一代理节点根据端点动态代理SID、第一报文和对应于第二代理节点的第一绕行SID,生成第二报文。
步骤3804、第一代理节点向第二代理节点发送第二报文。
步骤3805、第二代理节点从第一代理节点接收第二报文,确定控制信息指示第二代理节点存储第一报文的SRH。
步骤3806、第二代理节点解析第二报文,得到第一报文的SRH。
步骤3807、第二代理节点根据第二报文对应的流标识和端点动态代理SID,在第二缓存表项中存储第一报文的SRH,其中,第二报文对应的流标识可以为第一报文对应的流标识。
例如，第二报文可以包括五元组，而第二报文中的五元组和第一报文中的五元组相同。第二报文可以包括End.AD.SID，例如，第二报文可以包括第一报文的SRH，而第一报文的SRH可以包括End.AD.SID，因此第二代理节点可以从第二报文，得到第一报文对应的流标识以及End.AD.SID。VPN标识用于标识业务功能节点所属的VPN，不同VPN的VPN标识不同，因此通过VPN标识，能够区分不同的VPN。例如，VPN标识可以是VLAN ID。VPN标识可以预先在第二代理节点中存储。
根据流标识和End.AD.SID来存储SRH的方式可以有多种,以下以方式一和方式二举例说明。
方式一、第二代理节点以第一报文对应的流标识和End.AD.SID为索引,在第二缓存表项中存储第一报文的SRH。
对于不同VPN的报文而言,考虑到不同VPN的报文可能出现流标识相同的情况,那么如果仅使用流标识为索引来存储,可能会由于索引相同,而导致不同VPN的报文存储在同一位置,无法实现不同VPN之间信息隔离。而通过方式一,即使不同VPN的报文对应的流标识相同,由于它们对应的End.AD.SID不同,因此可以保证它们对应的索引通过End.AD.SID区别开,因此可以保证不同VPN的报文的SRH存储在不同位置,实现不同VPN之间信息隔离。
方式二、第二代理节点以第一报文对应的流标识、End.AD.SID和VPN标识为索引,在第二缓存表项中存储第一报文的SRH。
对于不同VPN的报文而言,考虑到不同VPN的报文可能出现流标识相同的情况,那么如果仅使用流标识为索引来存储,可能会由于索引相同,而导致不同VPN的报文存储在同一位置,无法实现不同VPN之间信息隔离。而通过方式二,即使不同VPN的报文对应的流标识相同,由于它们对应的End.AD.SID和VPN标识不同,因此可以保证它们对应的索引通过End.AD.SID和VPN标识区别开,因此可以保证不同VPN的报文的SRH存储在不同位置,实现不同VPN之间信息隔离。
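以下Python片段示意以（流标识，End.AD.SID）或（流标识，End.AD.SID，VPN标识）为索引存取SRH的方式一与方式二（仅为简化示意，键与取值均为举例）。
vpn_cache = {}

def store_srh(flow_id, end_ad_sid, srh, vpn_id=None):
    """方式一：以（流标识, End.AD.SID）为键；方式二：再加入VPN标识"""
    key = (flow_id, end_ad_sid) if vpn_id is None else (flow_id, end_ad_sid, vpn_id)
    vpn_cache[key] = srh

def lookup_srh(flow_id, end_ad_sid, vpn_id=None):
    key = (flow_id, end_ad_sid) if vpn_id is None else (flow_id, end_ad_sid, vpn_id)
    return vpn_cache.get(key)

# 用法示例：两个VPN的报文即使流标识相同，其SRH也存放在不同表项中
store_srh("五元组X", "A::1", "SRH-VPN1")
store_srh("五元组X", "B::2", "SRH-VPN2")
print(lookup_srh("五元组X", "A::1"))   # SRH-VPN1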
步骤3808、第一代理节点根据第一报文,生成第三报文。
步骤3809、第一代理节点向SF节点发送第三报文。
第三报文对应的流标识可以和第一报文对应的流标识相同,第三报文可以包括VPN标识。在一个实施例中,第一代理节点可以和SF节点建立传输隧道,第一代理节点可以生成该传输隧道对应的隧道头,封装隧道头,使得第三报文包括该隧道头,该隧道头可以包括VPN标识。
步骤3810、业务功能节点从第一代理节点接收第三报文,对第三报文进行处理,得到第四报文。
步骤3811、业务功能节点向第二代理节点发送第四报文。
步骤3812、第二代理节点从SF节点接收第四报文,根据VPN标识、End.AD.SID和第四报文对应的流标识,查询第二缓存表项,得到第一报文的SRH。
第二代理节点查询SRH的方式可以有多种,作为示例,查询得到SRH的方式可以包括以下方式一或方式二:
方式一具体可以包括以下步骤一至步骤二:
步骤一、第二代理节点根据VPN标识,查询端点动态代理SID与VPN标识之间的映射关系,得到VPN标识对应的端点动态代理SID。
第四报文对应的流标识可以为第一报文对应的流标识，第四报文包括VPN标识和载荷，第二代理节点可以从第四报文中获取VPN标识，根据VPN标识查询End.AD.SID与VPN标识之间的映射关系，从而找到该VPN标识对应于哪个End.AD.SID。例如，如果End.AD.SID A::1与VLAN ID100之间具有映射关系，End.AD.SID A::2与VLAN ID200之间具有映射关系，当第四报文携带的VPN标识是100时，第二代理节点根据100查询映射关系，可以得到A::1。
关于第二代理节点存储End.AD.SID与VPN标识之间的映射关系的过程，在一个实施例中，可以在第二代理节点上进行配置操作，来配置为每个VPN的SF节点分配的End.AD.SID。第二代理节点可以接收配置指令，第二代理节点可以根据配置指令，存储端点动态代理SID与VPN标识之间的映射关系。其中，配置指令包括每个VPN中的SF节点对应的End.AD.SID。第二代理节点可以解析配置指令，得到配置指令携带的每个VPN中的SF节点对应的End.AD.SID。例如，如果第二代理节点接入了两个VLAN中的SF节点，两个VLAN分别通过VLAN ID100和VLAN ID200来标识，可以预先在第二代理节点上配置两个End.AD.SID，例如将A::1分配给VLAN ID100的VLAN中的SF，将A::2分配给VLAN ID200的VLAN中的SF，可以存储A::1与100之间的映射关系以及A::2与200之间的映射关系。当然，通过上述方式来存储End.AD.SID与VPN标识之间的映射关系仅是示例，第二代理节点也可以通过其他方式获取End.AD.SID与VPN标识之间的映射关系，例如自动学习End.AD.SID与VPN标识之间的映射关系，从控制器接收End.AD.SID与VPN标识之间的映射关系等。
通过不仅使用流标识,还使用End.AD.SID为索引,可以保证不同VPN的报文的索引通过End.AD.SID区别开,因此可以避免通过同一个索引命中多个VPN的报文的SRH的情况,保证查询的准确性。
步骤二、第二代理节点以第四报文对应的流标识和端点动态代理SID为索引,查询第二缓存表项,得到第一报文的SRH。
方式二、第二代理节点以End.AD.SID、VPN标识和第四报文对应的流标识为索引,查询第二缓存表项,得到第一报文的SRH。
通过不仅使用流标识,还使用End.AD.SID和VPN标识为索引,可以保证不同VPN的报文的索引通过End.AD.SID和VPN标识区别开,因此可以避免通过同一个索引命中多个VPN的报文的SRH的情况,保证查询的准确性。
步骤3813、第二代理节点根据第四报文以及第一报文的SRH,生成第五报文,第五报文包括第四报文的载荷以及第一报文的SRH。
步骤3814、第二代理节点向路由节点发送第五报文。
步骤3815、路由节点从第二代理节点接收第五报文,转发第五报文。
本实施例提供了一种支持SRv6 VPN场景下提供动态代理服务的方法,通过为每个VPN的SF节点分配对应的End.AD.SID,能够通过End.AD.SID来作为VPN的标识,因此代理节点根据流标识和End.AD.SID,来存储SRH,能够让不同VPN的报文的SRH的索引能够通过不同的End.AD.SID区别开,从而保证不同VPN的报文的SRH的索引不同,使得不同VPN的报文的SRH得以分开存储,实现不同VPN的信息隔离,同时,避免通过同一个索引命中多个VPN的报文的SRH的情况,保证查询的准确性。
上述方法实施例基于自发它收的流程来支持了VPN的场景。本申请实施例还可以基于自发自收的流程支持VPN的场景,以下通过图39所示方法实施例具体描述。需要说明的是,图39实施例着重描述与图38实施例的区别之处,而与图38实施例同理的 步骤还请参见图38实施例,在图39实施例中不做赘述。
参见图39,图39是本申请实施例提供的一种报文传输方法的流程图,该方法可以包括以下步骤:
步骤3901、路由节点向第一代理节点发送第一报文。
步骤3902、第一代理节点从路由节点接收该第一报文,第一代理节点根据第一报文对应的流标识和端点动态代理SID,在第一缓存表项中存储第一报文的SRH。
第一代理节点存储SRH的方式可以有多种,以下通过方式一或方式二举例说明。
方式一、第一代理节点以第一报文对应的流标识和端点动态代理SID为索引，在第一缓存表项中存储第一报文的SRH。
方式二、第一代理节点以第一报文对应的流标识、端点动态代理SID和VPN标识为索引，在第一缓存表项中存储第一报文的SRH。
步骤3903、第一代理节点根据端点动态代理SID、第一报文和对应于第二代理节点的第一绕行SID,生成第二报文。
步骤3904、第一代理节点向第二代理节点发送第二报文。
步骤3905、第二代理节点从第一代理节点接收第二报文,确定控制信息指示第二代理节点存储第一报文的SRH。
步骤3906、第二代理节点解析第二报文,得到第一报文的SRH。
步骤3907、第二代理节点根据第二报文对应的流标识和端点动态代理SID,在第二缓存表项中存储第一报文的SRH,其中,第二报文对应的流标识为第一报文对应的流标识。
第二代理节点存储SRH的方式可以有多种,以下通过方式一或方式二举例说明。
方式一、第二代理节点以第一报文对应的流标识和端点动态代理SID为索引,在第二缓存表项中存储第一报文的SRH。
方式二、第二代理节点以第一报文对应的流标识、端点动态代理SID和VPN标识为索引,在第二缓存表项中存储第一报文的SRH。
步骤3908、第一代理节点根据第一报文,生成第三报文。
步骤3909、第一代理节点向SF节点发送第三报文。
步骤3910、SF节点从第一代理节点接收第三报文,对第三报文进行处理,得到第四报文。
步骤3911、SF节点向第一代理节点发送第四报文。
步骤3912、第一代理节点从SF节点接收第四报文,根据VPN标识、端点动态代理SID和第四报文对应的流标识,查询第一缓存表项,得到第一报文的SRH。
其中,第四报文对应的流标识为第一报文对应的流标识。
第一代理节点查询SRH的方式可以有多种,以下通过方式一或方式二举例说明。
方式一可以包括以下步骤一至步骤二:
步骤一、第一代理节点根据VPN标识,查询端点动态代理SID与VPN标识之间的映射关系,得到VPN标识对应的端点动态代理SID。
步骤二、第一代理节点以第四报文对应的流标识和端点动态代理SID为索引,查询第一缓存表项,得到第一报文的SRH。
方式二、第一代理节点以VPN标识、端点动态代理SID和第四报文对应的流标识为索引,查询第一缓存表项,得到第一报文的SRH。
其中，步骤3912中的方式二是针对于步骤3902中的方式二的情况。
步骤3913、第一代理节点根据第四报文以及第一报文的SRH,生成第五报文,第五报文包括第四报文的载荷以及第一报文的SRH。
步骤3914、第一代理节点向路由节点发送第五报文。
步骤3915、路由节点从第一代理节点接收第五报文,转发第五报文。
本实施例提供了一种支持SRv6 VPN场景下提供动态代理服务的方法,通过为每个VPN的SF节点分配对应的End.AD.SID,能够通过End.AD.SID来作为VPN的标识,因此代理节点通过以流标识和End.AD.SID为索引,来存储SRH,能够让不同VPN的报文的SRH的索引能够通过不同的End.AD.SID区别开,从而保证不同VPN的报文的SRH的索引不同,使得不同VPN的报文的SRH得以分开存储,实现不同VPN的信息隔离,同时,避免通过同一个索引命中多个VPN的报文的SRH的情况,保证查询的准确性。
上述各个方法实施例提供了正常状态下的报文传输方法。本申请实施例提供的方法还可以支持故障场景,当代理节点出现故障或者代理节点连接的链路出现故障时,也能够提供动态代理服务,保证报文的正常传输。例如,参见图40,代理节点1和SF节点之间的链路、代理节点2和SF节点之间的链路这两条链路可以组成冗余链路,当代理节点1和SF节点之间的链路发生故障时,流量能够绕行至代理节点2,被代理节点2执行动态代理操作,经过代理节点2继续向SF节点传输。
以下通过图41所示方法实施例,对故障场景下的传输流程进行具体描述。需要说明的是,图41实施例着重描述与上述各个实施例的区别之处,而与上述各个实施例同理的步骤还请参见上文,在图41实施例中不做赘述。
参见图41,图41是本申请实施例提供的一种报文传输方法的流程图,该方法可以包括以下步骤:
步骤4101、路由节点向第一代理节点发送第一报文。
步骤4102、第一代理节点从路由节点接收该第一报文,检测到故障事件。
故障事件可以包括两种类型,一种类型是链路故障,另一种类型是节点故障,第一代理节点检测到这两种类型中任一种类型的故障事件时,可以通过执行本实施例提供的方法,对待进行动态代理处理的报文进行重新封装,转发至第二代理节点,从而指示第二代理节点代替第一代理节点来执行动态代理操作,从重新封装的报文剥离SRH,将重新封装的报文的载荷传输至SF节点。
故障事件包括第一代理节点与SF节点之间的链路产生故障的事件或者第一代理节点产生故障的事件中的至少一项。参见图3和图4,第一代理节点与SF节点之间的链路产生故障的事件可以是第四链路产生故障的事件。示例性地,第一代理节点可以通过第三出接口与SF节点相连,第一代理节点可以通过第三出接口来将报文传输至第四链路,第一代理节点可以检测第三出接口的状态,如果第三出接口的状态为关闭(down),则第一代理节点检测到第四链路产生故障的事件。
在一个实施例中,可以对第一代理节点预先部署双向转发检测(Bidirectional forwarding detection,BFD)机制,BFD用于快速检测、监控网络中链路或者IP路由的转发连通状况,通过部署BFD机制,能够保证第一代理节点实时监控第四链路是否处于连通状态,从而快速感知到第四链路的故障。作为一种可能的实现,部署BFD机制的过程可以包括:在第一代理节点执行配置操作,第一代理节点接收配置指令,根据配置指令,为第三出接口开启BFD,建立与SF节点的BFD会话。BFD会话建立后,第一代理节点会每隔预设时间周期,通过第三出接口发送BFD报文,如果在预设时长内未收到返回的BFD响应报文,则可以确定第四链路产生故障。
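以下Python片段仅以“周期性发送探测、超时未收到响应即判定链路故障”的思路，对上述检测过程做一个高度简化的示意；它并不是BFD协议的实现，函数与参数均为举例假设。
import time

def probe_link(send_probe, wait_reply, interval=0.1, timeout=0.3):
    """示意：周期性发送探测报文，在timeout内未收到响应则判定链路故障"""
    deadline = time.time() + timeout
    while time.time() < deadline:
        send_probe()
        if wait_reply(interval):        # 在一个发送周期内等待响应
            return True                 # 链路连通
    return False                        # 判定第四链路产生故障

# 用法示例：用始终收不到响应的桩函数模拟链路故障
print(probe_link(lambda: None, lambda t: (time.sleep(t), False)[1]))   # False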
应理解,本实施例是以先描述接收第一报文,后描述检测故障事件为例进行说明,在另一个实施例中,第一代理节点也可以先检测到故障事件,后接收第一报文,或者,第一代理节点在接收到第一报文的同时,检测到故障事件,本实施例对接收第一报文与检测故障事件这两个步骤的时序不做限定。
步骤4103、第一代理节点在第一缓存表项中存储第一报文的SRH。
步骤4104、第一代理节点根据端点动态代理SID、第一报文和对应于第二代理节点的第一绕行SID,生成第二报文。
本实施例中,第二报文可以视为第一报文的绕行报文,第二报文可以是数据报文。第二报文包括第一报文的载荷。第二报文中的控制信息可以在指示第二代理节点存储第一报文的SRH的基础上,还指示第二代理节点向业务功能节点发送第一报文的载荷。可选地,控制信息可以包括第一标志以及第二标志,图41实施例中,第一标志的取值可以设置为0,表示报文不是复制出的报文,通常是数据报文,接收端收到报文后,要转发报文中的载荷。控制信息中的第二标志的取值可以设置为0,表示报文不是确认报文。
可选地,本实施例中生成第二报文的触发条件可以和图5实施例生成第二报文的触发条件不同。图5实施例中,生成第二报文的触发条件可以是检测到第一缓存表项生成、更新或者检测到当前时间点到达第二报文的发送周期,第二报文的数量可以和第一报文的数量具有显著差异,比如说,一条数据流包括N个第一报文,这N个第一报文的SRH通常情况下很大概率上是相同的,那么在图5实施例中,第一代理节点可以为一条数据流生成一条缓存表项来存储一个SRH,因此这条数据流总共触发一个第二报文,每个第一报文的载荷可以由第一代理节点传输至SF节点。而本实施例中,由于第一代理节点产生节点故障或链路故障,可以将每个第一报文的载荷分别携带在对应的第二报文中,从而将每个第一报文的载荷传输给第二代理节点,使得每个第一报文的载荷通过第二代理节点传输至SF节点,那么,生成第二报文的触发条件可以为:每当接收到一个第一报文,则对应生成一个第二报文,第二报文的数量可以和第一报文的数量大致相同。
步骤4105、第一代理节点向第二代理节点发送第二报文。
步骤4106、第二代理节点从第一代理节点接收第二报文,确定控制信息指示第二代理节点存储第一报文的SRH和向SF节点发送第二报文的载荷,该第二报文的载荷为第一报文的载荷。
例如,第二代理节点可以识别第二报文中第一标志的取值和第二标志的取值,如果第一标志的取值设置为0且第二标志的取值设置为0,则确定控制信息指示存储第一报 文的SRH,且发送第二报文的载荷。
步骤4107、第二代理节点解析第二报文,得到第一报文的SRH。
步骤4108、第二代理节点在第二缓存表项中存储第一报文的SRH。
步骤4109、第二代理节点根据第二报文,生成第三报文。
第三报文包括第一报文的载荷,第二代理节点生成的第三报文与上述图5实施例中第一代理节点生成的第三报文可以相同。在一个实施例中,步骤4109可以包括以下步骤一至步骤二:
步骤一、第二代理节点将第二报文还原为第一报文。
在一个实施例中,第二代理节点可以从第二报文的SRH删除第一绕行SID,将第二报文的SRH的SL减一,将第二报文的目的地址更新为端点动态代理SID,得到第一报文。
步骤二、第二代理节点根据第一报文,生成第三报文。
步骤二与上述步骤508同理,在此不做赘述。
应理解，本实施例是以先描述步骤4107，后描述步骤4108为例进行说明，在另一个实施例中，第二代理节点也可以先执行步骤4108，后执行步骤4107，或者，第二代理节点在执行步骤4107的同时，执行步骤4108，本实施例对步骤4107与步骤4108的时序不做限定。
步骤4110、第二代理节点向SF节点发送第三报文。
第二代理节点转发第三报文的流程与图5实施例第一代理节点转发第三报文的流程同理,在此不做赘述。
步骤4111、SF节点从第二代理节点接收第三报文,对第三报文进行处理,得到第四报文。
步骤4112、SF向第二代理节点发送第四报文。
步骤4113、第二代理节点从SF节点接收第四报文,根据第四报文,查询第二缓存表项,得到第一报文的SRH。
步骤4114、第二代理节点根据第四报文以及第一报文的SRH,生成第五报文。
步骤4115、第二代理节点向路由节点发送第五报文。
步骤4116、路由节点从第二代理节点接收第五报文,转发第五报文。
在一个实施例中，随着时间的推移，故障可以被消除，则链路或者节点可以从故障状态恢复至正常状态。在这种情况下，第一代理节点后续再次从路由节点接收到第一报文，可以检测到故障恢复事件，则可以停止执行本实施例提供的方法流程，不再将第一报文的载荷传输至第二代理节点，由第二代理节点将第一报文的载荷传输至SF节点，而是执行图5实施例提供的方法流程，由本端将第一报文的载荷传输至SF节点，从而避免正常转发场景下出现流量绕行的情况。其中，第一代理节点可以检测第三出接口的状态，如果第三出接口的状态为开启（up），则第一代理节点检测到第四链路恢复为连通状态。
本实施例提供了一种支持故障场景下提供动态代理服务的方法,本地代理节点通过检测到故障事件时,对待进行动态代理处理的报文进行重新封装,转发至对端代理节点,指示对端代理节点代替本地代理节点来执行动态代理操作,从重新封装的报文剥离 SRH,将重新封装的报文的载荷传输至SF节点,如此,能够在本地的代理节点发生设备故障或链路故障时,将本地的流量绕行至对端继续转发,从而在故障场景下,也能保证报文得以被SF节点处理,从而得到正常传输,因此,极大地提高了网络的稳定性、可靠性和健壮性。
上述方法实施例提供了第一代理节点检测出故障事件触发的报文传输流程,本申请实施例中,还提供了路由节点检测到故障事件后触发的报文传输流程。例如,参见图42,如果路由节点感知到代理节点1故障,或者感知到路由节点与代理节点1之间的链路故障,可以通过执行本申请实施例提供的方法,来保证报文得到动态代理操作,能够被SF节点进行处理。以下,通过图43实施例进行具体描述。
参见图43,图43是本申请实施例提供的一种报文传输方法的流程图,该方法可以包括以下步骤:
步骤4301、路由节点检测到故障事件,向第二代理节点发送第一报文。
参见图3,故障事件包括第二链路产生故障的事件、第四链路产生故障的事件或者第一代理节点产生故障的事件中的至少一项。示例性地,如果路由节点通过第四出接口接入第二链路,路由节点可以检测第四出接口的状态,如果第四出接口的状态为关闭(down),则路由节点检测到第二链路产生故障的事件。
可选地，可以对第一代理节点部署监控链路组（monitor link），该监控链路组包括第二链路以及第四链路，第四链路可以为监控链路组中的受监控链路，第二链路可以为监控链路组中随受监控链路联动的链路。第一代理节点可以检测第三出接口的状态，如果第三出接口的状态为down（关闭），可以确定第四链路发生故障，则第一代理节点将第二出接口的状态设置为down，使得第二链路随着第四链路的故障而同步关闭，从而使得第二链路与第四链路实现联动。例如，参见图44，代理节点1可以检测与SF节点相连的出接口的状态，如果与SF节点相连的出接口的状态为down，则代理节点1将与路由节点相连的出接口的状态设置为down，使得代理节点1与路由节点之间的链路关闭。其中，图44中的虚线表示第四链路的关闭引起第二链路的关闭。
路由节点检测到故障事件后,可以确定第一报文的载荷暂时无法由第一代理节点转发至SF节点,则利用双归接入的组网架构,向第二代理节点发送第一报文,从而将第一报文的载荷传输至第二代理节点,以便将第一报文的载荷由第二代理节点转发至SF节点。示例性地,参见图3,如果路由节点通过第五出接口接入第三链路,路由节点可以选择第五出接口作为第一报文的出接口,通过第五出接口发送第一报文,则第一报文会通过第三链路传输至第二代理节点。
步骤4302、第二代理节点从路由节点接收第一报文,第二代理节点在第二缓存表项中存储第一报文的SRH。
步骤4303、第二代理节点根据第一报文,生成第三报文。
步骤4304、第二代理节点向SF节点发送第三报文。
步骤4305、SF节点从第二代理节点接收第三报文,对第三报文进行处理,得到第四报文。
步骤4306、SF节点向第二代理节点发送第四报文。
步骤4307、第二代理节点从SF节点接收第四报文,根据第四报文,查询第二缓存表项,得到第一报文的SRH。
步骤4308、第二代理节点根据第四报文以及第一报文的SRH,生成第五报文。
步骤4309、第二代理节点向路由节点发送第五报文。
步骤4310、路由节点从第二代理节点接收第五报文,转发第五报文。
本实施例提供了一种支持故障场景下提供动态代理服务的方法,路由节点通过检测到一个代理节点相关的链路产生故障时,将报文转发至另外一个代理节点,使得另一个代理节点代替发生故障的代理节点来执行动态代理操作,从而在故障场景下,也能保证报文得以被SF节点处理,从而得到正常传输,因此,极大地提高了网络的稳定性、可靠性和健壮性。
上述图41实施例至图44实施例示出了故障场景下传输报文的方法流程,应理解,上述图41实施例以及图44实施例仅是对故障场景的举例,本申请实施例提供的方法还可以同理地应用在其他故障场景下。本申请实施例提供的方法具有自适应的特点,即,能够应用一套转发流程,来正确处理各种场景下的流量转发需求,为简洁起见,上文只针对部分故障场景进行说明,其余故障场景的处理与之类似,在此不再赘述。
以上介绍了本申请实施例的报文传输方法,以下介绍第一代理节点和第二代理节点。
图45是本申请实施例提供的一种第一代理节点的结构示意图,如图45所示,该第一代理节点包括:接收模块4501,用于执行接收第一报文的步骤,例如可以执行上述方法实施例中的步骤502、步骤3502、步骤3602、步骤3702、步骤3802、步骤3902、步骤4102或步骤4302;生成模块4502,用于执行生成第二报文的步骤,例如可以执行上述方法实施例中的步骤503、步骤3503、步骤3604、步骤3704、步骤3803、步骤3903或步骤4104;发送模块4503,用于执行发送第二报文的步骤,例如可以执行上述方法实施例中的步骤504、步骤3504、步骤3605、步骤3705、步骤3804、步骤3904或步骤4105。
可选地,生成模块4502,用于向第一报文的SRH的段列表插入第一绕行SID;将第一报文的SRH的剩余段数量SL加一;将第一报文的目的地址更新为第一绕行SID,得到第二报文。
可选地,控制信息携带在第二报文的扩展头中;或者,控制信息携带在第一绕行SID中;或者,控制信息携带在第二报文的IP头中;或者,控制信息携带在第二报文的SRH中的TLV中。
可选地,第一代理节点还包括:存储模块,用于执行存储第一报文的SRH的步骤,例如可以执行上述方法实施例中的步骤502、步骤3502、步骤3603、步骤3703、步骤3802、步骤3902或步骤4103;
生成模块4502,还用于执行生成第三报文的步骤,例如可以执行上述方法实施例中的步骤508、步骤3508、步骤3610、步骤3710、步骤3808或步骤3908;
发送模块4503,还用于执行发送第三报文的步骤,例如可以执行上述方法实施例中 的步骤509、步骤3509、步骤3611、步骤3711、步骤3809或步骤3909。
可选地,存储模块,用于执行以第一缓存表项的标识为索引存储SRH的步骤,例如可以执行上述方法实施例中的步骤3602和步骤3603,或步骤3702和步骤3703。
可选地,第一代理节点还包括:确定模块,还用于确定业务功能节点的业务功能包括对流标识进行修改;
可选地,第二报文还包括第一缓存表项的标识,控制信息还用于指示第二代理节点存储第一缓存表项的标识与第二缓存表项的标识之间的映射关系,第二缓存表项为第二代理节点存储第一报文的SRH的缓存表项。
可选地,接收模块4501,还用于执行接收第四报文的步骤,例如可以执行上述方法实施例中的步骤3714;第一代理节点还包括:查询模块,用于执行查询得到第一报文的SRH的步骤,例如可以执行上述方法实施例中的步骤3714;生成模块4502,还用于执行生成第五报文的步骤,例如可以执行上述方法实施例中的步骤3715;发送模块4503,还用于执行发送第五报文的步骤,例如可以执行上述方法实施例中的步骤3716。
可选地,生成第二报文的触发条件包括以下一种或多种:检测到第一缓存表项生成;检测到第一缓存表项更新。
可选地，存储模块，用于执行以第一报文对应的流标识和端点动态代理SID为索引，存储SRH的步骤，例如可以执行上述方法实施例中的步骤3802或步骤3902。
可选地,接收模块4501,用于执行接收配置指令的步骤;存储模块,用于执行存储端点动态代理SID与VPN标识之间的映射关系的步骤。
可选地，第三报文对应的流标识为第一报文对应的流标识，第三报文包括VPN标识，接收模块4501，还用于执行接收第四报文的步骤，例如执行上述方法实施例中的步骤3912；第一代理节点还包括：查询模块，用于执行查询端点动态代理SID与VPN标识之间的映射关系的步骤，例如执行上述方法实施例中的步骤3912；查询模块，还用于执行查询得到第一报文的SRH的步骤，例如执行上述方法实施例中的步骤3912；生成模块4502，还用于执行生成第五报文的步骤，例如执行上述方法实施例中的步骤3913；发送模块4503，还用于执行发送第五报文的步骤，例如执行上述方法实施例中的步骤3914。
可选地,第二报文还包括第一报文的载荷,控制信息还用于指示第二代理节点向业务功能节点发送第一报文的载荷,该第一代理节点还包括:检测模块,用于执行检测到故障事件的步骤,例如执行上述方法实施例中的步骤4102。
可选地,第一代理节点通过第一链路与第二代理节点相连,发送模块4503,用于通过对应于第一链路的第一出接口,向第二代理节点发送第二报文;或者,第一代理节点通过第二链路与路由节点相连,路由节点通过第三链路与第二代理节点相连,发送模块4503,用于通过对应于第二链路的第二出接口,向路由节点发送第二报文,第二报文由路由节点通过第三链路转发至第二代理节点。
可选地,该检测模块,还用于对第一链路的状态进行检测;发送模块4503,用于如果第一链路处于可用状态,通过对应于第一链路的第一出接口,向第二代理节点发送第二报文;或者,如果第一链路处于不可用状态,通过对应于第二链路的第二出接口,向路由节点发送第二报文。
可选地,发送模块4503,用于向第二代理节点连续发送多个第二报文;或者,当未接收到第六报文时,向第二代理节点重传第二报文,第六报文的源地址为第一绕行SID,第六报文可以包括对应于第一代理节点的第二绕行SID,例如,第六报文的目的地址为第二绕行SID,第六报文用于表示接收到了第二报文。
应理解,图45实施例提供的第一代理节点对应于上述方法实施例中的第一代理节点,第一代理节点中的各模块和上述其他操作和/或功能分别为了实现方法实施例中的第一代理节点所实施的各种步骤和方法,具体细节可参见上述方法实施例,为了简洁,在此不再赘述。
需要说明的一点是,图45实施例提供的第一代理节点在传输报文时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将第一代理节点的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的第一代理节点与上述报文传输的方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
图46是本申请实施例提供的一种第二代理节点的结构示意图,如图46所示,该第二代理节点包括:
接收模块4601，用于执行接收第二报文的步骤，例如可以执行上述方法实施例中的步骤505、步骤3505、步骤3606、步骤3706、步骤3805、步骤3905或步骤4106；
确定模块4602,用于执行确定控制信息指示第二代理节点存储第一报文的SRH的步骤,例如可以执行上述方法实施例中的步骤505、步骤3505、步骤3606、步骤3706、步骤3805、步骤3905或步骤4106;
解析模块4603,用于执行解析第二报文的步骤,例如可以执行上述方法实施例中的步骤506、步骤3506、步骤3607、步骤3707、步骤3806、步骤3906或步骤4107;
存储模块4604,用于执行存储第一报文的SRH的步骤,例如可以执行上述方法实施例中的步骤507、步骤3507、步骤3609、步骤3709、步骤3807、步骤3907、步骤4108或步骤4302。
可选地,解析模块4603,用于:从第二报文的SRH中的段列表,删除第一绕行SID;将第二报文的SRH中的SL减一,得到第一报文的SRH。
可选地,控制信息携带在第二报文的扩展头中;或者,控制信息携带在第一绕行SID中;或者,控制信息携带在第二报文的IP头中;或者,控制信息携带在第二报文的SRH中的TLV中。
可选地，存储模块4604，用于执行以第二缓存表项的标识为索引存储SRH的步骤，例如可以执行上述方法实施例中的步骤3608至步骤3609，或步骤3708至步骤3709。
可选地,确定模块4602,还用于确定业务功能节点的业务功能包括对流标识进行修改。
可选地,第二报文还包括第一缓存表项的标识,第一缓存表项为第一代理节点存储第一报文的SRH的缓存表项,确定模块4602,还用于确定控制信息还指示第二代理节点存储第一缓存表项的标识与第二缓存表项的标识之间的映射关系;存储模块4604,还用于存储第一缓存表项的标识与第二缓存表项的标识之间的映射关系。
可选地,接收模块4601,还用于执行接收携带了第一缓存表项的标识的第四报文的步骤;第二代理节点还包括:查询模块,用于根据第一缓存表项的标识,查询第一缓存表项的标识与第二缓存表项的标识之间的映射关系,得到第一缓存表项的标识对应的第二缓存表项的标识;查询模块,还用于以第二缓存表项的标识为索引,查询第二缓存表项,得到第一报文的SRH;生成模块,用于根据第四报文以及第一报文的SRH,生成第五报文,第五报文包括第四报文的载荷以及第一报文的SRH;发送模块,用于发送第五报文。
可选地,第二报文对应的流标识为第一报文对应的流标识,第一报文的SRH包括端点动态代理SID,存储模块4604,用于执行以第一报文对应的流标识和端点动态代理SID为索引存储SRH的步骤,例如执行上述方法实施例中的步骤3807或步骤3907。
可选地,接收模块4601,还用于接收配置指令,配置指令包括每个虚拟专用网络VPN中的业务功能节点对应的端点动态代理SID;存储模块4604,还用于根据配置指令,存储端点动态代理SID与VPN标识之间的映射关系。
可选地,接收模块4601,还用于执行步骤3812中接收第四报文的步骤;第二代理节点还包括:查询模块,用于执行根据VPN标识、端点动态代理SID和第四报文对应的流标识查询SRH的步骤,例如执行步骤3812;生成模块,还用于执行步骤3813;发送模块,还用于执行步骤3814。
可选地,第二代理节点还包括:确定模块4602,还用于确定控制信息还指示第二代理节点向业务功能节点发送第一报文的载荷的步骤,例如执行步骤4106;生成模块,还用于执行生成第三报文的步骤,例如执行步骤4109;发送模块,还用于执行步骤4110。
可选地,第二代理节点通过第一链路与第一代理节点相连,接收模块4601,用于通过对应于第一链路的第一入接口,从第一代理节点接收第二报文;或者,第二代理节点通过第三链路与路由节点相连,路由节点通过第二链路与第一代理节点相连,接收模块4601,用于通过对应于第三链路的第二入接口,从路由节点接收第二报文,第二报文由第一代理节点通过第二链路发送至路由节点。
可选地,接收模块4601,用于从第一代理节点连续接收多个第二报文;或者,第二代理节点还包括:生成模块,用于根据第一绕行SID、第二代理节点对应的第二绕行SID和第二报文,生成第六报文;发送模块,还用于向第一代理节点发送第六报文,第六报文的源地址为第一绕行SID,第六报文包括对应于第一代理节点的第二绕行SID,第六报文用于表示接收到了第二报文。
应理解,图46实施例提供的第二代理节点对应于上述方法实施例中的第二代理节点,第二代理节点中的各模块和上述其他操作和/或功能分别为了实现方法实施例中的第二代理节点所实施的各种步骤和方法,具体细节可参见上述方法实施例,为了简洁,在此不再赘述。
需要说明的一点是,图46实施例提供的第二代理节点在传输报文时,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将第二代理节点的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。另外,上述实施例提供的第二代理节点与上述报文传输的方法实施例属于同一构思,其具体实现过程详见方法实施例,这里不再赘述。
以上介绍了本申请实施例提供的第一代理节点和第二代理节点,以下介绍第一代理节点和第二代理节点可能的产品形态。应理解,但凡具备上述第一代理节点的特征的任何形态的产品,和但凡具备第二代理节点的特征的任何形态的产品,都落入本申请的保护范围。还应理解,以下介绍仅为举例,不限制本申请实施例的第一代理节点和第二代理节点的产品形态仅限于此。
本申请实施例提供了一种代理节点,该代理节点可以是第一代理节点,也可以是第二代理节点。
该代理节点包括处理器,该处理器用于执行指令,使得该代理节点执行上述各个方法实施例提供的报文传输方法。
作为示例,该处理器可以是网络处理器(Network Processor,简称NP)、中央处理器(central processing unit,CPU)、特定应用集成电路(application-specific integrated circuit,ASIC)或用于控制本申请方案程序执行的集成电路。该处理器可以是一个单核(single-CPU)处理器,也可以是一个多核(multi-CPU)处理器。该处理器的数量可以是一个,也可以是多个。
在一些可能的实施例中,该代理节点还可以包括存储器。
存储器可以是只读存储器(read-only memory,ROM)或可存储静态信息和指令的其它类型的静态存储设备,随机存取存储器(random access memory,RAM)或者可存储信息和指令的其它类型的动态存储设备,也可以是电可擦可编程只读存储器(electrically erasable programmable read-only memory,EEPROM)、只读光盘(compact disc read-only Memory,CD-ROM)或其它光盘存储、光碟存储(包括压缩光碟、激光碟、光碟、数字通用光碟、蓝光光碟等)、磁盘存储介质或者其它磁存储设备、或者能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其它介质,但不限于此。
存储器和处理器可以分离设置,存储器和处理器也可以集成在一起。
在一些可能的实施例中,该代理节点还可以包括收发器。
收发器用于与其它设备或通信网络通信，网络通信的方式可以是但不限于以太网、无线接入网（RAN）、无线局域网（wireless local area networks，WLAN）等。在本申请实施例中，通信接口可以用于接收其他节点发送的报文，也可以向其他节点发送报文。
在一些可能的实施例中,上述代理节点可以实现为网络设备,该网络设备中的网络处理器可以执行上述方法实施例的各个步骤。例如,该网络设备可以是路由器、交换机或防火墙等。
参见图47,图47示出了本申请一个示例性实施例提供的网络设备的结构示意图,该网络设备可以配置为第一代理节点或第二代理节点。
网络设备4700包括:主控板4710、接口板4730、交换网板4720和接口板4740。主控板4710用于完成系统管理、设备维护、协议处理等功能。交换网板4720用于完成各接口板(接口板也称为线卡或业务板)之间的数据交换。接口板4730和4740用于提供各种业务接口(例如,以太网接口、POS接口等),并实现数据包的转发。主控板4710、 接口板4730和4740,以及交换网板4720之间通过系统总线与系统背板相连实现互通。接口板4730上的中央处理器4731用于对接口板进行控制管理并与主控板4710上的中央处理器4711进行通信。
如果网络设备4700被配置为第一代理节点,物理接口卡4733接收第一报文,发送给网络处理器4732,网络处理器4732根据端点动态代理SID、第一报文和对应于第二代理节点的第一绕行SID,生成第二报文,根据出接口等信息,在完成链路层封装后,将第二报文从物理接口卡4733发送出去,使得第二报文传输至第二代理节点。
在一个实施例中,网络处理器4732可以对该第一报文的SRH的SL进行更新;对该第一报文的目的地址进行更新,得到该第二报文,该第二报文的SL大于该第一报文的SL。
在一个实施例中,网络处理器4732可以将该第一报文的SL加一;将该第一报文的目的地址更新为该第一绕行SID。
在一个实施例中,网络处理器4732可以向该第一报文的SRH的段列表插入一个或多个目标SID,该一个或多个目标SID用于指示目标转发路径,该目标转发路径为该第一代理节点至该第二代理节点之间的路径;将该第一报文的SL加N,该N为大于1的正整数;将该第一报文的目的地址更新为该一个或多个目标SID中对应于下一个SR节点的SID。
在一个实施例中,该控制信息携带在该第二报文的扩展头中;或者,该控制信息携带在该第一绕行SID中;或者,该控制信息携带在该第二报文的IP头中;或者,该控制信息携带在该第二报文的SRH中的TLV中。
在一个实施例中，网络处理器4732可以将第一报文的SRH存储在转发表项存储器4734中。网络处理器4732可以根据第一报文，生成第三报文，根据出接口等信息，在完成链路层封装后，将第三报文从物理接口卡4733发送出去，使得第三报文传输至SF节点。
在一个实施例中，网络处理器4732可以以该第一缓存表项的标识为索引，将第一报文的SRH存储在转发表项存储器4734中。
在一个实施例中,该第二报文还包括该第一缓存表项的标识,该控制信息还用于指示该第二代理节点存储该第一缓存表项的标识与第二缓存表项的标识之间的映射关系,该第二缓存表项为该第二代理节点存储该第一报文的SRH的缓存表项。
在一个实施例中,物理接口卡4733接收第四报文,将第四报文发给网络处理器4732,网络处理器4732以该第一缓存表项的标识为索引,查询得到该第一报文的SRH;网络处理器4732根据该第四报文以及该第一报文的SRH,生成第五报文,该第五报文包括该第四报文的载荷以及该第一报文的SRH;网络处理器4732将第五报文从物理接口卡4733发送出去。
在一个实施例中，网络处理器4732在检测到该第一缓存表项生成时，生成该第二报文。
在一个实施例中,网络处理器4732在检测到该第一缓存表项更新时,生成该第二报文。
在一个实施例中,网络处理器4732以该第一报文对应的流标识和该端点动态代理 SID为索引,在转发表项存储器4734中存储该第一报文的SRH。
在一个实施例中,主控板4710上的中央处理器4711可以接收配置指令,根据配置指令,将端点动态代理SID与VPN标识之间的映射关系存储至转发表项存储器4734。
在一个实施例中，物理接口卡4733接收第四报文，网络处理器4732根据VPN标识，查询转发表项存储器4734中端点动态代理SID与VPN标识之间的映射关系，得到该VPN标识对应的端点动态代理SID，再以该端点动态代理SID和第四报文对应的流标识为索引，查询转发表项存储器4734，得到第一报文的SRH；网络处理器4732根据第四报文以及第一报文的SRH，生成第五报文，将第五报文从物理接口卡4733发送出去。
在一个实施例中,网络处理器4732可以检测到故障事件,故障事件包括第一代理节点与业务功能节点之间的链路产生故障的事件或者第一代理节点产生故障的事件中的至少一项。
在一个实施例中，网络处理器4732对第一链路的状态进行检测；如果第一链路处于可用状态，网络处理器4732选择物理接口卡4733中对应于第一链路的第一出接口，将第二报文从物理接口卡4733的第一出接口发送出去；或者，如果第一链路处于不可用状态，网络处理器4732选择物理接口卡4733中对应于第二链路的第二出接口，将第二报文从物理接口卡4733的第二出接口发送出去。
在一个实施例中,网络处理器4732连续生成多个该第二报文,发给物理接口卡4733,物理接口卡4733向该第二代理节点连续发送多个该第二报文。
在一个实施例中,当物理接口卡4733未接收到第六报文时,网络处理器4732向该第二代理节点重传该第二报文。
如果网络设备4700被配置为第二代理节点,物理接口卡4733接收第二报文,发送给网络处理器4732,网络处理器4732确定控制信息指示第二代理节点存储第一报文的SRH;解析第二报文,得到第一报文的SRH;将第一报文的SRH存储至转发表项存储器4734中的第二缓存表项。
在一个实施例中,网络处理器4732从第二报文的SRH中的段列表,删除第一绕行SID;将第二报文的SRH中的剩余段数量SL减一,得到第一报文的SRH。
在一个实施例中,该控制信息携带在该第二报文的扩展头中;或者,该控制信息携带在该第一绕行SID中;或者,该控制信息携带在该第二报文的IP头中;或者,该控制信息携带在该第二报文的SRH中的TLV中。
在一个实施例中,网络处理器4732以第二缓存表项的标识为索引,将第一报文的SRH存储至转发表项存储器4734中的第二缓存表项。
在一个实施例中,网络处理器4732确定SF节点的业务功能包括对流标识进行修改;
在一个实施例中,网络处理器4732确定控制信息还指示第二代理节点存储第一缓存表项的标识与第二缓存表项的标识之间的映射关系,网络处理器4732将第一缓存表项的标识与第二缓存表项的标识之间的映射关系存储至转发表项存储器4734。
在一个实施例中，物理接口卡4733接收第四报文，发送给网络处理器4732，网络处理器4732根据第一缓存表项的标识，查询转发表项存储器4734中第一缓存表项的标识与第二缓存表项的标识之间的映射关系，得到第一缓存表项的标识对应的第二缓存表项的标识；网络处理器4732以第二缓存表项的标识为索引，查询转发表项存储器4734中的第二缓存表项，得到第一报文的SRH；网络处理器4732根据第四报文以及第一报文的SRH，生成第五报文；网络处理器4732根据出接口等信息，在完成链路层封装后，将第五报文从物理接口卡4733发送出去。
在一个实施例中,网络处理器4732以该第二报文对应的流标识和该端点动态代理SID为索引,在该第二缓存表项中存储该第一报文的SRH。
在一个实施例中,主控板4710上的中央处理器4711可以接收配置指令,根据配置指令,将端点动态代理SID与VPN标识之间的映射关系存储至转发表项存储器4734。
在一个实施例中，物理接口卡4733接收第四报文，发送给网络处理器4732，网络处理器4732根据该VPN标识、端点动态代理SID和该第四报文对应的流标识，查询转发表项存储器4734中的该第二缓存表项，得到该第一报文的SRH；网络处理器4732根据第四报文以及第一报文的SRH，生成第五报文；网络处理器4732根据出接口等信息，在完成链路层封装后，将第五报文从物理接口卡4733发送出去。
在一个实施例中,网络处理器4732确定控制信息还指示第二代理节点向业务功能节点发送第一报文的载荷;网络处理器4732根据第二报文,生成第三报文,网络处理器4732将第三报文从物理接口卡4733发送出去,使得第三报文传输至SF节点。
在一个实施例中,物理接口卡4733通过对应于第一链路的第一入接口,接收第二报文;或者,物理接口卡4733通过对应于第三链路的第二入接口,接收第二报文。
在一个实施例中,物理接口卡4733连续接收多个该第二报文,发给网络处理器4732。
在一个实施例中，网络处理器4732根据第一绕行SID、该第二代理节点对应的第二绕行SID和该第二报文，生成第六报文，网络处理器4732根据出接口等信息，在完成链路层封装后，将第六报文通过物理接口卡4733发送出去。
应理解,本申请实施例中接口板4740上的操作与接口板4730的操作一致,为了简洁,不再赘述。应理解,本实施例的网络设备4700可对应于上述各个方法实施例中的第一代理节点或第二代理节点,该网络设备4700中的主控板4710、接口板4730和/或4740可以实现上述各个方法实施例中的第一代理节点或第二代理节点所具有的功能和/或所实施的各种步骤,为了简洁,在此不再赘述。
值得说明的是,主控板可能有一块或多块,有多块的时候可以包括主用主控板和备用主控板。接口板可能有一块或多块,网络设备的数据处理能力越强,提供的接口板越多。接口板上的物理接口卡也可以有一块或多块。交换网板可能没有,也可能有一块或多块,有多块的时候可以共同实现负荷分担冗余备份。在集中式转发架构下,网络设备可以不需要交换网板,接口板承担整个系统的业务数据的处理功能。在分布式转发架构下,网络设备可以有至少一块交换网板,通过交换网板实现多块接口板之间的数据交换,提供大容量的数据交换和处理能力。所以,分布式架构的网络设备的数据接入和处理能力要大于集中式架构的设备。可选地,网络设备的形态也可以是只有一块板卡,即没有交换网板,接口板和主控板的功能集成在该一块板卡上,此时接口板上的中央处理器和主控板上的中央处理器在该一块板卡上可以合并为一个中央处理器,执行两者叠加后的功能,这种形态设备的数据交换和处理能力较低(例如,低端交换机或路由器等网络设备)。具体采用哪种架构,取决于具体的组网部署场景,此处不做任何限定。
在一些可能的实施例中,上述代理节点可以实现为计算设备,该计算设备中的中央处理器可以执行上述方法实施例的各个步骤。例如,该计算设备可以是主机、服务器或个人计算机等。该计算设备可以由一般性的总线体系结构来实现。
参见图48,图48示出了本申请一个示例性实施例提供的计算设备的结构示意图,该计算设备可以配置为第一代理节点或第二代理节点。
计算设备4800包括:处理器4810、收发器4820、随机存取存储器4840、只读存储器4850以及总线4860。其中,处理器4810通过总线4860分别耦接收发器4820、随机存取存储器4840以及只读存储器4850。其中,当需要运行计算设备4800时,通过固化在只读存储器4850中的基本输入输出系统或者嵌入式系统中的bootloader(启动装载)引导系统进行启动,引导计算设备4800进入正常运行状态。
如果计算设备4800被配置为第一代理节点,在计算设备4800进入正常运行状态后,在随机存取存储器4840中运行应用程序和操作系统,使得:
收发器4820接收第一报文,发送给处理器4810,处理器4810根据端点动态代理SID、第一报文和对应于第二代理节点的第一绕行SID,生成第二报文,根据出接口等信息,在完成链路层封装后,将第二报文从收发器4820发送出去,使得第二报文传输至第二代理节点。
在一个实施例中,处理器4810可以对该第一报文的SRH的剩余段数量SL进行更新;对该第一报文的目的地址进行更新,得到该第二报文,该第二报文的SL大于该第一报文的SL。
在一个实施例中,处理器4810可以将该第一报文的SL加一;将该第一报文的目的地址更新为该第一绕行SID。
在一个实施例中,处理器4810可以向该第一报文的SRH的段列表插入一个或多个目标SID,该一个或多个目标SID用于指示目标转发路径,该目标转发路径为该第一代理节点至该第二代理节点之间的路径;将该第一报文的SL加N,该N为大于1的正整数;将该第一报文的目的地址更新为该一个或多个目标SID中对应于下一个SR节点的SID。
在一个实施例中,该控制信息携带在该第二报文的扩展头中;或者,该控制信息携带在该第一绕行SID中;或者,该控制信息携带在该第二报文的IP头中;或者,该控制信息携带在该第二报文的SRH中的TLV中。
在一个实施例中，处理器4810可以将第一报文的SRH存储在随机存取存储器4840中。处理器4810可以根据第一报文，生成第三报文，根据出接口等信息，在完成链路层封装后，将第三报文从收发器4820发送出去，使得第三报文传输至SF节点。
在一个实施例中，处理器4810可以以该第一缓存表项的标识为索引，将第一报文的SRH存储在随机存取存储器4840中。
在一个实施例中,该第二报文还包括该第一缓存表项的标识,该控制信息还用于指示该第二代理节点存储该第一缓存表项的标识与第二缓存表项的标识之间的映射关系,该第二缓存表项为该第二代理节点存储该第一报文的SRH的缓存表项。
在一个实施例中，收发器4820接收第四报文，将第四报文发给处理器4810，处理器4810以该第一缓存表项的标识为索引，查询得到该第一报文的SRH；处理器4810根据该第四报文以及该第一报文的SRH，生成第五报文，该第五报文包括该第四报文的载荷以及该第一报文的SRH；处理器4810将第五报文从收发器4820发送出去。
在一个实施例中，处理器4810在检测到该第一缓存表项生成时，生成该第二报文。
在一个实施例中,处理器4810在检测到该第一缓存表项更新时,生成该第二报文。
在一个实施例中,处理器4810以该第一报文对应的流标识和该端点动态代理SID为索引,在随机存取存储器4840中存储该第一报文的SRH。
在一个实施例中,处理器4810可以接收配置指令,根据配置指令,将端点动态代理SID与VPN标识之间的映射关系存储至随机存取存储器4840。
在一个实施例中，收发器4820接收第四报文，处理器4810根据VPN标识，查询随机存取存储器4840中端点动态代理SID与VPN标识之间的映射关系，得到该VPN标识对应的端点动态代理SID，再以该端点动态代理SID和第四报文对应的流标识为索引，查询随机存取存储器4840，得到第一报文的SRH；处理器4810根据第四报文以及第一报文的SRH，生成第五报文，将第五报文从收发器4820发送出去。
在一个实施例中,处理器4810可以检测到故障事件,故障事件包括第一代理节点与业务功能节点之间的链路产生故障的事件或者第一代理节点产生故障的事件中的至少一项。
在一个实施例中，处理器4810对第一链路的状态进行检测；如果第一链路处于可用状态，处理器4810选择收发器4820中对应于第一链路的第一出接口，将第二报文从收发器4820的第一出接口发送出去；或者，如果第一链路处于不可用状态，处理器4810选择收发器4820中对应于第二链路的第二出接口，将第二报文从收发器4820的第二出接口发送出去。
在一个实施例中,处理器4810连续生成多个该第二报文,发给收发器4820,收发器4820向该第二代理节点连续发送多个该第二报文。
在一个实施例中,当收发器4820未接收到第六报文时,处理器4810向该第二代理节点重传该第二报文。
如果计算设备4800被配置为第二代理节点,收发器4820接收第二报文,发送给处理器4810,处理器4810确定控制信息指示第二代理节点存储第一报文的SRH;解析第二报文,得到第一报文的SRH;将第一报文的SRH存储至随机存取存储器4840中的第二缓存表项。
在一个实施例中,处理器4810从第二报文的SRH中的段列表,删除第一绕行SID;将第二报文的SRH中的剩余段数量SL减一,得到第一报文的SRH。
在一个实施例中,该控制信息携带在该第二报文的扩展头中;或者,该控制信息携带在该第一绕行SID中;或者,该控制信息携带在该第二报文的IP头中;或者,该控制信息携带在该第二报文的SRH中的TLV中。
在一个实施例中,处理器4810以第二缓存表项的标识为索引,将第一报文的SRH存储至随机存取存储器4840中的第二缓存表项。
在一个实施例中,处理器4810确定SF节点的业务功能包括对流标识进行修改;
在一个实施例中,处理器4810确定控制信息还指示第二代理节点存储第一缓存表项的标识与第二缓存表项的标识之间的映射关系,处理器4810将第一缓存表项的标识 与第二缓存表项的标识之间的映射关系存储至随机存取存储器4840。
在一个实施例中，收发器4820接收第四报文，发送给处理器4810，处理器4810根据第一缓存表项的标识，查询随机存取存储器4840中第一缓存表项的标识与第二缓存表项的标识之间的映射关系，得到第一缓存表项的标识对应的第二缓存表项的标识；处理器4810以第二缓存表项的标识为索引，查询随机存取存储器4840中的第二缓存表项，得到第一报文的SRH；处理器4810根据第四报文以及第一报文的SRH，生成第五报文；处理器4810根据出接口等信息，在完成链路层封装后，将第五报文从收发器4820发送出去。
在一个实施例中,处理器4810以该第二报文对应的流标识和该端点动态代理SID为索引,在该第二缓存表项中存储该第一报文的SRH。
在一个实施例中,处理器4810可以接收配置指令,根据配置指令,将端点动态代理SID与VPN标识之间的映射关系存储至随机存取存储器4840。
在一个实施例中，收发器4820接收第四报文，发送给处理器4810，处理器4810根据该VPN标识、端点动态代理SID和该第四报文对应的流标识，查询随机存取存储器4840中的该第二缓存表项，得到该第一报文的SRH；处理器4810根据第四报文以及第一报文的SRH，生成第五报文；处理器4810根据出接口等信息，在完成链路层封装后，将第五报文从收发器4820发送出去。
在一个实施例中,处理器4810确定控制信息还指示第二代理节点向业务功能节点发送第一报文的载荷;处理器4810根据第二报文,生成第三报文,处理器4810将第三报文从收发器4820发送出去,使得第三报文传输至SF节点。
在一个实施例中,收发器4820通过对应于第一链路的第一入接口,接收第二报文;或者,收发器4820通过对应于第三链路的第二入接口,接收第二报文。
在一个实施例中,收发器4820连续接收多个该第二报文,发给处理器4810。
在一个实施例中，处理器4810根据第一绕行SID、该第二代理节点对应的第二绕行SID和该第二报文，生成第六报文，处理器4810根据出接口等信息，在完成链路层封装后，将第六报文通过收发器4820发送出去。
本申请实施例的计算设备可对应于上述各个方法实施例中的第一代理节点或第二代理节点,并且,该计算设备中的处理器4810、收发器4820等可以实现上述各个方法实施例中的第一代理节点或第二代理节点所具有的功能和/或所实施的各种步骤和方法。为了简洁,在此不再赘述。
在一些可能的实施例中,上述代理节点可以实现为虚拟化设备。
例如,虚拟化设备可以是运行有用于发送报文功能的程序的虚拟机(英文:Virtual Machine,VM),虚拟机部署在硬件设备上(例如,物理服务器)。虚拟机指通过软件模拟的具有完整硬件系统功能的、运行在一个完全隔离环境中的完整计算机系统。可以将虚拟机配置为第一代理节点或第二代理节点。例如,可以基于通用的物理服务器结合NFV技术来实现第一代理节点或第二代理节点。第一代理节点或第二代理节点为虚拟主机、虚拟路由器或虚拟交换机。本领域技术人员通过阅读本申请即可结合NFV技术在通用物理服务器上虚拟出具有上述功能的第一代理节点或第二代理节点。此处不再赘述。
例如,虚拟化设备可以是容器,容器是一种用于提供隔离的虚拟化环境的实体,例如,容器可以是docker容器。可以将容器配置为第一代理节点或第二代理节点。例如,可以通过对应的镜像来创建出代理节点,例如可以通过proxy-container(提供代理服务的容器)的镜像,为proxy-container创建2个容器实例,分别是容器实例proxy-container1、容器实例proxy-container2,将容器实例proxy-container1提供为第一代理节点,将容器实例proxy-container2提供为第二代理节点。采用容器技术实现时,代理节点可以利用物理机的内核运行,多个代理节点可以共享物理机的操作系统。通过容器技术可以将不同的代理节点隔离开来。容器化的代理节点可以在虚拟化的环境中运行,例如可以在虚拟机中运行,容器化的代理节点可也可以直接在物理机中运行。
例如,虚拟化设备可以是Pod,Pod是Kubernetes(Kubernetes是谷歌开源的一种容器编排引擎,英文简称为K8s)为部署、管理、编排容器化应用的基本单位。Pod可以包括一个或多个容器。同一个Pod中的每个容器通常部署在同一主机上,因此同一个Pod中的每个容器可以通过该主机进行通信,并且可以共享该主机的存储资源和网络资源。可以将Pod配置为第一代理节点或第二代理节点。例如,具体地,可以指令容器即服务(英文全称:container as a service,英文简称:CaaS,是一种基于容器的PaaS服务)来创建Pod,将Pod提供为第一代理节点或第二代理节点。
当然,代理节点还可以是其他虚拟化设备,在此不做一一列举。
在一些可能的实施例中,上述代理节点也可以由通用处理器来实现。例如,该通用处理器的形态可以是一种芯片。具体地,实现第一代理节点或者第二代理节点的通用处理器包括处理电路和与该处理电路内部连接通信的输入接口以及输出接口,该处理电路用于通过输入接口执行上述各个方法实施例中的报文的生成步骤,该处理电路用于通过输入接口执行上述各个方法实施例中的接收步骤,该处理电路用于通过输出接口执行上述各个方法实施例中的发送步骤。可选地,该通用处理器还可以包括存储介质,该处理电路用于通过该存储介质执行上述各个方法实施例中的存储步骤。
作为一种可能的产品形态,本申请实施例中的第一代理节点或者第二代理节点,还可以使用下述来实现:一个或多个现场可编程门阵列(英文全称:field-programmable gate array,英文简称:FPGA)、可编程逻辑器件(英文全称:programmable logic device,英文简称:PLD)、控制器、状态机、门逻辑、分立硬件部件、任何其它适合的电路、或者能够执行本申请通篇所描述的各种功能的电路的任意组合。
在一些可能的实施例中,上述代理节点还可以使用计算机程序产品实现。具体地,本申请实施例提供了一种计算机程序产品,当该计算机程序产品在第一代理节点上运行时,使得第一代理节点执行上述方法实施例中的报文传输方法。本申请实施例还提供了一种计算机程序产品,当该计算机程序产品在第二代理节点上运行时,使得第二代理节点执行上述方法实施例中的报文传输方法。
应理解,上述各种产品形态的第一代理节点或者第二代理节点,分别具有上述方法实施例中第一代理节点或者第二代理节点的任意功能,此处不再赘述。
本领域普通技术人员可以意识到,结合本文中所公开的实施例中描述的各方法步骤和单元,能够以电子硬件、计算机软件或者二者的结合来实现,为了清楚地说明硬件和软件的可互换性,在上述说明中已经按照功能一般性地描述了各实施例的步骤及组成。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。本领域普通技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。
所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,上述描述的系统、装置和单元的具体工作过程,可以参见前述方法实施例中的对应过程,在此不再赘述。
在本申请所提供的几个实施例中,应该理解到,所揭露的系统、装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,该单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另外,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口、装置或单元的间接耦合或通信连接,也可以是电的,机械的或其它的形式连接。
该作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本申请实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以是两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
该集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读存储介质中。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分,或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本申请各个实施例中方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read-only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上仅为本申请的具体实施方式，但本申请的保护范围并不局限于此，任何熟悉本技术领域的技术人员在本申请揭露的技术范围内，可轻易想到各种等效的修改或替换，这些修改或替换都应涵盖在本申请的保护范围之内。因此，本申请的保护范围应以权利要求的保护范围为准。
在上述实施例中，可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时，可以全部或部分地以计算机程序产品的形式实现。该计算机程序产品包括一个或多个计算机程序指令。在计算机上加载和执行该计算机程序指令时，全部或部分地产生按照本申请实施例的流程或功能。该计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。该计算机指令可以存储在计算机可读存储介质中，或者从一个计算机可读存储介质向另一个计算机可读存储介质传输，例如，该计算机程序指令可以从一个网站站点、计算机、服务器或数据中心通过有线或无线方式向另一个网站站点、计算机、服务器或数据中心进行传输。该计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。该可用介质可以是磁性介质（例如软盘、硬盘、磁带）、光介质（例如数字视频光盘（digital video disc，DVD））、或者半导体介质（例如固态硬盘）等。
本领域普通技术人员可以理解实现上述实施例的全部或部分步骤可以通过硬件来完成，也可以通过程序来指令相关的硬件完成，该程序可以存储于一种计算机可读存储介质中，上述提到的存储介质可以是只读存储器，磁盘或光盘等。
以上仅为本申请的可选实施例，并不用以限制本申请，凡在本申请的精神和原则之内，所作的任何修改、等同替换、改进等，均应包含在本申请的保护范围之内。

Claims (62)

  1. 一种报文传输方法,其特征在于,应用于第一代理节点,所述方法包括:
    接收第一报文,所述第一报文包括分段路由头SRH,所述第一报文的目的地址为端点动态代理段标识SID;
    根据所述端点动态代理SID、所述第一报文和对应于第二代理节点的第一绕行SID,生成第二报文,所述第二报文包括所述第一绕行SID、控制信息和所述第一报文的SRH,所述第一绕行SID用于标识所述第二报文的目的节点为所述第二代理节点,所述控制信息用于指示所述第二代理节点存储所述第一报文的SRH,所述第二代理节点与所述第一代理节点连接到同一个业务功能节点;
    向所述第二代理节点发送所述第二报文。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述端点动态代理SID、所述第一报文和对应于第二代理节点的第一绕行SID,生成第二报文,包括:
    向所述第一报文的SRH的段列表插入所述第一绕行SID;
    对所述第一报文的SRH的剩余段数量SL进行更新;
    对所述第一报文的目的地址进行更新,得到所述第二报文,所述第二报文的SL大于所述第一报文的SL。
  3. 根据权利要求2所述的方法,其特征在于,所述对所述第一报文的SRH的剩余段数量SL进行更新,包括:
    将所述第一报文的SL加一;
    所述对所述第一报文的目的地址进行更新,包括:
    将所述第一报文的目的地址更新为所述第一绕行SID。
  4. 根据权利要求2所述的方法,其特征在于,所述方法还包括:
    向所述第一报文的SRH的段列表插入一个或多个目标SID,所述一个或多个目标SID用于指示目标转发路径,所述目标转发路径为所述第一代理节点至所述第二代理节点之间的路径;
    所述对所述第一报文的SRH的剩余段数量SL进行更新,包括:
    将所述第一报文的SL加N,所述N为大于1的正整数;
    所述对所述第一报文的目的地址进行更新,包括:
    将所述第一报文的目的地址更新为所述一个或多个目标SID中对应于下一个分段路由SR节点的SID。
  5. 根据权利要求1至4中任一项所述的方法,其特征在于,
    所述控制信息携带在所述第二报文的扩展头中;或者,
    所述控制信息携带在所述第一绕行SID中;或者,
    所述控制信息携带在所述第二报文的互联网协议IP头中;或者,
    所述控制信息携带在所述第二报文的SRH中的类型长度值TLV中。
  6. 根据权利要求1至5中任一项所述的方法,其特征在于,所述接收第一报文之后,所述方法还包括:
    在第一缓存表项中存储所述第一报文的SRH;
    根据所述第一报文,生成第三报文,所述第三报文包括所述第一报文的载荷且不包括所述第一报文的SRH;
    向所述业务功能节点发送所述第三报文。
  7. 根据权利要求6所述的方法,其特征在于,所述在第一缓存表项中存储所述第一报文的SRH,包括:
    以所述第一缓存表项的标识为索引,在所述第一缓存表项中存储所述第一报文的SRH。
  8. 根据权利要求7所述的方法,其特征在于,所述第二报文还包括所述第一缓存表项的标识,所述控制信息还用于指示所述第二代理节点存储所述第一缓存表项的标识与第二缓存表项的标识之间的映射关系,所述第二缓存表项为所述第二代理节点存储所述第一报文的SRH的缓存表项。
  9. 根据权利要求6所述的方法,其特征在于,所述第三报文包括所述第一缓存表项的标识,所述向所述业务功能节点发送所述第三报文之后,所述方法还包括:
    从所述业务功能节点接收第四报文,所述第四报文包括所述第一缓存表项的标识;
    以所述第一缓存表项的标识为索引,查询得到所述第一报文的SRH;
    根据所述第四报文以及所述第一报文的SRH,生成第五报文,所述第五报文包括所述第四报文的载荷以及所述第一报文的SRH;
    发送所述第五报文。
  10. 根据权利要求6所述的方法,其特征在于,生成所述第二报文的触发条件包括以下一种或多种:
    检测到所述第一缓存表项生成;
    检测到所述第一缓存表项更新。
  11. 根据权利要求6所述的方法,其特征在于,所述在第一缓存表项中存储所述第一报文的SRH,包括:
    以所述第一报文对应的流标识和所述端点动态代理SID为索引,在所述第一缓存表项中存储所述第一报文的SRH。
  12. 根据权利要求11所述的方法,其特征在于,所述方法还包括:
    接收配置指令,所述配置指令包括每个虚拟专用网络VPN中的业务功能节点对应的端点动态代理SID;
    根据所述配置指令,存储所述端点动态代理SID与VPN标识之间的映射关系。
  13. 根据权利要求11或12所述的方法,其特征在于,所述第三报文对应的流标识为所述第一报文对应的流标识,所述第三报文包括VPN标识,所述向所述业务功能节点发送所述第三报文之后,所述方法还包括:
    从所述业务功能节点接收第四报文,所述第四报文对应的流标识为所述第一报文对应的流标识,所述第四报文包括所述VPN标识和载荷;
    根据所述VPN标识、端点动态代理SID和所述第四报文对应的流标识,查询得到所述第一报文的SRH;
    根据所述第四报文以及所述第一报文的SRH,生成第五报文,所述第五报文包括所述第四报文的载荷以及所述第一报文的SRH;
    发送所述第五报文。
  14. 根据权利要求1至13中任一项所述的方法,其特征在于,所述第二报文还包括所述第一报文的载荷,所述控制信息还用于指示所述第二代理节点向所述业务功能节点发送所述第一报文的载荷,所述向所述第二代理节点发送所述第二报文之前,所述方法还包括:
    检测到故障事件,所述故障事件包括所述第一代理节点与所述业务功能节点之间的链路产生故障的事件或者所述第一代理节点产生故障的事件中的至少一项。
  15. 根据权利要求1至14中任一项所述的方法,其特征在于,
    所述第一代理节点通过第一链路与所述第二代理节点相连,所述向所述第二代理节点发送所述第二报文,包括:通过对应于所述第一链路的第一出接口,向所述第二代理节点发送所述第二报文;或者,
    所述第一代理节点通过第二链路与路由节点相连,所述路由节点通过第三链路与所述第二代理节点相连,所述向所述第二代理节点发送所述第二报文,包括:通过对应于所述第二链路的第二出接口,向所述路由节点发送所述第二报文,所述第二报文由所述路由节点通过所述第三链路转发至所述第二代理节点。
  16. 根据权利要求15所述的方法,其特征在于,所述向所述第二代理节点发送所述第二报文,包括:
    对所述第一链路的状态进行检测;
    如果所述第一链路处于可用状态,通过对应于所述第一链路的第一出接口,向所述第二代理节点发送所述第二报文;或者,
    如果所述第一链路处于不可用状态,通过对应于所述第二链路的第二出接口,向所述路由节点发送所述第二报文。
  17. 根据权利要求1至16中任一项所述的方法,其特征在于,
    所述向所述第二代理节点发送所述第二报文，包括：向所述第二代理节点连续发送多个所述第二报文；或者，
    所述向所述第二代理节点发送所述第二报文之后,所述方法还包括:当未接收到第六报文时,向所述第二代理节点重传所述第二报文,所述第六报文的源地址为所述第一绕行SID,所述第六报文包括对应于所述第一代理节点的第二绕行SID,所述第二绕行SID用于标识所述第六报文的目的节点为所述第一代理节点,所述第六报文用于表示接收到了所述第二报文。
  18. 一种报文传输方法,其特征在于,应用于第二代理节点,所述方法包括:
    从第一代理节点接收第二报文,所述第一代理节点与所述第二代理节点连接到同一个业务功能节点,所述第二报文包括对应于第二代理节点的第一绕行段标识SID、控制信息和第一报文的分段路由头SRH,所述第一绕行SID用于标识所述第二报文的目的节点为所述第二代理节点;
    确定所述控制信息指示所述第二代理节点存储所述第一报文的SRH;
    解析所述第二报文,得到所述第一报文的SRH;
    在第二缓存表项中存储所述第一报文的SRH。
  19. 根据权利要求18所述的方法,其特征在于,所述解析所述第二报文,得到所述第一报文的SRH,包括:
    从所述第二报文的SRH中的段列表,删除所述第一绕行SID;
    将所述第二报文的SRH中的剩余段数量SL减一,得到所述第一报文的SRH。
  20. 根据权利要求18或19所述的方法,其特征在于,
    所述控制信息携带在所述第二报文的扩展头中;或者,
    所述控制信息携带在所述第一绕行SID中;或者,
    所述控制信息携带在所述第二报文的互联网协议IP头中;或者,
    所述控制信息携带在所述第二报文的SRH中的类型长度值TLV中。
  21. 根据权利要求18至20中任一项所述的方法,其特征在于,所述在第二缓存表项中存储所述第一报文的SRH,包括:
    以第二缓存表项的标识为索引,在所述第二缓存表项中存储所述第一报文的SRH。
  22. 根据权利要求21所述的方法,其特征在于,所述第二报文还包括第一缓存表项的标识,所述第一缓存表项为所述第一代理节点存储所述第一报文的SRH的缓存表项,所述方法还包括:
    确定所述控制信息还指示所述第二代理节点存储所述第一缓存表项的标识与所述第二缓存表项的标识之间的映射关系;
    存储所述第一缓存表项的标识与所述第二缓存表项的标识之间的映射关系。
  23. 根据权利要求22所述的方法，其特征在于，所述存储所述第一缓存表项的标识与所述第二缓存表项的标识之间的映射关系之后，所述方法还包括：
    从所述业务功能节点接收第四报文,所述第四报文包括所述第一缓存表项的标识和载荷;
    根据所述第一缓存表项的标识,查询所述第一缓存表项的标识与所述第二缓存表项的标识之间的映射关系,得到所述第一缓存表项的标识对应的所述第二缓存表项的标识;
    以所述第二缓存表项的标识为索引,查询所述第二缓存表项,得到所述第一报文的SRH;
    根据所述第四报文以及所述第一报文的SRH,生成第五报文,所述第五报文包括所述第四报文的载荷以及所述第一报文的SRH;
    发送所述第五报文。
  24. 根据权利要求18至23中任一项所述的方法,其特征在于,所述第二报文对应的流标识为所述第一报文对应的流标识,所述第一报文的SRH包括端点动态代理SID,所述在第二缓存表项中存储所述第一报文的SRH,包括:
    以所述第二报文对应的流标识和所述端点动态代理SID为索引,在所述第二缓存表项中存储所述第一报文的SRH。
  25. 根据权利要求24所述的方法,其特征在于,所述方法还包括:
    接收配置指令,所述配置指令包括每个虚拟专用网络VPN中的业务功能节点对应的端点动态代理SID;
    根据所述配置指令,存储所述端点动态代理SID与VPN标识之间的映射关系。
  26. 根据权利要求24或25所述的方法,其特征在于,所述以所述第二报文对应的流标识和所述端点动态代理SID为索引,在所述第二缓存表项中存储所述第一报文的SRH之后,所述方法还包括:
    从所述业务功能节点接收第四报文,所述第四报文对应的流标识为所述第一报文对应的流标识、所述第四报文包括VPN标识和载荷;
    根据所述VPN标识、端点动态代理SID和所述第四报文对应的流标识,查询所述第二缓存表项,得到所述第一报文的SRH;
    根据所述第四报文以及所述第一报文的SRH,生成第五报文,所述第五报文包括所述第四报文的载荷以及所述第一报文的SRH;
    发送所述第五报文。
  27. 根据权利要求18至26中任一项所述的方法,其特征在于,所述从第一代理节点接收第二报文之后,所述方法还包括:
    确定所述控制信息还指示所述第二代理节点向所述业务功能节点发送所述第一报文的载荷;
    根据所述第二报文,生成第三报文,所述第三报文包括所述第一报文的载荷;
    向所述业务功能节点发送所述第三报文。
  28. 根据权利要求18至27中任一项所述的方法,其特征在于,
    所述第二代理节点通过第一链路与所述第一代理节点相连,所述从第一代理节点接收第二报文,包括:通过对应于所述第一链路的第一入接口,从所述第一代理节点接收所述第二报文;或者,
    所述第二代理节点通过第三链路与路由节点相连,所述路由节点通过第二链路与所述第一代理节点相连,所述从第一代理节点接收第二报文,包括:通过对应于所述第三链路的第二入接口,从所述路由节点接收所述第二报文,所述第二报文由所述第一代理节点通过所述第二链路发送至所述路由节点。
  29. 根据权利要求18至27中任一项所述的方法,其特征在于,
    所述从第一代理节点接收第二报文,包括:从所述第一代理节点连续接收多个所述第二报文;或者,
    所述从第一代理节点接收第二报文之后,所述方法还包括:根据第一绕行SID、所述第二代理节点对应的第二绕行SID和所述第二报文,生成第六报文,向所述第一代理节点发送所述第六报文,所述第六报文的源地址为所述第一绕行SID,所述第六报文包括对应于所述第一代理节点的第二绕行SID,所述第二绕行SID用于标识所述第六报文的目的节点为所述第一代理节点,所述第六报文用于表示接收到了所述第二报文。
  30. 一种第一代理节点,其特征在于,所述第一代理节点包括:
    接收模块,用于接收第一报文,所述第一报文包括分段路由头SRH,所述第一报文的目的地址为端点动态代理段标识SID;
    生成模块,用于根据所述端点动态代理SID、所述第一报文和对应于第二代理节点的第一绕行SID,生成第二报文,所述第二报文包括所述第一绕行SID、控制信息和所述第一报文的SRH,所述第一绕行SID用于标识所述第二报文的目的节点为所述第二代理节点,所述控制信息用于指示所述第二代理节点存储所述第一报文的SRH,所述第二代理节点与所述第一代理节点连接到同一个业务功能节点;
    发送模块,用于向所述第二代理节点发送所述第二报文。
  31. 根据权利要求30所述的第一代理节点,其特征在于,所述生成模块,用于向所述第一报文的SRH的段列表插入所述第一绕行SID;对所述第一报文的SRH的剩余段数量SL进行更新;对所述第一报文的目的地址进行更新,得到所述第二报文,所述第二报文的SL大于所述第一报文的SL。
  32. 根据权利要求31所述的第一代理节点，其特征在于，所述生成模块，用于将所述第一报文的SL加一；将所述第一报文的目的地址更新为所述第一绕行SID。
  33. 根据权利要求31所述的第一代理节点，其特征在于，所述生成模块，用于向所述第一报文的SRH的段列表插入一个或多个目标SID，所述一个或多个目标SID用于指示目标转发路径，所述目标转发路径为所述第一代理节点至所述第二代理节点之间的路径；将所述第一报文的SL加N，所述N为大于1的正整数；将所述第一报文的目的地址更新为所述一个或多个目标SID中对应于下一个分段路由SR节点的SID。
  34. 根据权利要求30至33中任一项所述的第一代理节点,其特征在于,
    所述控制信息携带在所述第二报文的扩展头中;或者,
    所述控制信息携带在所述第一绕行SID中;或者,
    所述控制信息携带在所述第二报文的互联网协议IP头中;或者,
    所述控制信息携带在所述第二报文的SRH中的类型长度值TLV中。
  35. 根据权利要求30至34中任一项所述的第一代理节点,其特征在于,所述第一代理节点还包括:
    存储模块,用于在第一缓存表项中存储所述第一报文的SRH;
    所述生成模块,还用于根据所述第一报文,生成第三报文,所述第三报文包括所述第一报文的载荷且不包括所述第一报文的SRH;
    所述发送模块,还用于向所述业务功能节点发送所述第三报文。
  36. 根据权利要求35所述的第一代理节点,其特征在于,所述存储模块,用于以所述第一缓存表项的标识为索引,在所述第一缓存表项中存储所述第一报文的SRH。
  37. 根据权利要求36所述的第一代理节点,其特征在于,所述第二报文还包括所述第一缓存表项的标识,所述控制信息还用于指示所述第二代理节点存储所述第一缓存表项的标识与第二缓存表项的标识之间的映射关系,所述第二缓存表项为所述第二代理节点存储所述第一报文的SRH的缓存表项。
  38. 根据权利要求35所述的第一代理节点,其特征在于,
    所述接收模块,还用于从所述业务功能节点接收第四报文,所述第四报文包括所述第一缓存表项的标识;
    所述第一代理节点还包括:查询模块,用于以所述第一缓存表项的标识为索引,查询得到所述第一报文的SRH;
    所述生成模块,还用于根据所述第四报文以及所述第一报文的SRH,生成第五报文,所述第五报文包括所述第四报文的载荷以及所述第一报文的SRH;
    所述发送模块,还用于发送所述第五报文。
  39. 根据权利要求35所述的第一代理节点,其特征在于,生成所述第二报文的触发条件包括以下一种或多种:
    检测到所述第一缓存表项生成;
    检测到所述第一缓存表项更新。
  40. The first proxy node according to claim 35, wherein the storage module is configured to store the SRH of the first packet in the first cache entry by using a flow identifier corresponding to the first packet and the endpoint dynamic proxy SID as an index.
  41. The first proxy node according to claim 40, wherein the receiving module is further configured to receive a configuration instruction, wherein the configuration instruction comprises an endpoint dynamic proxy SID corresponding to a service function node in each virtual private network (VPN); and
    the storage module is further configured to store, according to the configuration instruction, a mapping relationship between the endpoint dynamic proxy SID and a VPN identifier.
  42. The first proxy node according to claim 40 or 41, wherein a flow identifier corresponding to the third packet is the flow identifier corresponding to the first packet, and the third packet comprises a VPN identifier;
    the receiving module is further configured to receive a fourth packet from the service function node, wherein a flow identifier corresponding to the fourth packet is the flow identifier corresponding to the first packet, and the fourth packet comprises the VPN identifier;
    the first proxy node further comprises a query module, configured to query for the SRH of the first packet based on the VPN identifier, the endpoint dynamic proxy SID, and the flow identifier corresponding to the fourth packet;
    the query module is further configured to query for the SRH of the first packet by using the flow identifier corresponding to the first packet and the endpoint dynamic proxy SID as an index;
    the generation module is further configured to generate a fifth packet based on the fourth packet and the SRH of the first packet, wherein the fifth packet comprises a payload of the fourth packet and the SRH of the first packet; and
    the sending module is further configured to send the fifth packet.
  43. The first proxy node according to any one of claims 30 to 42, wherein the second packet further comprises a payload of the first packet, the control information is further used to instruct the second proxy node to send the payload of the first packet to the service function node, and the first proxy node further comprises a detection module, configured to detect a fault event, wherein the fault event comprises at least one of an event in which a link between the first proxy node and the service function node fails or an event in which the first proxy node fails.
  44. The first proxy node according to any one of claims 30 to 43, wherein:
    the first proxy node is connected to the second proxy node through a first link, and the sending module is configured to send the second packet to the second proxy node through a first outbound interface corresponding to the first link; or
    the first proxy node is connected to a routing node through a second link, the routing node is connected to the second proxy node through a third link, and the sending module is configured to send the second packet to the routing node through a second outbound interface corresponding to the second link, wherein the second packet is forwarded by the routing node to the second proxy node through the third link.
  45. The first proxy node according to claim 44, wherein the detection module is further configured to detect a state of the first link; and
    the sending module is configured to: if the first link is in an available state, send the second packet to the second proxy node through the first outbound interface corresponding to the first link; or if the first link is in an unavailable state, send the second packet to the routing node through the second outbound interface corresponding to the second link.
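The selection between the direct link and the path through the routing node in claims 44 and 45 reduces to an interface choice driven by the detected link state; a minimal sketch with assumed parameter names:

    def choose_out_interface(first_link_available, first_out_interface, second_out_interface):
        # claim 45: use the first outbound interface while the first link is available,
        # otherwise send toward the routing node over the second outbound interface
        return first_out_interface if first_link_available else second_out_interface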
  46. The first proxy node according to any one of claims 30 to 45, wherein the sending module is configured to: consecutively send a plurality of second packets to the second proxy node; or when no sixth packet is received, retransmit the second packet to the second proxy node, wherein a source address of the sixth packet is the first bypass SID, the sixth packet comprises a second bypass SID corresponding to the first proxy node, the second bypass SID is used to identify that a destination node of the sixth packet is the first proxy node, and the sixth packet is used to indicate that the second packet has been received.
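The two reliability options of claim 46, repetition and retransmission until the sixth packet arrives, can be sketched as follows (send and wait_for_ack are caller-supplied callables assumed for illustration, not a real API):

    def send_repeated(second_packet, send, copies=3):
        # option 1: consecutively send a plurality of second packets
        for _ in range(copies):
            send(second_packet)

    def send_with_retransmit(second_packet, send, wait_for_ack, retries=3):
        # option 2: retransmit while no sixth packet (acknowledgement) is received
        for _ in range(retries):
            send(second_packet)
            if wait_for_ack():
                return True
        return False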
  47. A second proxy node, wherein the second proxy node comprises:
    a receiving module, configured to receive a second packet from a first proxy node, wherein the first proxy node and the second proxy node are connected to a same service function node, the second packet comprises a first bypass segment identifier (SID) corresponding to the second proxy node, control information, and a segment routing header (SRH) of a first packet, and the first bypass SID is used to identify that a destination node of the second packet is the second proxy node;
    a determining module, configured to determine that the control information instructs the second proxy node to store the SRH of the first packet;
    a parsing module, configured to parse the second packet to obtain the SRH of the first packet; and
    a storage module, configured to store the SRH of the first packet in a second cache entry.
  48. The second proxy node according to claim 47, wherein the parsing module is configured to: delete the first bypass SID from a segment list in the SRH of the second packet; and decrease a remaining segment quantity (Segments Left, SL) in the SRH of the second packet by one, to obtain the SRH of the first packet.
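The parsing step of claim 48 removes the first bypass SID from the received segment list and decrements SL by one, which restores the SRH of the first packet; a simplified sketch using a dictionary in place of the on-wire SRH encoding:

    def recover_first_srh(second_packet, first_bypass_sid):
        srh = dict(second_packet["srh"])
        srh["segment_list"] = [s for s in srh["segment_list"] if s != first_bypass_sid]
        srh["segments_left"] -= 1
        return srh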
  49. The second proxy node according to claim 47 or 48, wherein:
    the control information is carried in an extension header of the second packet; or
    the control information is carried in the first bypass SID; or
    the control information is carried in an Internet Protocol (IP) header of the second packet; or
    the control information is carried in a type-length-value (TLV) in the SRH of the second packet.
  50. The second proxy node according to any one of claims 47 to 49, wherein the storage module is configured to store the SRH of the first packet in the second cache entry by using an identifier of the second cache entry as an index.
  51. The second proxy node according to claim 50, wherein the second packet further comprises an identifier of a first cache entry, and the first cache entry is a cache entry in which the first proxy node stores the SRH of the first packet;
    the determining module is further configured to determine that the control information further instructs the second proxy node to store a mapping relationship between the identifier of the first cache entry and the identifier of the second cache entry; and
    the storage module is further configured to store the mapping relationship between the identifier of the first cache entry and the identifier of the second cache entry.
  52. The second proxy node according to claim 51, wherein:
    the receiving module is further configured to receive a fourth packet from the service function node, wherein the fourth packet comprises the identifier of the first cache entry and a payload;
    the second proxy node further comprises a query module, configured to query, based on the identifier of the first cache entry, the mapping relationship between the identifier of the first cache entry and the identifier of the second cache entry, to obtain the identifier of the second cache entry corresponding to the identifier of the first cache entry;
    the query module is further configured to query the second cache entry by using the identifier of the second cache entry as an index, to obtain the SRH of the first packet;
    a generation module, configured to generate a fifth packet based on the fourth packet and the SRH of the first packet, wherein the fifth packet comprises the payload of the fourth packet and the SRH of the first packet; and
    a sending module, configured to send the fifth packet.
  53. The second proxy node according to any one of claims 47 to 52, wherein a flow identifier corresponding to the second packet is a flow identifier corresponding to the first packet, the SRH of the first packet comprises an endpoint dynamic proxy SID, and the storage module is configured to store the SRH of the first packet in the second cache entry by using the flow identifier corresponding to the first packet and the endpoint dynamic proxy SID as an index.
  54. The second proxy node according to claim 53, wherein:
    the receiving module is further configured to receive a configuration instruction, wherein the configuration instruction comprises an endpoint dynamic proxy SID corresponding to a service function node in each virtual private network (VPN); and
    the storage module is further configured to store, according to the configuration instruction, a mapping relationship between the endpoint dynamic proxy SID and a VPN identifier.
  55. The second proxy node according to claim 53 or 54, wherein:
    the receiving module is further configured to receive a fourth packet from the service function node, wherein a flow identifier corresponding to the fourth packet is the flow identifier corresponding to the first packet, and the fourth packet comprises a VPN identifier and a payload;
    the second proxy node further comprises a query module, configured to query for the SRH of the first packet based on the VPN identifier, the endpoint dynamic proxy SID, and the flow identifier corresponding to the fourth packet;
    a generation module, configured to generate a fifth packet based on the fourth packet and the SRH of the first packet, wherein the fifth packet comprises the payload of the fourth packet and the SRH of the first packet; and
    a sending module, configured to send the fifth packet.
  56. The second proxy node according to any one of claims 47 to 55, wherein:
    the determining module is further configured to determine that the control information further instructs the second proxy node to send a payload of the first packet to the service function node;
    the generation module is further configured to generate a third packet based on the second packet, wherein the third packet comprises the payload of the first packet; and
    the sending module is further configured to send the third packet to the service function node.
  57. The second proxy node according to any one of claims 47 to 56, wherein:
    the second proxy node is connected to the first proxy node through a first link, and the receiving module is configured to receive the second packet from the first proxy node through a first inbound interface corresponding to the first link; or
    the second proxy node is connected to a routing node through a third link, the routing node is connected to the first proxy node through a second link, and the receiving module is configured to receive the second packet from the routing node through a second inbound interface corresponding to the third link, wherein the second packet is sent by the first proxy node to the routing node through the second link.
  58. The second proxy node according to any one of claims 47 to 57, wherein:
    the receiving module is configured to consecutively receive a plurality of second packets from the first proxy node; or
    the generation module is further configured to generate a sixth packet based on the first bypass SID, a second bypass SID corresponding to the first proxy node, and the second packet; and
    the sending module is further configured to send the sixth packet to the first proxy node, wherein a source address of the sixth packet is the first bypass SID, the sixth packet comprises the second bypass SID corresponding to the first proxy node, the second bypass SID is used to identify that a destination node of the sixth packet is the first proxy node, and the sixth packet is used to indicate that the second packet has been received.
  59. A first proxy node, wherein the first proxy node comprises a processor, and the processor is configured to execute instructions, so that the first proxy node performs the method according to any one of claims 1 to 17.
  60. A second proxy node, wherein the second proxy node comprises a processor, and the processor is configured to execute instructions, so that the second proxy node performs the method according to any one of claims 18 to 29.
  61. A computer-readable storage medium, wherein the storage medium stores at least one instruction, and the instruction is read by a processor to cause a first proxy node to perform the method according to any one of claims 1 to 17.
  62. A computer-readable storage medium, wherein the storage medium stores at least one instruction, and the instruction is read by a processor to cause a second proxy node to perform the method according to any one of claims 18 to 29.
PCT/CN2020/127209 2019-11-06 2020-11-06 Packet transmission method, proxy node, and storage medium WO2021089004A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP20884732.7A EP4040743B1 (en) 2019-11-06 2020-11-06 Message transmission method and proxy node
BR112022007852A BR112022007852A2 (pt) 2019-11-06 2020-11-06 Packet transmission method, proxy node, and storage medium
JP2022523379A JP7327889B2 (ja) 2019-11-06 2020-11-06 Packet transmission method, proxy node, and storage medium
US17/737,637 US20220263758A1 (en) 2019-11-06 2022-05-05 Packet transmission method, proxy node, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201911078562.7A CN112787931B (zh) 2019-11-06 2019-11-06 Packet transmission method, proxy node, and storage medium
CN201911078562.7 2019-11-06

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/737,637 Continuation US20220263758A1 (en) 2019-11-06 2022-05-05 Packet transmission method, proxy node, and storage medium

Publications (1)

Publication Number Publication Date
WO2021089004A1 true WO2021089004A1 (zh) 2021-05-14

Family

ID=75747614

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/127209 WO2021089004A1 (zh) 2019-11-06 2020-11-06 报文传输方法、代理节点及存储介质

Country Status (6)

Country Link
US (1) US20220263758A1 (zh)
EP (1) EP4040743B1 (zh)
JP (1) JP7327889B2 (zh)
CN (1) CN112787931B (zh)
BR (1) BR112022007852A2 (zh)
WO (1) WO2021089004A1 (zh)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113179189A (zh) * 2021-05-26 2021-07-27 Ruijie Networks Co., Ltd. Segment routing fault detection method and apparatus, first segment route, and destination route
CN115426305A (zh) * 2021-05-31 2022-12-02 Huawei Technologies Co., Ltd. Packet processing method, apparatus, and system
CN113364677B (zh) * 2021-06-07 2022-06-07 Beijing University of Technology SRv6 Endpoint fault protection method
CN115695338A (zh) * 2021-07-30 2023-02-03 Huawei Technologies Co., Ltd. Packet forwarding method and network device
US11863651B2 (en) * 2021-11-10 2024-01-02 At&T Intellectual Property I, L.P. Network service functions API
CN116261166A (zh) * 2021-12-09 2023-06-13 ZTE Corporation Link detection method, public network node, and storage medium
CN114363253B (zh) * 2021-12-23 2024-04-02 Nanjing Sinovatio Technology Co., Ltd. Hybrid link-based bidirectional IP resource screening method and system
US20230246957A1 (en) * 2022-02-03 2023-08-03 3nets.io, Inc. Tunneling extension header for segment routing
CN117376233A (zh) * 2022-06-30 2024-01-09 Huawei Technologies Co., Ltd. Data processing method, apparatus, and system
CN115499375B (zh) * 2022-07-25 2024-03-19 Beijing Zhongdian Feihua Communication Co., Ltd. Time-sensitive traffic scheduling method and electronic device
CN116156027B (zh) * 2023-04-20 2023-07-18 National University of Defense Technology RMT-capable action execution engine and execution method thereof

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8130747B2 (en) 2007-08-06 2012-03-06 Blue Coat Systems, Inc. System and method of traffic inspection and stateful connection forwarding among geographically dispersed network appliances organized as clusters
JP5249839B2 (ja) 2009-04-10 2013-07-31 Hitachi, Ltd. Access gateway apparatus and session information replication method in access gateway apparatus
US10164875B2 (en) * 2016-02-22 2018-12-25 Cisco Technology, Inc. SR app-segment integration with service function chaining (SFC) header metadata
US10506083B2 (en) * 2017-06-27 2019-12-10 Cisco Technology, Inc. Segment routing gateway storing segment routing encapsulating header used in encapsulating and forwarding of returned native packet
CN111901235A (zh) * 2017-12-01 2020-11-06 Huawei Technologies Co., Ltd. Route processing method and apparatus, and data transmission method and apparatus
CN109756521B (zh) * 2019-03-21 2021-07-13 Inspur Cloud Information Technology Co., Ltd. NSH packet processing method, apparatus, and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109561021A (zh) * 2017-09-25 2019-04-02 Huawei Technologies Co., Ltd. Packet forwarding method and network device
CN109861924A (zh) * 2017-11-30 2019-06-07 ZTE Corporation Packet sending and processing method and apparatus, PE node, and node
CN109861926A (zh) * 2017-11-30 2019-06-07 ZTE Corporation Packet sending and processing method and apparatus, PE node, and node
CN109981457A (zh) * 2017-12-27 2019-07-05 Huawei Technologies Co., Ltd. Packet processing method, network node, and system
US20190327648A1 (en) * 2018-03-14 2019-10-24 Cisco Technology, Inc. Methods and Apparatus for Providing End Marker Functionality in Mobile Networks Having SRv6-Configured Mobile User Planes

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023284774A1 (zh) * 2021-07-15 2023-01-19 Huawei Technologies Co., Ltd. Packet processing method and related apparatus

Also Published As

Publication number Publication date
EP4040743A4 (en) 2023-01-11
CN112787931A (zh) 2021-05-11
BR112022007852A2 (pt) 2022-07-05
CN112787931B (zh) 2022-09-23
EP4040743A1 (en) 2022-08-10
JP2022554099A (ja) 2022-12-28
JP7327889B2 (ja) 2023-08-16
US20220263758A1 (en) 2022-08-18
EP4040743B1 (en) 2024-02-14

Similar Documents

Publication Publication Date Title
WO2021089004A1 (zh) Packet transmission method, proxy node, and storage medium
WO2021089052A1 (zh) Packet transmission method, proxy node, and storage medium
US11283707B2 (en) Segment routing with fast reroute for container networking
US10333836B2 (en) Convergence for EVPN multi-homed networks
CN109218178B (zh) Packet processing method and network device
US20190014040A1 (en) Edge network node and method for configuring a service therein
US8830834B2 (en) Overlay-based packet steering
US20170085469A1 (en) Virtual port channel bounce in overlay network
US10536297B2 (en) Indirect VXLAN bridging
CN108632145B (zh) Packet forwarding method and leaf node device
US10721651B2 (en) Method and system for steering bidirectional network traffic to a same service device
WO2021135420A1 (zh) Service chain fault protection method, apparatus, device, system, and storage medium
WO2022001835A1 (zh) Packet sending method and apparatus, network device, system, and storage medium
WO2020182085A1 (zh) Packet transmission method and device
CN112737954B (zh) Packet processing method, apparatus, system, device, and storage medium
WO2022117018A1 (zh) Packet transmission method and apparatus
WO2022007702A1 (zh) Packet processing method and network device
WO2024001701A1 (zh) Data processing method, apparatus, and system
WO2022252569A1 (zh) Packet processing method, apparatus, and system
WO2023125774A1 (zh) VXLAN packet transmission method, network device, and system
WO2023088145A1 (zh) Packet processing method, apparatus, and device
WO2023125767A1 (zh) Packet transmission method, network device, and system
CN115733790A (zh) Packet forwarding method, apparatus, device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20884732

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022523379

Country of ref document: JP

Kind code of ref document: A

REG Reference to national code

Ref country code: BR

Ref legal event code: B01A

Ref document number: 112022007852

Country of ref document: BR

ENP Entry into the national phase

Ref document number: 2020884732

Country of ref document: EP

Effective date: 20220503

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 112022007852

Country of ref document: BR

Kind code of ref document: A2

Effective date: 20220425