US20170257310A1 - Network service header (nsh) relaying of serviceability of a service function - Google Patents


Info

Publication number
US20170257310A1
Authority
US
United States
Prior art keywords
service function
service
packet
peer detection
status
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/058,259
Inventor
Prashanth Patil
K Tirumaleswar Reddy
Steven Richard Stites
James N. Guichard
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cisco Technology Inc
Original Assignee
Cisco Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cisco Technology Inc
Priority to US15/058,259
Assigned to CISCO TECHNOLOGY, INC. Assignors: REDDY, K. TIRUMALESWAR; GUICHARD, JAMES N.; PATIL, PRASHANTH; STITES, STEVEN RICHARD
Publication of US20170257310A1
Priority to US16/558,367 (published as US11343178B2)
Current legal status: Abandoned

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 - Routing or path finding of packets in data switching networks
    • H04L 45/302 - Route determination based on requested QoS
    • H04L 45/306 - Route determination based on the nature of the carried application
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 12/00 - Data switching networks
    • H04L 12/28 - Data switching networks characterised by path configuration, e.g. LAN [Local Area Networks] or WAN [Wide Area Networks]
    • H04L 12/46 - Interconnection of networks
    • H04L 12/4633 - Interconnection of networks using encapsulation techniques, e.g. tunneling
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 - Routing or path finding of packets in data switching networks
    • H04L 45/64 - Routing or path finding of packets in data switching networks using an overlay routing layer
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/10 - Flow control; Congestion control
    • H04L 47/11 - Identifying congestion
    • H04L 47/115 - Identifying congestion using a dedicated packet
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 - Traffic control in data switching networks
    • H04L 47/10 - Flow control; Congestion control
    • H04L 47/31 - Flow control; Congestion control by tagging of packets, e.g. using discard eligibility [DE] bits
    • H04L 67/16
    • H04L 67/2804

Definitions

  • the present disclosure relates to applying service function chains in networks.
  • Service Function Chaining enables virtualized networking functions to be implemented as part of a cloud network.
  • a Service Function Chain defines an ordered list of a plurality of service functions (e.g., firewall, compression, intrusion detection/prevention, load balancing, etc.) that may be applied to packet flows in the network.
  • a flow enters the network through a classifier node that generates a Service Function Path for that flow according to the Service Function Chain policy.
  • the classifier node encapsulates each packet of the flow with a Network Service Header that indicates the service functions to which the flow will be subjected, and the order the service functions will be applied.
  • Service Function Chaining and Network Service Headers provide a scalable, extensible, and standardized way of sharing metadata between both network nodes and service nodes within a network topology. This allows disparate nodes that require shared context, but do not communicate directly, to share that context via metadata within the packets traversing the network or service topology.
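The Network Service Header encapsulation described above can be sketched in a few lines. The field layout below follows the IETF NSH specification (RFC 8300); the helper functions and the default values (TTL, MD type, next protocol) are illustrative choices, not taken from the patent.

```python
import struct

def pack_nsh(spi: int, si: int, md_type: int = 2, next_proto: int = 1) -> bytes:
    """Pack a minimal NSH: 4-byte base header + 4-byte service path header.

    spi: 24-bit Service Path Identifier (which chain the packet follows)
    si:  8-bit Service Index (how many service functions remain)
    """
    # Base header: Ver=0, flags=0, TTL=63, Length=2 (in 4-byte words),
    # MD Type, Next Protocol (1 = IPv4 payload).
    ttl, length = 63, 2
    word0 = (0 << 30) | (ttl << 22) | (length << 16) | ((md_type & 0xF) << 8) | next_proto
    word1 = ((spi & 0xFFFFFF) << 8) | (si & 0xFF)
    return struct.pack("!II", word0, word1)

def unpack_nsh(hdr: bytes) -> dict:
    """Recover the fields a Service Function Forwarder would inspect."""
    word0, word1 = struct.unpack("!II", hdr[:8])
    return {"md_type": (word0 >> 8) & 0xF,
            "next_proto": word0 & 0xFF,
            "spi": (word1 >> 8) & 0xFFFFFF,
            "si": word1 & 0xFF}

hdr = pack_nsh(spi=42, si=255)
print(unpack_nsh(hdr))  # -> {'md_type': 2, 'next_proto': 1, 'spi': 42, 'si': 255}
```

Each service function decrements the service index as it processes the packet, so the pair (SPI, SI) always identifies the next hop in the chain.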
  • FIG. 1 is a system block diagram showing a Service Function Chain network environment with monitoring of the performance of service functions, according to an example embodiment.
  • FIG. 2 is a simplified block diagram of a service function device within the Service Function Chain network environment, according to an example embodiment.
  • FIG. 3 is a simplified block diagram of a classifier network element within the Service Function Chain network environment, according to an example embodiment.
  • FIG. 4 is a ladder diagram that shows messages passed between peer service function nodes when the performance of one of the service functions is compromised, according to an example embodiment.
  • FIG. 5 is a ladder diagram that shows messages passed between nodes in a service function chain when the performance of one of the service functions is compromised, according to an example embodiment.
  • FIG. 6 is a flowchart showing the operations of a service function node that suffers from degradation in its ability to perform the service function, according to an example embodiment.
  • FIG. 7 is a flowchart showing the operations of a node in a service function path when another node is degraded in its ability to perform a service function, according to an example embodiment.
  • at a service function node configured to perform at least one service function on a data flow that follows a service function path, degradation in performing the service function is detected.
  • the service function node generates a status indicator for the degradation in performing the service function and inserts the status indicator into a peer detection packet.
  • the peer detection packet encapsulates an inner packet with a header that indicates the service function path.
  • the service function node forwards the peer detection packet to a neighboring service function node along the service function path.
  • Service Function Chaining both provides metadata about a data flow and steers the flow to the appropriate service functions.
  • the Service Function Chaining encapsulation carries information that identifies a Service Function Path.
  • the Service Function Path comprises an ordered list of service functions that act on the packets in the data flow.
  • one or more service functions may be unavailable (e.g., the network path is broken) or overloaded (e.g., due to processing other traffic).
  • a service function may use ping-like messages, which operate at a low level.
  • GRE: Generic Routing Encapsulation
  • Network Service Headers, as defined by various Requests for Comments (RFCs) published by the Internet Engineering Task Force (IETF), are used to indicate the status of each service function, and a service function that receives a packet with a Network Service Header carrying this additional information may react appropriately, e.g., by using a different Service Function Path or a different service function node.
  • IETF: Internet Engineering Task Force
  • a source endpoint 110 sends a data flow to destination endpoint 120 through the Service Function Chain system 130 .
  • Endpoints 110 and/or 120 may include, for example, smart phones, tablets, laptop computers, desktop computers, virtual machine applications running in a datacenter, or other types of computing devices.
  • Service Function Chain system 130 comprises a service classifier node 140 , network devices (e.g., Service Function Forwarders) 150 , 160 , and 170 .
  • Network device 150 forwards packets in data flows to service functions 152 and 154 .
  • Network device 160 forwards packets in data flows to service function 162 .
  • Network device 170 forwards packets in data flows to service functions 172 and 174 .
  • all of the service function nodes attached to one Service Function Forwarder, such as service functions 152 and 154 attached to network node 150, perform the same service function.
  • the Service Function Forwarder may load balance performance of the service function by sending packets to a plurality of instances of the service function.
  • the service function nodes attached to each Service Function Forwarder may provide different service functions.
  • each Service Function Forwarder node handles all of the instances of a given service function in a Service Function Path.
  • a service function may be repeated at different Service Function Forwarders, e.g., service function node 152 may perform the same service function as service function node 162 .
  • Service function 172 includes service function (SF) degradation logic 180 to monitor the performance of the service function. Other service functions in addition to the service function 172 may also include service function degradation logic to monitor their respective performance.
  • Service classifier 140 includes Service Function Path degradation logic 190 to determine the performance of the service functions in a particular Service Function Path and handle any degradation in performance.
  • degradation in performance of a service function may include a complete failure of a service function node such that the service function cannot perform any tasks on any data flows.
  • degradation in performance of the service function may include processing the data flows with the service function more slowly than expected such that a bottleneck at the degraded service function slows the data flow throughout the entire Service Function Path.
  • the Service Function Chain system 130 is shown with one service classifier, three Service Function Forwarder (SFF) network nodes, and five service function nodes, but the techniques presented herein may be applied to Service Function Chaining systems with any number of SFF network nodes and any number of service functions. Additional network elements, either inside the Service Function Chain system 130 or outside of the system 130 may also be included to transmit the flows between source endpoint 110 and destination endpoint 120 . Additional service classifiers may also be included in the Service Function Chain system 130 , e.g., to handle return data flows from the destination endpoint 120 to the source endpoint 110 . In another example, one or more of the nodes in the Service Function Chain system 130 may be physical devices or virtual machines running in a data center.
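The topology above can be modeled with two small tables: one mapping a Service Function Path to its ordered hops, and one mapping each Service Function Forwarder to the instances it can load-balance across. All names below are hypothetical labels built from the figure's reference numerals, and the function names are illustrative.

```python
SERVICE_FUNCTION_PATHS = {
    # SPI -> ordered list of (service function, SFF that reaches it)
    1: [("firewall", "sff-150"), ("dpi", "sff-160"), ("load-balancer", "sff-170")],
}

SFF_INSTANCES = {
    # Each SFF may load-balance over several instances of the same function.
    "sff-150": {"firewall": ["sf-152", "sf-154"]},
    "sff-160": {"dpi": ["sf-162"]},
    "sff-170": {"load-balancer": ["sf-172", "sf-174"]},
}

def next_hop(spi: int, si: int) -> tuple[str, str]:
    """Resolve the (function, SFF) for a packet, indexing from the path end.

    NSH convention: the service index starts at the path length and is
    decremented at each hop, so position = len(path) - si.
    """
    path = SERVICE_FUNCTION_PATHS[spi]
    return path[len(path) - si]

print(next_hop(spi=1, si=3))  # -> ('firewall', 'sff-150')
```

A real forwarder would also consult the instance table (e.g., `SFF_INSTANCES["sff-150"]["firewall"]`) to pick a concrete node for the resolved hop.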
  • SFF: Service Function Forwarder
  • Dead peer detection involves an exchange of low-level packets between two nodes, i.e., peer nodes, to detect whether the nodes remain in communication with each other.
  • Peer nodes may include two service function nodes that perform neighboring service functions in the Service Function Path.
  • the service classifier and the service function node performing the first service function may also be peer nodes.
  • peer nodes in a Service Function Path use GRE keepalive messages for dead peer detection.
  • peer nodes may use an Internet Security Association and Key Management Protocol (ISAKMP) message exchange of an R-U-THERE message and an R-U-THERE-ACK response.
  • ISAKMP: Internet Security Association and Key Management Protocol
  • the low-level packet exchange by peer nodes will be referred to hereinafter as peer detection messages, peer detection requests, and peer detection responses.
  • the service function degradation logic 180 will add metadata to the Network Service Header in a peer detection message.
  • the metadata may include a status indicator that allows the service function node 172 to indicate its current status.
  • the status indicator may be similar to Hypertext Transfer Protocol (HTTP) response codes, i.e., 1xx for informational codes, 2xx for success codes, 3xx for redirection codes, 4xx for client error codes, and 5xx for server error codes. Sub-codes within the code classes may provide further information describing the status of the service function node 172 .
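A minimal sketch of such an HTTP-style status scheme; the specific numeric codes and the reaction rule are assumptions for illustration:

```python
# Status-indicator classes modeled on HTTP response classes, as the
# text suggests; the concrete code values are hypothetical.
STATUS_CLASSES = {1: "informational", 2: "success", 3: "redirection",
                  4: "client-error", 5: "server-error"}

def classify_status(code: int) -> str:
    """Map a three-digit status code to its class."""
    return STATUS_CLASSES.get(code // 100, "unknown")

def should_redirect_new_flows(code: int) -> bool:
    # Redirection (3xx) and server-error (5xx) both suggest steering
    # *subsequent* flows elsewhere; existing stateful flows stay put.
    return code // 100 in (3, 5)

assert classify_status(200) == "success"
assert should_redirect_new_flows(503) is True
```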
  • the Network Service Header may be integrity protected and encrypted to ensure that the status indicator carried in the metadata of the Network Service Header is not compromised.
  • the peer detection message is received by a neighboring peer node, such as the previous service function node 162 in the Service Function Path. Additionally, Service Function Forwarder 170 and/or Service Function Forwarder 160 may receive the peer detection message with the status indicator.
  • the Network Service Header may include additional information, such as statistical information on the performance of the service function node 172 . The Service Function Forwarders may use this statistical information to make an informed load distribution decision among the instances of the same service function. Any node that receives a status indicator that a service function node is not able to perform adequately will ensure that existing flows are unaffected, especially if the service functions in the Service Function Path are stateful. Redirection to a new service function node or an alternative Service Function path will typically only be relevant for subsequent flows.
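One way a Service Function Forwarder might act on such statistical metadata is a least-loaded pick among instances; this is a sketch with hypothetical instance names and load metrics, not a distribution algorithm specified by the patent.

```python
def pick_instance(stats: dict[str, float]) -> str:
    """Choose the least-loaded instance of a service function.

    `stats` maps instance name -> reported load (e.g., queue depth or
    CPU fraction) carried as NSH statistical metadata. A production SFF
    might use weighted hashing instead; min-load is the simplest sketch.
    """
    return min(stats, key=stats.get)

# sf-172 has reported degradation (high load), so new flows go to sf-174.
assert pick_instance({"sf-172": 0.95, "sf-174": 0.20}) == "sf-174"
```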
  • the service function node 172 may insert the status indicator into the Network Service Header metadata of a peer detection response.
  • the service function node 172 may send the status indicator in its own peer detection request, particularly if the service function node 172 wants to immediately notify that it can no longer service packets. This peer detection request may be sent multiple times to handle packet loss and ensure that the status indicator is received by the peer node.
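The repeated-request behavior described above might look like the following sketch; the message layout and the transmit callback are assumptions, not a wire format from the patent.

```python
import time

def notify_degradation(send, status_code: int, retries: int = 3,
                       interval: float = 0.0) -> None:
    """Send a peer detection request carrying a status indicator.

    Repeats the request a few times, as the text notes, to tolerate
    packet loss; `send` is a caller-supplied transmit callback and the
    dictionary below is an illustrative stand-in for the real packet.
    """
    message = {"type": "R-U-THERE", "nsh_metadata": {"status": status_code}}
    for _ in range(retries):
        send(message)
        time.sleep(interval)

sent = []
notify_degradation(sent.append, status_code=503)
assert len(sent) == 3
```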
  • Service function node 172 is configured to perform a service function.
  • Service function node 172 includes, among other possible components, a processor 210 to process instructions relevant to processing packets in data flow, and memory 220 to store a variety of data and software instructions (e.g., service function logic 225 , service function degradation logic 180 , etc.)
  • the service function node 172 further includes a network interface unit 230 configured to communicate with other computing devices over a computer network.
  • Memory 220 may include read only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible (e.g., non-transitory) memory storage devices.
  • the processor 210 is, for example, a microprocessor or microcontroller that executes instructions for implementing the processes described herein.
  • the memory 220 may comprise one or more tangible (non-transitory) computer readable storage media (e.g., a memory device) encoded with software comprising computer executable instructions and when the software is executed (e.g., by the processor 210 ) it is operable to perform the operations described herein.
  • the service function node 172 may be a physical device or a virtual (software) device. In the latter case, the service function node 172 is embodied as software running on a computer node (e.g., in a datacenter or other environment) through which traffic is directed and for which determinations are made as to how packets are to be routed into a Service Function Chain.
  • Classifier 140 includes, among other possible components, a processor 310 to process instructions relevant to processing communication packets for a Service Function Chain system, and memory 320 to store a variety of data and software instructions (e.g., Classification logic 330 , Service Function Path degradation logic 190 , communication packets, etc.).
  • the classifier 140 also includes a network processor application specific integrated circuit (ASIC) 340 to process communication packets that flow through the classifier device 140 .
  • Network processor ASIC 340 processes communication packets to be sent to and received from ports 350, 351, 352, 353, 354, and 355. While only six ports are shown in this example, any number of ports may be included in classifier device 140.
  • Memory 320 may include read only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible (e.g., non-transitory) memory storage devices.
  • the processor 310 is, for example, a microprocessor or microcontroller that executes instructions for implementing the processes described herein.
  • the memory 320 may comprise one or more tangible (non-transitory) computer readable storage media (e.g., a memory device) encoded with software comprising computer executable instructions and when the software is executed (e.g., by the processor 310 ) it is operable to perform the operations described herein.
  • the classifier network device 140 may be a physical device or a virtual (software) device. In the latter case, the classifier network device 140 is embodied as software running on a compute node (e.g., in a datacenter or other environment) through which traffic is directed and for which determinations are made as to how packets are to be routed into a Service Function Chain.
  • Service function node 162 initiates a peer detection exchange with service function node 172 by sending peer detection request message 410 .
  • the peer detection reply 415 may include a status indicator which indicates that the service function node 172 is successfully performing the service function at the time that message 415 is sent.
  • the performance of the service function in service function node 172 starts to be degraded at 420 .
  • the degradation may be, for example, a slowdown in the processing of packets with the service function. Alternatively, the degradation may be a total inability of the service function node 172 to perform the service function.
  • the service function node 172 sends a peer detection reply 435 with a status indicator.
  • the status indicator is included in metadata of the Network Service Header of the peer detection reply 435 , and indicates the degradation in the performance of the service function at service function node 172 .
  • the Service Function Path includes service functions A, B, and C. Packets in this Service Function Path start at the classifier 140 and proceed to service function A performed at service function node 152 . After service function node 152 , the packets continue to service function B performed at service function node 162 and service function C performed at service function node 172 . After a service function degradation is detected, that information is propagated throughout the Service Function Path to ensure that each node can take the most appropriate action in resolving the degradation. Since the mode of encapsulation of the peer detection messages may vary between peers, the status indicator may be propagated throughout the Service Function Path using different formats. In the example of FIG. 5 , the status indicator is converted from an R-U-THERE exchange to a GRE keepalive exchange.
  • the classifier 140 and the service function node 152 are peer nodes that detect each other through a GRE peer detection exchange 510 .
  • the service function node 152 and service function node 162 are peer nodes that detect each other through GRE peer detection exchange 512 .
  • the service function node 162 and service function node 172 are peer nodes that detect each other through an R-U-THERE message exchange 514 .
  • the peer detection exchanges 510 and 512 are in a different format than peer detection exchange 514 , and are typically independent exchanges of low level peer detection request and response messages.
  • the peer detection exchanges 510 , 512 , and 514 may be repeated at intervals to allow each node to detect neighboring nodes.
  • the performance of the service function C in service function node 172 starts to be degraded at 520 .
  • the service function node 172 sends an R-U-THERE message 530 to the service function node 162 and includes a status indicator in the metadata of the Network Service Header in the R-U-THERE message 530 .
  • the status indicator indicates that service function C is degraded at service function node 172 .
  • the service function node 162 responds with an R-U-THERE-ACK response message 535 to complete the peer detection exchange.
  • the service function node 162 may initiate the R-U-THERE peer detection exchange instead of service function node 172 .
  • the service function node 172 will include the status indicator in the R-U-THERE-ACK message.
  • the service function node 162 propagates the status information back up the Service Function Path to ensure that the most appropriate action is taken by each node in the Service Function Path.
  • the service function node 162 inserts a status indicator into the metadata of the Network Service Header of GRE peer detection reply message 545 .
  • the status indicator indicates that the service function C is degraded at the service function node 172 .
  • the service function node 162 may not wait for the service function node 152 to initiate the GRE keepalive exchange and may send its own GRE keepalive message with the status indicator.
  • the service function node 152 propagates the status information up the Service Function Path by sending a GRE peer detection request 550 to the service classifier node 140 .
  • the GRE peer detection request 550 includes in a Network Service Header the status indicator that indicates that service function C is degraded at service function node 172 .
  • the service classifier 140 completes the GRE peer detection exchange with reply message 555 .
  • the service function node 152 may wait for the service classifier 140 to initiate the GRE keepalive peer detection exchange. In this case, the service function node 152 will insert the status indicator into the GRE peer detection reply message.
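The hop-by-hop relay of FIG. 5 can be sketched as follows, with the peer detection format looked up per link and the status metadata re-encapsulated at each hop. The node names, format table, and message shapes are illustrative stand-ins.

```python
# Per-link peer detection formats, mirroring FIG. 5: the last hop uses
# an ISAKMP R-U-THERE exchange, the upstream hops use GRE keepalives.
PEER_FORMATS = {("classifier-140", "sf-152"): "GRE-keepalive",
                ("sf-152", "sf-162"): "GRE-keepalive",
                ("sf-162", "sf-172"): "R-U-THERE"}

def relay_upstream(path: list[str], status: dict) -> list[dict]:
    """Propagate `status` from the tail of `path` back toward its head,
    re-encapsulating in whatever format each peer link uses."""
    messages = []
    for upstream, downstream in zip(path, path[1:]):
        fmt = PEER_FORMATS[(upstream, downstream)]
        messages.append({"from": downstream, "to": upstream,
                         "format": fmt, "nsh_metadata": status})
    return list(reversed(messages))  # emitted tail-first, toward the classifier

hops = relay_upstream(["classifier-140", "sf-152", "sf-162", "sf-172"],
                      {"status": 503, "degraded_sf": "sf-172"})
assert [h["format"] for h in hops] == ["R-U-THERE", "GRE-keepalive", "GRE-keepalive"]
```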
  • Although FIG. 5 focuses on GRE and R-U-THERE (e.g., Internet Protocol Security (IPSec)) peer detection encapsulation mechanisms, other modes of encapsulation (e.g., Virtual Extensible Local Area Network - Generic Protocol Extension (VxLAN-gpe), Ethernet, etc.) may also be used.
  • GRE or IPSec is used to transport the Network Service Header from a connector to the cloud network.
  • the Cloud Web Security service may relay its status back to the connector.
  • the connector may continue to tunnel into a specified Cloud Web Security data center as long as the Cloud Web Security service is functioning.
  • the connector may switch to a suggested alternative data center if it receives a redirection status indicator from the primary data center.
  • the address of the alternative data center may be included in the redirection status from the primary data center.
  • the connector may switch to a predetermined secondary data center if the Cloud Web Security service returns an error status indicator.
  • a flowchart is shown for a process 600 by which a service function node notifies a peer node of a degradation in the performance of a service function.
  • the service function node detects degradation in a service function (e.g., a partial or complete inability to process packets in a timely manner) at the node.
  • the service function node generates a status indicator that describes the degradation in step 620 .
  • the service function node inserts the status indicator into metadata of a Network Service Header in a peer detection packet.
  • the peer detection packet may be a GRE keepalive message or a response to a GRE keepalive message from a peer node.
  • the service function node forwards the peer detection packet with the status indicator to a neighboring service function node.
  • the neighboring service function node, i.e., a peer node, may be the initiator or the responder in a GRE keepalive exchange.
  • the peer detection packet encapsulates an inner packet including the Network Service Header.
  • the Network Service Header will typically be used to encapsulate a payload for the Service Function Chaining system and includes an indication of the particular Service Function Path for the payload.
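Process 600 can be summarized in a short sketch: detect the degradation, generate a status indicator, insert it into the NSH metadata of a peer detection packet, and forward to the neighboring node. Only step 620 is numbered in the text above, so the remaining step numbers, the callbacks, and the packet layout are assumptions.

```python
def process_600(detect_degradation, send_to_peer):
    """Hedged sketch of FIG. 6: detect, generate, insert, forward."""
    degradation = detect_degradation()             # detect (e.g., step 610)
    if degradation is None:
        return None                                # nothing to report
    status = {"code": 503, "detail": degradation}  # step 620: generate indicator
    packet = {"peer_detection": "GRE-keepalive",   # insert into NSH metadata
              "nsh": {"spi": 1, "si": 1, "metadata": {"status": status}}}
    send_to_peer(packet)                           # forward to the peer node
    return packet

out = process_600(lambda: "queue overflow", lambda p: None)
assert out["nsh"]["metadata"]["status"]["detail"] == "queue overflow"
```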
  • a flowchart is shown for a process 700 by which a peer node receives a status indicator of a degraded service function and reacts to the status indicator appropriately.
  • a peer node receives a peer detection packet from a service function node.
  • the peer node detects a status indicator indicating that the performance of a service function at a service function node is degraded.
  • the service function node with degraded performance may be the peer service function node from which the peer detection packet was received. Alternatively, the service function node with degraded performance may be further down the Service Function Path.
  • the peer node propagates the status of the degraded service function to a previous node in the Service Function Path, e.g., in another peer detection message, in step 740. If the peer node is the service classifier, then the peer node/service classifier adjusts the Service Function Path in step 750. In one example, the service classifier may adjust the Service Function Path by directing subsequent packets in the data flow to a second Service Function Path that does not include the degraded service function node.
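Process 700 admits a similar sketch: receive the peer detection packet, detect the status indicator, then either propagate upstream or, at the classifier, adjust the Service Function Path. Steps 740 and 750 are named in the text; the packet shape and callbacks here are assumptions.

```python
def process_700(packet, node_is_classifier, propagate, adjust_path):
    """Hedged sketch of FIG. 7 at a peer node."""
    status = packet.get("nsh", {}).get("metadata", {}).get("status")
    if status is None:
        return "no-op"               # no degradation reported
    if node_is_classifier:
        adjust_path(status)          # step 750: reroute subsequent flows
        return "adjusted"
    propagate(status)                # step 740: relay toward the classifier
    return "propagated"

pkt = {"nsh": {"metadata": {"status": {"code": 503}}}}
assert process_700(pkt, False, print, lambda s: None) == "propagated"
```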
  • the techniques presented herein provide for a mechanism to convey the status of a service function using the Network Service Header of a peer detection message.
  • a service function node that receives a Network Service Header with this status information may then react appropriately, e.g., by altering the Service Function Path, or by picking an alternative service function node to provide the service function.
  • the liveliness of the service function nodes will be detected.
  • the Network Service Header metadata may convey the service function node liveliness to the service classifier, which may change the Service Function Path.
  • the Network Service Header metadata may convey the service function node liveliness to a Service Function Forwarder, which may forward data to a different instance of the service function at a different service function node.
  • the status of a service function may be relayed within the data plane without any need for a separate control plane.
  • the techniques presented herein provide for a computer-implemented method performed at a service function node in a Service Function Path.
  • the method comprises detecting degradation in performing the service function.
  • the method further comprises generating a status indicator for the degradation in performing the service function and inserting the status indicator into a peer detection packet.
  • the peer detection packet encapsulates an inner packet with a network service header that indicates the service function path.
  • the computing device forwards the peer detection packet to a neighboring service function device along the service function path.
  • the techniques presented herein provide for an apparatus comprising a network interface unit and a processor.
  • the network interface unit is configured to communicate with a plurality of (physical or virtual) service function devices in a service function path.
  • the processor is configured to perform at least one service function on a data flow that follows the service function path.
  • the processor is configured to detect degradation in performing the service function and generate a status indicator for the degradation in performing the service function.
  • the processor is further configured to insert the status indicator into a peer detection packet that encapsulates an inner packet.
  • the inner packet includes a network service header that indicates the service function path.
  • the processor is configured to cause the network interface unit to forward the peer detection packet to a neighboring service function along the service function path.
  • the techniques presented herein provide for a computer-implemented method performed at a peer node in a Service Function Path.
  • the method comprises receiving a peer detection packet from a (physical or virtual) service function device in the Service Function Path.
  • the peer detection packet comprises an inner packet with a network service header.
  • the method further comprises detecting a status indicator in the network service header.
  • the status indicator indicates degradation in performing a service function at the service function device.
  • the method also comprises adjusting the service function path to compensate for the degradation in performing the service function at the service function device.
  • a non-transitory computer readable storage media is provided that is encoded with instructions that, when executed by a processor, cause the processor to perform any of the methods described and shown herein.

Abstract

At a service function node configured to perform at least one service function on a data flow that follows a service function path, degradation in performing the service function is detected. The service function node generates a status indicator for the degradation in performing the service function and inserts the status indicator into a peer detection packet. The peer detection packet encapsulates an inner packet with a header that indicates the service function path. The service function node forwards the peer detection packet to a neighboring service function node along the service function path.

  • Service Function Chaining both carries metadata about a data flow and steers the flow to the appropriate service functions. The Service Function Chaining encapsulation carries information that identifies a Service Function Path. The Service Function Path comprises an ordered list of service functions that act on the packets in the data flow. In one example, one or more service functions may be unavailable (e.g., the network path is broken) or overloaded (e.g., due to processing other traffic). To determine if the network path to the next service function is available, a service function may use ping-like messages, which operate at a low level. In one example, a Generic Routing Encapsulation (GRE) tunnel may use a GRE keepalive message exchange.
  • The techniques described herein provide for carrying additional information regarding the status of the service function beyond a mere link-level “ping” test. Network Service Headers, as defined by various Requests for Comments published by the Internet Engineering Task Force (IETF) for example, are used to indicate the status of each service function, and a service function that receives a packet with a Network Service Header carrying this additional information may react appropriately, e.g., using a different Service Function Path or a different service function node.
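As a rough sketch only (not the implementation claimed herein), the Network Service Header encapsulation can be illustrated in Python. The bit layout below is a simplified rendering of the RFC 8300 MD Type 1 format, and the use of the first context word to carry a status indicator is an assumption for illustration:

```python
import struct

def pack_nsh(spi, si, status_word=0):
    """Pack a simplified Network Service Header: a base header, a service
    path header (24-bit Service Path Identifier + 8-bit Service Index),
    and the four fixed MD Type 1 context words. Carrying a status
    indicator in the first context word is purely illustrative."""
    length = 6                       # total header length in 4-byte words
    md_type = 0x1                    # MD Type 1: fixed-length context headers
    next_proto = 0x1                 # inner (encapsulated) packet is IPv4
    base = (length << 16) | (md_type << 8) | next_proto
    service_path = (spi << 8) | si   # identifies the Service Function Path
    return struct.pack("!IIIIII", base, service_path, status_word, 0, 0, 0)
```

A Service Function Forwarder would decrement the Service Index as the packet traverses each service function in the path.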
  • Referring now to FIG. 1, a simplified block diagram of a data flow system 100 between two endpoint devices is shown. A source endpoint 110 sends a data flow to destination endpoint 120 through the Service Function Chain system 130. Endpoints 110 and/or 120 may include, for example, smart phones, tablets, laptop computers, desktop computers, virtual machine applications running in a datacenter, or other types of computing devices. Service Function Chain system 130 comprises a service classifier node 140, network devices (e.g., Service Function Forwarders) 150, 160, and 170. Network device 150 forwards packets in data flows to service functions 152 and 154. Network device 160 forwards packets in data flows to service function 162. Network device 170 forwards packets in data flows to service functions 172 and 174.
  • In one example, all of the service function nodes attached to one Service Function Forwarder, such as service functions 152 and 154 attached to network node 150, perform the same service function. The Service Function Forwarder may load balance performance of the service function by sending packets to a plurality of instances of the service function. Alternatively, the service function nodes attached to each Service Function Forwarder may provide different service functions. In another example, each Service Function Forwarder node handles all of the instances of a given service function in a Service Function Path. Alternatively, a service function may be repeated at different Service Function Forwarders, e.g., service function node 152 may perform the same service function as service function node 162.
  • Service function 172 includes service function (SF) degradation logic 180 to monitor the performance of the service function. Other service functions in addition to the service function 172 may also include service function degradation logic to monitor their respective performance. Service classifier 140 includes Service Function Path degradation logic 190 to determine the performance of the service functions in a particular Service Function Path and handle any degradation in performance. In one example, degradation in performance of a service function may include a complete failure of a service function node such that the service function cannot perform any tasks on any data flows. Alternatively, degradation in performance of the service function may include processing the data flows with the service function more slowly than expected such that a bottleneck at the degraded service function slows the data flow throughout the entire Service Function Path.
  • In the example shown in FIG. 1, the Service Function Chain system 130 is shown with one service classifier, three Service Function Forwarder (SFF) network nodes, and five service function nodes, but the techniques presented herein may be applied to Service Function Chaining systems with any number of SFF network nodes and any number of service functions. Additional network elements, either inside the Service Function Chain system 130 or outside of the system 130, may also be included to transmit the flows between source endpoint 110 and destination endpoint 120. Additional service classifiers may also be included in the Service Function Chain system 130, e.g., to handle return data flows from the destination endpoint 120 to the source endpoint 110. In another example, one or more of the nodes in the Service Function Chain system 130 may be physical devices or virtual machines running in a data center.
  • Dead peer detection involves an exchange of low-level packets between two nodes, i.e., peer nodes, to detect whether the nodes remain in communication with each other. Peer nodes may include two service function nodes that perform neighboring service functions in the Service Function Path. The service classifier and the service function node performing the first service function may also be peer nodes. In one example, peer nodes in a Service Function Path use GRE keepalive messages for dead peer detection. Alternatively, peer nodes may use an Internet Security Association and Key Management Protocol (ISAKMP) message exchange of an R-U-THERE message and an R-U-THERE-ACK response. In general, the low-level packet exchange by peer nodes will be referred to hereinafter as peer detection messages, peer detection requests, and peer detection responses.
  • If the service function node 172 is unable to perform its service function(s) or is overloaded in capacity, then the service function degradation logic 180 will add metadata to the Network Service Header in a peer detection message. The metadata may include a status indicator that allows the service function node 172 to indicate its current status. In one example, the status indicator may be similar to Hypertext Transfer Protocol (HTTP) response codes, i.e., 1xx for informational codes, 2xx for success codes, 3xx for redirection codes, 4xx for client error codes, and 5xx for server error codes. Sub-codes within the code classes may provide further information describing the status of the service function node 172. In another example, the Network Service Header may be integrity protected and encrypted to ensure that the status indicator carried in the metadata of the Network Service Header is not compromised.
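The HTTP-like grouping of status codes mentioned above can be sketched as a small helper; the function name and exact mapping are illustrative, not part of the disclosure:

```python
def status_class(code):
    """Return the class of an HTTP-like status code: 1xx informational,
    2xx success, 3xx redirection, 4xx client error, 5xx server error.
    Sub-codes within a class would carry further detail."""
    classes = {1: "informational", 2: "success", 3: "redirection",
               4: "client error", 5: "server error"}
    if not 100 <= code <= 599:
        raise ValueError("unknown status code class")
    return classes[code // 100]
```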
  • The peer detection message is received by a neighboring peer node, such as the previous service function node 162 in the Service Function Path. Additionally, Service Function Forwarder 170 and/or Service Function Forwarder 160 may receive the peer detection message with the status indicator. The Network Service Header may include additional information, such as statistical information on the performance of the service function node 172. The Service Function Forwarders may use this statistical information to make an informed load distribution decision among the instances of the same service function. Any node that receives a status indicator that a service function node is not able to perform adequately will ensure that existing flows are unaffected, especially if the service functions in the Service Function Path are stateful. Redirection to a new service function node or an alternative Service Function path will typically only be relevant for subsequent flows.
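A minimal sketch of the load-distribution decision described above, assuming each instance reports a status code and a load figure in its peer detection metadata (all names here are hypothetical). Existing flows stay pinned to their instance so that stateful service functions are unaffected; only new flows avoid a degraded instance:

```python
def pick_instance(instances, flow_id, flow_table):
    """Choose a service function instance for a flow.

    instances: instance id -> {"status": code, "load": number}, as
    reported in peer detection metadata. flow_table pins existing
    flows to the instance already servicing them."""
    if flow_id in flow_table:          # existing flow: never move it
        return flow_table[flow_id]
    healthy = {i: s for i, s in instances.items() if s["status"] < 500}
    if not healthy:
        raise RuntimeError("no serviceable instance for new flows")
    choice = min(healthy, key=lambda i: healthy[i]["load"])
    flow_table[flow_id] = choice       # pin the new flow
    return choice
```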
  • In another example, the service function node 172 may insert the status indicator into the Network Service Header metadata of a peer detection response. Alternatively, the service function node 172 may send the status indicator in its own peer detection request, particularly if the service function node 172 wants to immediately notify that it can no longer service packets. This peer detection request may be sent multiple times to handle packet loss and ensure that the status indicator is received by the peer node.
  • Referring now to FIG. 2, a simplified block diagram is shown of a service function node 172 configured to perform a service function. Service function node 172 includes, among other possible components, a processor 210 to process instructions relevant to processing packets in a data flow, and memory 220 to store a variety of data and software instructions (e.g., service function logic 225, service function degradation logic 180, etc.). The service function node 172 further includes a network interface unit 230 configured to communicate with other computing devices over a computer network.
  • Memory 220 may include read only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible (e.g., non-transitory) memory storage devices. The processor 210 is, for example, a microprocessor or microcontroller that executes instructions for implementing the processes described herein. Thus, in general, the memory 220 may comprise one or more tangible (non-transitory) computer readable storage media (e.g., a memory device) encoded with software comprising computer executable instructions and when the software is executed (e.g., by the processor 210) it is operable to perform the operations described herein.
  • It is to be understood that the service function node 172 may be a physical device or a virtual (software) device. In the latter case, the service function node 172 is embodied as software running on a compute node (e.g., in a datacenter or other environment) through which traffic is directed and for which determinations are made as to how packets are to be routed into a Service Function Chain.
  • Referring now to FIG. 3, a simplified block diagram is shown of a classifier network device 140 configured to perform the techniques of a classifier node. Classifier 140 includes, among other possible components, a processor 310 to process instructions relevant to processing communication packets for a Service Function Chain system, and memory 320 to store a variety of data and software instructions (e.g., Classification logic 330, Service Function Path degradation logic 190, communication packets, etc.). The classifier 140 also includes a network processor application specific integrated circuit (ASIC) 340 to process communication packets that flow through the classifier device 140. Network processor ASIC 340 processes communication packets to be sent to and received from ports 350, 351, 352, 353, 354, and 355. While only six ports are shown in this example, any number of ports may be included in classifier device 140.
  • Memory 320 may include read only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible (e.g., non-transitory) memory storage devices. The processor 310 is, for example, a microprocessor or microcontroller that executes instructions for implementing the processes described herein. Thus, in general, the memory 320 may comprise one or more tangible (non-transitory) computer readable storage media (e.g., a memory device) encoded with software comprising computer executable instructions and when the software is executed (e.g., by the processor 310) it is operable to perform the operations described herein.
  • It is to be understood that the classifier network device 140 may be a physical device or a virtual (software) device. In the latter case, the classifier network device 140 is embodied as software running on a compute node (e.g., in a datacenter or other environment) through which traffic is directed and for which determinations are made as to how packets are to be routed into a Service Function Chain.
  • Referring now to FIG. 4, a ladder diagram is shown of peer service function nodes 162 and 172 exchanging peer detection messages with status indicators. Service function node 162 initiates a peer detection exchange with service function node 172 by sending peer detection request message 410. The peer detection reply 415 may include a status indicator which indicates that the service function node 172 is successfully performing the service function at the time that message 415 is sent.
  • The performance of the service function in service function node 172 starts to be degraded at 420. The degradation may be, for example, a slowdown in the processing of packets with the service function. Alternatively, the degradation may be a total inability of the service function node 172 to perform the service function. In response to the next peer detection request 430 from the peer service function node 162, the service function node 172 sends a peer detection reply 435 with a status indicator. In one example, the status indicator is included in metadata of the Network Service Header of the peer detection reply 435, and indicates the degradation in the performance of the service function at service function node 172.
  • Referring now to FIG. 5, a ladder diagram of peer detection messages exchanged between peers along an entire Service Function Path is shown. In this example, the Service Function Path includes service functions A, B, and C. Packets in this Service Function Path start at the classifier 140 and proceed to service function A performed at service function node 152. After service function node 152, the packets continue to service function B performed at service function node 162 and service function C performed at service function node 172. After a service function degradation is detected, that information is propagated throughout the Service Function Path to ensure that each node can take the most appropriate action in resolving the degradation. Since the mode of encapsulation of the peer detection messages may vary between peers, the status indicator may be propagated throughout the Service Function Path using different formats. In the example of FIG. 5, the status indicator is converted from an R-U-THERE exchange to a GRE keepalive exchange.
  • In normal operation, the classifier 140 and the service function node 152 are peer nodes that detect each other through a GRE peer detection exchange 510. The service function node 152 and service function node 162 are peer nodes that detect each other through GRE peer detection exchange 512. The service function node 162 and service function node 172 are peer nodes that detect each other through an R-U-THERE message exchange 514. The peer detection exchanges 510 and 512 are in a different format than peer detection exchange 514, and are typically independent exchanges of low level peer detection request and response messages. The peer detection exchanges 510, 512, and 514 may be repeated at intervals to allow each node to detect neighboring nodes.
  • The performance of the service function C in service function node 172 starts to be degraded at 520. The service function node 172 sends an R-U-THERE message 530 to the service function node 162 and includes a status indicator in the metadata of the Network Service Header in the R-U-THERE message 530. The status indicator indicates that service function C is degraded at service function node 172. The service function node 162 responds with an R-U-THERE-ACK response message 535 to complete the peer detection exchange. In another example, the service function node 162 may initiate the R-U-THERE peer detection exchange instead of service function node 172. In this case, the service function node 172 will include the status indicator in the R-U-THERE-ACK message.
  • The service function node 162 propagates the status information back up the Service Function Path to ensure that the most appropriate action is taken by each node in the Service Function Path. In response to the next GRE peer detection request 540 by the service function node 152, i.e., the previous node in the Service Function Path, the service function node 162 inserts a status indicator into the metadata of the Network Service Header of GRE peer detection reply message 545. The status indicator indicates that the service function C is degraded at the service function node 172. In another example, the service function node 162 may not wait for the service function node 152 to initiate the GRE keepalive exchange and may send its own GRE keepalive message with the status indicator.
  • The service function node 152 propagates the status information up the Service Function Path by sending a GRE peer detection request 550 to the service classifier node 140. The GRE peer detection request 550 includes in a Network Service Header the status indicator that indicates that service function C is degraded at service function node 172. The service classifier 140 completes the GRE peer detection exchange with reply message 555. In another example, the service function node 152 may wait for the service classifier 140 to initiate the GRE keepalive peer detection exchange. In this case, the service function node 152 will insert the status indicator into the GRE peer detection reply message.
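Since the peer detection format may change hop by hop (R-U-THERE toward one peer, GRE keepalive toward the next), the relaying step can be sketched as re-encoding the same Network Service Header status metadata into whichever transport the upstream peer uses. The message layouts below are illustrative dictionaries, not real wire formats:

```python
def relay_status(status, upstream_transport):
    """Re-encode a received status indicator for the upstream peer's
    peer detection format; only the outer message type changes, while
    the NSH metadata carrying the status is preserved."""
    message_type = {"gre": "gre-keepalive", "isakmp": "r-u-there"}
    if upstream_transport not in message_type:
        raise ValueError("unsupported peer detection transport")
    return {"type": message_type[upstream_transport],
            "nsh_metadata": {"status": status}}
```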
  • While the description of FIG. 5 focuses on GRE and R-U-THERE (e.g., Internet Protocol Security (IPSec)) peer detection encapsulation mechanisms, other modes of encapsulation (e.g., Virtual Extensible Local Area Network—Generic Protocol Extension (VxLAN-gpe), Ethernet, etc.) may be used to relay the status of a service function in the Network Service Header. In the context of Cloud Web Security, the techniques presented may be used to relay serviceability of service functions. In one example of Cloud Web Security, GRE or IPSec is used to transport the Network Service Header from a connector to the cloud network. The Cloud Web Security service may relay its status back to the connector. The connector may continue to tunnel into a specified Cloud Web Security data center as long as the Cloud Web Security service is functioning. The connector may switch to a suggested alternative data center if it receives a redirection status indicator from the primary data center. In one example, the address of the alternative data center may be included in the redirection status from the primary data center. Alternatively, the connector may switch to a predetermined secondary data center if the Cloud Web Security service returns an error status indicator.
  • Referring now to FIG. 6, a flowchart is shown for a process 600 by which a service function node notifies a peer node of a degradation in the performance of a service function. In step 610, the service function node detects degradation in a service function (e.g., a partial or complete inability to process packets in a timely manner) at the node. The service function node generates a status indicator that describes the degradation in step 620. In step 630, the service function node inserts the status indicator into metadata of a Network Service Header in a peer detection packet. The peer detection packet may be a GRE keepalive message or a response to a GRE keepalive message from a peer node. In step 640, the service function node forwards the peer detection packet with the status indicator to a neighboring service function node. The neighboring service function node, i.e., a peer node, may be the initiator or the responder in a GRE keepalive exchange.
  • In one example, the peer detection packet encapsulates an inner packet including the Network Service Header. The Network Service Header will typically be used to encapsulate a payload for the Service Function Chaining system and includes an indication of the particular Service Function Path for the payload.
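Steps 610 through 640 of FIG. 6 might look like the following sketch; the node fields, queue threshold, and `send` callback are hypothetical stand-ins for the service function's internal state and transport:

```python
def notify_degradation(node, send):
    """FIG. 6 sketch: 610 detect degradation, 620 generate a status
    indicator, 630 insert it into NSH metadata of a peer detection
    packet, 640 forward to the neighboring service function node."""
    if node["queue_depth"] <= node["queue_limit"]:
        return None                                # 610: performing normally
    status = 500 if node["failed"] else 503        # 620: failure vs. overload
    packet = {"peer_detection": True,              # 630: status in NSH metadata
              "nsh": {"spi": node["spi"], "si": node["si"],
                      "metadata": {"status": status}}}
    send(node["previous_hop"], packet)             # 640: forward to the peer
    return status
```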
  • Referring now to FIG. 7, a flowchart is shown for a process 700 by which a peer node receives a status indicator of a degraded service function and reacts to the status indicator appropriately. In step 710, a peer node receives a peer detection packet from a service function node. In step 720, the peer node detects a status indicator indicating that the performance of a service function at a service function node is degraded. In one example, the service function node with degraded performance may be the peer service function node from which the peer detection packet was received. Alternatively, the service function node with degraded performance may be further down the Service Function Path.
  • If the peer node is not the service classifier for the Service Function Path, as determined in step 730, then the peer node propagates the status of the degraded service function to a previous node in the Service Function Path, e.g., in another peer detection message, in step 740. If the peer node is the service classifier, then the peer node/service classifier adjusts the Service Function Path in step 750. In one example, the service classifier may adjust the Service Function Path by directing subsequent packets in the data flow to a second Service Function Path that does not include the degraded service function node.
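Steps 710 through 750 of FIG. 7 can be sketched the same way; `propagate` and `reroute` stand in for the node's peer detection transport and the classifier's reclassification logic (both hypothetical):

```python
def handle_peer_detection(role, packet, propagate, reroute):
    """FIG. 7 sketch: 710 receive the peer detection packet, 720 detect
    the status indicator, 730 branch on whether this node is the
    classifier, 740 propagate upstream, 750 adjust the path."""
    status = packet.get("nsh", {}).get("metadata", {}).get("status")
    if status is None or status < 400:   # 720: no degradation reported
        return "ok"
    if role != "classifier":             # 730
        propagate(packet)                # 740: relay up the path
        return "propagated"
    reroute()                            # 750: steer new flows elsewhere
    return "rerouted"
```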
  • In summary, the techniques presented herein provide for a mechanism to convey the status of a service function using the Network Service Header of a peer detection message. A service function node that receives a Network Service Header with this status information may then react appropriately, e.g., by altering the Service Function Path, or by picking an alternative service function node to provide the service function. In this way, the liveliness of the service function nodes will be detected. Additionally, the Network Service Header metadata may convey the service function node liveliness to the service classifier, which may change the Service Function Path. Further, the Network Service Header metadata may convey the service function node liveliness to a Service Function Forwarder, which may forward data to a different instance of the service function at a different service function node. In these examples, the status of a service function may be relayed within the data plane without any need for a separate control plane.
  • In one form, the techniques presented herein provide for a computer-implemented method performed at a service function node in a Service Function Path. At a network device or a computing device configured to perform at least one service function on a data flow that follows a service function path, the method comprises detecting degradation in performing the service function. The method further comprises generating a status indicator for the degradation in performing the service function and inserting the status indicator into a peer detection packet. The peer detection packet encapsulates an inner packet with a network service header that indicates the service function path. The computing device forwards the peer detection packet to a neighboring service function device along the service function path.
  • In another form, the techniques presented herein provide for an apparatus comprising a network interface unit and a processor. The network interface unit is configured to communicate with a plurality of (physical or virtual) service function devices in a service function path. The processor is configured to perform at least one service function on a data flow that follows the service function path. The processor is configured to detect degradation in performing the service function and generate a status indicator for the degradation in performing the service function. The processor is further configured to insert the status indicator into a peer detection packet that encapsulates an inner packet. The inner packet includes a network service header that indicates the service function path. The processor is configured to cause the network interface unit to forward the peer detection packet to a neighboring service function along the service function path.
  • In yet another form, the techniques presented herein provide for a computer-implemented method performed at a peer node in a Service Function Path. The method comprises receiving a peer detection packet from a (physical or virtual) service function device in the Service Function Path. The peer detection packet comprises an inner packet with a network service header. The method further comprises detecting a status indicator in the network service header. The status indicator indicates degradation in performing a service function at the service function device. The method also comprises adjusting the service function path to compensate for the degradation in performing the service function at the service function device.
  • In still another form, a non-transitory computer readable storage media is provided that is encoded with instructions that, when executed by a processor, cause the processor to perform any of the methods described and shown herein.
  • The above description is intended by way of example only. Various modifications and structural changes may be made therein without departing from the scope of the concepts described herein and within the scope and range of equivalents of the claims.

Claims (20)

What is claimed is:
1. A method comprising:
at a computing device configured to perform at least one service function on a data flow that follows a service function path, detecting a degradation in performing the service function;
generating a status indicator for the degradation in performing the service function;
inserting the status indicator into a peer detection packet, the peer detection packet encapsulating an inner packet with a header that indicates the service function path; and
forwarding the peer detection packet to a neighboring service function device along the service function path.
2. The method of claim 1, wherein the status indicator is inserted as metadata in a network service header of the inner packet of the peer detection packet.
3. The method of claim 1, further comprising inserting service function statistical information into a network service header, wherein the service function statistical information describes the performance level of the computing device in performing the at least one service function.
4. The method of claim 1, wherein the status indicator indicates one or more of a success status, a redirection status, or a server error status.
5. The method of claim 1, further comprising:
receiving a peer detection request packet from the neighboring service function device; and
replying to the peer detection request packet with the peer detection packet including the status indicator.
6. The method of claim 1, wherein the peer detection packet comprises a Generic Routing Encapsulation (GRE) keepalive notification packet, a GRE response packet, an Internet Security Association and Key Management Protocol (ISAKMP) R-U-THERE message, or an ISAKMP R-U-THERE-ACK message.
7. An apparatus comprising:
a network interface unit configured to communicate with a plurality of service function devices in a service function path; and
a processor configured to:
perform at least one service function on a data flow that follows the service function path;
detect a degradation in performing the service function;
generate a status indicator for the degradation in performing the service function;
insert the status indicator into a peer detection packet, the peer detection packet encapsulating an inner packet with a header that indicates the service function path; and
cause the network interface unit to forward the peer detection packet to a neighboring service function device along the service function path.
8. The apparatus of claim 7, wherein the processor is configured to insert the status indicator as metadata in a network service header of the inner packet of the peer detection packet.
9. The apparatus of claim 7, wherein the status indicator indicates one or more of a success status, a redirection status, or a server error status.
10. The apparatus of claim 7, wherein the processor is further configured to:
receive a peer detection request packet, via the network interface unit, from the neighboring service function device; and
cause the network interface unit to reply to the peer detection request packet with the peer detection packet including the status indicator.
11. The apparatus of claim 7, wherein the peer detection packet comprises a Generic Routing Encapsulation (GRE) keepalive notification packet, a GRE response packet, an Internet Security Association and Key Management Protocol (ISAKMP) R-U-THERE message, or an ISAKMP R-U-THERE-ACK message.
12. A method comprising:
receiving a peer detection packet from a service function device in a service function path, the peer detection packet comprising an inner packet with a header;
detecting a status indicator in the header, the status indicator indicating a degradation in performing a service function at the service function device; and
adjusting the service function path to compensate for the degradation in performing the service function at the service function device.
13. The method of claim 12, wherein the status indicator is detected as metadata in a network service header.
14. The method of claim 12, wherein the status indicator indicates one or more of a success status, a redirection status, or a server error status.
15. The method of claim 14, further comprising:
responsive to the status indicator indicating a redirection status, adjusting the service function path by redirecting future data flows to an alternative data center as indicated in the redirection status; and
responsive to the status indicator indicating a server error status, adjusting the service function path by redirecting the future data flows to a predetermined secondary data center.
16. The method of claim 12, wherein adjusting the service function path comprises sending a new peer detection packet to a previous device in the service function path.
17. The method of claim 16, wherein the previous device is a previous service function device, a service function classifier device, or a service function forwarder device.
18. The method of claim 12, wherein adjusting the service function path comprises classifying a data flow into a new service function path that avoids the service function device.
19. The method of claim 12, wherein adjusting the service function path comprises providing the service function from a different service function device.
20. The method of claim 19, further comprising determining the different service function device based on service function statistical information in the header.
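The method of claims 12 through 15 can be sketched in code: a downstream node receives a peer detection packet, reads a status indicator carried as metadata in a network-service-header-style field, and adjusts where future flows are steered. This is a minimal illustrative sketch, not the patented implementation; all names (`Status`, `PeerDetectionPacket`, `adjust_service_function_path`, the data center labels) are hypothetical assumptions introduced here for clarity.

```python
# Hypothetical sketch of claims 12-15: react to a status indicator
# found in a peer detection packet's header metadata.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Status(Enum):
    SUCCESS = "success"            # claim 14: success status
    REDIRECTION = "redirection"    # claim 14: redirection status
    SERVER_ERROR = "server_error"  # claim 14: server error status

@dataclass
class PeerDetectionPacket:
    status: Status
    # For a redirection status, the alternative data center named
    # in the status indicator (claim 15).
    alt_data_center: Optional[str] = None

def adjust_service_function_path(pkt: PeerDetectionPacket,
                                 secondary_dc: str = "dc-secondary") -> str:
    """Return the data center that future data flows should use."""
    if pkt.status is Status.REDIRECTION and pkt.alt_data_center:
        # Claim 15: redirect future flows to the alternative data
        # center indicated in the redirection status.
        return pkt.alt_data_center
    if pkt.status is Status.SERVER_ERROR:
        # Claim 15: fall back to a predetermined secondary data center.
        return secondary_dc
    # Success status: leave the current service function path unchanged.
    return "dc-primary"
```

For example, a packet carrying a redirection status naming "dc-west" would cause future flows to be classified toward "dc-west", while a server-error status steers them to the preconfigured secondary data center.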
US15/058,259 2016-03-02 2016-03-02 Network service header (nsh) relaying of serviceability of a service function Abandoned US20170257310A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US15/058,259 US20170257310A1 (en) 2016-03-02 2016-03-02 Network service header (nsh) relaying of serviceability of a service function
US16/558,367 US11343178B2 (en) 2016-03-02 2019-09-03 Network service header (NSH) relaying of serviceability of a service function

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US15/058,259 US20170257310A1 (en) 2016-03-02 2016-03-02 Network service header (nsh) relaying of serviceability of a service function

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/558,367 Division US11343178B2 (en) 2016-03-02 2019-09-03 Network service header (NSH) relaying of serviceability of a service function

Publications (1)

Publication Number Publication Date
US20170257310A1 true US20170257310A1 (en) 2017-09-07

Family

ID=59724429

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/058,259 Abandoned US20170257310A1 (en) 2016-03-02 2016-03-02 Network service header (nsh) relaying of serviceability of a service function
US16/558,367 Active US11343178B2 (en) 2016-03-02 2019-09-03 Network service header (NSH) relaying of serviceability of a service function

Family Applications After (1)

Application Number Title Priority Date Filing Date
US16/558,367 Active US11343178B2 (en) 2016-03-02 2019-09-03 Network service header (NSH) relaying of serviceability of a service function

Country Status (1)

Country Link
US (2) US20170257310A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180241680A1 (en) * 2017-01-30 2018-08-23 Sandvine Incorporated Ulc System and method for traffic steering and analysis
US10158565B2 (en) * 2016-08-26 2018-12-18 Cisco Technology, Inc. Network services across non-contiguous subnets of a label switched network separated by a non-label switched network
US20190140950A1 (en) * 2016-07-01 2019-05-09 Huawei Technologies Co., Ltd. Method, apparatus, and system for forwarding packet in service function chaining sfc
US20190222521A1 (en) * 2016-09-30 2019-07-18 Nokia Solutions And Networks Oy Controlling service function chaining
CN112988434A (en) * 2019-12-13 2021-06-18 中国银联股份有限公司 Service fuse, service fusing method and computer-readable storage medium
US11095546B2 (en) * 2016-05-30 2021-08-17 Huawei Technologies Co., Ltd. Network device service quality detection method and apparatus
US11343178B2 (en) 2016-03-02 2022-05-24 Cisco Technology, Inc. Network service header (NSH) relaying of serviceability of a service function

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220150160A1 (en) * 2020-11-06 2022-05-12 Juniper Networks, Inc. Backup service function notification and synchronization

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7895425B2 (en) 2007-08-03 2011-02-22 Cisco Technology, Inc. Operation, administration and maintenance (OAM) in a service insertion architecture (SIA)
KR20100099838A (en) 2009-03-04 2010-09-15 삼성전자주식회사 Apparatus and method for transmitting coexistence beacon protocol packet in corgnitive radio wireless communication system
US20130346593A1 (en) 2012-06-22 2013-12-26 Nokia Corporation Method and apparatus for providing transition to an alternate service based on performance degradation of an initial service
US9444675B2 (en) * 2013-06-07 2016-09-13 Cisco Technology, Inc. Determining the operations performed along a service path/service chain
US9825856B2 (en) 2014-01-06 2017-11-21 Futurewei Technologies, Inc. Service function chaining in a packet network
CN105471725B (en) 2014-08-05 2019-01-22 新华三技术有限公司 Pass through the method for routing and device of autonomous system
US9621520B2 (en) 2015-03-19 2017-04-11 Cisco Technology, Inc. Network service packet header security
US20170257310A1 (en) 2016-03-02 2017-09-07 Cisco Technology, Inc. Network service header (nsh) relaying of serviceability of a service function

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11343178B2 (en) 2016-03-02 2022-05-24 Cisco Technology, Inc. Network service header (NSH) relaying of serviceability of a service function
US11095546B2 (en) * 2016-05-30 2021-08-17 Huawei Technologies Co., Ltd. Network device service quality detection method and apparatus
US20190140950A1 (en) * 2016-07-01 2019-05-09 Huawei Technologies Co., Ltd. Method, apparatus, and system for forwarding packet in service function chaining sfc
US11075839B2 (en) * 2016-07-01 2021-07-27 Huawei Technologies Co., Ltd. Method, apparatus, and system for forwarding packet in service function chaining SFC
US11671364B2 (en) 2016-07-01 2023-06-06 Huawei Technologies Co., Ltd. Method, apparatus, and system for forwarding packet in service function chaining SFC
US10158565B2 (en) * 2016-08-26 2018-12-18 Cisco Technology, Inc. Network services across non-contiguous subnets of a label switched network separated by a non-label switched network
US10728142B2 (en) * 2016-08-26 2020-07-28 Cisco Technology, Inc. Network services across non-contiguous subnets of a label switched network separated by a non-label switched network
US20190222521A1 (en) * 2016-09-30 2019-07-18 Nokia Solutions And Networks Oy Controlling service function chaining
US11671372B2 (en) * 2016-09-30 2023-06-06 Nokia Solutions And Networks Oy Controlling service function chaining
US20180241680A1 (en) * 2017-01-30 2018-08-23 Sandvine Incorporated Ulc System and method for traffic steering and analysis
US10778586B2 (en) * 2017-01-30 2020-09-15 Sandvince Corporation System and method for traffic steering and analysis
CN112988434A (en) * 2019-12-13 2021-06-18 中国银联股份有限公司 Service fuse, service fusing method and computer-readable storage medium

Also Published As

Publication number Publication date
US20200007438A1 (en) 2020-01-02
US11343178B2 (en) 2022-05-24

Similar Documents

Publication Publication Date Title
US11343178B2 (en) Network service header (NSH) relaying of serviceability of a service function
US20200259834A1 (en) Fast heartbeat liveness between packet processing engines using media access control security (macsec) communication
CN109391560B (en) Network congestion notification method, proxy node and computer equipment
US20180123910A1 (en) Minimally invasive monitoring of path quality
CN107078963B (en) Route tracing in virtual extensible local area networks
EP3506565B1 (en) Packet loss detection for user datagram protocol (udp) traffic
US20170054640A1 (en) Device and method for establishing connection in load-balancing system
CN113326228B (en) Message forwarding method, device and equipment based on remote direct data storage
US20220029900A1 (en) Detecting sources of computer network failures
US10027627B2 (en) Context sharing between endpoint device and network security device using in-band communications
US11463345B2 (en) Monitoring BGP routes of a device in a network
US20200076724A1 (en) Path management for segment routing based mobile user-plane using seamless bfd
US20130054817A1 (en) Disaggregated server load balancing
US20230216788A1 (en) Systems and methods for securing network paths
US9106546B1 (en) Explicit congestion notification in mixed fabric network communications
CN111641545B (en) Tunnel detection method and device, equipment and storage medium
Rajaboevich et al. Analysis of methods for measuring available bandwidth and classification of network traffic
CN113452663B (en) Network Service Control Based on Application Characteristics
US8855141B2 (en) Methods, systems, and computer readable media for utilizing metadata to detect user datagram protocol (UDP) packet traffic loss
US9455911B1 (en) In-band centralized control with connection-oriented control protocols
US8660143B2 (en) Data packet interception system
US20230379363A1 (en) Proxy detection systems and methods
KR101892272B1 (en) Apparatus and method of failure classification based on bidirectional forwarding detection protocol
US10469377B2 (en) Service insertion forwarding
KR101466944B1 (en) Method for controlling application data and network device thereof

Legal Events

Date Code Title Description
AS Assignment

Owner name: CISCO TECHNOLOGY, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PATIL, PRASHANTH;REDDY, K. TIRUMALESWAR;STITES, STEVEN RICHARD;AND OTHERS;SIGNING DATES FROM 20160224 TO 20160225;REEL/FRAME:037868/0815

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION