CN108353027B - Software defined network system and method for detecting port fault


Info

Publication number
CN108353027B
CN108353027B (application CN201580084571.0A)
Authority
CN
China
Prior art keywords
network node
control entity
flow control
virtual
node
Prior art date
Legal status
Active
Application number
CN201580084571.0A
Other languages
Chinese (zh)
Other versions
CN108353027A (en)
Inventor
Gal Sagie
Eran Gampel
Current Assignee
Huawei Cloud Computing Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Publication of CN108353027A
Application granted
Publication of CN108353027B


Classifications

    • H04L 45/64 — Routing or path finding of packets in data switching networks using an overlay routing layer
    • H04L 45/28 — Routing or path finding of packets in data switching networks using route fault recovery
    • H04L 45/42 — Routing or path finding of packets in data switching networks; Centralised routing
    • H04L 47/28 — Flow control; Congestion control in relation to timing considerations

Abstract

The present invention relates to a software defined network system 100 comprising a first network node 101 comprising at least one virtual communication port 102. The system 100 further comprises a second network node 103 comprising at least one further virtual communication port 104, wherein the first network node 101 and the second network node 103 are adapted to exchange data via their virtual communication ports 102 and 104. The system 100 further comprises a flow control entity 105 locally positioned between the first network node 101 and the second network node 103, wherein the flow control entity 105 is configured to generate a matching flow 106 for virtual port failure detection.

Description

Software defined network system and method for detecting port fault
Technical Field
The present invention relates to a Software Defined Network (SDN) system, a method for detecting virtual port faults in an SDN system, and a computer program product implementing the method when executed on a computing device. In particular, the present invention suggests detecting a failure of a virtual port in the SDN, preferably using locally implemented mechanisms, when traffic patterns in the virtual port change or traffic stops completely.
Background
SDN is an approach to computer networking that allows network administrators to manage network traffic through abstraction of lower-level functionality. This is achieved by decoupling the system that decides where traffic is sent (the control plane) from the underlying system that forwards the traffic to the selected destination (the data plane). This decoupling simplifies networking. SDN requires some method for the control plane to communicate with the data plane. One such mechanism, OpenFlow, is often equated with SDN, although other mechanisms also fit the concept.
SDN is thus an architecture that is claimed to be directly programmable, agile, dynamic, manageable, cost effective and adaptable, seeking to suit the high-bandwidth, dynamic nature of today's applications. The SDN architecture decouples network control and forwarding functions, making network control directly programmable and abstracting the underlying infrastructure from applications and network services.
The SDN architecture is in particular programmatically configured, which means that SDN lets network operators configure, manage, secure, and optimize network resources very quickly through dynamic, automated SDN programs.
The SDN architecture is in particular centrally managed, meaning that network intelligence is concentrated in software-based SDN control elements, such as forwarding elements, which maintain a global view of the network and appear to applications and policy engines as a single logical switch.
In networked computing, detection of virtual communication port failures is generally required whenever the traffic pattern in a virtual port changes or traffic stops completely, which may indicate that an application has failed and/or become unresponsive. Whenever a process failure is suspected (S-transition), the failure detection should issue a notification. Fault detection affects cluster and network management as well as application deployment and distributed computing.
Implementations of fault detection services exist, but none of them is widely accepted. Existing local solutions are generally specialized implementations that do not consider standardized interfaces or interaction with other services and standards in the global system. Many problems make distributed services unacceptable, and the lack of standard interfaces and of interaction with existing services and infrastructure is certainly one of them.
Detecting such port failures typically relies on a so-called failure detection software agent in the hypervisor element. The concept of a software agent provides a convenient and powerful way to describe complex software entities that can act autonomously to some extent in order to accomplish tasks on behalf of their host. The software agent has to know how to poll application health or poll a specific port state of the application. The agent therefore needs to poll each specific application on a port of the SDN, which is neither generic nor easy to manage. The agent requires a proprietary protocol and creates a large amount of overhead in the SDN. Such agents also lack a global view of the SDN. Using agents is therefore not a viable way to provide a fault detection service for the virtual ports of an SDN.
Another known architecture for providing a port fault detection service is based on the Simple Network Management Protocol (SNMP). SNMP allows interoperability with network nodes and other management services, relies on existing standards, which raises the expectation of wide acceptance, uses standardized monitoring, and allows existing tools to be reused, which makes the fault detection service high quality. SNMP-based fault detection typically installs counters to observe data traffic in the SDN. However, SNMP is often considered an insecure protocol and is therefore usually deactivated in an SDN, so its functionality is not available there. SNMP is also not supported in virtual switches. Moreover, SNMP employs a polling mechanism, which causes large delays and is therefore inconvenient.
Yet another fault detection service is available via OpenFlow and is called Flow Exports; see the HyperFlow mechanism described in Tootoonchian, A., & Ganjali, Y., "HyperFlow: A distributed control plane for OpenFlow", in Proceedings of the 2010 Internet Network Management Conference on Research on Enterprise Networking, pp. 3-3, USENIX Association, 2010. Flow Exports incur a large amount of data overhead and thus consume expensive processing resources. They also offer low fault detection granularity and are difficult to measure and maintain when better latency is required.
Disclosure of Invention
In view of the above-mentioned problems and disadvantages, the present invention aims to improve the prior art, in particular the fault detection services described above. The present invention aims to decentralize fault detection and to improve the efficiency and speed of the fault detection service. It should be possible to adapt the failure detection service to specific local traffic patterns and network nodes in order to further improve it. The present invention seeks to avoid incurring a significant amount of overhead; a non-intrusive method is needed. Accordingly, the present invention is directed to improving overall system performance and reducing system latency, and to overcoming all of the above-mentioned drawbacks.
The object of the invention is achieved by the solution presented in the attached independent claims. Advantageous implementations of the invention are further defined in the dependent claims.
In particular, the present invention preferably uses local failure detection on the virtual ports of a network node to determine whether a virtual port is unresponsive or unloaded. The inventive local flow control entity performs a learning analysis in a first stage to learn the flow characteristics and determine the optimal granularity. Granularity here relates to parallel computation and means the amount of computation relative to communication, i.e. the ratio of computation to traffic. Any abnormal behavior, i.e. data traffic not passing to/from the network node, then triggers a notification.
A first aspect of the invention provides an SDN system comprising a first network node comprising at least one virtual communication port. The system of the invention further comprises a second network node comprising at least one further virtual communication port, wherein the first network node and said second network node are adapted to exchange data via their virtual communication ports. The system of the present invention further comprises a flow control entity locally disposed between the first network node and the second network node, wherein the flow control entity is configured to generate a matching flow for virtual port failure detection.
In the system of the first aspect, a new type of local flow control entity is introduced. The entity is locally located between the first network node and the second network node of the system. The entity is preferably a virtual entity in the SDN. The entity may be provisioned on the SDN controller as an SDN application.
The flow control entity controls the first network node and the second network node in a local manner. Whether the inventive flow control entity is configured manually or automatically depends on the way the SDN is set up and on the SDN controller employed. The speed and efficiency of data exchange between the first node and the second node is thus not negatively affected, while port failure detection is greatly improved.
The network node may be a connection point, a redistribution point or a communication endpoint of a terminal device. In data communication, a network node may be a Data Communication Equipment (DCE), such as a modem, hub, bridge or switch, or a Data Terminal Equipment (DTE), such as a digital telephone handset, a printer, or a host computer such as a router, workstation, or server. The definition of a network node depends on the network and protocol layers involved. The network node is preferably an active electronic device attached to the system, capable of creating, receiving or transmitting information through physical, logical or virtual communication channels. Passive distribution points such as patch panels or other distribution frames are therefore not network nodes in the sense of the present invention.
Preferably, the network node is a virtual switch, such as a virtual multilayer network switch intended to enable effective network automation through programmatic extension while supporting standard management interfaces and protocols such as NetFlow, sFlow, SPAN, RSPAN, CLI, LACP, 802.1ag and OpenFlow; or a virtual machine, which is an emulation of a particular computer system that operates based on the computer architecture and functionality of a real or hypothetical computer. Their implementation may involve dedicated hardware, software or a combination of both.
The matching flow of the present invention is a data entry generated by said flow control entity and input to at least one network node to observe the (virtual) communication port behavior. The data entry preferably includes a virtual communication port number and/or a network address. The matching flow is used to detect a complete loss of traffic and thus port failure. With this matching flow, a loss of data traffic between the virtual port of the first network node and another virtual port of the second network node may be detected, preferably based on an idle timeout value.
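For illustration only, such a matching-flow data entry could be represented, for example, as in the following minimal Python sketch; the field names (virtual port, destination address, idle timeout, priority) merely mirror the description above and are assumptions rather than a prescribed data layout:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only: the field names are assumptions chosen to mirror the
# description (virtual communication port number, network address, idle timeout);
# no concrete data layout is prescribed by the present description.
@dataclass
class MatchingFlow:
    virtual_port: int                  # virtual communication port to observe
    dst_address: Optional[str] = None  # destination MAC or IP address of the observed node
    idle_timeout: int = 10             # seconds without matching traffic before expiry
    priority: int = 65535              # highest priority, but without any forwarding action

# Example: observe traffic towards the virtual communication port of the first network node.
flow = MatchingFlow(virtual_port=1, dst_address="10.0.0.11", idle_timeout=10)
```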
The flow control entity has a full coverage of the topology. It can therefore see the packet path and can query the data traffic locally along that path. The flow control entity may be used to detect virtual port failures and loads. Such an entity is typically supported in the virtual switch and is easily integrated with the SDN controller. The flow control entity installs flows without further action. To adapt its parameters, it can perform analysis and learning scenarios. An analysis of traffic patterns and application ports can thus be obtained easily, and the optimal granularity of the failure detection mechanism is achieved, since problems are found closest to the virtual communication port of the network node.
Port failure detection is preferably responsible for the detection of any network node failure or crash in the SDN. The failure detection issues a notification whenever a connection failure is suspected (S-transition) or is again trusted as not failing (T-transition). Optionally, some information about the level of suspicion is attached to these notifications. In SDNs, a failure on a virtual port is typically detected when the traffic pattern in the virtual port changes or stops completely, which means that an application in a network node fails or does not respond.
A communication port in an SDN is preferably an endpoint in an operating system for a variety of communications. It is preferably a virtual construct identifying a service or process. Each network node may include multiple virtual communication ports to communicate with different network nodes, or with the same network node through different applications.
In a first implementation form of the system according to the first aspect, the system is a virtual overlay system in a system further comprising a physical system topology and a software infrastructure having a logical system topology.
A virtual overlay network is a computer network that is built on top of another network. Network nodes in a virtual overlay network can be considered to be connected by virtual or logical links, each of which may correspond to a path across a number of physical links in the underlying (physical and/or logical) network. Distributed systems such as peer-to-peer networks and client-server applications are examples of virtual overlay networks. The functionality of the system is thereby greatly improved, and the security of the connections is ensured through the logical and virtual entities.
In an SDN, the network may be divided into three different layers: an application plane, a control plane, and a data plane. The application plane may include SDN applications that communicate their network requirements and desired network behavior to the SDN control plane, i.e. the SDN controller, in an explicit, direct, and programmatic manner. The SDN controller is a logically centralized entity responsible for (i) translating requirements from the SDN applications to the SDN data plane and (ii) providing the SDN applications with an abstract/virtual overlay view of the network. The SDN controller is decoupled from the SDN data plane. The SDN data plane, e.g. a data path, is a logical network topology controlled by the SDN controller. This logical representation may include all or a subset of the physical resources, and the SDN data plane is thus the logical network device that forwards the actual data traffic over the physical system topology.
The OpenFlow protocol may be used for this communication, as the SDN control plane standardizes the way network nodes are programmed and configured. OpenFlow controls traffic flows from a controller for a plurality of switches and acts, in a centralized manner, as an interface between the controller and the physical or virtual network elements. An SDN controller for traffic and networking management, such as a forwarding element, is therefore employed in the SDN of the present invention.
In a second implementation form of the system according to the first aspect as such or according to the first implementation form of the first aspect, the matching flow generated by the local flow control entity is input as a flow entry to a forwarding element of the system.
The SDN controller, such as the forwarding element, is preferably a centralized entity in an OpenFlow-compliant network system. OpenFlow is a communication protocol that gives access over the network to the forwarding plane of a network switch or router. OpenFlow allows an SDN controller to determine the path of network packets through the system of switches.
In a third implementation form of the system according to the first aspect as such or any of the preceding implementation forms of the first aspect, the matching flow generated by the local flow control entity includes a destination address of the first network node or the second network node, and further includes a connection test packet.
Each network node in an SDN has a physical, logical, and/or virtual address, typically one for each communication port that the network node owns.
For example, the address may be a Media Access Control address (MAC address), which is a unique identifier allocated to a network port for communication on a physical network segment. The MAC address is used as a network address for most IEEE 802 network technologies such as ethernet and WiFi. Logically, the MAC address is used in the MAC protocol sublayer of the OSI reference model.
The address may also be an Internet Protocol address (IP address), which is a digital label assigned to each network node participating in the SDN that uses an Internet Protocol for communication. The IP address fulfills two main functions: host or network interface identification and location addressing.
The flow control entity is configured to detect a possible failure scenario based on the destination address when it is determined that the observed virtual communication port has no traffic.
The flow control entity can inject traffic at various points along the path between the first network node and the second network node and can verify, via the SDN application, that connection test packets reach the appropriate destination side.
In a fourth implementation form of the system according to the first aspect as such or any of the preceding implementation forms of the first aspect, the flow control entity is located locally on the first node side to detect a virtual port failure of the first node or the second node. This implementation facilitates detecting a port failure on the first node side of the path between the first node and the second node. Fault detection may apply to incoming traffic, outgoing traffic, or both. In some cases it may be advantageous to observe traffic from the first node, for example when the first node is the main traffic generator and a port failure is most likely to occur there.
Additionally or alternatively, the flow control entity is locally placed at the second node side to detect a virtual port failure of the first node or the second node. This implementation facilitates detecting a port failure on the second node side of the path between the first node and the second node. Fault detection may apply to incoming traffic, outgoing traffic, or both. In some cases it may be advantageous to observe traffic towards the second node, for example when the second node is the main traffic consumer and a port failure is most likely to occur there.
Additionally or alternatively, the flow control entity is locally placed in a virtual element between the first node and the second node to detect a virtual port failure of the first node or the second node. This implementation helps to detect port failures caused by path problems. Fault detection may apply to incoming traffic, outgoing traffic, or both. In some cases it may be advantageous to observe traffic on the path between the first node and the second node, for example when the virtual element is the main traffic distributor and a port failure is most likely to occur there.
In a fifth implementation form of the system according to the first aspect as such or any of the preceding implementation forms of the first aspect, the matching flow generated by the local flow control entity comprises a default idle timeout value, and a virtual port failure is detected when the idle timeout value expires.
The idle timeout is a network parameter relating to an event designed to occur at the end of a predetermined run time. A timing element is started to observe the data traffic pattern. If the timer value becomes greater than or equal to the default timeout value, the local flow control entity assumes that data traffic between the first node and the second node has been lost. The failure detection is thus based on strict timing rules, and upon detecting a port failure the flow control entity generates a notification message.
The default timeout value may be set statically by a user of the system. If a learning procedure is used, the flow control entity polls the matching flow and identifies a minimum timeout for data traffic travelling from the first node to the second node. The minimum timeout is identified by learning the normal traffic pattern and the peak value of the idle time. Once the minimum idle timeout has been determined, failure detection can be applied to the matching flow in the second phase of the failure detection service.
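A possible realization of this learning procedure is sketched below; it is purely illustrative, written in Python, and assumes a hypothetical poll() callable that returns the current idle time of the matching flow (such a polling interface is an assumption, not a standardized API):

```python
import time

def learn_idle_timeout(matching_flow, poll, rounds=100, interval=0.5, margin=1.5):
    """Learning-phase sketch: repeatedly poll the matching flow, record the peak
    idle time observed under normal traffic, and derive the default idle timeout
    from that peak plus a safety margin.

    poll(matching_flow) is a hypothetical callable returning the current idle time
    of the flow in seconds; it is not part of any standardized API.
    """
    peak_idle = 0.0
    for _ in range(rounds):
        peak_idle = max(peak_idle, poll(matching_flow))
        time.sleep(interval)
    return peak_idle * margin

# Only after this minimum idle timeout has been installed in the matching flow does
# an expiry count as a suspected virtual port failure (second phase of the service).
```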
In a sixth implementation form of the system according to the first aspect as such or any of the preceding implementation forms of the first aspect, the local flow control entity adjusts the default idle timeout value in dependence on a specific data pattern of the system.
Thus, when a matching flow expires, the flow control entity adjusts the timeout value and performs a learning procedure that includes learning the behavior and traffic pattern of the application port. The flow control entity thereby learns the traffic pattern and can detect a failure according to each specific scenario between specific network nodes. It thus becomes possible to detect a failure in which a network node stops transmitting data traffic from its virtual communication port due to a load state or an anomaly. An unresponsive application may be detected, or the absence of incoming traffic may be detected, for example because the network node has been shut down.
Several ways of detecting a failure on a virtual port are thus described: for example, it can be detected that no traffic occurs at all, or that some applications in the network node have lost their traffic, e.g. distinguished per OSI layer.
In a seventh implementation form of the system according to the first aspect as such or any of the preceding implementation forms of the first aspect, the matching flow generated by the local flow control entity is used to measure a data exchange delay time value between the first node and the second node. Thus, a traffic pattern between the first node and the second node may be identified. Thus, the learning period of the flow control entity is provided before the actual fault detection service. During this learning period the flow control entity is always polling for matching flows and identifying a minimum timeout value, so it can detect failures by learning the general traffic pattern and identifying the peak of idle timeout.
In an eighth implementation form of the system according to the first aspect as such or any of the preceding implementation forms of the first aspect, the matching flow generated by the local flow control entity is used to detect a traffic loss, or is used to input a connection test packet between the first node and the second node at a specific layer of the Open Systems Interconnection (OSI) model.
The connection test packet is a packet configured by the flow control entity of the SDN. The packet is configured for the OSI layer at which the port behavior should be observed.
Preferably, the connection test message is an Address Resolution Protocol (ARP) message. ARP is a telecommunications protocol used to resolve network layer addresses to link layer addresses, a key function of multiple access networks. The ARP message is used to detect connectivity on OSI layer-2 between said first node and said second node. If the flow control entity does not get a response to ARP, it can be assumed that there is no traffic between the first node and the second node, e.g. because one of the nodes is off or busy or in a failure mode.
Preferably, the connection test message is a Ping message. Ping is a computer network management software tool for testing the reachability of network nodes in SDN and measuring the round trip time for messages to be sent from a source node to a destination node and back. The Ping message is used to detect connectivity on OSI layer-3 between the first node and the second node.
Preferably, the connection test message is a Hypertext Transfer Protocol (HTTP) message. HTTP is an application protocol for distributed, collaborative, hypermedia information systems. The HTTP message is preferably an HTTP request message. The flow control entity checks the response to the HTTP request based on the HTTP response message. The HTTP message is used to detect connectivity at OSI layer-4 between the first node and the second node.
Preferably, the connection test messages comprise several connection test messages of different OSI layers, in order to detect the port behavior and its connectivity on different OSI layers. This makes it possible to identify whether a port failure occurred only at a particular OSI layer, at which particular OSI layer it occurred, or whether the observed application responds at each of the observed OSI layers.
The flow control entity is used to inject traffic at various points along the path and to verify, via the SDN application, that the messages arrive on the other side. It can thus pinpoint where traffic has been lost and can test various traffic types and patterns.
In the event of a failure, the observed application stops sending data traffic, for whatever reason, and this is detected by the failure detection service. Since the network node may still send management data or other data traffic through other applications, it is important to observe the state of the entire network port. This may be done via lower-layer connection test messages, e.g. Ping or ARP, in the matching flow of the present invention.
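The following sketch illustrates how connection test packets of different OSI layers could be generated; it assumes the Scapy and requests Python libraries purely as example tooling, and raw-socket privileges, none of which are mandated by the present description:

```python
from scapy.all import ARP, Ether, IP, ICMP, srp, sr1  # example tooling (assumption)
import requests


def probe_layers(dst_ip, http_port=80, timeout=2):
    """Send connection test packets at increasing OSI layers and report which of
    them obtained a response, e.g. {"l2": True, "l3": True, "l4": False}."""
    result = {}

    # Layer 2: ARP request for the destination address (broadcast on the segment).
    answered, _ = srp(Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=dst_ip),
                      timeout=timeout, verbose=False)
    result["l2"] = len(answered) > 0

    # Layer 3: ICMP echo request (Ping).
    reply = sr1(IP(dst=dst_ip) / ICMP(), timeout=timeout, verbose=False)
    result["l3"] = reply is not None

    # Layer 4 (as the term is used in this description): HTTP request to the observed port.
    try:
        requests.get(f"http://{dst_ip}:{http_port}/", timeout=timeout)
        result["l4"] = True
    except requests.RequestException:
        result["l4"] = False

    return result
```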
In a ninth implementation form of the system according to the first aspect as such or any of the preceding implementation forms of the first aspect, the local flow control entity is configured to associate the physical system topology with the software infrastructure having the logical system topology.
Thus, failures detected by the above-described failure detection scheme can be traced to the physical or logical entities of the virtual system. If a failure can only result from a physical or from a logical problem, the flow control entity reports this accordingly.
In a tenth implementation form of the system according to the first aspect as such or any of the preceding implementation forms of the first aspect, the local flow control entity is configured to detect a misconfiguration of the system.
Preferably, a misconfiguration is an inappropriate network parameter that prevents a certain service from being applied properly. For example, configuration identification includes setting and maintaining baselines that define the SDN or subsystem architecture, components, and any deployment that occurs at any point in time. It is the basis for identifying, recording, and subsequently tracking changes to any part of the system through design, development, testing, and final delivery. It also includes evaluation of change requests and change plans and their subsequent approval or disapproval. Configuration identification is the process of controlling modifications to system design, hardware, firmware, software, and files. It also includes the process of recording and reporting configuration item descriptions. Once a problem is suspected, the baseline configuration and approved modifications can be verified quickly. It also includes independent review of hardware and software to assess compliance with established performance requirements, applicable standards, and the assigned functional product baseline.
In an eleventh implementation form of the system according to the first aspect as such or any of the preceding implementation forms of the first aspect, the local flow control entity is configured to input traffic related to an Object Access Method (OAM) into the matching flow.
OAM is an access method intended for storing large numbers of large files, such as pictures. These large files may cause problems: the default idle timeout may expire before such data transfers complete, or network resources may be blocked in an unexpected manner. After the matching flow has been input with the OAM-related traffic, the SDN application can control the packets so that they reach the other suitable network nodes.
A second aspect of the present invention provides a method for detecting a virtual port failure in a software-defined networking system. The method comprises the following steps: generating a matching flow by a local flow control entity, wherein the flow control entity is disposed between a first network node and a second network node in the system; inputting the generated matching flow in a forwarding element of the system; establishing the matching flow between the first node and the second node, wherein the matching flow comprises a connection test packet using a default idle timeout value; detecting a virtual port failure upon expiration of the timeout value.
In a first implementation form of the method according to the second aspect, the method further comprises the following step: if the timeout value has not expired when the connection test packet is received, reporting that the connection test packet was successfully received.
In a second implementation form of the method according to the second aspect as such or according to the first implementation form of the second aspect, the system is a virtual overlay system in a system further comprising a physical system topology and a software infrastructure having a logical system topology.
In a third implementation form of the method according to the second aspect as such or any of the preceding implementation forms of the second aspect, the flow control entity is located locally on the first node side to detect a virtual port failure of the first node or the second node; and/or the flow control entity is locally placed at the second node side to detect a virtual port failure of the first node or the second node; and/or the flow control entity is locally placed in a virtual element between the first node and the second node to detect a virtual port failure of the first node or the second node.
In a fourth implementation form of the method according to the second aspect as such or any of the preceding implementation forms of the second aspect, a virtual port failure is detected upon expiration of the idle timeout value.
In a fifth implementation form of the method according to the second aspect as such or any of the preceding implementation forms of the second aspect, the local flow control entity adjusts the default idle timeout value in dependence on a specific data pattern of the system.
In a sixth implementation form of the method according to any of the preceding implementation forms of the second aspect, the matching flow generated by the local flow control entity is used to measure a data exchange delay time value between the first node and the second node.
In a seventh implementation form of the method according to the second aspect as such or any of the preceding implementation forms of the second aspect, the matching flow generated by the local flow control entity is used to detect a traffic loss, or is used to input a connection test packet between the first node and the second node at a specific layer in an open system interconnection model.
In an eighth implementation form of the method according to the second aspect as such or any of the preceding implementation forms of the second aspect, the local flow control entity is configured to associate the physical system topology with the software infrastructure having the logical system topology.
In a ninth implementation form of the method according to the second aspect as such or any of the preceding implementation forms of the second aspect, the local flow control entity is configured to input traffic related to the object access method into the matching flow.
In a tenth implementation form of the method according to the second aspect as such or any of the preceding implementation forms of the second aspect, the local flow control entity is configured to detect a faulty configuration of the system.
The method of the second aspect achieves all the above-mentioned advantages of the system of the first aspect.
A third aspect of the present invention provides a computer program product for, when executed on a computing device, implementing the method of detecting a virtual port failure according to the second aspect and any of its implementation forms.
All the advantages thereof are achieved by the computer program product implementing the method.
It should be noted that all devices, elements, units and means described in the present application may be implemented in software or hardware elements or any combination thereof. All steps performed by the various entities described in the present application, as well as the functionalities described to be performed by the various entities, are intended to mean that the respective entity is adapted to or configured to perform the respective steps and functionalities. Even if, in the following description of specific embodiments, a specific functionality or step to be performed by an entity is not reflected in the description of a specific detailed element of that entity which performs that specific step or function, it should be clear to the skilled person that these methods and functionalities can be implemented in respective software or hardware elements, or any kind of combination thereof.
Drawings
The foregoing aspects and many of the attendant advantages of this invention will become more readily appreciated as the same become better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings, wherein:
FIG. 1 illustrates the basic system of an embodiment of the present invention;
FIG. 2 shows a system of a first embodiment of the invention;
FIG. 3 shows a system of a second embodiment of the invention;
FIG. 4 shows a system of a third embodiment of the invention;
FIG. 5 shows a system of a fourth embodiment of the invention;
FIG. 6 shows a method flow diagram of an embodiment of the invention.
Detailed Description
Fig. 1 shows a basic system 100 of an embodiment of the invention. System 100 is an SDN system 100 comprising a first network node 101 comprising at least one virtual communication port 102. The first network node 101 may be a DCE, e.g., a modem, hub, bridge, or switch; or may be a DTE such as a digital telephone handset, printer or higher level computer such as a router, workstation or server.
SDN system 100 further comprises a second network node 103 comprising at least one further virtual communication port 104. The second network node 103 may be a DCE, e.g. a modem, hub, bridge or switch; or may be a DTE, such as a digital telephone handset, a printer, or a host computer such as a router, workstation, or server.
The first network node 101 and the second network node 103 are adapted to exchange data via their virtual communication ports 102 and 104. Virtual communication ports 102 and 104 are dedicated network connections for each network node in SDN 100. They provide network nodes 101 and 103, and the applications running on them, with all the performance, reliability and security expected from physical communication ports, but with added virtual flexibility. Unlike physical ports, virtual communication ports 102 and 104 are customized according to the requirements of network nodes 101 and 103, respectively. Bandwidth may be freely allocated between network nodes 101 and 103, as may quality of service (QoS) parameters. Network nodes 101 and 103 each obtain exactly the resources they require. First network node 101 and second network node 103 may be Virtual Switches (VS) in SDN 100.
Virtual communication ports 102 and 104 provide a powerful way for the SDN to control network node behavior without proprietary network node extensions or agents. SDN 100 actually requires less network node intelligence because it controls the physical access points to which network nodes 101 and 103 are connected and allocates resources among virtual communication ports 102 and 104. Instead of expecting network nodes 101 and 103 to adapt to SDN 100, virtual ports 102 and 104 adapt to network nodes 101 and 103, respectively. Each virtual communication port 102 and 104 provides an additional line of defense. Nodes 101 and 103 are each defined within their respective virtual networks, and access rights are set to match node capabilities to user roles. The access rights are customized on a per-user firewall basis, protecting the SDN against privilege creep and insider threats.
SDN system 100 further comprises a flow control entity 105, locally interposed between the first node 101 and the second node 103. The flow control entity 105 is used to generate a matching flow 106 for virtual port failure detection.
Local flow control entity 105 is used to configure the matching flows for virtual ports 102 and 104, respectively, in system 100. The matching flow may be applied to incoming or outgoing traffic; it is given the highest priority but performs no action. The matching flow 106 includes a default idle timeout value that may also be changed during the learning period. When matching flow 106 expires, a failure detection message is generated, informing SDN 100 that a virtual port failure has been detected.
Flow control entity 105 learns the traffic pattern and can detect a fault based on it. The default timeout value in the matching flow 106 may therefore vary with the data traffic pattern found by the local flow control entity 105 in the learning procedure. For example, in the learning phase, flow control entity 105 may adjust the timeout value when matching flow 106 expires even though data traffic has been delivered. The adjustment is repeated a predetermined number of times to find the minimum timeout value that needs to be set for data transmission with a loss pattern typical of the communication scenario, based on the application port and its traffic pattern.
Flow control entity 105 detects a failure when transmission on ports 102 and 104 stops due to load or an anomaly. The flow control entity 105 detects an unresponsive application based on the detected application port failure. For example, when network node 101 or 103 is switched off, the flow control entity 105 detects that no traffic is coming in. The granularity remains good even compared with other detection metrics.
The flow control entity 105 has a full coverage of the topology, meaning that the entity can see the packet path between the first node 101 and the second node 103 and can query packets along the path. The flow control entity 105 may inject OAM traffic at various points along the path between the first node 101 and the second node 103 to verify, via the SDN application, that packets arrive at the respective other side. Flow control entity 105 can pinpoint where traffic has been lost and can test various traffic types and patterns. The flow control entity 105 is aware of the Content Management System (CMS) configuration. A content management system is a computer application that allows content to be distributed, edited, modified, organized, deleted and maintained from a central interface. Flow control entity 105 may detect an incorrect configuration, such as an incorrect firewall, load balancer, Dynamic Host Configuration Protocol (DHCP), Network Address Translation (NAT) and/or Domain Name System (DNS) traffic configuration.
The matching flow 106 comprises the destination address of the network node 101 or 103 to be observed. The destination address may be, for example, an OpenFlow port number; a loss of traffic is thereby detected. Additionally or alternatively, the destination address may be a TCP/UDP port number of the traffic, which may be configured through a cloud/service orchestration. A learning mechanism using Deep Packet Inspection (DPI) can be applied to detect the services behind network nodes 101 and 103, and the matching flow 106 can then be applied automatically.
Fig. 2 shows a system 100 according to a first embodiment of the invention, in which the system 100 is a virtual overlay system 203. The virtual overlay system 203 overlays the logical topology 202 and the physical topology 201. Flow control entity 105 may associate the physical topology 201 with the software infrastructure having the logical topology 202 through the inventive port failure detection service. The matching flow 106 in the flow control entity 105 is used to detect different fault scenarios.
The matching flow 106 detects that no traffic is present on the observed ports 102 and 104. This is accomplished by the default idle timeout value. When the installation of the matching flow 106 completes, a timer is started. If no traffic is seen on the observed ports 102 and 104 within the set default time, the matching flow 106 indicates that the timeout value has expired. A port failure is thus detected when the timer reaches a value greater than or equal to the timeout value.
Furthermore, the matching flow 106 is used to investigate at which OSI layer a port is inactive. For this purpose, at least one connection test packet is preferably configured for the matching flow 106.
The first connection test message may include an ARP message. If no response is obtained based on sending the ARP message, the flow control entity 105 detects a port failure at data link layer 2 of the OSI model.
Another connection test message may include a Ping message. If no response is obtained based on sending the Ping message, the flow control entity 105 detects a port failure at network layer 3 of the OSI model.
Another connection test message may comprise an HTTP message. If no response is received based on sending the HTTP request, the flow control entity 105 detects a port failure at the transport layer 4 of the OSI model.
In a preferred embodiment, the connection test messages are combined. The matching flow 106 then comprises connection test messages configured by the flow control entity 105 for a plurality of different OSI model layers, in order to observe whether any one of the connection test messages fails to obtain a response. It may then turn out that ports 102 and 104 provide connectivity on data link layer 2, because the flow control entity 105 has obtained an ARP response; that ports 102 and 104 also provide connectivity at network layer 3, because the flow control entity 105 has observed that network nodes 101 and 103 responded to the Ping; but that ports 102 and 104 do not provide connectivity at transport layer 4, because network nodes 101 and 103 do not respond to the HTTP request. In that case, flow control entity 105 detects a port failure in port 102 or port 104 and classifies the failure as a layer-4 failure.
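The combined evaluation can be summarized in a small sketch such as the following, which maps the probe results (e.g. as returned by the probe_layers() sketch given earlier) to the lowest OSI layer at which connectivity is lost; the function and key names are illustrative assumptions:

```python
def classify_failure(result):
    """Map combined probe results (e.g. the dictionary returned by the earlier
    probe_layers() sketch) to the lowest OSI layer at which connectivity is lost;
    None means that all observed layers responded."""
    if not result.get("l2"):
        return "layer-2 failure (no ARP response)"
    if not result.get("l3"):
        return "layer-3 failure (no Ping response)"
    if not result.get("l4"):
        return "layer-4 failure (no HTTP response)"
    return None

# In the scenario described above, {"l2": True, "l3": True, "l4": False} yields
# "layer-4 failure (no HTTP response)".
```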
Fig. 3 shows a system 100 according to a second embodiment of the invention. Here, the components of the physical topology 201 and the logical topology 202 are described in more detail.
Physical topology 201 may include a first computing device 300 and a second computing device 312. These computing devices 300 and 312 are physically interconnected by a physical network 313, which may be a wired connection or a wireless connection. Computing devices 300 and 312 may be any physical network node, such as a host computer, a client, a server, a handset, or a distribution point in network 201, such as a physical switch or a physical router. Computing devices 300 and 312 may be network nodes 101 and 103 as described above.
The logical topology 202 is shown as a pipeline. A first Virtual Machine (VM) 301 is installed and runs in the first computing device 300 and is logically connected to a second VM 311, which is installed and runs on the second computing device 312.
A plurality of logical and virtual instances are placed on logical paths between the first VM 301 and the second VM 311 of the topology 202.
For example, Security Groups (SGs) 302 and 310 are installed to control traffic between VMs 301 and 311 and particular subnets. Unlike firewalls, which are controlled at the Operating System (OS) level, SGs are controlled at the network level, independently of the OS running in VMs 301 and 311. In SGs 302 and 310, access control rules are defined, such as source IP address, destination IP address, port and protocol, together with actions such as allow or deny.
For example, Virtual Switches (VS) 303 and 309 are installed to control traffic between VMs 301 and 311. VS 303 and 309 are software programs that allow VMs 301 and 311 to communicate with each other and that can inspect different packets. VS 303 and 309 may be understood as the virtual ports 102 and 104 described above.
For example, virtual router 304 is installed to route data traffic between VMs 301 and 311. For example, a Network Access Control List (NACL) 305 is provided in a logical path between the VMs 301 and 311. For example, a firewall as a service (FWaaS) 306 is provided in a logical path between the VMs 301 and 311. For example, load balance as a service (LBaaS) 307 is provided in the logical path between the VMs 301 and 311. For example, a Virtual Private Network (VPN) 307 is provided in the logical path between the VMs 301 and 311.
The instances 300 to 312 are merely examples. More logical and virtual instances may be used in the logical topology 202 and the virtual overlay system 203.
In the virtual overlay system 203, some of the instances 300 to 312 are referenced as previously described: the first node 101 corresponds to the first computing device 300, the second node 103 to the second computing device 312, virtual port 102 to the first VS 303, and virtual port 104 to the second VS 309. Also shown are the flow control entity 105 and the matching flow 106. Further, a router 304 is interposed between the first node 101 and the second node 103. The router 304 is equipped with FWaaS 306.
The first VM 301 is installed and run on a first computing device 300, hereinafter referred to as a first node 101. The virtual port 102, i.e., the first VS303, includes an SG 302 and a tunnel bridge (not referenced).
Second VM 311 is installed and run on the second computing device 312, hereinafter referred to as the second node 103. The virtual port 104, i.e. the second VS 309, comprises an LBaaS 307 and also a tunnel bridge (not referenced).
An example of port failure detection is now described with the specific embodiment of fig. 3. First, the local flow control entity 105 generates a matching flow 106 comprising connection test messages, e.g. Ping messages, and the destination address of the second VM 311. The generated matching flow 106 is input by the flow control entity 105 to the first VM 301.
Second, VM 311 generates an installation success message to flow control entity 105, indicating that the rule for observing the incoming connection test packet from VM 301 has been installed in the second node 103 hosting VM 311.
Third, the connection test packet itself is input at the first node 101 hosting VM 301.
Finally, the flow control entity 105 detects whether the VM 311 on the second node 103 has responded to the connection test packet of the VM 301 on the first node 101. If the Ping message has been used as a connection test message, the Ping response should be detected by the local flow control entity 105 and a notification message should be generated.
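The four steps of this example may be summarized in the following illustrative sketch; flow_ctrl stands for the local flow control entity 105, and generate_matching_flow(), install_matching_flow(), await_install_ack(), inject_probe(), await_response() and notify() are hypothetical helper names, not a standardized interface:

```python
def detect_port_failure(flow_ctrl, first_node, second_node, dst_ip, idle_timeout=5):
    """Illustrative sketch of the detection sequence of fig. 3; all methods on
    flow_ctrl (the local flow control entity 105) are hypothetical helpers."""
    # First: generate the matching flow 106 with a Ping connection test message
    # and the destination address of the second VM 311, and input it at the first node.
    flow = flow_ctrl.generate_matching_flow(dst=dst_ip, probe="ping",
                                            idle_timeout=idle_timeout)
    flow_ctrl.install_matching_flow(first_node, flow)

    # Second: the second node 103 confirms that the rule for observing the incoming
    # connection test packet from VM 301 has been installed.
    if not flow_ctrl.await_install_ack(second_node, flow):
        raise RuntimeError("matching flow could not be installed on the second node")

    # Third: the connection test packet itself is input at the first node 101.
    flow_ctrl.inject_probe(first_node, flow)

    # Finally: a Ping response seen before the idle timeout expires means no failure;
    # otherwise a virtual port failure notification is generated.
    if flow_ctrl.await_response(flow, timeout=idle_timeout):
        flow_ctrl.notify("connection test packet received successfully")
        return False
    flow_ctrl.notify("virtual port failure detected between nodes 101 and 103")
    return True
```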
Fig. 4 shows a system 100 according to a third embodiment of the invention. In contrast to fig. 3, fig. 4 shows the logical and virtual overlay system 203 together with the destination addresses of the corresponding instances. To avoid unnecessary repetition, only the differences between fig. 3 and fig. 4 are described below. The following IPv4 addresses are merely examples of destination addresses; additionally or alternatively, IPv6 destination addresses or MAC addresses may be used.
The router 304 is virtually connected to the WAN for access via IPv4 address 172.16.150.31. Router 304 is virtually accessed by VS303 via IPv4 address 10.0.0.1. VM 301 is also accessed by VS303 through IPv4 address 10.0.0.11.
Router 304 is also virtually connected to VS309 for access via IPv4 address 10.30.0.1. VM 311 is also virtually accessed by VS309 via IPv4 address 10.30.0.21.
The smart top-of-rack switches VS 303 and VS 309 may be managed by flow control entity 105 and benefit from the same fault detection mechanism. The delay can be measured, and all physical paths between the first node 101 and the second node 103 can be covered by the fault detection and the connection test packets of the matching flow 106. In addition, pluggable alarm mechanisms may be introduced.
Furthermore, wrong configurations can now be easily detected. In particular, the traffic configuration of FWaaS 306, LBaaS 307, DHCP, NAT and/or DNS may be detected by the inventive failure detection mechanism. At the same time, misconfigurations of the SG and the router are also detected.
Fig. 5 shows a system 100 according to a fourth embodiment of the invention, in which the illustrated system 100 conforms to the OpenFlow standard. The matching flow 106 generated by the local flow control entity 105 is input as a flow entry 502 into the forwarding element 501 of the OpenFlow system 100. The local flow control entity 105 resides in the hypervisor instance of the virtual overlay system 203. The first node 101 and the second node 103 comprise virtual ports 102 and 104, respectively, whose failure detection proceeds as follows.
The first VM 301 is installed and run on the first node 101. Second VM 311 is installed and run on second node 103. First, the local flow control entity 105 generates a matching flow 106 comprising a connection test message, e.g. an ARP message, and the destination address of the second VM 311. The generated matching flow 106 is input into the first VM 301 by the flow control entity 105. The flow expiration and/or flow reconfiguration of the matching flow 106 is provided as a flow entry 502 to the forwarding element 501.
Second, VM 311 generates an installation success message to flow control entity 105, indicating that the rule for observing the incoming connection test packet from VM 301 has been installed in the second node 103 hosting VM 311.
Third, the connection test packet itself is input at the first node 101 hosting VM 301.
Finally, the flow control entity 105 detects whether the VM 311 on the second node 103 has responded to the connection test packet of the VM 301 on the first node 101. If the ARP message has been used as a connection test message, the ARP response should be detected by the local flow control entity 105 and a notification message should be generated. If the flow expires before the ARP response is obtained, forwarding element 501 generates a respective notification.
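For the OpenFlow-compliant embodiment of fig. 5, the matching flow 106 could, for example, be pushed to the forwarding element 501 as a flow entry 502 along the lines of the following sketch; the use of the Ryu controller framework, OpenFlow 1.3 and a goto-table instruction to leave normal forwarding untouched are assumptions made for illustration only:

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class PortFailureDetector(app_manager.RyuApp):
    """Sketch only: installs the matching flow 106 as an idle-timeout flow entry
    and treats its removal as a suspected virtual port failure."""
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    def install_matching_flow(self, datapath, dst_mac, idle_timeout):
        ofp = datapath.ofproto
        parser = datapath.ofproto_parser
        match = parser.OFPMatch(eth_dst=dst_mac)        # traffic towards the observed node
        # Assumption: normal forwarding lives in table 1, so the matching flow only
        # observes packets and hands them on without altering forwarding.
        inst = [parser.OFPInstructionGotoTable(1)]
        mod = parser.OFPFlowMod(datapath=datapath,
                                table_id=0,
                                priority=0xFFFF,          # highest priority
                                match=match,
                                instructions=inst,
                                idle_timeout=idle_timeout,
                                flags=ofp.OFPFF_SEND_FLOW_REM)  # report expiry to the controller
        datapath.send_msg(mod)

    @set_ev_cls(ofp_event.EventOFPFlowRemoved, MAIN_DISPATCHER)
    def flow_removed_handler(self, ev):
        msg = ev.msg
        ofp = msg.datapath.ofproto
        if msg.reason == ofp.OFPRR_IDLE_TIMEOUT:
            # Idle timeout expired without matching traffic: suspected virtual port failure.
            self.logger.warning("matching flow expired: possible virtual port failure")
```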
Fig. 6 illustrates a method 1000 for detecting a virtual port failure in the software-defined networking system 100 according to an embodiment of the present invention. In a first step 1001 of the method 1000, the local flow control entity 105, which is arranged between the first network node 101 and the second network node 103 of the system 100, generates a matching flow 106. In a second step 1002, the generated matching flow 106 is input into the forwarding element 501 of the system 100. In a third step 1003, the matching flow 106 is established between the first node 101 and the second node 103, wherein the matching flow 106 comprises a connection test packet and uses a default idle timeout value. In a fourth step 1004, a virtual port failure is detected when the timeout value expires.
Optionally, in a fifth step 1005 (dashed line in fig. 6), if the timeout value has not expired when the connection test packet is received, the successful reception of the connection test packet is reported.
In summary, with the proposed system 100 and method 1000, the present invention provides a failure detection mechanism that uses an idle timeout mechanism to detect virtual port failures and/or load conditions. The fault detection service is typically supported by the VS and is easily integrated with the local flow control entity 105. This approach is non-intrusive: the flow control entity 105 installs a matching flow 106 without further action, and the flow control entity 105 learns, by trying different timeout values, to adjust the timeout to a minimum timeout value. For the learning procedure, traffic patterns and application ports are observed. Since problems are located closest to the virtual port rather than centrally, the best granularity of fault detection is obtained.
The flow control entity 105 in all described embodiments may learn the average traffic pattern between the first node 101 and the second node 103. An optimal granularity timeout value is configured for the entity based on the average traffic pattern.
If matching flow 106 expires, flow control entity 105 detects a port failure, which may be inbound or outbound or both. A safety measure may be provided to reset the timer and wait for a second expiration before generating a notification/report on port failure detection.
Flow control entity 105 is then used to inform the user that a port failure has been detected. Furthermore, automatic remediation may be provided, and/or flow control entity 105 may reroute the traffic for which a fault was detected, e.g. by means of standard OpenFlow procedures.
The results of the fault detection may trigger automatic remediation, either locally or from the flow control entity 105. The result may be that all port traffic is blocked if an application failure mode is detected. Additionally or alternatively, blocking all traffic may trigger an upper layer mechanism, e.g., LBaaS 307, to divert traffic to a different network node.
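An automatic remediation hook of this kind could, for example, look like the following sketch; the drop flow follows standard OpenFlow semantics (an empty instruction set drops matching packets), while the divert_traffic() call on the load balancer is a purely hypothetical orchestration interface:

```python
def remediate(datapath, failed_port, lbaas=None):
    """Sketch of an automatic remediation step: block all traffic of the failed
    port with a high-priority drop flow and, optionally, ask an upper-layer
    mechanism such as LBaaS 307 to divert traffic to a different network node."""
    parser = datapath.ofproto_parser
    drop = parser.OFPFlowMod(datapath=datapath,
                             priority=0xFFFE,
                             match=parser.OFPMatch(in_port=failed_port),
                             instructions=[])   # empty instruction set: packets are dropped
    datapath.send_msg(drop)

    if lbaas is not None:
        # Hypothetical orchestration call; only the triggering of an upper-layer
        # mechanism is described here, not a concrete API.
        lbaas.divert_traffic(away_from_port=failed_port)
```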
The inventive concept describes various ways to detect faults on the virtual ports 102 and 104. It may be detected that no traffic is available on the observed ports 102 and 104, or that some application traffic is lost, for example, based on OSI layer based connection test messages.
In the event of a failure, the observed traffic stops or the application, for various reasons, no longer sends data traffic. This is detected by the failure detection service. Since the network nodes 101 and 103 may still send management data or other data traffic from other applications, it is important to detect the full port status. This may be accomplished through lower-layer connection test messages, such as ping or ARP, in the matching flow 106 of the present invention.
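A lower-layer connection test of this kind can, for example, be realised with an ARP probe. The sketch below uses the scapy library; the peer address, interface name and timeout are assumptions of this example.

    from scapy.all import ARP, Ether, srp

    def arp_probe(peer_ip="10.0.0.2", iface="tap-vm1", timeout_s=2):
        # Broadcast an ARP request for the peer's IP address and wait for a
        # reply; a reply indicates that the peer's virtual port is reachable
        # even if the observed application has stopped sending traffic.
        answered, _ = srp(
            Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=peer_ip),
            iface=iface, timeout=timeout_s, verbose=False)
        return len(answered) > 0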
To identify a failed application from among multiple applications, a user may want to know which applications and/or which traffic are actually running on a particular port 102 or 104. The flows to be matched can therefore be set up by means of DPI integration.
If a loss of traffic is detected, it can be assumed that the primary traffic of the network nodes 101 and 103 is the data traffic of the application. When the matching flow 106 expires, various connection test packets may be injected, examining the specific ports 102 and 104 of the network nodes 101 and 103, to detect which specific application has failed.
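A simple way to narrow the failure down to a specific application is sketched below: after the matching flow expires, each known application port (e.g. a list obtained via the DPI integration mentioned above) is probed individually. The TCP connect test and the example port list are assumptions of this illustration.

    import socket

    def probe_application_ports(peer_ip, app_ports=(80, 443, 5672), timeout_s=2.0):
        failed = []
        for port in app_ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout_s)
                try:
                    s.connect((peer_ip, port))   # application answered on this port
                except OSError:
                    failed.append(port)          # this application did not respond
        return failed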
It should be noted that the system 100 is not limited to two network nodes 101 and 103. More network nodes or intermediate network nodes may be employed in system 100.
The invention has been described in connection with various embodiments and implementations as examples. Other variations will be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the independent claims. In the claims and the description the term "comprising" does not exclude other elements or steps and the "a" or "an" does not exclude a plurality. A single element or other unit may fulfill the functions of several entities or items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

Claims (16)

1. A software-defined networking system (100), comprising:
a first network node (101) comprising at least one virtual communication port (102);
a second network node (103) comprising at least one further virtual communication port (104), wherein the first network node (101) and the second network node (103) are configured to exchange data via their virtual communication ports (102 and 104);
a flow control entity (105) locally interposed between the first network node (101) and the second network node (103), wherein the flow control entity (105) is configured to generate a matching flow (106) for virtual port failure detection;
wherein the matching flow (106) generated by the local flow control entity (105) comprises a default idle timeout value; and
a virtual port failure is detected when the idle timeout value expires.
2. The system (100) of claim 1, wherein:
the system (100) is a virtual overlay system (203) in a system further comprising a physical system topology (201) and a software infrastructure having a logical system topology (202).
3. The system (100) according to claim 1 or 2, wherein:
the matching flow (106) generated by the flow control entity (105) is input as a flow entry (502) into a forwarding element (501) of the system (100).
4. The system (100) according to claim 1 or 2, wherein:
the matching flow (106) generated by the local flow control entity (105) comprises a destination address of the first network node (101) or the second network node (103) and further comprises a connection test packet.
5. The system (100) according to claim 1 or 2, wherein:
the flow control entity (105) is locally placed at the first network node (101) side to detect a virtual port failure of the first network node (101) or the second network node (103).
6. The system (100) according to claim 1 or 2, wherein:
the flow control entity (105) is locally placed at the second network node (103) side to detect a virtual port failure of the first network node (101) or the second network node (103).
7. The system (100) according to claim 1 or 2, wherein:
the flow control entity (105) is locally placed in a virtual element between the first network node (101) and the second network node (103) to detect a virtual port failure of the first network node (101) or the second network node (103).
8. The system (100) of claim 1, wherein:
the local flow control entity (105) adjusts the default idle timeout value according to a specific data pattern of the system (100).
9. The system (100) according to claim 1 or 2, wherein:
the matching flow (106) generated by the local flow control entity (105) is used for measuring a data exchange delay time value between the first network node (101) and the second network node (103).
10. The system (100) according to claim 1 or 2, wherein:
the matching flow (106) generated by the local flow control entity (105) is used for detecting a traffic loss or for inputting a connection test packet between the first network node (101) and the second network node (103) at a specific layer of the Open Systems Interconnection model.
11. The system (100) according to claim 2, wherein:
the flow control entity (105) is configured to associate the physical system topology (201) with the software infrastructure having the logical system topology (202).
12. The system (100) according to claim 2, wherein:
the local flow control entity (105) is configured to input operation, administration and maintenance (OAM) related traffic into the matching flow (106).
13. The system (100) according to claim 1 or 2, wherein:
the local flow control entity (105) is configured to detect a misconfiguration of the system (100).
14. A method (1000) for detecting virtual port failure in a software defined network system (100), the method (1000) comprising the steps of:
a local flow control entity (105) generating (1001) a matching flow (106), wherein the flow control entity (105) is interposed between a first network node (101) and a second network node (103) in the system (100);
inputting (1002), in a forwarding element (501) of the system (100), the generated matching flow (106);
establishing (1003) the matching flow (106) between the first network node (101) and the second network node (103), wherein the matching flow (106) comprises a connection test packet using a default idle timeout value;
detecting (1004) a virtual port failure upon expiration of the timeout value;
wherein the matching flow (106) generated by the local flow control entity (105) comprises a default idle timeout value; and
a virtual port failure is detected when the idle timeout value expires.
15. The method (1000) of claim 14, further comprising the steps of:
if the timeout value has not expired when the connection test packet is received, reporting (1005) that the connection test packet has been successfully received.
16. A computer-readable storage medium for storing a computer program which, when executed on a computing device, carries out the method (1000) of detecting a virtual port failure according to claim 14 or 15.
CN201580084571.0A 2015-11-13 2015-11-13 Software defined network system and method for detecting port fault Active CN108353027B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2015/076517 WO2017080611A1 (en) 2015-11-13 2015-11-13 Software defined network system for port failure detection

Publications (2)

Publication Number Publication Date
CN108353027A CN108353027A (en) 2018-07-31
CN108353027B true CN108353027B (en) 2020-12-15

Family

ID=54707750

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201580084571.0A Active CN108353027B (en) 2015-11-13 2015-11-13 Software defined network system and method for detecting port fault

Country Status (2)

Country Link
CN (1) CN108353027B (en)
WO (1) WO2017080611A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110891284B (en) * 2018-09-07 2021-11-23 维沃移动通信有限公司 Method for acquiring and sending periodic traffic pattern information, base station and communication network element
CN111835682B (en) 2019-04-19 2021-05-11 上海哔哩哔哩科技有限公司 Connection control method, system, device and computer readable storage medium
EP3748562A1 (en) 2019-05-08 2020-12-09 EXFO Solutions SAS Timeline visualization & investigation systems and methods for time lasting events
CN115208759B (en) * 2022-07-14 2024-02-23 中国电信股份有限公司 Fault analysis system and method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6032194A (en) * 1997-12-24 2000-02-29 Cisco Technology, Inc. Method and apparatus for rapidly reconfiguring computer networks
CN104506408A (en) * 2014-12-31 2015-04-08 杭州华三通信技术有限公司 Data transmission method and device based on SDN
US9038151B1 (en) * 2012-09-20 2015-05-19 Wiretap Ventures, LLC Authentication for software defined networks

Also Published As

Publication number Publication date
CN108353027A (en) 2018-07-31
WO2017080611A1 (en) 2017-05-18

Similar Documents

Publication Publication Date Title
US9178807B1 (en) Controller for software defined networks
US10425320B2 (en) Methods, systems, and computer readable media for network diagnostics
US10291473B2 (en) Routing policy impact simulation
US9100298B2 (en) Host visibility as a network service
US11595483B2 (en) Devices, systems and methods for internet and failover connectivity and monitoring
US20040049714A1 (en) Detecting errant conditions affecting home networks
US11418955B2 (en) System and methods for transit path security assured network slices
US7865591B2 (en) Facilitating DHCP diagnostics in telecommunication networks
CN108353027B (en) Software defined network system and method for detecting port fault
EP3646533B1 (en) Inline stateful monitoring request generation for sdn
US11563665B2 (en) Detecting web probes versus regular traffic through a proxy including encrypted traffic
US20170104630A1 (en) System, Method, Software, and Apparatus for Computer Network Management
EP4120654A1 (en) Adaptable software defined wide area network application-specific probing
CN115733727A (en) Network management system and method for enterprise network and storage medium
CN112751947B (en) Communication system and method
US9553788B1 (en) Monitoring an interconnection network
US10756966B2 (en) Containerized software architecture for configuration management on network devices
US11949663B2 (en) Cloud-based tunnel protocol systems and methods for multiple ports and protocols
US9256416B1 (en) Methods and apparatus for automatic session validation for distributed access points
EP4080850A1 (en) Onboarding virtualized network devices to cloud-based network assurance system
US11765059B2 (en) Leveraging operation, administration and maintenance protocols (OAM) to add ethernet level intelligence to software-defined wide area network (SD-WAN) functionality
Syafei et al. Centralized Dynamic Host Configuration Protocol and Relay Agent for Smart Wireless Router
Vadivelu et al. Design and performance analysis of complex switching networks through VLAN, HSRP and link aggregation
Apostolidis Network management aspects in SDN
Campanile Investigating black holes in segment routing networks: identification and detection

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220218

Address after: 550025 Huawei cloud data center, jiaoxinggong Road, Qianzhong Avenue, Gui'an New District, Guiyang City, Guizhou Province

Patentee after: Huawei Cloud Computing Technology Co.,Ltd.

Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen

Patentee before: HUAWEI TECHNOLOGIES Co.,Ltd.