US20190190851A1 - Method and device for monitoring traffic in a network

Method and device for monitoring traffic in a network

Info

Publication number
US20190190851A1
US20190190851A1
Authority
US
United States
Prior art keywords
network
switch
traffic
mirror
switches
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US16/211,177
Other versions
US10659393B2 (en)
Inventor
Ming-Hung Hsu
Tzi-cker Chiueh
Yu-Wei Lee
Yi-An Chen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Industrial Technology Research Institute ITRI
Original Assignee
Industrial Technology Research Institute ITRI
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industrial Technology Research Institute ITRI filed Critical Industrial Technology Research Institute ITRI
Assigned to INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE. Assignment of assignors interest (see document for details). Assignors: CHIUEH, TZI-CKER; CHEN, YI-AN; HSU, MING-HUNG; LEE, YU-WEI
Publication of US20190190851A1 publication Critical patent/US20190190851A1/en
Application granted granted Critical
Publication of US10659393B2 publication Critical patent/US10659393B2/en
Legal status: Active (adjusted expiration)

Classifications

    • H04L 49/25: Routing or path finding in a switch fabric
    • H04L 12/413: Bus networks with decentralised control with random access, e.g. carrier-sense multiple access with collision detection [CSMA-CD]
    • H04L 12/4641: Virtual LANs, VLANs, e.g. virtual private networks [VPN]
    • H04L 41/0213: Standardised network management protocols, e.g. simple network management protocol [SNMP]
    • H04L 41/12: Discovery or management of network topologies
    • H04L 41/122: Discovery or management of virtualised network topologies, e.g. software-defined networks [SDN] or network function virtualisation [NFV]
    • H04L 41/40: Network maintenance, administration or management using virtualisation of network functions or resources, e.g. SDN or NFV entities
    • H04L 43/028: Capturing of monitoring data by filtering
    • H04L 43/0876: Network utilisation, e.g. volume of load or congestion level
    • H04L 43/20: Monitoring or testing where the monitoring system or the monitored elements are virtualised, abstracted or software-defined entities, e.g. SDN or NFV
    • H04L 45/64: Routing or path finding of packets using an overlay routing layer
    • H04L 49/15: Interconnection of switching modules
    • H04L 49/208: Port mirroring
    • H04L 69/22: Parsing or analysis of headers
    • H04L 45/125: Shortest path evaluation based on throughput or bandwidth
    • H04L 45/20: Hop count for routing purposes, e.g. TTL
    • Y02D 30/00: Reducing energy consumption in communication networks

Definitions

  • The disclosure relates to network communication technologies, and more particularly to a method and a device for monitoring traffic in a network.
  • Network traffic mirroring is a method of monitoring network traffic that forwards a copy of incoming and outgoing traffic from one port of a network device, such as a switch, to another port of the network device from which the mirrored network traffic may be studied.
  • Network traffic mirroring provides a service that duplicates network traffic as it passes through a device, and may duplicate all or a portion of the network traffic.
  • Network traffic mirroring may be used for network troubleshooting, network security and performance monitoring, and security audits.
  • A network administrator may use mirroring as a diagnostic tool or debugging feature, such as a tool for investigating network intrusions or network attacks. Network mirroring may be performed and managed locally or remotely.
  • The method and device for dynamically monitoring traffic in a network may intelligently select a mirror switch in a way that is low-cost and that does not impose a high extra network load. This enables the network operator to identify the sources of network traffic or the application services that cause traffic congestion or abnormalities in the network.
  • The disclosure is directed to a method for monitoring traffic in a network used in a communication device, wherein the network is formed by switches and hosts.
  • The method comprises: collecting link layer discovery protocol (LLDP) information, virtual local area network (VLAN) information, host network interface card (NIC) information and host-tenant mapping information to obtain a physical network topology and a plurality of virtual network topologies; detecting a plurality of physical link loads of the physical network topology; obtaining a target path between two of the hosts or between the switches by analyzing the virtual network topologies; selecting one of the switches on the target path to serve as a mirror switch according to the physical link load corresponding to the target path or a hop count; and receiving mirror traffic transmitted from the mirror switch and performing packet payload analysis on the mirror traffic.
  • The step of obtaining the target path between one of the hosts and another host or between the switches by analyzing the virtual network topologies further comprises: selecting a highest physical link load from the physical link loads and obtaining a first physical link corresponding to the highest physical link load by analyzing the physical network topology; and obtaining the target path by analyzing packet types of packets passing through the first physical link and the traffic load of each packet type.
  • The step of detecting the physical link loads of the physical network topology is implemented by using the simple network management protocol (SNMP).
  • The step of obtaining the target path by analyzing the packet types of packets passing through the first physical link and the traffic load of each packet type comprises: receiving header information transmitted by a first switch and a second switch, wherein the header information is generated according to headers of sampled packets encapsulated by the first switch and the second switch, and the first physical link is connected to the first switch and the second switch; obtaining the packet types of packets passing through the first physical link and the traffic load of each packet type according to the header information; and selecting a physical path corresponding to the packet type with the highest traffic load as the target path.
  • The step of selecting one of the switches on the target path as the mirror switch comprises: obtaining a plurality of candidate switches forming the target path and link loads corresponding to the candidate switches; and selecting the mirror switch from the candidate switches according to the link load corresponding to each candidate switch or the hop count corresponding to each candidate switch, wherein the hop count is the number of links between each candidate switch and the communication device.
  • Before receiving the mirror traffic transmitted from the mirror switch, the method further comprises: setting a plurality of filtering rules on an OpenFlow switch; filtering the mirror traffic according to the filtering rules; and receiving the filtered mirror traffic filtered by the OpenFlow switch.
  • Before the mirror switch transmits the mirror traffic, the mirror switch adds a virtual local area network (VLAN) tag field to the headers of the packets of the mirror traffic.
  • Before the mirror switch transmits the mirror traffic, the mirror switch adds a class of service (CoS) field to the headers of the packets of the mirror traffic.
  • In one exemplary embodiment, the switches are OpenFlow switches.
  • In one exemplary embodiment, the network is an Ethernet network.
  • The disclosure is also directed to a communication device for monitoring traffic in a network, wherein the network is formed by switches and hosts.
  • The communication device comprises: a control circuit, a processor and a memory.
  • The processor is installed in the control circuit.
  • The memory is installed in the control circuit and operatively coupled to the processor.
  • The processor is configured to execute program codes stored in the memory to: collect link layer discovery protocol (LLDP) information, virtual local area network (VLAN) information, host network interface card (NIC) information and host-tenant mapping information to obtain a physical network topology and a plurality of virtual network topologies; detect a plurality of physical link loads of the physical network topology; analyze the virtual network topologies to obtain a target path between two of the hosts or between the switches; select one of the switches on the target path to serve as a mirror switch according to the physical link load corresponding to the target path or a hop count; and receive mirror traffic transmitted from the mirror switch and perform packet payload analysis on the mirror traffic.
  • FIGS. 1A-1E show exemplary schematic diagrams illustrating an exemplary embodiment of a network system according to the disclosure.
  • FIG. 2 shows a simplified functional block diagram of a communication device according to the disclosure.
  • FIG. 3 is a simplified block diagram of the program code shown in FIG. 2 in accordance with the disclosure.
  • FIG. 4 is a flow chart of an exemplary embodiment of a method for monitoring traffic in a network according to the disclosure.
  • FIG. 5 is a schematic diagram of selecting a mirror switch on a target path by the cloud management device according to the disclosure.
  • FIG. 6 is a schematic diagram of an OpenFlow switch used in a network system in accordance with the disclosure.
  • FIG. 7A is a schematic diagram of an OpenFlow switch used as a physical network switch in a network system in accordance with the disclosure.
  • FIG. 7B is a schematic diagram of an OpenFlow switch used to filter the mirror traffic in a network system in accordance with the disclosure.
  • FIG. 8 is an alternative schematic diagram illustrating a network system in accordance with the disclosure.
  • FIG. 9 is a schematic diagram illustrating a network system according to the disclosure.
  • Exemplary embodiments of the disclosure are supported by standard documents disclosed for at least one of wireless network systems including an Institute of Electrical and Electronics Engineers (IEEE) 802 system, a 3rd Generation Partnership Project (3GPP) system, a 3GPP Long Term Evolution (LTE) system, an LTE-Advanced (LTE-A) system, and a 3GPP2 system.
  • Steps or parts of the exemplary embodiments that are not described in detail, so as to clearly reveal the technical idea of the disclosure, may be supported by the above documents. All terminology used herein may be supported by at least one of the above-mentioned documents.
  • FIG. 1A shows an exemplary schematic diagram illustrating a network system 100 according to an exemplary embodiment of the disclosure.
  • The network system 100 at least comprises switches 120A to 120F and physical hosts H1 to H8.
  • Each of the switches 120A to 120F has a plurality of connecting ports, and connects to the physical hosts H1 to H8 via the connecting ports on the switch.
  • The physical hosts H1 to H8 may belong to different virtual local area networks (VLANs).
  • The tenants 150A and 150B outside the network system 100 may connect to and rent the physical hosts H1 to H8 through the network 140.
  • In an exemplary embodiment, the network system 100 is a physical machine leasing service (PHLS) system.
  • In another exemplary embodiment, an Ethernet network is used in the network system 100.
  • The network system 100 may further comprise a cloud management device 110.
  • The cloud management device 110 is configured to handle services such as the configuration between the tenants 150A and 150B and the physical hosts H1 to H8, as well as network virtualization.
  • The cloud management device 110 can manage host network interface card (NIC) information and host-tenant mapping information.
  • The host NIC information may be the mapping relationship between a host and a switch port, for example, which ports on the switches 120A to 120F are connected to which network interface cards of the physical hosts H1 to H8.
  • The host-tenant mapping information may be the mapping information between a host and a tenant, for example, which hosts are leased to the tenant A, and which hosts belong to which virtual local area networks, and so on.
  • The cloud management device 110 may further collect, through the simple network management protocol (SNMP), the link layer discovery protocol (LLDP) information and VLAN information stored by each switch in a management information base (MIB).
  • The cloud management device 110 obtains a physical network topology and a plurality of virtual network topologies according to the LLDP information, the VLAN information, the host NIC information, and the host-tenant mapping information. As shown in FIG. 1B, the cloud management device 110 establishes three virtual network topologies, namely, VLAN-X, VLAN-Y and VLAN-Z.
  • The cloud management device 110 detects all physical link loads of the physical network topology. Specifically, the cloud management device 110 may periodically poll the ports of the switches 120A to 120F according to the physical network topology by using the SNMP to detect all physical link loads between the switches 120A to 120F. As shown in FIG. 1C, the cloud management device 110 can detect the three links with the three highest physical link loads. For example, the physical link load from the switch 120A to the switch 120E is 922 Mbps, the physical link load from the switch 120C to the switch 120A is 634 Mbps, and the physical link load from the switch 120D to the switch 120A is 486 Mbps.
  • The cloud management device 110 obtains a target path between one of the physical hosts H1 to H8 and another physical host or between the switches 120A to 120F by analyzing the virtual network topologies. The detailed process of obtaining the target path by the cloud management device will be described below with reference to FIGS. 1C and 1D.
  • The cloud management device 110 may select the highest physical link load among all physical link loads and may obtain a first physical link corresponding to the highest physical link load. The cloud management device 110 then obtains a target path by analyzing the first physical link. Specifically, the cloud management device 110 obtains the first physical link (the physical link from the switch 120A to the switch 120E) corresponding to the highest physical link load in FIG. 1C. Each of the switches 120A-120F samples the received packets at a sampling frequency, extracts the header of each sampled packet, encapsulates it with a specific protocol header to form header information, and then sends the header information to the cloud management device 110, wherein the specific protocol can be, for example, sFlow, NetFlow, IPFIX or OpenFlow. In another exemplary embodiment, the cloud management device 110 can dynamically change the packet sampling frequency of each switch.
  • The cloud management device 110 obtains the packet types of the packets passing through the first physical link and a transmission load of each packet type according to the header information of the sampled packets received from the switches 120A-120F, wherein the packet type of each packet is determined according to a VLAN-ID, a source IP address, a destination IP address, a source MAC address, a destination MAC address, and a TCP/UDP port number. Packets of a plurality of different packet types can be regarded as one packet type group. In the exemplary embodiment, all packet types with the same source IP address and the same destination IP address are regarded as the same packet type group.
  • For example, packets of all packet types whose source host is H1 and whose destination host is H6 are regarded as the same packet type group.
  • The classification conditions of a packet type group can be adjusted according to system management requirements.
  • The transmission load of the packet type group corresponding to the path from the physical host H1 to H6 is 510 Mbps.
  • The transmission load of the packet type group corresponding to the path from the physical host H3 to H5 is 283 Mbps.
  • The transmission load of the packet type group corresponding to the path from the physical host H2 to H6 is 140 Mbps.
  • The cloud management device 110 selects the packet type group with the largest transmission load as the target packet type group. Then, the cloud management device 110 determines, according to the virtual network topology, which virtual path the target packet type group passes through, and uses the physical path corresponding to that virtual path (that is, the path from the physical host H1 to H6) as the target path.
  • The cloud management device 110 may obtain the physical link loads of a plurality of switch ports corresponding to the target path, or a hop count, according to the target path. The cloud management device 110 can then select one of the switches on the target path as a mirror switch according to the physical link load corresponding to the target path or the hop count, and may receive mirror traffic transmitted by the mirror switch to perform a packet payload analysis on the mirror traffic. As shown in FIG. 1E, it is assumed that the cloud management device 110 selects the packets from the physical host H1 to H6 as the target packet type group, and the switch 120C on the target path (H1→120C→120A→120E→H6) corresponding to the target packet type group is used as the mirror switch.
  • The cloud management device 110 can control the switch 120C to start the mirroring mechanism for the port on the link 120C-120A (or the port on the link H1-120C), and monitor the mirror traffic from the mirror switch 120C to analyze the traffic of the applications executed in the physical hosts H1 to H6.
  • How the cloud management device 110 selects a mirror switch on the target path is further explained below.
  • FIG. 2 shows a simplified functional block diagram of a communication device 200 according to one exemplary embodiment of the disclosure.
  • The communication device 200 can be utilized for realizing the cloud management device 110 or the physical hosts H1 to H8 in the network system 100 of FIGS. 1A to 1E, and the communication device 200 may be used in the LTE system, the LTE-A system or other similar systems.
  • The communication device 200 may include an input device 202, an output device 204, a control circuit 206, a central processing unit (CPU) 208, a memory 210, a program code 212, and a transceiver 214.
  • The control circuit 206 executes the program code 212 in the memory 210 through the CPU 208, thereby controlling the operation of the communication device 200.
  • The communication device 200 can receive signals input by a user through the input device 202, such as a keyboard or keypad, and can output images and sounds through the output device 204, such as a monitor or speakers.
  • The transceiver 214 is used to receive and transmit wireless signals, deliver the received signals to the control circuit 206, and output signals generated by the control circuit 206.
  • FIG. 3 is a simplified block diagram of the program code 212 shown in FIG. 2 in accordance with one exemplary embodiment of the disclosure.
  • In this exemplary embodiment, the program code 212 includes an application layer 300, a Layer 3 portion 302, and a Layer 2 portion 304, and is coupled to a Layer 1 portion 306.
  • The Layer 3 portion 302 generally performs radio resource control.
  • The Layer 2 portion 304 generally performs link control.
  • The Layer 1 portion 306 generally performs physical connections.
  • FIG. 4 is a flow chart of a method for monitoring traffic in a network according to an exemplary embodiment of the disclosure.
  • The method is used in a communication device, wherein the network is formed by switches and hosts.
  • In step S405, the communication device collects link layer discovery protocol (LLDP) information, virtual local area network (VLAN) information, host network interface card (NIC) information and host-tenant mapping information to obtain a physical network topology and a plurality of virtual network topologies.
  • In step S410, the communication device detects a plurality of physical link loads of the physical network topology.
  • In step S415, the communication device obtains a target path between two of the hosts or between the switches by analyzing the virtual network topologies.
  • In step S420, the communication device selects one of the switches on the target path to serve as a mirror switch according to the physical link load corresponding to the target path or a hop count.
  • In step S425, the communication device receives mirror traffic transmitted from the mirror switch and performs the packet payload analysis on the mirror traffic.
  • FIG. 5 is a schematic diagram of selecting a mirror switch on a target path by the cloud management device 110 according to an exemplary embodiment of the disclosure.
  • The target packet type group is one or more packet types from H7 to H4 (that is, packets of all packet types whose source host is H7 and whose destination host is H4), and the target path corresponding to the target packet type group is from the host H7 to the host H4 (H7→120F→120B→120D→H4).
  • The cloud management device 110 may first obtain a plurality of candidate switches forming the target path and the link loads corresponding to the candidate switches.
  • The plurality of candidate switches forming the target path are the switch 120F, the switch 120B, and the switch 120D.
  • The overhead of each candidate switch N can be expressed as the extra network load that mirroring at N imposes, i.e. the mirrored link load carried over the hops back to the cloud management device 110, as in formula (1):

    Overhead_N = L_N × H_N  (1)

  • In formula (1), N denotes a candidate switch, L_N is the link load of the input port corresponding to the switch N, and H_N is the hop count corresponding to the candidate switch N.
  • The hop count is the number of links between the switch and the cloud management device 110.
  • The load of each link on the target path is as follows: the link from the host H7 to the switch 120F carries 534 Mbps, the link from the switch 120F to the switch 120B carries 942 Mbps, and the link from the switch 120B to the switch 120D carries 417 Mbps.
  • The overhead of each candidate switch can be calculated according to formula (1), as shown in Table 1.
  • The cloud management device 110 selects the switch 120D as the mirror switch, then receives the mirror traffic transmitted by the switch 120D and performs the packet payload analysis on the mirror traffic.
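  • To make this selection concrete, the following minimal sketch (in Python) scores the candidate switches of FIG. 5. It is an illustration, not the patented implementation: the product form Overhead_N = L_N × H_N follows the reading of formula (1) above, and the hop counts from each switch to the cloud management device 110 are assumed values, since FIG. 5 does not state them.

    # Sketch of mirror-switch selection on the target path H7->120F->120B->120D->H4.
    # Assumption: overhead(N) = L_N * H_N, i.e. the mirrored load carried over the
    # hops back to the cloud management device; the hop counts are illustrative.
    candidates = {
        "120F": {"link_load_mbps": 534, "hop_count": 3},  # assumed hop count
        "120B": {"link_load_mbps": 942, "hop_count": 2},  # assumed hop count
        "120D": {"link_load_mbps": 417, "hop_count": 2},  # assumed hop count
    }

    def overhead(info):
        """Extra network load imposed by mirroring at this switch (formula (1))."""
        return info["link_load_mbps"] * info["hop_count"]

    # Select the candidate whose mirror traffic imposes the least extra load.
    mirror_switch = min(candidates, key=lambda name: overhead(candidates[name]))
    print(mirror_switch)  # with these assumed hop counts: 120D

  • With these assumed hop counts the overheads are 1602, 1884 and 834, so the switch 120D is selected, matching the choice described above.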
  • FIG. 6 is a schematic diagram of an OpenFlow switch 610 used in a network system 600 in accordance with an exemplary embodiment of the disclosure.
  • In FIG. 6, the switch 120C is a mirror switch.
  • The OpenFlow switch 610 can be used to filter the traffic, so as to reduce the traffic received by the cloud management device 110, before the cloud management device 110 receives the mirror traffic transmitted by the mirror switch.
  • The cloud management device 110 can set a plurality of filtering rules in the OpenFlow switch 610.
  • One of the filtering rules may be whether the value of the VLAN tag field in a packet is the virtual network identifier which belongs to the target packet type group.
  • When the OpenFlow switch 610 determines that the value of the VLAN tag field in the packet is the virtual network identifier which belongs to the target packet type group, the OpenFlow switch 610 transmits the packet to the cloud management device 110.
  • When the OpenFlow switch 610 determines that the value of the VLAN tag field in the packet is not the virtual network identifier which belongs to the target packet type group, the OpenFlow switch 610 discards the packet. Therefore, the mirror traffic received by the cloud management device 110 is the filtered traffic that has been filtered by the OpenFlow switch 610.
  • The cloud management device 110 can then perform the packet payload analysis on the filtered traffic.
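  • As an illustration of this filtering step, the sketch below models the OpenFlow switch 610 as a small ordered match/action table: packets whose VLAN tag carries the virtual network identifier of the target packet type group are forwarded to the cloud management device 110, and everything else is dropped. The identifier value 100 and the packet representation are assumed for the example; a real deployment would install equivalent flow entries through an OpenFlow controller.

    # Sketch of the filtering rules in the OpenFlow switch 610 of FIG. 6.
    # The virtual network identifier 100 is an assumed example value.
    TARGET_VLAN_ID = 100

    # Ordered filtering rules: (match predicate, action); first match wins.
    filtering_rules = [
        (lambda pkt: pkt.get("vlan_id") == TARGET_VLAN_ID, "forward_to_cloud_management"),
        (lambda pkt: True, "drop"),  # default rule: discard everything else
    ]

    def apply_rules(packet):
        """Return the action of the first rule whose match predicate fires."""
        for match, action in filtering_rules:
            if match(packet):
                return action

    print(apply_rules({"vlan_id": 100}))  # forward_to_cloud_management
    print(apply_rules({"vlan_id": 200}))  # drop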
  • FIG. 7A is a schematic diagram of an OpenFlow switch used as a physical network switch in a network system 700 in accordance with an exemplary embodiment of the disclosure.
  • The switches 120A-120F in the network system 700 can be replaced by the OpenFlow switches 720A-720F.
  • The cloud management device 110 can use the OpenFlow protocol to detect all link loads.
  • The OpenFlow switches 720A-720F have to support packet sampling and report the sampled packets to the cloud management device 110.
  • The cloud management device 110 can also control the OpenFlow switches 720A-720F to implement the mirror transmission mechanism.
  • FIG. 7B is a schematic diagram of an OpenFlow switch used to filter the mirror traffic in a network system 700 in accordance with an exemplary embodiment of the disclosure.
  • An OpenFlow switch 710 can be added, as shown in FIG. 7B, to filter the mirror traffic.
  • Each of the OpenFlow switches 720A-720F is configured to set a class of service (CoS) field in the header of each packet in the mirror traffic to a preset value X after receiving the packet, such that the OpenFlow switch 710 can filter the received traffic based on the value of the CoS field.
  • Alternatively, each of the OpenFlow switches 720A-720F may add a VLAN tag field to the header of each packet in the mirror traffic after receiving the packet, such that the OpenFlow switch 710 can filter the received traffic based on the value of the VLAN tag field.
  • The value of the newly added VLAN tag field may be different from the VLAN tags corresponding to the plurality of virtual networks.
  • In FIG. 7B, the OpenFlow switch 720C is the mirror switch selected by the cloud management device 110.
  • The cloud management device 110 can use the OpenFlow switch 710 to filter the mirror traffic, so as to reduce the mirror traffic received by the cloud management device 110, before receiving the mirror traffic transmitted by the mirror switch.
  • The cloud management device 110 can set a plurality of filtering rules in the OpenFlow switch 710.
  • One of the filtering rules may be whether the value of the CoS field in a packet is the preset value X.
  • When the OpenFlow switch 710 determines that the value of the CoS field in the packet is the preset value X, the OpenFlow switch 710 transmits the packet to the cloud management device 110.
  • When the OpenFlow switch 710 determines that the value of the CoS field in the packet is not the preset value X, the OpenFlow switch 710 discards the packet. Therefore, the mirror traffic received by the cloud management device 110 is the filtered traffic that has been filtered by the OpenFlow switch 710.
  • The cloud management device 110 can then perform the packet payload analysis on the filtered traffic.
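  • To illustrate the tagging side of this scheme, the sketch below uses the scapy library to add an 802.1Q header whose VLAN ID and priority (CoS) bits mark a frame as mirror traffic before it is sent toward the filtering switch. The tag values are assumed examples; the only requirement stated above is that the mirror VLAN ID differs from every tenant VLAN.

    # Sketch: marking a mirrored frame with an 802.1Q tag so a downstream
    # OpenFlow switch can filter on the VLAN ID or on the CoS (priority) bits.
    # Uses the scapy library; the tag values are assumed examples.
    from scapy.all import Ether, Dot1Q, IP

    MIRROR_VLAN_ID = 999  # assumed: distinct from every tenant VLAN
    MIRROR_COS = 5        # assumed preset CoS value "X"

    def tag_mirror_copy(frame):
        """Return a copy of an Ethernet frame carrying the mirror 802.1Q tag."""
        return (Ether(src=frame[Ether].src, dst=frame[Ether].dst) /
                Dot1Q(vlan=MIRROR_VLAN_ID, prio=MIRROR_COS) /
                frame[Ether].payload)

    original = Ether() / IP(src="10.0.0.1", dst="10.0.0.6")
    print(tag_mirror_copy(original).summary())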
  • FIG. 8 is an alternative schematic diagram illustrating a network system 800 in accordance with an exemplary embodiment of the disclosure.
  • The packet payload analyzers 862 and 864 may be further included in FIG. 8, wherein the packet payload analyzers 862 and 864 are configured to perform the packet payload analysis on the mirror traffic to alleviate the workload of the cloud management device 810.
  • The packet payload analyzers 862 and 864 can be disposed at different locations in the network system 800.
  • The cloud management device 810 can determine which packet payload analyzer performs the packet payload analysis according to the forwarding path of the mirror traffic and the locations of the packet payload analyzers 862 and 864.
  • For example, in FIG. 8, the cloud management device 810 selects the switch 120C as the mirror switch. Since the distance between the mirror switch 120C and the packet payload analyzer 862 is less than that between the mirror switch 120C and the packet payload analyzer 864, the cloud management device 810 can instruct the mirror switch 120C to transmit the mirror traffic to the packet payload analyzer 862 for the packet payload analysis.
  • The cloud management device 810 can request the packet payload analyzer 862 to analyze the target packet type group whose source host is H1 and whose destination host is H6 to determine the actual traffic used by different network applications.
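  • As a sketch of this placement decision, the code below picks the analyzer closest in hops to the mirror switch with a breadth-first search. The adjacency list is an assumed miniature topology for illustration; in the system above the distances would come from the physical network topology already held by the cloud management device 810.

    # Sketch: choosing the packet payload analyzer nearest (in hops) to the
    # mirror switch, as in FIG. 8. The adjacency list is an assumed topology.
    from collections import deque

    topology = {
        "120C": ["120A", "analyzer_862"], "120A": ["120C", "120E"],
        "120E": ["120A", "analyzer_864"],
        "analyzer_862": ["120C"], "analyzer_864": ["120E"],
    }

    def hop_distance(src, dst):
        """Breadth-first search hop count between two nodes."""
        seen, queue = {src}, deque([(src, 0)])
        while queue:
            node, dist = queue.popleft()
            if node == dst:
                return dist
            for nxt in topology.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, dist + 1))
        return float("inf")

    analyzers = ["analyzer_862", "analyzer_864"]
    nearest = min(analyzers, key=lambda a: hop_distance("120C", a))
    print(nearest)  # analyzer_862: fewer hops from the mirror switch 120C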
  • FIG. 9 is a schematic diagram illustrating a network system 900 according to an exemplary embodiment of the disclosure.
  • A sample analyzer 970 may be further included in FIG. 9, wherein the sample analyzer 970 samples the packets received by the switch 120A and the switch 120E at a sampling frequency and analyzes the packet types of the packets passing through each physical link and the traffic load of each physical link, so as to alleviate the workload of the cloud management device 910.
  • The sample analyzer 970 can dynamically change the sampling frequency of each switch.
  • The cloud management device 910 can request the sample analyzer 970 to provide the results of analyzing the sampled packets, so as to obtain the load analysis result for each link.
  • The CPU 208 could execute the program code 212 to perform all of the above-described actions and steps or others described herein.
  • A switch with low overhead can thus be intelligently selected as a mirror switch through the method and apparatus for monitoring the traffic in a network.
  • The mirror switch can dynamically initiate the port mirroring mechanism, reducing the cost of analyzing the packet load and enabling the network operator to identify the traffic source or the application service that causes link congestion.
  • Concurrent channels may be established based on pulse repetition frequencies.
  • Concurrent channels may be established based on pulse positions or offsets.
  • Concurrent channels may be established based on time hopping sequences.
  • Concurrent channels may be established based on pulse repetition frequencies, pulse positions or offsets, and time hopping sequences.
  • The various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented within or performed by an integrated circuit ("IC"), an access terminal, or an access point.
  • The IC may comprise a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, discrete gate or transistor logic, discrete hardware components, electrical components, optical components, mechanical components, or any combination thereof designed to perform the functions described herein, and may execute codes or instructions that reside within the IC, outside of the IC, or both.
  • A general purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine.
  • A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • A software module (e.g., including executable instructions and related data) and other data may reside in a data memory such as RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable storage medium known in the art.
  • A sample storage medium may be coupled to a machine such as, for example, a computer/processor (which may be referred to herein, for convenience, as a "processor") such that the processor can read information (e.g., code) from and write information to the storage medium.
  • A sample storage medium may be integral to the processor.
  • The processor and the storage medium may reside in an ASIC.
  • The ASIC may reside in user equipment.
  • Alternatively, the processor and the storage medium may reside as discrete components in user equipment.
  • Any suitable computer-program product may comprise a computer-readable medium comprising codes relating to one or more of the aspects of the disclosure.
  • A computer program product may comprise packaging materials.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Environmental & Geological Engineering (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A method for monitoring traffic in a network is provided. The method is used in a communication device, wherein the network is formed by switches and hosts. The method includes: collecting LLDP information, VLAN information, host NIC information and host-tenant mapping information to obtain a physical network topology and a plurality of virtual network topologies; detecting a plurality of physical link loads of the physical network topology; obtaining a target path between two of the hosts or between the switches by analyzing the virtual network topologies; selecting one of the switches on the target path to serve as a mirror switch according to the physical link load corresponding to the target path or a hop count; and receiving mirror traffic transmitted from the mirror switch, and performing packet payload analysis on the mirror traffic.

Description

    CROSS REFERENCE TO RELATED APPLICATIONS
  • This application is based on, and claims priority from, Taiwan Patent Application Serial Number 106143934, filed on Dec. 14, 2017, the entire disclosure of which is hereby incorporated by reference herein in its entirety.
  • BACKGROUND
  • Technical Field
  • The disclosure relates to network communication technologies, and more particularly, it relates to a method and a device for monitoring traffic in a network.
  • Description of the Related Art
  • The mirroring of network traffic is a common feature found in many network relay devices, such as network switches. Network traffic mirroring, or port mirroring, is a method of monitoring network traffic that forwards a copy of incoming and outgoing traffic from one port of a network device, such as a switch, to another port of the network device from which the mirrored network traffic may be studied. Network traffic mirroring provides a service that duplicates network traffic as it passes through a device, and may duplicate all or a portion of the network traffic. Network traffic mirroring may be used for network troubleshooting, network security and performance monitoring, and security audits. A network administrator may use mirroring as a diagnostic tool or debugging feature, such as a tool for investigating network intrusions or network attacks. Network mirroring may be performed and managed locally or remotely.
  • Current techniques for mirroring data are limited in that they are static. The traffic mirroring has to be manually established and configured. In a system where multiple flows of traffic are to be monitored, multiple traffic mirrors must be set up and configured. Forwarding mirrored traffic requires bandwidth and, as the distance between the network device and the remote analysis device increases, the additional network load caused by forwarding the mirrored traffic also increases. This is inefficient because in some instances a network operator may only want to have certain traffic mirrored, or to mirror traffic only if certain criteria are met.
  • Therefore, there is a need for a method and device for dynamically monitoring traffic in a network wherein it is not required to manually establish and configure mirror switches. Rather, the method and device for dynamically monitoring traffic in a network may intelligently select a mirror switch in a way that is low-cost and that does not impose a high extra network load. This will enable the network operator to know the sources of network traffic or application services that cause traffic congestion or abnormalities in the network.
  • SUMMARY
  • In one of exemplary embodiments, the disclosure is directed to a method for monitoring traffic in a network used in a communication device, wherein the network is formed by switches and hosts. The method comprises: collecting link layer discovery protocol (LLDP) information, virtual local area network (VLAN) information, host network interface card (NIC) information and host-tenant mapping information to obtain a physical network topology and a plurality of virtual network topologies; detecting a plurality of physical link loads of the physical network topology; obtaining a target path between two of the hosts or between the switches by analyzing the virtual network topologies; selecting one of the switches on the target path to serve as a mirror switch according to the physical link load corresponding to the target path or a hop count; and receiving mirror traffic transmitted from the mirror switch and performing packet payload analysis on the mirror traffic.
  • In one of exemplary embodiments, the step of obtaining the target path between one of the hosts and another host or between the switches by analyzing the virtual network topologies further comprises: selecting a highest physical link load from the physical link loads and obtaining a first physical link corresponding to the highest physical link load by analyzing the physical network topology; and obtaining the target path by analyzing packet types of packets passing through the first physical link and the traffic load of each packet type.
  • In one of exemplary embodiments, the step of detecting the physical link loads of the physical network topology is implemented by using the simple network management protocol (SNMP).
  • In one of exemplary embodiments, the step of obtaining the target path by analyzing the packet types of packets passing through the first physical link and the traffic load of each packet type comprises: receiving header information transmitted by a first switch and a second switch, wherein the header information is generated according to headers of sampled packets encapsulated by the first switch and the second switch, and the first physical link is connected to the first switch and the second switch; obtaining the packet types of packets passing through the first physical link and the traffic load of each packet type according to the header information; and selecting a physical path corresponding to the packet type with the highest traffic load as the target path.
  • In one of exemplary embodiments, the step of selecting one of the switches on the target path as the mirror switch comprises: obtaining a plurality of candidate switches forming the target path and link loads corresponding to the candidate switches; and selecting the mirror switch from the candidate switches according to the link load corresponding to each candidate switch or the hop count corresponding to each candidate switch; wherein the hop count is the number of links between each candidate switch and the communication device.
  • In some embodiments, before receiving the mirror traffic transmitted from the mirror switch, the method further comprises: setting a plurality of filtering rules on an OpenFlow switch; filtering the mirror traffic according to the filtering rules; and receiving the filtered mirror traffic filtered by the OpenFlow switch.
  • In one of exemplary embodiments, before the mirror switch transmits the mirror traffic, the mirror switch adds a virtual local area network (VLAN) tag field to headers of packets of the mirror traffic.
  • In one of exemplary embodiments, before the mirror switch transmits the mirror traffic, the mirror switch adds a class of service (CoS) field to headers of packets of the mirror traffic.
  • In one of exemplary embodiments, the switches are OpenFlow switches.
  • In one of exemplary embodiments, the network is an Ethernet network.
  • In one of exemplary embodiments, the disclosure is directed to a communication device for monitoring traffic in a network, wherein the network is formed by switches and hosts. The communication device comprises: a control circuit, a processor and a memory. The processor is installed in the control circuit. The memory is installed in the control circuit and operatively coupled to the processor. The processor is configured to execute program codes stored in the memory to: collect link layer discovery protocol (LLDP) information, virtual local area network (VLAN) information, host network interface card (NIC) information and host-tenant mapping information to obtain a physical network topology and a plurality of virtual network topologies; detect a plurality of physical link loads of the physical network topology; analyze the virtual network topologies to obtain a target path between two of the hosts or between the switches; select one of the switches on the target path to serve as a mirror switch according to the physical link load corresponding to the target path or a hop count; and receive mirror traffic transmitted from the mirror switch and perform packet payload analysis on the mirror traffic.
  • BRIEF DESCRIPTION OF DRAWINGS
  • The accompanying drawings are included to provide a further understanding of the disclosure, and are incorporated in and constitute a part of the disclosure. The drawings illustrate implementations of the disclosure and, together with the description, serve to explain the principles of the disclosure. It should be appreciated that the drawings are not necessarily to scale as some components may be shown out of proportion to the size in actual implementation in order to clearly illustrate the concept of the disclosure.
  • FIGS. 1A-1E show exemplary schematic diagrams illustrating an exemplary embodiment of a network system according to the disclosure.
  • FIG. 2 shows a simplified functional block diagram of a communication device according to the disclosure.
  • FIG. 3 is a simplified block diagram of the program code shown in FIG. 2 in accordance with the disclosure.
  • FIG. 4 is a flow chart of an exemplary embodiment of a method for monitoring traffic in a network according to the disclosure.
  • FIG. 5 is a schematic diagram of selecting a mirror switch on a target path by the cloud management device according to the disclosure.
  • FIG. 6 is a schematic diagram of an OpenFlow switch used in a network system in accordance with the disclosure.
  • FIG. 7A is a schematic diagram of an OpenFlow switch used as a physical network switch in a network system in accordance with the disclosure.
  • FIG. 7B is a schematic diagram of an OpenFlow switch used to filter the mirror traffic in a network system in accordance with the disclosure.
  • FIG. 8 is an alternative schematic diagram illustrating a network system in accordance with the disclosure.
  • FIG. 9 is a schematic diagram illustrating a network system according to the disclosure.
  • DETAILED DESCRIPTION
  • Various aspects of the disclosure are described more fully hereinafter with reference to the accompanying drawings. This disclosure may, however, be embodied in many different forms and should not be construed as limited to any specific structure or function presented throughout this disclosure. Rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art. Based on the teachings herein one skilled in the art should appreciate that the scope of the disclosure is intended to cover any aspect of the disclosure disclosed herein, whether implemented independently of or combined with any other aspect of the disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, the scope of the disclosure is intended to cover such an apparatus or method which is practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth herein. It should be understood that any aspect of the disclosure disclosed herein may be embodied by one or more elements of a claim.
  • The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any aspect described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects. Furthermore, like numerals refer to like elements throughout the several views, and the articles “a” and “the” includes plural references, unless otherwise specified in the description.
  • Exemplary embodiments of the disclosure are supported by standard documents disclosed for at least one of wireless network systems including an Institute of Electrical and Electronics Engineers (IEEE) 802 system, a 3rd Generation Partnership Project (3GPP) system, a 3GPP Long Term Evolution (LTE) system, an LTE-Advanced (LTE-A) system, and a 3GPP2 system. In particular, steps or parts of the exemplary embodiments that are not described in detail, so as to clearly reveal the technical idea of the disclosure, may be supported by the above documents. All terminology used herein may be supported by at least one of the above-mentioned documents.
  • FIG. 1A shows an exemplary schematic diagram illustrating a network system 100 according to an exemplary embodiment of the disclosure. As shown in FIG. 1A, the network system 100 at least comprises switches 120A to 120F and physical hosts H1 to H8.
  • Each of the switches 120A to 120F has a plurality of connecting ports and connects to the physical hosts H1 to H8 via the connecting ports on the switch. The physical hosts H1 to H8 may belong to different virtual local area networks (VLANs). The tenants 150A and 150B outside the network system 100 may connect to and rent the physical hosts H1 to H8 through the network 140. In an exemplary embodiment, the network system 100 is a physical machine leasing service (PHLS) system. In another exemplary embodiment, an Ethernet network is used in the network system 100.
  • In addition, the network system 100 may further comprise a cloud management device 110. The cloud management device 110 is configured to handle services such as configuration between the tenants 150A and 150B and the physical hosts H1 to H8 and network virtualization.
  • The cloud management device 110 can manage host network interface card (NIC) information and host-tenant mapping information. The host NIC information may be the mapping relationship between a host and a switch port, for example, which ports on the switches 120A to 120F are connected to which network interface cards of the physical hosts H1 to H8. The host-tenant mapping information may be the mapping information between a host and a tenant, for example, which hosts are leased to the tenant A, and which hosts belong to which virtual local area networks, and so on.
  • The cloud management device 110 may further collect, through the simple network management protocol (SNMP), the link layer discovery protocol (LLDP) information and VLAN information stored by each switch in a management information base (MIB).
  • The cloud management device 110 obtains a physical network topology and a plurality of virtual network topologies according to the LLDP information, the VLAN information, the host NIC information, and the host-tenant mapping information. As shown in FIG. 1B, the cloud management device 110 establishes three virtual network topologies, namely, VLAN-X, VLAN-Y and VLAN-Z.
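  • As a sketch of how the collected records combine into topologies, the example below builds a physical adjacency list from LLDP neighbor entries and host-NIC entries, and per-VLAN host sets from the VLAN membership information. All records shown are illustrative stand-ins for what would actually be read from the switches' MIBs and the cloud management database.

    # Sketch: deriving the physical topology and per-VLAN virtual topologies
    # from collected LLDP, host-NIC and VLAN records (illustrative stand-ins).

    # LLDP neighbor records: (local switch, local port, remote switch).
    lldp_neighbors = [("120A", 1, "120C"), ("120A", 2, "120D"), ("120A", 3, "120E")]

    # Host NIC records: (switch, port, host) mappings.
    host_nics = [("120C", 5, "H1"), ("120E", 7, "H6"), ("120C", 6, "H2")]

    # VLAN membership collected from the switches, plus host-tenant mapping.
    vlan_members = {"VLAN-X": {"H1", "H6"}, "VLAN-Y": {"H2"}}
    host_tenant = {"H1": "tenant_A", "H2": "tenant_B", "H6": "tenant_A"}

    # Physical topology: undirected adjacency over switches and hosts.
    physical = {}
    for local, _port, remote in lldp_neighbors + host_nics:
        physical.setdefault(local, set()).add(remote)
        physical.setdefault(remote, set()).add(local)

    # Virtual topologies: the hosts of each VLAN, annotated with their tenant;
    # the virtual links are the physical paths between these hosts.
    virtual = {vlan: {h: host_tenant[h] for h in hosts}
               for vlan, hosts in vlan_members.items()}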
  • The cloud management device 110 detects all physical link loads of the physical network topology. Specifically, the cloud management device 110 may periodically poll the ports of the switches 120A to 120F according to the physical network topology by using the SNMP to detect all physical link loads between the switches 120A to 120F. As shown in FIG. 1C, the cloud management device 110 can detect the three links with the three highest physical link loads. For example, the physical link load from the switch 120A to the switch 120E is 922 Mbps, the physical link load from the switch 120C to the switch 120A is 634 Mbps, and the physical link load from the switch 120D to the switch 120A is 486 Mbps.
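  • The polling described above can be sketched with the pysnmp library as follows: read a port's 64-bit octet counter (IF-MIB::ifHCInOctets) twice and turn the delta into an average load in Mbps. The switch address, SNMP community and interface index below are placeholder values, and a production poller would walk all ports of all switches on a fixed period.

    # Sketch: polling one switch port's octet counter over SNMP and converting
    # the counter delta into a link load in Mbps (pysnmp; placeholder values).
    import time
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    def if_in_octets(host, if_index, community="public"):
        """Read IF-MIB::ifHCInOctets for one interface with an SNMP GET."""
        err_ind, err_status, _, var_binds = next(getCmd(
            SnmpEngine(), CommunityData(community),
            UdpTransportTarget((host, 161)), ContextData(),
            ObjectType(ObjectIdentity("IF-MIB", "ifHCInOctets", if_index))))
        if err_ind or err_status:
            raise RuntimeError(str(err_ind or err_status))
        return int(var_binds[0][1])

    def link_load_mbps(host, if_index, interval_s=10):
        """Poll twice and convert the octet delta to an average Mbps figure."""
        first = if_in_octets(host, if_index)
        time.sleep(interval_s)
        second = if_in_octets(host, if_index)
        return (second - first) * 8 / interval_s / 1e6

    # e.g. the port of switch 120A facing 120E (address and ifIndex assumed):
    # print(link_load_mbps("192.0.2.1", 3))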
  • The cloud management device 110 obtains a target path between one of the physical hosts H1 to H8 and another physical host or between the switches 120A to 120F by analyzing the virtual network topology. The detailed process of obtaining the target path by the cloud management device will be described below with reference to FIGS. 1C and 1D.
  • The cloud management device 110 may select the highest physical link load among all physical link loads and may obtain a first physical link corresponding to the highest physical link load. The cloud management device 110 then obtains a target path by analyzing the first physical link. Specifically, the cloud management device 110 obtains the first physical link (the physical link from the switch 120A to the switch 120E) corresponding to the highest physical link load in FIG. 1C. Each of the switches 120A-120F samples the received packets at a sampling frequency, extracts the header of each sampled packet, encapsulates it with a specific protocol header to form header information, and then sends the header information to the cloud management device 110, wherein the specific protocol can be, for example, sFlow, NetFlow, IPFIX or OpenFlow. In another exemplary embodiment, the cloud management device 110 can dynamically change the packet sampling frequency of each switch.
  • The cloud management device 110 obtains the packet types of the packets passing through the first physical link and a transmission load of each packet type according to the header information of the sampled packets received from the switches 120A-120F, wherein the packet type of each packet is determined according to a VLAN-ID, a source IP address, a destination IP address, a source MAC address, a destination MAC address, and a TCP/UDP port number. Packets with a plurality of different packet types can be regarded as a packet type group. In the exemplary embodiment, packets with all packet types with the same source IP address and the same destination IP address are regarded as the same packet type group. For example, packets of all packet types whose source host is H1 and whose destination host is H6 are regarded as the same packet type group. However, the classification conditions of packet type group can be adjusted according to system management requirements. As shown in FIG. 1D, the transmission load of the packet type group corresponding to the path from the physical host H1 to H6 is 510 Mbps. The transmission load of the packet type group corresponding to the path from the physical hosts H3 to H5 is 283 Mbps. The transmission load of the packet type group corresponding to the path from the physical hosts H2 to H6 is 140 Mbps.
  • The cloud management device 110 selects the packet type group with the largest transmission load as the target packet type group. Then, according to the virtual network topology, the cloud management device 110 determines the virtual path that the target packet type group passes through, and uses the physical path corresponding to that virtual path (that is, the path from the physical host H1 to H6) as the target path.
  • The cloud management device 110 may obtain physical link loads of a plurality of switch ports corresponding to the target path or a hop count according to the target path. The cloud management device 110 can then select one of the switches on the target path as a mirror switch according to the physical link load corresponding to the target path or the hop count, and may receive mirror traffic transmitted by the mirror switch to perform a packet payload analysis on the mirror traffic. As shown in FIG. 1E, it is assumed that the cloud management device 110 selects the packets from the physical host H1 to H6 as the target packet type group, and the switch 120C on the target path (H1→120C→120A→120E→H6) corresponding to the target packet type group is used as the mirror switch. The cloud management device 110 can control the switch 120C to start the mirroring mechanism for the port on the 120C-120A link (or the port on the H1-120C link), and monitor the mirror traffic from the mirror switch 120C to analyze the traffic of the applications executed on the physical hosts H1 and H6. In addition, how the cloud management device 110 selects a mirror switch on the target path will be further explained below.
  • FIG. 2 shows a simplified functional block diagram of a communication device 200 according to one exemplary embodiment of the disclosure. As shown in FIG. 2, the communication device 200 can be utilized for realizing the cloud management device 110 or the physical hosts H1 to H8 in the network system 100 of FIGS. 1A to 1E, and the communication device 200 may be used in the LTE system, the LTE-A system, or other systems similar to the two systems described above. The communication device 200 may include an input device 202, an output device 204, a control circuit 206, a central processing unit (CPU) 208, a memory 210, a program code 212, and a transceiver 214. The control circuit 206 executes the program code 212 in the memory 210 through the CPU 208, thereby controlling the operation of the communication device 200. The communication device 200 can receive signals input by a user through the input device 202, such as a keyboard or keypad, and can output images and sounds through the output device 204, such as a monitor or speakers. The transceiver 214 is used to receive and transmit wireless signals, deliver received signals to the control circuit 206, and output signals generated by the control circuit 206.
  • FIG. 3 is a simplified block diagram of the program code 212 shown in FIG. 2 in accordance with one exemplary embodiment of the disclosure. In this exemplary embodiment, the program code 212 includes an application layer 300, a Layer 3 portion 302, and a Layer 2 portion 304, and is coupled to a Layer 1 portion 306. The Layer 3 portion 302 generally performs radio resource control. The Layer 2 portion 304 generally performs link control. The Layer 1 portion 306 generally performs physical connections.
  • FIG. 4 is a flow chart of a method for monitoring traffic in a network according to an exemplary embodiment of the disclosure. The method is used in a communication device, wherein the network is formed by switches and hosts. In step S405, the communication device collects link layer discovery protocol (LLDP) information, virtual local area network (VLAN) information, host network interface card (NIC) information and host-tenant mapping information to obtain a physical network topology and a plurality of virtual network topologies. In step S410, the communication device detects a plurality of physical link loads of the physical network topology. In step S415, the communication device obtains a target path between two of the hosts or between the switches by analyzing the virtual network topologies. In step S420, the communication device selects one of the switches on the target path to serve as a mirror switch according to the physical link load corresponding to the target path or a hop count. In step S425, the communication device receives mirror traffic transmitted from the mirror switch and performs packet payload analysis on the mirror traffic.
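  • A compact driver, using hypothetical helper names, shows how steps S405 through S425 compose; each stub below merely stands in for the machinery described in the preceding paragraphs:

    def collect_topologies():               # step S405
        return {"120A": {"120E"}}, {"VLAN-X": {"H1", "H6"}}

    def detect_link_loads(physical):        # step S410
        return {("120A", "120E"): 922.0}

    def find_target_path(virtual, loads):   # step S415
        return ["H1", "120C", "120A", "120E", "H6"]

    def select_mirror_switch(path, loads):  # step S420
        return "120C"

    def run_monitoring_cycle():
        physical, virtual = collect_topologies()
        loads = detect_link_loads(physical)
        path = find_target_path(virtual, loads)
        # Step S425: receive mirror traffic from this switch and analyze payloads.
        return select_mirror_switch(path, loads)

    print(run_monitoring_cycle())  # 120C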
  • FIG. 5 is a schematic diagram of selecting a mirror switch on a target path by the cloud management device 110 according to an exemplary embodiment of the disclosure. In FIG. 5, it is assumed that the target packet type group is one or more packet types from H7 to H4 (that is, packets of all packet types whose source host is H7 and whose destination host is H4), and the target path corresponding to the target packet type group is from the host H7 to the host H4 (H7→120F→120B→120D→H4). The cloud management device 110 may first obtain a plurality of candidate switches forming the target path and link loads corresponding to the candidate switches. For example, the plurality of candidate switches forming the target path are the switch 120F, the switch 120B, and the switch 120D. The overhead of each candidate switch can be expressed as follows:

  • Overhead(N) = L_N × H_N   (1)
  • wherein N denotes a candidate switch, L_N is the link load of the input port of the candidate switch N, and H_N is the hop count corresponding to the candidate switch N. The hop count is the number of links between the switch and the cloud management device 110.
  • In this example, the link loads on the target path are assumed to be as follows: the load of the link from the host H7 to the switch 120F is 534 Mbps, the load of the link from the switch 120F to the switch 120B is 942 Mbps, and the load of the link from the switch 120B to the switch 120D is 417 Mbps. The overhead of each candidate switch can then be calculated according to formula (1), as shown in Table 1.
  • TABLE 1
    Candidate switch    Overhead
    Switch 120F         534 × 2 = 1068
    Switch 120B         942 × 3 = 2826
    Switch 120D         417 × 2 = 834
  • In Table 1, since the switch 120D has the minimum overhead of 834, the cloud management device 110 selects the switch 120D as the mirror switch. The cloud management device 110 then receives the mirror traffic transmitted by the switch 120D and performs the packet payload analysis on the mirror traffic.
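  • The selection in Table 1 follows directly from formula (1); a minimal sketch using the example values:

    # Overhead(N) = L_N x H_N: input-port link load times the hop count
    # between candidate switch N and the cloud management device.
    candidates = {
        "120F": {"load_mbps": 534, "hops": 2},
        "120B": {"load_mbps": 942, "hops": 3},
        "120D": {"load_mbps": 417, "hops": 2},
    }

    overhead = {n: c["load_mbps"] * c["hops"] for n, c in candidates.items()}
    mirror = min(overhead, key=overhead.get)
    print(overhead)  # {'120F': 1068, '120B': 2826, '120D': 834}
    print(mirror)    # 120D, the candidate with the minimum overhead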
  • FIG. 6 is a schematic diagram of an OpenFlow switch 610 used in a network system 600 in accordance with an exemplary embodiment of the disclosure. In the exemplary embodiment, it is assumed that switch 120C is a mirror switch. As shown in FIG. 6, the OpenFlow switch 610 can be used to filter the traffic to reduce the traffic received by the cloud management device 110 before the cloud management device 110 receives the mirror traffic transmitted by the mirror switch. The cloud management device 110 can set a plurality of filtering rules in the OpenFlow switch 610. In the exemplary embodiment, one of the filtering rules may be whether the value of the VLAN tag field in the packet is a virtual network identifier which belongs to the target packet type group. When the OpenFlow switch 610 determines that the value of the VLAN tag field in the packet is the virtual network identifier which belongs to the target packet type group, the OpenFlow switch 610 transmits the packet to the cloud management device 110. When the OpenFlow switch 610 determines that the value of the VLAN tag field in the packet is not the virtual network identifier which belongs to the target packet type group, the OpenFlow switch 610 discards the packet. Therefore, the mirror traffic received by the cloud management device 110 is the filtered traffic that has been filtered by the OpenFlow switch 610. The cloud management device 110 can perform the packet payload analysis on the filtered traffic.
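  • A minimal sketch of this VLAN-based rule follows; in practice the equivalent match/action entry would be installed in the OpenFlow switch 610 through the OpenFlow protocol, and the identifier value here is an assumption of the sketch:

    TARGET_VLAN_ID = 100  # illustrative VLAN ID of the target packet type group

    def filter_mirror_packet(packet):
        """Forward the packet only when its VLAN tag matches the virtual
        network identifier of the target packet type group; otherwise drop."""
        if packet.get("vlan_id") == TARGET_VLAN_ID:
            return "forward_to_cloud_management_device"
        return "drop"

    print(filter_mirror_packet({"vlan_id": 100}))  # forward_to_cloud_management_device
    print(filter_mirror_packet({"vlan_id": 200}))  # drop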
  • FIG. 7A is a schematic diagram of an OpenFlow switch used as a physical network switch in a network system 700 in accordance with an exemplary embodiment of the disclosure. As shown in FIG. 7A, the switches 120A-120F in the network system 700 can be replaced by the OpenFlow switches 720A-720F. In the exemplary embodiment, instead of the SNMP protocol, the cloud management device 110 can use the OpenFlow protocol to detect all link loads. In addition, the OpenFlow switches 720A-720F have to support packet sampling and report the sampled packets to the cloud management device 110. In another exemplary embodiment, the cloud management device 110 can also control the OpenFlow switches 720A-720F to implement the mirror transmission mechanism.
  • FIG. 7B is a schematic diagram of an OpenFlow switch used to filter the mirror traffic in a network system 700 in accordance with an exemplary embodiment of the disclosure. Unlike the network system of FIG. 7A, an OpenFlow switch 710 can be added in FIG. 7B to filter the mirror traffic. In another exemplary embodiment, each of the OpenFlow switches 720A-720F is configured to set a class of service (CoS) field in the header of each packet in the mirror traffic to a preset value X after receiving the packet, such that the OpenFlow switch 710 can filter the received traffic based on the value of the CoS field. In another exemplary embodiment, each of the OpenFlow switches 720A-720F may add a VLAN tag field to the header of each packet in the mirror traffic after receiving the packet, such that the OpenFlow switch 710 can filter the received traffic based on the value of the VLAN tag field. In addition, the value of the newly added VLAN tag field may be different from the values of the VLAN tags corresponding to the plurality of virtual networks.
  • As shown in FIG. 7B, it is assumed that the OpenFlow switch 720C is the mirror switch selected by the cloud management device 110. Before receiving the mirror traffic transmitted by the mirror switch, the cloud management device 110 can use the OpenFlow switch 710 to filter the mirror traffic and thereby reduce the amount of mirror traffic it receives. The cloud management device 110 can set a plurality of filtering rules in the OpenFlow switch 710. In the exemplary embodiment, one of the filtering rules may be whether the value of the CoS field in the packet is the preset value X. When the OpenFlow switch 710 determines that the value of the CoS field in the packet is the preset value X, the OpenFlow switch 710 transmits the packet to the cloud management device 110. When the OpenFlow switch 710 determines that the value of the CoS field in the packet is not the preset value X, the OpenFlow switch 710 discards the packet. Therefore, the mirror traffic received by the cloud management device 110 is the filtered traffic that has been filtered by the OpenFlow switch 710. The cloud management device 110 can perform the packet payload analysis on the filtered traffic.
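  • A minimal sketch of this CoS-based variant follows; the preset value chosen below is an assumption of the sketch:

    PRESET_COS = 5  # the "preset value X"; 5 is an illustrative choice

    def mark_mirror_packet(packet):
        """Done by the mirroring switch (e.g. 720C): set the CoS field of
        every mirrored packet to the preset value."""
        packet["cos"] = PRESET_COS
        return packet

    def filter_on_cos(packet):
        """Done by the OpenFlow switch 710: pass only packets carrying the
        preset CoS value; drop everything else."""
        return "forward" if packet.get("cos") == PRESET_COS else "drop"

    print(filter_on_cos(mark_mirror_packet({"src": "H1", "dst": "H6"})))  # forward
    print(filter_on_cos({"src": "H2", "dst": "H6"}))                      # drop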
  • FIG. 8 is an alternative schematic diagram illustrating a network system 800 in accordance with an exemplary embodiment of the disclosure. Unlike the network system of FIG. 1E, the packet payload analyzers 862 and 864 may be further included in FIG. 8, wherein the packet payload analyzers 862 and 864 are configured to perform the packet payload analysis on the mirror traffic to alleviate the work load of the cloud management device 810. In the exemplary embodiment, the packet payload analyzers 862 and 864 can be disposed at different locations in the network system 800. The cloud management device 810 can determine which packet payload analyzer performs the packet payload analysis according to a forwarding path of the mirror traffic and the locations of the packet payload analyzers 862 and 864. For example, in FIG. 8, it is assumed that the cloud management device 810 selects the switch 120C as the mirror switch. Since the distance between the mirror switch 120C and the packet payload analyzer 862 is less than that between the mirror switch 120C and the packet payload analyzer 864, the cloud management device 810 can instruct the mirror switch 120C to transmit the mirror traffic to the packet payload analyzer 862 for performing the packet payload analysis. The cloud management device 810 can request the packet payload analyzer 862 to analyze the target packet type group whose source host is H1 and whose destination host is H6 to determine the actual traffic used by different network applications.
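  • The analyzer choice reduces to a nearest-neighbor selection; a minimal sketch, with hop counts assumed purely for illustration:

    # Hop counts from the mirror switch 120C to each packet payload analyzer.
    analyzer_hops = {"862": 1, "864": 3}  # illustrative distances

    chosen = min(analyzer_hops, key=analyzer_hops.get)
    print(f"send mirror traffic to packet payload analyzer {chosen}")  # 862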
  • FIG. 9 is a schematic diagram illustrating a network system 900 according to an exemplary embodiment of the disclosure. Unlike the network system of FIG. 1D, a sample analyzer 970 may be further included in FIG. 9, wherein the sample analyzer 970 samples the packets received by the switch 120A and the switch 120E at a sampling frequency and analyzes the packet types of the packets passing through each physical link and the traffic load of each physical link to alleviate the work load of the cloud management device 910. In another exemplary embodiment, the sample analyzer 970 can dynamically change the sampling frequency of each switch. The cloud management device 910 can request the sample analyzer 970 to provide the results of analyzing the sampled packets to obtain the result of the load analysis for each link.
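  • The disclosure does not fix a policy for changing the sampling frequency; one plausible sketch, assumed here for illustration, scales the rate with the observed link load:

    def next_sampling_rate(link_load_mbps):
        """Illustrative policy only: sample a larger fraction of packets on
        heavily loaded links (rates are packets sampled per packet seen)."""
        if link_load_mbps > 900:
            return 1 / 100
        if link_load_mbps > 500:
            return 1 / 500
        return 1 / 1000

    print(next_sampling_rate(922))  # 0.01  -> one packet in 100
    print(next_sampling_rate(140))  # 0.001 -> one packet in 1000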
  • In addition, the CPU 208 could execute the program code 212 to perform all of the above-described actions and steps or others described herein.
  • As shown above, a switch with low overhead can be intelligently selected as a mirror switch through the method and apparatus for monitoring the traffic in a network. The mirror switch can dynamically initiate the port mirroring mechanism to reduce the cost of packet payload analysis and may enable the network operator to identify the traffic source or the application service that causes link congestion.
  • Various aspects of the disclosure have been described above. It should be apparent that the teachings herein may be embodied in a wide variety of forms and that any specific structure, function, or both being disclosed herein is merely representative. Based on the teachings herein one skilled in the art should appreciate that an aspect disclosed herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth herein. In addition, such an apparatus may be implemented or such a method may be practiced using other structure, functionality, or structure and functionality in addition to or other than one or more of the aspects set forth herein. As an example of some of the above concepts, in some aspects concurrent channels may be established based on pulse repetition frequencies. In some aspects concurrent channels may be established based on pulse position or offsets. In some aspects concurrent channels may be established based on time hopping sequences. In some aspects concurrent channels may be established based on pulse repetition frequencies, pulse positions or offsets, and time hopping sequences.
  • Those with skill in the art will understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
  • Those skilled in the art will further appreciate that the various illustrative logical blocks, modules, processors, means, circuits, and algorithm steps described in connection with the aspects disclosed herein may be implemented as electronic hardware (e.g., a digital implementation, an analog implementation, or a combination of the two, which may be designed using source coding or some other technique), various forms of program or design code incorporating instructions (which may be referred to herein, for convenience, as “software” or a “software module”), or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in ways that vary for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure.
  • In addition, the various illustrative logical blocks, modules, and circuits described in connection with the aspects disclosed herein may be implemented within or performed by an integrated circuit (“IC”), an access terminal, or an access point. The IC may comprise a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, discrete gate or transistor logic, discrete hardware components, electrical components, optical components, mechanical components, or any combination thereof designed to perform the functions described herein, and may execute codes or instructions that reside within the IC, outside of the IC, or both. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
  • It should be understood that any specific order or hierarchy of steps in any disclosed process is an example of a sample approach. It should be understood that the specific order or hierarchy of steps in the processes may be rearranged while remaining within the scope of the disclosure. The accompanying method claims present elements of the various steps in a sample order, and are not meant to be limited to the specific order or hierarchy presented.
  • The steps of a method or algorithm described in connection with the aspects disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module (e.g., including executable instructions and related data) and other data may reside in a data memory such as RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of computer-readable storage medium known in the art. A sample storage medium may be coupled to a machine such as, for example, a computer/processor (which may be referred to herein, for convenience, as a “processor”) such that the processor can read information (e.g., code) from and write information to the storage medium. A sample storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in user equipment. In the alternative, the processor and the storage medium may reside as discrete components in user equipment. Moreover, in some aspects any suitable computer-program product may comprise a computer-readable medium comprising codes relating to one or more of the aspects of the disclosure. In some aspects a computer program product may comprise packaging materials.
  • Use of ordinal terms such as “first”, “second”, “third”, etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed; such terms are used merely as labels to distinguish one claim element having a certain name from another element having the same name (but for use of the ordinal term).
  • While the disclosure has been described by way of example and in terms of exemplary embodiments, it should be understood that the disclosure is not limited thereto. Those who are skilled in this technology can still make various alterations and modifications without departing from the scope and spirit of this disclosure. Therefore, the scope of the disclosure shall be defined and protected by the following claims and their equivalents.

Claims (20)

What is claimed is:
1. A method for monitoring traffic in a network, applied to a communication device, wherein the network is formed by switches and hosts, and the method comprises:
collecting link layer discovery protocol (LLDP) information, virtual local area network (VLAN) information, host network interface card (NIC) information and host-tenant mapping information to obtain a physical network topology and a plurality of virtual network topologies;
detecting a plurality of physical link loads of the physical network topology;
obtaining a target path between two of the hosts or between the switches by analyzing the virtual network topologies;
selecting one of the switches on the target path to serve as a mirror switch according to the physical link load corresponding to the target path or a hop count; and
receiving mirror traffic transmitted from the mirror switch and performing packet payload analysis on the mirror traffic.
2. The method for monitoring traffic in a network as claimed in claim 1, wherein the step of obtaining the target path between two of the hosts or between the switches by analyzing the virtual network topologies further comprises:
selecting a highest physical link load from the physical link loads and obtaining a first physical link corresponding to the highest physical link load by analyzing the physical network topology; and
obtaining the target path by analyzing packet types of packets passing through the first physical link and the traffic load of each packet type.
3. The method for monitoring traffic in a network as claimed in claim 1, wherein the step of detecting the physical link loads of the physical network topology is implemented by performing a simple network management protocol (SNMP).
4. The method for monitoring traffic in a network as claimed in claim 2, wherein the step of obtaining the target path by analyzing the packet types of packets passing through the first physical link and the traffic load of each packet type comprises:
receiving header information transmitted by a first switch and a second switch, wherein the header information is generated according to headers of sampled packets encapsulated by the first switch and the second switch, and the first physical link is connected to the first switch and the second switch;
obtaining the packet types of packets passing through the first physical link and the traffic load of each packet type according to the header information; and
selecting a physical path corresponding to the packet type with the highest traffic load as the target path.
5. The method for monitoring traffic in a network as claimed in claim 1, wherein the step of selecting one of the switches on the target path as the mirror switch comprises:
obtaining a plurality of candidate switches forming the target path and link loads corresponding to the candidate switches; and
selecting the mirror switch from the candidate switches according to the link load corresponding to each candidate switch or the hop count corresponding to each candidate switch;
wherein the hop count is the number of links between each candidate switch and the communication device.
6. The method for monitoring traffic in a network as claimed in claim 1, wherein before receiving the mirror traffic transmitted from the mirror switch, the method further comprises:
setting a plurality of filtering rules on an OpenFlow switch;
filtering the mirror traffic according to the filtering rules; and
receiving the filtered mirror traffic filtered by the OpenFlow switch.
7. The method for monitoring traffic in a network as claimed in claim 1, wherein before the mirror switch transmits the mirror traffic, the mirror switch adds a virtual local area network (VLAN) tag field to headers of packets of the mirror traffic.
8. The method for monitoring traffic in a network as claimed in claim 1, wherein before the mirror switch transmits the mirror traffic, the mirror switch adds a class of service (CoS) field to headers of packets of the mirror traffic.
9. The method for monitoring traffic in a network as claimed in claim 1, wherein the switches are OpenFlow switches.
10. The method for monitoring traffic in a network as claimed in claim 1, wherein the network is an Ethernet network.
11. A communication device for monitoring traffic in a network, wherein the network is formed by switches and hosts and the communication device comprises:
a control circuit;
a processor installed in the control circuit; and
a memory installed in the control circuit and operatively coupled to the processor;
wherein the processor is configured to execute program codes stored in the memory to:
collect link layer discovery protocol (LLDP) information, virtual local area network (VLAN) information, host network interface card (NIC) information and host-tenant mapping information to obtain a physical network topology and a plurality of virtual network topologies;
detect a plurality of physical link loads of the physical network topology;
obtain a target path between two of the hosts or between the switches by analyzing the virtual network topologies;
select one of the switches on the target path to serve as a mirror switch according to the physical link load corresponding to the target path or a hop count; and
receive mirror traffic transmitted from the mirror switch and perform packet payload analysis on the mirror traffic.
12. The communication device for monitoring traffic in a network as claimed in claim 11, wherein using the processor to obtain the target path between two of the hosts or between the switches by analyzing the virtual network topologies further comprises:
selecting the highest physical link load from the physical link loads and obtaining a first physical link corresponding to the highest physical link load by analyzing the physical network topology; and
obtaining the target path by analyzing packet types of packets passing through the first physical link and the traffic load of each packet type.
13. The communication device for monitoring traffic in a network as claimed in claim 11, wherein the processor detects the physical link loads of the physical network topology by performing a simple network management protocol (SNMP).
14. The communication device for monitoring traffic in a network as claimed in claim 12, wherein using the processor to obtain the target path by analyzing the packet types of packets passing through the first physical link and the traffic load of each packet type further comprises:
receiving header information transmitted by a first switch and a second switch, wherein the header information is generated according to headers of sampled packets encapsulated by the first switch and the second switch, and the first physical link is connected to the first switch and the second switch;
obtaining the packet types of packets passing through the first physical link and the traffic load of each packet type according to the header information; and
selecting a physical path corresponding to the packet type with the highest traffic load as the target path.
15. The communication device for monitoring traffic in a network as claimed in claim 11, wherein using the processor to select one of the switches on the target path as the mirror switch further comprises:
obtaining a plurality of candidate switches forming the target path and link loads corresponding to the candidate switches; and
selecting the mirror switch from the candidate switches according to the link load corresponding to each candidate switch or the hop count corresponding to each candidate switch;
wherein the hop count is the number of links between each candidate switch and the communication device.
16. The communication device for monitoring traffic in a network as claimed in claim 11, wherein before receiving the mirror traffic transmitted from the mirror switch, the processor is further configured to execute program codes to perform:
setting a plurality of filtering rules on an OpenFlow switch;
filtering the mirror traffic according to the filtering rules; and
receiving the filtered mirror traffic filtered by the OpenFlow switch.
17. The communication device for monitoring traffic in a network as claimed in claim 11, wherein before the mirror switch transmits the mirror traffic, the mirror switch adds a virtual local area network (VLAN) tag field to headers of packets of the mirror traffic.
18. The communication device for monitoring traffic in a network as claimed in claim 11, wherein before the mirror switch transmits the mirror traffic, the mirror switch adds a class of service (CoS) field to headers of packets of the mirror traffic.
19. The communication device for monitoring traffic in a network as claimed in claim 11, wherein the switches are OpenFlow switches.
20. The communication device for monitoring traffic in a network as claimed in claim 11, wherein the network is an Ethernet network.
US16/211,177 2017-12-14 2018-12-05 Method and device for monitoring traffic in a network Active 2038-12-25 US10659393B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
TW106143934A 2017-12-14
TW106143934A TWI664838B (en) 2017-12-14 2017-12-14 Method and device for monitoring traffic in a network
TW106143934 2017-12-14

Publications (2)

Publication Number Publication Date
US20190190851A1 true US20190190851A1 (en) 2019-06-20
US10659393B2 US10659393B2 (en) 2020-05-19

Family ID=66815333

Family Applications (1)

Application Number Title Priority Date Filing Date
US16/211,177 Active 2038-12-25 US10659393B2 (en) 2017-12-14 2018-12-05 Method and device for monitoring traffic in a network

Country Status (4)

Country Link
US (1) US10659393B2 (en)
JP (1) JP6609024B2 (en)
CN (1) CN109962825B (en)
TW (1) TWI664838B (en)


Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10992515B1 (en) * 2019-06-10 2021-04-27 Cisco Technology, Inc. Link state tracking for virtual interfaces
CN111884881B (en) * 2020-07-28 2022-02-18 苏州浪潮智能科技有限公司 Monitoring method, device and system for Ethernet switching network and switch

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7860026B2 (en) * 2007-03-07 2010-12-28 Hewlett-Packard Development Company, L.P. Network switch deployment
US8615008B2 (en) * 2007-07-11 2013-12-24 Foundry Networks Llc Duplicating network traffic through transparent VLAN flooding
EP2582100A4 (en) * 2010-06-08 2016-10-12 Nec Corp Communication system, control apparatus, packet capture method and program
CN102158348A (en) * 2011-01-30 2011-08-17 北京星网锐捷网络技术有限公司 Network topology discovery method, device and network equipment
US8621057B2 (en) * 2011-03-07 2013-12-31 International Business Machines Corporation Establishing relationships among elements in a computing system
US20120290711A1 (en) 2011-05-12 2012-11-15 Fluke Corporation Method and apparatus to estimate application and network performance metrics and distribute those metrics across the appropriate applications, sites, servers, etc
US9008080B1 (en) * 2013-02-25 2015-04-14 Big Switch Networks, Inc. Systems and methods for controlling switches to monitor network traffic
US9584393B2 (en) * 2013-03-15 2017-02-28 Extreme Networks, Inc. Device and related method for dynamic traffic mirroring policy
US9172627B2 (en) * 2013-03-15 2015-10-27 Extreme Networks, Inc. Device and related method for dynamic traffic mirroring
US8626912B1 (en) * 2013-03-15 2014-01-07 Extrahop Networks, Inc. Automated passive discovery of applications
US9203711B2 (en) * 2013-09-24 2015-12-01 International Business Machines Corporation Port mirroring for sampling measurement of network flows
US20150256413A1 (en) * 2014-03-06 2015-09-10 Sideband Networks Inc. Network system with live topology mechanism and method of operation thereof
JP2015171128A (en) 2014-03-11 2015-09-28 富士通株式会社 Packet acquisition method, packet acquisition device, and packet acquisition program
US20170250869A1 (en) * 2014-09-12 2017-08-31 Andreas Richard Voellmy Managing network forwarding configurations using algorithmic policies
CN104283791B (en) * 2014-10-09 2018-04-06 新华三技术有限公司 Three etale topologies in a kind of SDN determine method and apparatus
US10469342B2 (en) 2014-10-10 2019-11-05 Nicira, Inc. Logical network traffic analysis
US9813323B2 (en) * 2015-02-10 2017-11-07 Big Switch Networks, Inc. Systems and methods for controlling switches to capture and monitor network traffic
CN106375384B (en) 2016-08-28 2019-06-18 北京瑞和云图科技有限公司 The management system and control method of image network flow in a kind of virtual network environment
US10419327B2 (en) * 2017-10-12 2019-09-17 Big Switch Networks, Inc. Systems and methods for controlling switches to record network packets using a traffic monitoring network
CN108111423B (en) * 2017-12-28 2020-11-17 迈普通信技术股份有限公司 Traffic transmission management method and device and network shunting equipment

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10735511B2 (en) * 2013-03-15 2020-08-04 Extreme Networks, Inc. Device and related method for dynamic traffic mirroring
US20190158591A1 (en) * 2013-03-15 2019-05-23 Extreme Networks, Inc. Device and related method for dynamic traffic mirroring
US10965555B2 (en) * 2018-01-23 2021-03-30 Arista Networks, Inc. Accelerated network traffic sampling using an accelerated line card
US10756989B2 (en) 2018-01-23 2020-08-25 Arista Networks, Inc. Accelerated network traffic sampling for a non-accelerated line card
US10938680B2 (en) 2018-01-23 2021-03-02 Arista Networks, Inc. Accelerated network traffic sampling using a network chip
US20190230009A1 (en) * 2018-01-23 2019-07-25 Arista Networks, Inc. Accelerated network traffic sampling using an accelerated line card
CN110784513A (en) * 2019-09-18 2020-02-11 深圳云盈网络科技有限公司 Data mirroring method based on data frame of link layer
CN114521322A (en) * 2019-10-10 2022-05-20 思科技术公司 Dynamic discovery of service nodes in a network
WO2021072130A1 (en) * 2019-10-10 2021-04-15 Cisco Technology, Inc. Dynamic discovery of service nodes in a network
US11799753B2 (en) 2019-10-10 2023-10-24 Cisco Technology, Inc. Dynamic discovery of service nodes in a network
US11088934B2 (en) 2019-10-10 2021-08-10 Cisco Technology, Inc. Dynamic discovery of service nodes in a network
CN111144504A (en) * 2019-12-30 2020-05-12 成都科来软件有限公司 Software image flow identification and classification method based on PCA algorithm
CN111343167A (en) * 2020-02-19 2020-06-26 北京天融信网络安全技术有限公司 Information processing method based on network and electronic equipment
CN111654452A (en) * 2020-05-08 2020-09-11 杭州迪普科技股份有限公司 Message processing method and device
US11431656B2 (en) * 2020-05-19 2022-08-30 Fujitsu Limited Switch identification method and non-transitory computer-readable recording medium
CN114531380A (en) * 2020-10-30 2022-05-24 中国移动通信有限公司研究院 Mirror image quality checking method and device and electronic equipment
CN112910686A (en) * 2021-01-14 2021-06-04 上海牙木通讯技术有限公司 Flow analysis system, method of operating flow analysis system, and computer-readable storage medium
CN112995231A (en) * 2021-05-19 2021-06-18 金锐同创(北京)科技股份有限公司 Network port detection method and device
CN113949669A (en) * 2021-10-15 2022-01-18 湖南八零二三科技有限公司 Vehicle-mounted network switching device and system capable of automatically configuring and analyzing according to flow
CN114422297A (en) * 2022-01-05 2022-04-29 北京天一恩华科技股份有限公司 Multi-scene virtual network traffic monitoring method, system, terminal and medium
CN114221859A (en) * 2022-01-06 2022-03-22 烽火通信科技股份有限公司 Method and system for generating tenant network physical link connectivity topology
CN115277504A (en) * 2022-07-11 2022-11-01 京东科技信息技术有限公司 Network traffic monitoring method, device and system
CN115297033A (en) * 2022-07-20 2022-11-04 上海量讯物联技术有限公司 Internet of things terminal flow auditing method and system
CN115941534A (en) * 2022-12-08 2023-04-07 贵州电网有限责任公司 Network storm source tracing method for local area network of power system

Also Published As

Publication number Publication date
CN109962825A (en) 2019-07-02
TWI664838B (en) 2019-07-01
JP2019106705A (en) 2019-06-27
US10659393B2 (en) 2020-05-19
TW201929490A (en) 2019-07-16
CN109962825B (en) 2021-01-01
JP6609024B2 (en) 2019-11-20

Similar Documents

Publication Publication Date Title
US10659393B2 (en) Method and device for monitoring traffic in a network
JP5958570B2 (en) Network system, controller, switch, and traffic monitoring method
US8218449B2 (en) System and method for remote monitoring in a wireless network
US7573859B2 (en) System and method for remote monitoring in a wireless network
US10243862B2 (en) Systems and methods for sampling packets in a network flow
US7525922B2 (en) Duplex mismatch testing
EP2671352B1 (en) System and method for aggregating and estimating the bandwidth of multiple network interfaces
US9391895B2 (en) Network system and switching method thereof
US20170111813A1 (en) Network monitor
EP2557731B1 (en) Method and system for independently implementing fault location by intermediate node
US20120281528A1 (en) Autonomic network management system
US9077618B2 (en) Service level mirroring in ethernet network
KR20140072343A (en) Method for handling fault in softwate defined networking networks
EP3576356A1 (en) Devices for analyzing and mitigating dropped packets
US20140022886A1 (en) Proxy maintenance endpoint at provider edge switch
Shirali-Shahreza et al. Empowering software defined network controller with packet-level information
Gökarslan et al. Towards a urllc-aware programmable data path with p4 for industrial 5g networks
US20090285103A1 (en) Apparatus for controlling tunneling loop detection
US6658012B1 (en) Statistics for VLAN bridging devices
Janakaraj et al. Towards in-band telemetry for self driving wireless networks
Theoleyre et al. Operations, Administration and Maintenance (OAM) features for RAW
EP3788748B1 (en) First network node, second network node, and methods performed thereby for tracing a packet in a pipeline
Ghazisaeedi et al. Mobile core traffic balancing by openflow switching system
Mustafiz et al. Analysis of QoS in software defined wireless network with spanning tree protocol
Brockelsby et al. Augmenting Campus Wireless Architectures with SDN

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: INDUSTRIAL TECHNOLOGY RESEARCH INSTITUTE, TAIWAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HSU, MING-HUNG;CHIUEH, TZI-CKER;LEE, YU-WEI;AND OTHERS;SIGNING DATES FROM 20181107 TO 20181117;REEL/FRAME:048680/0980

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4