CN117278459A - Method and device for determining unloading speed of flow table, storage medium and electronic equipment - Google Patents

Method and device for determining unloading speed of flow table, storage medium and electronic equipment

Info

Publication number
CN117278459A
CN117278459A CN202311294467.7A CN202311294467A CN117278459A CN 117278459 A CN117278459 A CN 117278459A CN 202311294467 A CN202311294467 A CN 202311294467A CN 117278459 A CN117278459 A CN 117278459A
Authority
CN
China
Prior art keywords
flow
flow table
network card
intelligent network
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311294467.7A
Other languages
Chinese (zh)
Inventor
薛博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Metabrain Intelligent Technology Co Ltd
Original Assignee
Suzhou Metabrain Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Metabrain Intelligent Technology Co Ltd filed Critical Suzhou Metabrain Intelligent Technology Co Ltd
Priority to CN202311294467.7A priority Critical patent/CN117278459A/en
Publication of CN117278459A publication Critical patent/CN117278459A/en
Pending legal-status Critical Current

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 - Routing or path finding of packets in data switching networks
    • H04L 45/38 - Flow based routing
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 - Routing or path finding of packets in data switching networks
    • H04L 45/32 - Flooding
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 - Routing or path finding of packets in data switching networks
    • H04L 45/54 - Organization of routing tables
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 - Network architectures or network communication protocols for network security
    • H04L 63/14 - Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L 63/1408 - Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00 - Network architectures or network communication protocols for network security
    • H04L 63/14 - Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L 63/1441 - Countermeasures against malicious traffic
    • H04L 63/1458 - Denial of Service

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The embodiment of the application provides a method and a device for determining a flow table unloading speed, a storage medium and electronic equipment, wherein the method comprises the following steps: determining the number of flow tables to be offloaded that are sent to an intelligent network card through a target inter-process communication, wherein the target inter-process communication is established in advance between the OVS-vswitchd and the OVS-datapath of the intelligent network card and is used for sending the flow tables to be offloaded to the intelligent network card; and determining the flow table unloading speed of the intelligent network card based on the number of flow tables. By the method and the device, the problem in the related art that only specific flow table information can be viewed and the change trend of the total number of flow tables cannot be monitored is solved, and the effect of monitoring the change trend of the total number of flow tables is achieved.

Description

Method and device for determining unloading speed of flow table, storage medium and electronic equipment
Technical Field
The embodiment of the application relates to the field of computers, in particular to a method and a device for determining a flow table unloading speed, a storage medium and electronic equipment.
Background
In the related art, when a large amount of new traffic suddenly flows into the intelligent network card within a certain period of time, for example when a DDoS attack is encountered, most of the traffic goes through the Slow Path, and at the same time a large number of new flow tables are issued to the intelligent network card by means of netlink. The currently known monitoring method is to display the flow tables inside the intelligent network card directly with the ovs-dpctl dump-flows type=offloaded command. However, only specific flow table information can be viewed in this way.
Accordingly, the related art has the problem that only the flow table information can be viewed, while the change trend of the total number of flow tables cannot be monitored.
In view of the above problems in the related art, no effective solution has been proposed at present.
Disclosure of Invention
The embodiment of the application provides a method and a device for determining the unloading speed of a flow table, a storage medium and electronic equipment, so as to at least solve the problem in the related art that only the flow table information can be viewed and the change trend of the total number of flow tables cannot be monitored.
According to an embodiment of the present application, there is provided a method for determining a flow table unloading speed, including: determining the number of flow tables to be offloaded that are sent to an intelligent network card through a target inter-process communication, wherein the target inter-process communication is established in advance between the OVS-vswitchd and the OVS-datapath of the intelligent network card, and the target inter-process communication is used for sending the flow tables to be offloaded to the intelligent network card; and determining the flow table unloading speed of the intelligent network card based on the number of flow tables.
According to another embodiment of the present application, there is provided a device for determining an unloading speed of a flow table, including: a first determining module, configured to determine the number of flow tables to be offloaded that are sent to the intelligent network card through a target inter-process communication, wherein the target inter-process communication is established in advance between the OVS-vswitchd and the OVS-datapath of the intelligent network card, and the target inter-process communication is used for sending the flow tables to be offloaded to the intelligent network card; and a second determining module, configured to determine the flow table unloading speed of the intelligent network card based on the number of flow tables.
According to a further embodiment of the present application, there is also provided a computer readable storage medium having stored therein a computer program, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
According to a further embodiment of the present application, there is also provided an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
According to the method and the device, the number of flow tables to be offloaded is determined, wherein the flow tables to be offloaded are sent to the intelligent network card through the target inter-process communication, the target inter-process communication is established in advance between the OVS-vswitchd and the OVS-datapath of the intelligent network card, and the target inter-process communication is used for sending the flow tables to be offloaded to the intelligent network card; and the flow table unloading speed of the intelligent network card is determined according to the number of flow tables. Since the flow table unloading speed can be determined by detecting the number of flow tables to be offloaded sent through the target inter-process communication, the change trend of the total number of flow tables can be monitored. This solves the problem in the related art that only the flow table information can be viewed and the change trend of the total number of flow tables cannot be monitored, and achieves the effect of monitoring the change trend of the total number of flow tables.
Drawings
FIG. 1 is a schematic diagram of a cloud computing host of a generic network card;
FIG. 2 is a schematic diagram of a cloud computing host with an intelligent network card;
FIG. 3 is a schematic diagram of the original traffic flowing through an intelligent network card;
fig. 4 is a hardware block diagram of a mobile terminal for the method for determining a flow table unloading speed according to an embodiment of the present application;
FIG. 5 is a flow chart of a method of determining a flow table offload speed in accordance with an embodiment of the present application;
FIG. 6 is a flow chart (I) of a method of determining the unloading speed of a flow table according to an embodiment of the invention;
FIG. 7 is a flow chart (II) of a method of determining the unloading speed of a flow table according to an embodiment of the invention;
fig. 8 is a block diagram of the configuration of the flow table unloading speed determination device according to the embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in detail below with reference to the accompanying drawings in conjunction with the embodiments.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
With the popularity of virtualized networks and their application in cloud computing, more and more enterprises and organizations are beginning to migrate their services to virtualized or hyper-converged systems. Some networks of virtualized or hyper-converged systems are virtual networks implemented based on OVS and OpenFlow. OpenFlow is a network communication protocol belonging to the data link layer; it can control the forwarding plane of a network switch or router and thereby change the network path taken by network packets.
Virtual networks are quite different from traditional physical networks. Traditional physical networks are typically composed of specialized network devices such as switches, routers and gateways. These are dedicated devices because their hardware is specially designed for the functions they perform. Taking the switch as an example, its purpose is to forward packets, so the switch chip can implement hardware forwarding to improve forwarding performance. In contrast, in order to provide the same functions as the physical devices while keeping the network scalable, the virtualized network functions in cloud computing are implemented by general-purpose chips on the host, i.e. by software forwarding, whereas the performance of hardware forwarding is much better than that of software forwarding.
Because the virtualized network functions in cloud computing are implemented by general-purpose chips on the host, part of the host's computing capacity is consumed by the virtualized network during operation, and only the remainder counts as computing power actually available. When the traffic in the virtual network is relatively large, the computing power required for traffic forwarding increases, and in the extreme case all of the computing power is used to implement the forwarding function of the virtualized network, leaving no computing power that can actually be used. The most important resource in cloud computing is computing power, so this problem must be faced and solved. An intelligent network card can typically be used for this purpose. The intelligent network card carries a forwarding chip; by using an intelligent network card, the virtual network forwarding function of the cloud computing host can be handed over to the intelligent network card, the chips on the cloud computing host can be dedicated to computation, and network performance can be improved at the same time. How can the virtual network forwarding function of traditional cloud computing be handed over to the intelligent network card? For virtual networks implemented based on OVS and OpenFlow, one approach is hardware offloading.
Fig. 1 is a schematic diagram of a cloud computing host with a common network card. As shown in fig. 1, OVS mainly comprises the ovsdb-server and ovs-vswitchd processes located in user space, and the OVS datapath located in kernel space. In the whole OVS architecture, the controller converts various network topologies and network functions into OVS data and OpenFlow rules, which are issued to the ovsdb-server and ovs-vswitchd processes respectively. The ovsdb-server process is responsible for the database functions, holding the data of the OVS control plane and data plane. It provides data to the ovs-vswitchd process; ovs-vswitchd matches network packets against the data provided by ovsdb-server and against the OpenFlow rules it holds. After matching the corresponding rule, ovs-vswitchd writes one (or more) forwarding rules into the OVS datapath through netlink (these are embodied as flow tables, which differ from OpenFlow rules and are closer to forwarding rules such as the MAC tables in real networks). Thus, when the first packet of a network data flow arrives at the OVS datapath and there is no corresponding rule in the OVS datapath, the above matching procedure is followed; when subsequent packets of the same network data flow arrive at the OVS datapath, the datapath can complete forwarding directly because the forwarding rule already exists, without querying ovs-vswitchd. The path that looks up OpenFlow rules through ovs-vswitchd for forwarding is therefore called the slow path, and the path of direct forwarding through the OVS datapath is called the fast path; the slow path inevitably consumes a certain amount of computing power.
Fig. 2 is a schematic diagram of a cloud computing host with an intelligent network card. As shown in fig. 2, the principle of hardware offloading is that the fast path function of OVS is no longer implemented by the cloud computing host but is implemented in the intelligent network card. Of course, the intelligent network card must support the OVS hardware offload function, that is, the function of the OVS datapath can be implemented on the intelligent network card (precisely, the fast path of the OVS datapath, because the slow path function is actually implemented on the host). In practice, both the cloud computing host and the intelligent network card support OVS; however, the cloud computing host is only responsible for the ovsdb-server and ovs-vswitchd parts, and the intelligent network card is only responsible for the ovs-datapath part, and the cooperation of the two constitutes a complete OVS function. Only when the two distribute the work reasonably according to their respective strengths is the purpose of reasonably utilizing resources achieved. Some OVS commands are also supported in the intelligent network card to view the flow tables that have already been offloaded into the intelligent network card; for example, ovs-dpctl dump-flows type=offloaded lists the flow tables already offloaded onto the network card. In this way, during packet forwarding, the traffic corresponding to an existing flow table is forwarded directly on the intelligent network card, which greatly reduces the computing power consumed by the cloud host CPU for forwarding virtual network traffic. However, after the OVS fast path is offloaded, the first packet of a data flow is still forwarded by software, that is, it is still processed in the cloud computing host. A fast path forwarding flow table is generated in this forwarding process and configured into the hardware network card through the tc-flower interface of TC using the netlink protocol.
With continued reference to fig. 2, there are two main types of traffic in fig. 2: 1. traffic that goes through the Slow Path; 2. traffic that goes through the Fast Path. The traffic going through the Slow Path is usually first-packet traffic, i.e. traffic whose five-tuple (source IP, destination IP, source port, destination port and protocol) passes through the network card for the first time. At this time the intelligent network card has no corresponding flow table, so the traffic has to be sent to the ovs-vswitchd module of the host to generate a flow table, which is then sent to the datapath of the intelligent network card by way of hardware offloading. The hardware offloading is realized by means of netlink. The traffic going through the Fast Path is typically non-first-packet traffic, i.e. traffic whose five-tuple has passed through before. Since traffic has passed before, a corresponding flow table already exists in the ovs-datapath of the intelligent network card, so when the traffic arrives it can be matched against the flow table on the intelligent network card and forwarded directly, completing the offload. When a large amount of new traffic suddenly floods into the intelligent network card, for example when a DDoS attack is encountered, most of the traffic must go through the Slow Path, and at the same time a large number of new flow tables are issued to the intelligent network card by means of netlink.
The currently known monitoring method is that the flow tables can be displayed directly inside the intelligent network card with the ovs-dpctl dump-flows type=offloaded command, but this approach has a drawback: only specific flow table information can be viewed, and the change trend of the total number of flow tables cannot be reflected. A schematic diagram of the original traffic flowing through the intelligent network card can be seen in fig. 3. As shown in fig. 3, the flow includes: after a data packet is received, the data packet is matched in the ovs-datapath of the intelligent network card against the flow rules offloaded onto the intelligent network card, and in the case of a successful match it is forwarded according to the successfully matched flow rule. In the case of a match failure, the packet is reported to the host, and a new flow rule, i.e. a target flow rule, is generated by ovs-vswitchd. ovs-vswitchd then performs hardware offloading, that is, issues the flow table to the ovs-datapath of the intelligent network card through netlink. In order to solve the above problem, the following embodiments are proposed.
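For illustration only, the flow of fig. 3 can be summarized in the following minimal C sketch. The names used here (handle_packet, flow_table_lookup, generate_flow_rule, netlink_offload_flow, forward_with_rule and the placeholder structures) are hypothetical and do not correspond to OVS APIs; the stub bodies merely stand in for the behaviour described above.

```c
#include <stddef.h>
#include <stdio.h>

struct packet { int id; };        /* placeholder for an incoming packet */
struct flow_rule { int id; };     /* placeholder for an offloaded flow table entry */

/* Stub: look up the packet in the flow tables already offloaded to the NIC's ovs-datapath. */
static struct flow_rule *flow_table_lookup(const struct packet *pkt) { (void)pkt; return NULL; }

/* Stub: ovs-vswitchd on the host generates a new (target) flow rule for this packet. */
static struct flow_rule *generate_flow_rule(const struct packet *pkt) {
    static struct flow_rule rule; rule.id = pkt->id; return &rule;
}

/* Stub: hardware offload, i.e. issue the flow table to the NIC's ovs-datapath via netlink. */
static void netlink_offload_flow(const struct flow_rule *rule) { printf("offload rule %d\n", rule->id); }

/* Stub: forward the packet according to the matched or newly created rule. */
static void forward_with_rule(const struct packet *pkt, const struct flow_rule *rule) {
    printf("forward packet %d with rule %d\n", pkt->id, rule->id);
}

/* One packet's journey through the flow of fig. 3. */
void handle_packet(const struct packet *pkt)
{
    struct flow_rule *rule = flow_table_lookup(pkt);
    if (rule != NULL) {               /* match succeeded: Fast Path on the NIC */
        forward_with_rule(pkt, rule);
        return;
    }
    rule = generate_flow_rule(pkt);   /* match failed: report to host, build target flow rule */
    netlink_offload_flow(rule);       /* hardware offload through netlink */
    forward_with_rule(pkt, rule);
}
```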
The method embodiments provided in the embodiments of the present application may be executed in a mobile terminal, a computer terminal or a similar computing device. Taking a mobile terminal as an example, fig. 4 is a block diagram of the hardware structure of a mobile terminal for the method for determining the unloading speed of a flow table according to an embodiment of the present application. As shown in fig. 4, the mobile terminal may include one or more processors 102 (only one is shown in fig. 4; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device (FPGA)) and a memory 104 for storing data, and the mobile terminal may further include a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those skilled in the art that the structure shown in fig. 4 is merely illustrative and does not limit the structure of the above mobile terminal. For example, the mobile terminal may also include more or fewer components than shown in fig. 4, or have a different configuration than shown in fig. 4.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to a method for determining a flow table unloading speed in the embodiment of the present application, and the processor 102 executes the computer program stored in the memory 104, thereby performing various functional applications and data processing, that is, implementing the method described above. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the mobile terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
In this embodiment, a method for determining a flow table unloading speed is provided, and fig. 5 is a flowchart of a method for determining a flow table unloading speed according to an embodiment of the present application, as shown in fig. 5, where the flowchart includes the following steps:
step S502, determining the number of flow tables to be offloaded that are sent to an intelligent network card through a target inter-process communication, wherein the target inter-process communication is established in advance between the OVS-vswitchd and the OVS-datapath of the intelligent network card, and the target inter-process communication is used for sending the flow tables to be offloaded to the intelligent network card;
and step S504, determining the flow table unloading speed of the intelligent network card based on the number of flow tables.
In the above embodiment, the target inter-process communication may be netlink. When a large number of new data flows come in and a large amount of traffic all goes through the Slow Path, a large number of flow tables are issued to the intelligent network card by the cloud computing host through netlink; that is, when new traffic increases, the traffic passing through the netlink also increases, and when new traffic decreases, the traffic passing through the netlink also decreases. This means that the purpose of monitoring the hardware offloading of the intelligent network card can be achieved by monitoring the netlink traffic used for hardware offloading. The target inter-process communication may be the netlink communication between the OVS-vswitchd of the cloud computing host and the OVS-datapath of the intelligent network card. A function for monitoring the offloading of the intelligent network card may be added to this netlink communication to monitor the netlink traffic of the hardware offload, and the flow table unloading speed of the intelligent network card is determined according to the detected number of flow tables.
The execution subject of the above steps may be, but is not limited to, a cloud computing host.
According to the method and the device, the number of flow tables to be offloaded is determined, wherein the flow tables to be offloaded are sent to the intelligent network card through the target inter-process communication, the target inter-process communication is established in advance between the OVS-vswitchd and the OVS-datapath of the intelligent network card, and the target inter-process communication is used for sending the flow tables to be offloaded to the intelligent network card; and the flow table unloading speed of the intelligent network card is determined according to the number of flow tables. Since the flow table unloading speed can be determined by detecting the number of flow tables to be offloaded sent through the target inter-process communication, the change trend of the total number of flow tables can be monitored, which solves the problem in the related art that only the flow table information can be viewed and the change trend of the total number of flow tables cannot be monitored, and achieves the effect of monitoring the change trend of the total number of flow tables.
In one exemplary embodiment, determining the number of flow tables to be offloaded that are sent to the intelligent network card through the target inter-process communication includes: controlling a target process to acquire flow table information in a target file according to a predetermined period, wherein the target file is a preset file and is used for storing flow table information of the flow tables to be offloaded to the intelligent network card; and determining the number of pieces of flow table information as the number of flow tables. In this embodiment, a target file may be preset to store the flow table information of the flow tables to be offloaded, and the number of flow tables is determined according to the flow table information in the target file. When acquiring the flow table information, the target process may be controlled to acquire the flow table information at a predetermined period, and the flow table unloading speed is then determined according to the predetermined period and the acquired flow table information. The target process may be a monitor process. The predetermined period may be, for example, 10 s, 20 s, 1 min or 10 min; the present invention is not limited thereto.
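A minimal user-space sketch of such a monitor process is given below. It assumes, purely for illustration, that the target file holds a single running count of offloaded flow tables at the path /var/run/ovs_offload_count and that the predetermined period is 10 s; neither the path nor the period is prescribed by this embodiment.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define COUNT_FILE "/var/run/ovs_offload_count"   /* assumed path of the target file */
#define PERIOD_SECONDS 10                         /* assumed predetermined period */

/* Read the running count of offloaded flow tables from the target file. */
static long read_count(void)
{
    FILE *f = fopen(COUNT_FILE, "r");
    long count = 0;
    if (f != NULL) {
        if (fscanf(f, "%ld", &count) != 1)
            count = 0;
        fclose(f);
    }
    return count;
}

int main(void)
{
    long prev = read_count();
    for (;;) {
        sleep(PERIOD_SECONDS);
        long cur = read_count();
        /* Flow table unloading speed = flow tables seen in the period / period length. */
        double speed = (double)(cur - prev) / PERIOD_SECONDS;
        printf("offloaded %ld flow tables in %d s -> %.1f flow tables/s\n",
               cur - prev, PERIOD_SECONDS, speed);
        prev = cur;
    }
    return 0;
}
```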
In the above embodiment, the flow table information to be offloaded is stored in a preset target file, and the target process is controlled to acquire the flow table information in the target file according to a predetermined period, so that the number of flow tables to be offloaded within the predetermined period can be accurately determined according to the flow table information, achieving the effect of accurately monitoring the number of flow tables to be offloaded.
In an exemplary embodiment, before the target process is controlled to acquire the flow table information in the target file according to the predetermined period, the method further includes: in the case that a target flow table to be offloaded is sent to the intelligent network card, determining target flow table information of the target flow table; and writing the target flow table information of the target flow table into the target file. In this embodiment, the target flow table information of the target flow table to be offloaded may be written into the target file in advance. The target file may be preset, and when the target flow table to be offloaded is detected, the target flow table information of the target flow table is written into the target file, so that the target process can conveniently acquire the flow table information in the target file and the unloading speed of the flow table can then be determined.
In the above embodiment, a file for storing the data may be created externally, the traffic data is written to this specific file, and a simple reading program is then set up to read the specific file at regular intervals (every N seconds), so that a traffic data point is obtained every N seconds. Writing the traffic data can be implemented with kernel_write provided by the kernel; similarly, reading the traffic data can easily be implemented with kernel_read.
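A minimal kernel-space sketch of the writing side is shown below. It assumes that the code runs in a context where filp_open() and kernel_write() may be used, and the file path and record format are assumptions chosen to match the monitor sketch above rather than requirements of this embodiment.

```c
#include <linux/err.h>
#include <linux/fs.h>
#include <linux/kernel.h>

#define OFFLOAD_COUNT_FILE "/var/run/ovs_offload_count"   /* assumed path of the target file */

static unsigned long total_offloaded;

/* Called whenever one (or more) flow tables to be offloaded are issued through netlink;
 * rewrites the running total so that the user-space monitor can read it later
 * (kernel_read() would be used symmetrically by a kernel-space reader). */
static void record_offloaded_flow_tables(unsigned long count)
{
    struct file *filp;
    char buf[32];
    int len;
    loff_t pos = 0;

    total_offloaded += count;

    filp = filp_open(OFFLOAD_COUNT_FILE, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (IS_ERR(filp))
        return;

    len = scnprintf(buf, sizeof(buf), "%lu\n", total_offloaded);
    kernel_write(filp, buf, len, &pos);
    filp_close(filp, NULL);
}
```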
In the above embodiment, in the netlink communication between the OVS-vswitchd of the cloud computing host and the OVS-datapath of the intelligent network card, whenever one (or more) flow tables to be offloaded are issued through netlink on the cloud computing host, data is written into the file to record the number of flow tables to be offloaded. A monitor process is started to read the data from the file at regular intervals, and the average number of offloaded flow tables is calculated from the time difference, thereby reflecting the netlink traffic associated with flow table offloading. By writing the target flow table information of the target flow table to be offloaded into the target file, the target process can conveniently acquire the flow table information in the target file, which shortens the time for determining the unloading speed of the flow table and improves the efficiency of determining the unloading speed of the flow table.
In an exemplary embodiment, the flow table count may also be sent to a specific monitoring program for traffic-monitoring counting. For example, a counter is set in the specific monitoring program and is used for counting the number of flow tables to be offloaded to the intelligent network card that are sent within a target duration through the target inter-process communication, and the ratio of this number to the target duration is determined as the flow table unloading speed.
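The counter-based variant can be sketched as follows; the 10-second target duration and the function names are assumptions introduced only for illustration.

```c
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

#define TARGET_DURATION_SECONDS 10   /* assumed target duration for this sketch */

/* Counter inside the monitoring program; incremented once for every flow table to be
 * offloaded that is sent to the intelligent network card through the target IPC. */
static atomic_ulong offloaded_in_window;

void count_offloaded_flow_table(void)
{
    atomic_fetch_add(&offloaded_in_window, 1);
}

/* Runs inside the monitoring program: every target duration, the ratio of the counted
 * number of flow tables to the duration is taken as the flow table unloading speed. */
void monitor_offload_speed(void)
{
    for (;;) {
        sleep(TARGET_DURATION_SECONDS);
        unsigned long n = atomic_exchange(&offloaded_in_window, 0);
        printf("flow table unloading speed: %.1f tables/s\n",
               (double)n / TARGET_DURATION_SECONDS);
    }
}
```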
In one exemplary embodiment, determining the flow table offload speed of the intelligent network card based on the number of flow tables comprises: determining a predetermined period, wherein the predetermined period is the time period over which the number of flow tables is determined; and determining the ratio of the number of flow tables to the time period as the flow table unloading speed. In the present embodiment, the number of flow tables generated in the predetermined period may be determined, and the ratio of the number of flow tables to the predetermined period is determined as the flow table unloading speed.
In the above embodiment, the target process, such as a monitor process, may acquire the flow table information from the target file according to a predetermined period and then determine the number of flow tables according to the flow table information. When determining the flow table unloading speed, it can be determined according to the time period over which the target process acquires the flow table information from the target file and the number of flow tables acquired in that time period; the ratio of the number of flow tables to the time period, e.g. the predetermined period, gives the flow table unloading speed. The flow table unloading speed can reflect the traffic trend and the speed at which the intelligent network card offloads flow tables, achieving the effect of intuitively reflecting the traffic trend and the speed of the intelligent network card's flow table offloading.
In the above embodiment, when the data flows are relatively stable, there are not many new flows, so that large-scale offloading of flow tables to the intelligent network card is not required, and only a small amount of traffic, or no traffic, is detected when monitoring the netlink traffic. When a large number of new data flows suddenly flood in, a large number of new flows are generated, so that large-scale offloading of flow tables to the intelligent network card is needed, and a large amount of traffic is detected when monitoring the netlink traffic. For example, when the monitored average number of offloaded flow tables is 20 per second, the speed at which flow tables are offloaded to the intelligent network card is 20 per second, i.e. the rate of newly created flows is 20 per second. Through this correspondence, the offloading situation of the intelligent network card flow tables can be known exactly.
In one exemplary embodiment, before determining the number of flow tables to be offloaded that are sent to the intelligent network card through the target inter-process communication, the method further comprises: establishing a netlink communication between the OVS-vswitchd and the OVS-datapath of the intelligent network card in the case that data to be offloaded exists, so as to obtain the target inter-process communication; encapsulating the data to be offloaded into a data packet to be offloaded to obtain the flow table to be offloaded; and sending the flow table to be offloaded to the intelligent network card through the target inter-process communication. In this embodiment, the target inter-process communication may be netlink, and may be the netlink communication between the OVS-vswitchd (the virtual switch daemon) of the cloud computing host and the OVS-datapath (a kernel module) of the intelligent network card. In this netlink communication, whenever one (or more) flow tables to be offloaded are sent through netlink on the cloud computing host, data is written into the target file to record the number of flow tables to be offloaded. Before determining the number of flow tables to be offloaded that are sent to the intelligent network card through the target inter-process communication, a netlink ready to send data packets may be established; it is then determined whether a flow table is to be issued, that is, whether a data packet needs to be sent. If the determination result is yes, a data packet is prepared and encapsulated, and the data packet is transmitted through the netlink. After the data packet is sent through the netlink, the data can be written into a count file (a text file), i.e. the target file, and the monitoring program reads this file, so that the offloading situation and the offloading speed of the flow table can be determined. By counting the netlink traffic used for the hardware offloading of the intelligent network card, the hardware offloading of the intelligent network card is monitored; the traffic trend and the speed of the intelligent network card's flow table offloading can be intuitively reflected without adding other hardware facilities, saving the cost of determining the traffic trend and speed of the intelligent network card's flow table offloading.
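The sequence of establishing the netlink, encapsulating the data packet, sending it and recording the count can be sketched in user space as follows. The use of NETLINK_GENERIC and the helper names are assumptions for illustration only; OVS actually uses generic netlink with its own families and attributes, which are omitted here.

```c
#include <linux/netlink.h>
#include <stdatomic.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <unistd.h>

static atomic_ulong flow_tables_sent;   /* later written to the count file / target file */

/* Establish the netlink over which the flow tables to be offloaded will be sent. */
int open_offload_netlink(void)
{
    int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_GENERIC);
    if (fd < 0)
        return -1;
    struct sockaddr_nl local = { .nl_family = AF_NETLINK };
    if (bind(fd, (struct sockaddr *)&local, sizeof(local)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

/* Transmit one encapsulated data packet (a flow table to be offloaded) and record it,
 * so that the monitoring program can later derive the unloading speed. The nlmsghdr is
 * assumed to have been sized and encapsulated by the caller. */
ssize_t send_flow_table(int fd, struct nlmsghdr *nlh)
{
    struct sockaddr_nl kernel = { .nl_family = AF_NETLINK };
    struct iovec iov = { .iov_base = nlh, .iov_len = nlh->nlmsg_len };
    struct msghdr msg = {
        .msg_name = &kernel, .msg_namelen = sizeof(kernel),
        .msg_iov = &iov, .msg_iovlen = 1,
    };

    ssize_t n = sendmsg(fd, &msg, 0);
    if (n >= 0)
        atomic_fetch_add(&flow_tables_sent, 1);   /* the count is what the monitor consumes */
    return n;
}
```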
In one exemplary embodiment, after determining the flow table offload speed of the intelligent network card based on the number of flow tables, the method further comprises: reducing the traffic sent to the intelligent network card through the target inter-process communication in the case that the flow table unloading speed is greater than a predetermined threshold. In this embodiment, more functions, such as determining a flooding attack, may be implemented by paying attention to the data characteristics of the intelligent network card's flow table offloading, i.e. the flow table unloading speed. The data characteristics of the monitoring data obtained by this monitoring method are very obvious when a flooding attack occurs, so whether a flooding attack has occurred can be judged by setting a threshold value. The offloading of flow tables to the intelligent network card may be restricted by restricting the netlink traffic, e.g. to defend against flooding attacks. When a flooding attack occurs, the number of flow tables can be limited, and the purpose of limiting newly created flow tables can be achieved by limiting the rate of the netlink traffic.
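As a non-limiting sketch, the threshold check and the resulting restriction of the netlink traffic could look as follows; the threshold value and the throttling helper are assumptions introduced for illustration.

```c
#include <stdbool.h>
#include <stdio.h>

#define OFFLOAD_SPEED_THRESHOLD 1000.0   /* flow tables per second; assumed value for this sketch */

/* Placeholder for the mechanism that reduces the traffic sent to the intelligent network
 * card through the target inter-process communication (e.g. rate-limiting the netlink). */
static void limit_offload_netlink_rate(double max_tables_per_second)
{
    printf("limiting netlink flow table rate to %.1f tables/s\n", max_tables_per_second);
}

/* Called once per monitoring period with the measured flow table unloading speed. */
bool check_for_flooding(double offload_speed)
{
    if (offload_speed > OFFLOAD_SPEED_THRESHOLD) {
        /* A sudden surge of new flows (e.g. a DDoS/flooding attack) is suspected. */
        limit_offload_netlink_rate(OFFLOAD_SPEED_THRESHOLD / 2);
        return true;
    }
    return false;
}
```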
In the above embodiment, whether the system is under network attack can be accurately determined through the flow table unloading speed, and when the system is under network attack, for example when the flow table unloading speed is greater than the predetermined threshold, the traffic sent to the intelligent network card through the target inter-process communication can be reduced, thereby improving the security of the system.
In one exemplary embodiment, before determining the number of flow tables to be offloaded that are sent to the intelligent network card through the target inter-process communication, the method further comprises: matching the received flow table to be offloaded with the stored flow rules in the intelligent network card to obtain a matching result; generating a target flow rule corresponding to the flow table to be offloaded in the case that the matching result indicates that the matching has failed; and forwarding the flow table to be offloaded to the intelligent network card based on the target flow rule. In this embodiment, after a data packet is received, the data packet may be matched in the ovs-datapath of the intelligent network card against the flow rules offloaded onto the intelligent network card, and if the matching is successful the data packet is forwarded according to the successfully matched flow rule. In the case of a match failure, the packet is reported to the host, and a new flow rule, i.e. the target flow rule, is generated by ovs-vswitchd. ovs-vswitchd then performs hardware offloading, that is, issues the flow table to the ovs-datapath of the intelligent network card through netlink. The number of offloaded flow tables is monitored through the netlink, and the flow table offload speed is determined by the monitor.
In the above embodiment, in the case of a match failure, it is determined that the intelligent network card has not yet offloaded the flow table to be offloaded, so a target flow rule may be newly created and the flow to be offloaded is offloaded using the target flow rule. That is, different flow tables can be offloaded by applying different flow rules, which improves the offloading efficiency of the flow tables.
In one exemplary embodiment, before determining the number of flow tables to be offloaded that are sent to the intelligent network card through the target inter-process communication, the method further comprises: determining the size of a received flow table, and sending the flow table to the intelligent network card through the target inter-process communication when the size of the flow table is larger than a set size; and storing the flow table when the size of the flow table is smaller than the set size. In this embodiment, in the case of receiving a flow table, the size of the flow table may be determined first; when the flow table is too large, it may occupy a large amount of computing power and processor time. Therefore, flow tables larger than the set size can be offloaded to the intelligent network card and processed by the intelligent network card, while a flow table smaller than the set size can be stored and processed on the target host, since doing so will not cause system blocking or similar situations. Offloading the flow tables larger than the set size to the intelligent network card maximizes the utilization of the system's computing power and guarantees the running speed of the system, and storing the flow tables smaller than the set size guarantees the processing speed of the flow tables.
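A minimal sketch of this size-based dispatch, with an assumed set size and placeholder handlers, is given below.

```c
#include <stddef.h>
#include <stdio.h>

#define FLOW_TABLE_SET_SIZE 4096u   /* bytes; the set size here is an assumption of this sketch */

struct flow_table {
    size_t size_bytes;              /* size of the received flow table */
    /* rule contents elided */
};

/* Placeholder: send the flow table to the intelligent network card via the target IPC. */
static void offload_to_nic(const struct flow_table *ft)
{
    printf("offloading flow table of %zu bytes to the NIC\n", ft->size_bytes);
}

/* Placeholder: store and process the flow table on the target host. */
static void store_on_host(const struct flow_table *ft)
{
    printf("storing flow table of %zu bytes on the host\n", ft->size_bytes);
}

/* Dispatch a received flow table by size, as described above. */
void dispatch_flow_table(const struct flow_table *ft)
{
    if (ft->size_bytes > FLOW_TABLE_SET_SIZE)
        offload_to_nic(ft);    /* large tables would otherwise consume host CPU */
    else
        store_on_host(ft);     /* small tables are cheap enough to handle in software */
}
```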
FIG. 6 is a flow chart (I) of a method for determining the unloading speed of a flow table according to an embodiment of the invention, as shown in FIG. 6, the flow comprises:
step S602, a data packet is received.
And step S604, matching the data packet with the flow rule unloaded on the intelligent network card in the ovs-datapath of the intelligent network card.
Step S606, if the matching is successful, forwarding is performed according to the flow rule of the successful matching.
In step S608, in the case of a match failure, the packet is reported to the host and a new flow rule, i.e. the target flow rule, is generated by ovs-vswitchd.
In step S610, ovs-vswitchd performs hardware offloading, i.e. the flow table is issued to the ovs-datapath of the intelligent network card through netlink.
Step S612, monitoring the number of unloaded flow tables through netlink.
In step S614, the flow table unloading speed is determined by monitor.
FIG. 7 is a flow chart (II) of a method for determining the unloading speed of a flow table according to an embodiment of the invention, as shown in FIG. 7, the flow comprises:
in step S702, a netlink is established to prepare to send a data packet.
Step S704, it is determined whether a flow table is issued, i.e. whether a data packet needs to be sent. If the determination result is yes, step S706 is executed, and if the determination result is no, step S702 is executed.
In step S706, a packet encapsulation is prepared.
In step S708, the data size of the data packet to be encapsulated is calculated.
In step S710, the data packet is encapsulated.
Step S712, the data packet is transmitted.
In step S714, the data is written into the count file (a text file).
In step S716, the monitoring program reads the file.
In the foregoing embodiment, the hardware offloading of the intelligent network card is monitored by counting the netlink traffic used for the hardware offloading. Compared with the existing monitoring method, which typically displays the specific flow tables that have been offloaded, this provides a new monitoring method located at the host side that can reflect how the hardware offloading is changing. Even when the intelligent network card is performing a large number of hardware offload actions, the network traffic can be kept stable.
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, or of course by means of hardware, but in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk or an optical disk) and comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, etc.) to perform the method described in the embodiments of the present application.
The embodiment also provides a device for determining the unloading speed of the flow table, which is used for implementing the above embodiment and the preferred implementation manner, and is not described again. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 8 is a block diagram of a flow table unloading speed determining apparatus according to an embodiment of the present application, as shown in fig. 8, including:
a first determining module 82, configured to determine the number of flow tables to be offloaded that are sent to an intelligent network card through a target inter-process communication, where the target inter-process communication is a netlink communication established in advance between the OVS-vswitchd and the OVS-datapath of the intelligent network card, and the target inter-process communication is used to send the flow tables to be offloaded to the intelligent network card;
a second determining module 84, configured to determine the flow table unloading speed of the intelligent network card based on the number of flow tables.
In one exemplary embodiment, the first determining module 82 may be configured to determine the number of flow tables to be offloaded that are sent to the intelligent network card through the target inter-process communication in the following manner: controlling a target process to obtain flow table information in a target file according to a predetermined period, wherein the target file is a preset file and is used for storing flow table information of the flow tables to be offloaded to the intelligent network card; and determining the number of pieces of flow table information as the number of flow tables.
In an exemplary embodiment, the device may be configured to, before the target process is controlled to obtain the flow table information in the target file according to the predetermined period, determine target flow table information of a target flow table in the case that the target flow table to be offloaded is sent to the intelligent network card; and write the target flow table information of the target flow table into the target file.
In one exemplary embodiment, the second determining module 84 may determine the flow table offload speed of the intelligent network card based on the number of flow tables by: determining a predetermined period, wherein the predetermined period is a time period for determining the number of flow tables; and determining the ratio of the flow table number to the time period as the flow table unloading speed.
In an exemplary embodiment, the device may be further configured to, before determining that the number of flow tables to be offloaded is sent to the intelligent network card through the target inter-process communication, establish a netlink communication between the OVS-vswitchd and the OVS-datapath of the intelligent network card in the presence of the data to be offloaded, to obtain the target inter-process communication; packaging the data to be offloaded into a data packet to be offloaded to obtain the flow table to be offloaded; and sending the to-be-unloaded flow table to the intelligent network card through the communication between the target processes.
In an exemplary embodiment, the apparatus may be further configured to reduce traffic sent to the intelligent network card through the target inter-process communication after determining a flow table offload speed of the intelligent network card based on the number of flow tables if the flow table offload speed is greater than a predetermined threshold.
In an exemplary embodiment, the device may be further configured to, before determining the number of flow tables for sending the flow tables to be offloaded to the intelligent network card through the target inter-process communication, match the received flow tables to be offloaded with stored flow rules in the intelligent network card, to obtain a matching result; generating a target flow rule corresponding to the flow table to be unloaded under the condition that the matching result indicates that the matching is failed; and forwarding the flow table to be unloaded to the intelligent network card based on the target flow rule.
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; alternatively, the above modules may be located in different processors in any combination.
Embodiments of the present application also provide a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
In one exemplary embodiment, the computer readable storage medium may include, but is not limited to: a usb disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a removable hard disk, a magnetic disk, or an optical disk, or other various media capable of storing a computer program.
Embodiments of the present application also provide an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
In an exemplary embodiment, the electronic device may further include a transmission device connected to the processor, and an input/output device connected to the processor.
Specific examples in this embodiment may refer to the examples described in the foregoing embodiments and the exemplary implementation, and this embodiment is not described herein.
It will be appreciated by those skilled in the art that the modules or steps of the application described above may be implemented with a general-purpose computing device; they may be concentrated on a single computing device or distributed across a network of computing devices; they may be implemented in program code executable by computing devices, so that they may be stored in a storage device and executed by the computing devices; in some cases the steps shown or described may be performed in a different order than described herein; and they may be fabricated separately into individual integrated circuit modules, or multiple modules or steps among them may be fabricated into a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the same, but rather, various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the principles of the present application should be included in the protection scope of the present application.

Claims (10)

1. A method for determining a flow table unloading speed, comprising:
determining the number of flow tables to be offloaded that are sent to an intelligent network card through target inter-process communication, wherein the target inter-process communication is established in advance between the OVS-vswitchd and the OVS-datapath of the intelligent network card, and the target inter-process communication is used for sending the flow tables to be offloaded to the intelligent network card;
and determining the flow table unloading speed of the intelligent network card based on the number of flow tables.
2. The method of claim 1, wherein determining the number of flow tables to be offloaded that are sent to the intelligent network card via the target inter-process communication comprises:
controlling a target process to obtain flow table information in a target file according to a preset period, wherein the target file is a preset file and is used for storing flow table information of the flow tables to be offloaded to the intelligent network card;
and determining the number of the flow table information as the number of the flow tables.
3. The method according to claim 2, wherein before the target process is controlled to obtain the flow table information in the target file according to the preset period, the method further comprises:
under the condition that a target flow table to be unloaded is sent to the intelligent network card, determining target flow table information of the target flow table;
and writing the target flow table information of the target flow table into the target file.
4. The method of claim 1, wherein determining a flow table offload speed of the intelligent network card based on the number of flow tables comprises:
determining a predetermined period, wherein the predetermined period is a time period for determining the number of flow tables;
and determining the ratio of the flow table number to the time period as the flow table unloading speed.
5. The method of claim 1, wherein prior to determining the number of flow tables to be offloaded that are sent to the intelligent network card via the target inter-process communication, the method further comprises:
establishing netlink communication between an OVS-vswitchd and an OVS-datapath of the intelligent network card under the condition that data to be unloaded exist, so as to obtain the target inter-process communication;
packaging the data to be offloaded into a data packet to be offloaded to obtain the flow table to be offloaded;
and sending the to-be-unloaded flow table to the intelligent network card through the communication between the target processes.
6. The method of claim 1, wherein after determining the flow table offload speed of the intelligent network card based on the number of flow tables, the method further comprises:
and reducing the flow sent to the intelligent network card through the target inter-process communication under the condition that the flow table unloading speed is greater than a preset threshold value.
7. The method of claim 1, wherein prior to determining the number of flow tables to be offloaded that are sent to the intelligent network card via the target inter-process communication, the method further comprises:
matching the received flow table to be unloaded with the stored flow rule in the intelligent network card to obtain a matching result;
generating a target flow rule corresponding to the flow table to be unloaded under the condition that the matching result indicates that the matching is failed;
and forwarding the flow table to be unloaded to the intelligent network card based on the target flow rule.
8. A flow table unloading speed determining apparatus, comprising:
the first determining module is used for determining the number of flow tables to be offloaded that are sent to the intelligent network card through target inter-process communication, wherein the target inter-process communication is established in advance between the OVS-vswitchd and the OVS-datapath of the intelligent network card, and the target inter-process communication is used for sending the flow tables to be offloaded to the intelligent network card;
and the second determining module is used for determining the flow table unloading speed of the intelligent network card based on the number of flow tables.
9. A computer readable storage medium, characterized in that a computer program is stored in the computer readable storage medium, wherein the computer program, when being executed by a processor, implements the steps of the method according to any of the claims 1 to 7.
10. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method of any one of claims 1 to 7 when the computer program is executed.
CN202311294467.7A 2023-10-08 2023-10-08 Method and device for determining unloading speed of flow table, storage medium and electronic equipment Pending CN117278459A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311294467.7A CN117278459A (en) 2023-10-08 2023-10-08 Method and device for determining unloading speed of flow table, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311294467.7A CN117278459A (en) 2023-10-08 2023-10-08 Method and device for determining unloading speed of flow table, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN117278459A true CN117278459A (en) 2023-12-22

Family

ID=89210266

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311294467.7A Pending CN117278459A (en) 2023-10-08 2023-10-08 Method and device for determining unloading speed of flow table, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN117278459A (en)

Similar Documents

Publication Publication Date Title
US10594565B2 (en) Multicast advertisement message for a network switch in a storage area network
US10374900B2 (en) Updating a virtual network topology based on monitored application data
EP3275140B1 (en) Technique for achieving low latency in data center network environments
WO2021128927A1 (en) Message processing method and apparatus, storage medium, and electronic apparatus
CN109639488B (en) Multi-extranet shunt acceleration method and system
CN106878199A (en) The collocation method and device of a kind of access information
CN106603409B (en) Data processing system, method and equipment
US11303571B2 (en) Data communication method and data communications network
CN113965508B (en) Dual path data transmission method, electronic device, and computer-readable storage medium
CN110971540B (en) Data information transmission method and device, switch and controller
CN113839862B (en) Method, system, terminal and storage medium for synchronizing ARP information between MCLAG neighbors
CN110839007B (en) Cloud network security processing method and device and computer storage medium
CN116723154A (en) Route distribution method and system based on load balancing
CN117278459A (en) Method and device for determining unloading speed of flow table, storage medium and electronic equipment
CN111404705B (en) SDN optimization method and device and computer readable storage medium
CN109450794A (en) A kind of communication means and equipment based on SDN network
CN109039822A (en) A kind of BFD protocol massages filter method and system
CN116915837B (en) Communication method and communication system based on software defined network
US20230262146A1 (en) Analyzing network data for debugging, performance, and identifying protocol violations using parallel multi-threaded processing
WO2022253190A1 (en) Service flow performance testing method and apparatus, and communication network
WO2022253192A1 (en) Message forwarding method and apparatus, and communication network
WO2022253194A1 (en) Packet forwarding method and apparatus, and communication network
CN118041937A (en) Data access method and device of storage device
WO2024093365A1 (en) Time delay determination method and apparatus, and electronic device and storage medium
US20220224615A1 (en) Latency Assurance Method, System, and Apparatus, Computing Device, and Storage Medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination