CN113746893A - Intelligent network card data forwarding method, system and terminal based on FPGA - Google Patents

Intelligent network card data forwarding method, system and terminal based on FPGA

Info

Publication number
CN113746893A
Authority
CN
China
Prior art keywords
flow table
cache module
table rule
network card
intelligent network
Prior art date
2021-07-16
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110806562.5A
Other languages
Chinese (zh)
Other versions
CN113746893B (en
Inventor
吴智谦
陈翔
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Inspur Intelligent Technology Co Ltd
Original Assignee
Suzhou Inspur Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2021-07-16
Filing date
2021-07-16
Publication date
2021-12-03
Application filed by Suzhou Inspur Intelligent Technology Co Ltd
Priority to CN202110806562.5A
Publication of CN113746893A
Application granted
Publication of CN113746893B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00: Network arrangements or protocols for supporting network services or applications
    • H04L 67/50: Network services
    • H04L 67/56: Provisioning of proxy services
    • H04L 67/568: Storing data temporarily at an intermediate stage, e.g. caching
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00: Reducing energy consumption in communication networks
    • Y02D 30/50: Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application discloses an FPGA-based intelligent network card data forwarding method, system and terminal. The method comprises: defining a hardware module on the FPGA of the intelligent network card used for storing high-heat flow table rules as a first cache module, and defining a hardware module on the intelligent network card used for storing low-heat flow table rules as a second cache module; when a data message in a data stream enters the intelligent network card, determining the action to be executed on the data message by querying the flow table rules in the SOC CPU of the intelligent network card, and issuing the hit flow table rule to the first cache module; and determining the heat of the current flow table rule by a periodic polling method, and controlling the current flow table rule to switch between the first cache module and the second cache module according to its heat. The system comprises: a first cache module, a second cache module and a software control module. The terminal comprises a memory and a processor. Through the method and the device, the data forwarding performance and forwarding efficiency of the intelligent network card can be effectively improved.

Description

Intelligent network card data forwarding method, system and terminal based on FPGA
Technical Field
The present invention relates to the field of network communication technologies, and in particular, to a method, a system, and a terminal for forwarding data of an intelligent network card based on a Field-Programmable Gate Array (FPGA).
Background
In the technical field of network communication, the OVS (Open vSwitch) is an important component of current network infrastructure in cloud data centers. The data path of the OVS is implemented in kernel mode, which usually consumes a large amount of CPU computing resources. As the data scale expands, users place ever higher requirements on the data forwarding performance of the OVS. How to improve data forwarding performance is therefore an important technical problem.
Currently, taking OVS data forwarding as an example, there are generally two directions for improving forwarding performance. One is the kernel-bypass technique at the software level, which moves the data path from kernel mode to user mode and, combined with a series of optimizations, improves forwarding efficiency and network throughput. However, software-level improvements raise OVS forwarding performance only to a limited extent while occupying precious CPU resources. The more effective direction is therefore the other one: offload at the hardware level, which accelerates forwarding by offloading the OVS flow table to an intelligent network card built on a dedicated ASIC or an FPGA.
However, in current hardware-level data forwarding methods, the flow table keeps growing, so the flow table rules to be cached by the intelligent network card will exceed the hardware capacity. Choosing which flow table rules to offload to hardware, and deciding how to age the rules already offloaded to hardware, are problems that must be faced when further improving forwarding performance.
Disclosure of Invention
The application provides an FPGA-based intelligent network card data forwarding method, system and terminal, aiming to solve the problem that data forwarding methods in the prior art do not achieve sufficiently high forwarding performance.
In order to solve the technical problem, the embodiment of the application discloses the following technical scheme:
an intelligent network card data forwarding method based on FPGA comprises the following steps:
defining a hardware module on the FPGA of the intelligent network card used for storing high-heat flow table rules as a first cache module, and defining a hardware module on the intelligent network card used for storing low-heat flow table rules as a second cache module, wherein the high-heat flow table rules are flow table rules hit by data messages within a set time, and the low-heat flow table rules are flow table rules not hit by data messages within the set time;
when a data message in a data stream enters the intelligent network card, querying the first cache module to determine whether the data message hits a flow table rule in the first cache module;
if so, executing a specified action, and adding 1 to a hit counter in the first cache module, wherein the specified action comprises: any one of forwarding, discarding, mirroring and adding encapsulation;
if not, judging whether the data message hits a flow table rule in the second cache module by inquiring the second cache module;
if yes, executing the specified action, and adding 1 to a hit counter in the second cache module;
if not, the SOC CPU of the intelligent network card determines the action to be executed on the data message by querying its own flow table rules, and issues the flow table rule hit by the data message to the first cache module;
the method comprises the steps of determining the heat degree of a current flow table rule through a periodic polling method, and controlling the current flow table rule to be switched between a first cache module and a second cache module according to the heat degree of the current flow table rule.
Optionally, before defining the hardware module on the FPGA of the intelligent network card used for storing the high-heat flow table rules as the first cache module and the hardware module on the intelligent network card used for storing the low-heat flow table rules as the second cache module, the method further includes:
saving all flow table rules of the OVS, and recording the flow table rule attributes of each flow table rule, wherein the flow table rule attributes include: a hardware offload flag, a hit count, and an aging time.
Optionally, determining the heat of the current flow table rule by the periodic polling method, and controlling the current flow table rule to switch between the first cache module and the second cache module according to its heat, includes:
when the data message triggers flow table offload, offloading the current flow table rule into the first cache module, updating the hardware offload flag of the current flow table rule to first-level cache, and resetting the hit count and the aging time;
querying the intelligent network card for the hit count of the flow table rule according to a set query period, and updating the flow table rule attributes according to the query result;
judging whether the aging time of the current flow table rule is less than or equal to a set aging time threshold value;
if yes, migrating the current flow table rule from the first cache module to the second cache module, and updating the hardware offload flag of the current flow table rule to second-level cache;
and when the aging time of the current flow table rule reaches 0, deleting the current flow table rule from the second cache module, updating the hardware offload flag of the current flow table rule to null, resetting the hit count, and setting the aging time to an invalid value.
Optionally, determining the heat of the current flow table rule by the periodic polling method, and controlling the current flow table rule to switch between the first cache module and the second cache module according to its heat, further includes:
judging whether the increase in the hit count of any flow table rule in the second cache module within the set time is greater than or equal to a set hit-count threshold;
if yes, migrating that flow table rule from the second cache module back to the first cache module, updating its hardware offload flag to first-level cache, resetting its hit counter, and setting its aging time to a preset aging-time maximum.
Optionally, when a data packet in the data stream enters the intelligent network card, before querying the first cache module to determine whether the data packet hits a flow table rule in the first cache module, the method further includes:
and when the intelligent network card is initialized, initializing the first cache module and the second cache module.
Optionally, the method for initializing the first cache module and the second cache module includes:
adding a default flow table rule in the first cache module that forwards the data message to the second cache module for query;
and adding a default flow table rule in the second cache module that forwards the data message to the SOC CPU for query.
An intelligent network card data forwarding system based on FPGA, the system includes:
the first cache module is used for storing high-heat flow table rules, the first cache module is arranged on an FPGA of the intelligent network card, and the high-heat flow table rules are flow table rules hit by data messages within set time;
the second cache module is used for storing the low-heat flow table rule, the second cache module is arranged on the intelligent network card, and the low-heat flow table rule is a flow table rule which is not hit by the data message within set time;
the software control module is used for determining the action to be executed on the data message by querying the flow table rules in the software control module when a data message in the data flow enters the intelligent network card, and issuing the flow table rule to the first cache module;
the software control module is also used for determining the heat of the current flow table rule by a periodic polling method, and controlling the current flow table rule to switch between the first cache module and the second cache module according to its heat.
Optionally, the software control module is further configured to store all flow table rules of the OVS and to record the flow table rule attributes of each flow table rule, where the flow table rule attributes include: a hardware offload flag, a hit count, and an aging time.
Optionally, the software control module is further configured to initialize the first cache module and the second cache module when the intelligent network card is initialized.
A terminal, the terminal comprising: a processor, and a memory communicatively coupled to the processor, wherein,
the memory stores instructions executable by the processor, and the instructions are executed by the processor to enable the processor to implement the FPGA-based intelligent network card data forwarding method according to any one of the above items.
The technical scheme provided by the embodiment of the application can have the following beneficial effects:
the application provides an intelligent network card data forwarding method based on FPGA, firstly defining a module used for storing high-heat flow table rules on the FPGA of the intelligent network card as a first cache module, a hardware module used for storing low-heat flow table rules on the intelligent network card as a second cache module, secondly, when the data message in the data flow enters the intelligent network card, whether the data message hits the flow table rule in a certain cache module or not is judged by sequentially inquiring the first cache module and the second cache module, when the flow table rule of one cache module is hit, the action specified by the flow table rule is executed, and the hit counter of the cache module is increased by 1, when the flow table rules of the two cache modules are not hit, the SOC CPU of the intelligent network card determines the action executed by the data message by inquiring the flow table rules in the SOC CPU, and sends the flow table rules hit by the data message to the first cache module; and finally, determining the heat degree of the current flow table rule by a periodic polling method, and controlling the current flow table rule to be switched between the first cache module and the second cache module according to the heat degree of the current flow table rule. According to the method, the first cache module and the second cache module are defined, the high-heat flow table rule and the low-heat flow table rule are distinguished, and therefore a two-stage cache mechanism is executed on the flow table rules. Whether the data message hits the flow table rule or not is checked by sequentially inquiring the first cache module and the second cache module, so that not only can the specified action be ensured to be executed in time, but also the hit times can be counted, a basis is provided for a subsequent aging mechanism, and the data forwarding efficiency is improved. In this embodiment, the heat of the current flow table rule is determined by a periodic polling method, and the current flow table rule is controlled to be switched between the first cache module and the second cache module according to the heat of the current flow table rule, so that the hit rate of hardware is improved, the forwarding performance of high-heat data is facilitated, and the overall data forwarding performance is improved.
The application also provides an FPGA-based intelligent network card data forwarding system, which mainly comprises: a first cache module, a second cache module and a software control module. Through the first cache module and the second cache module, the flow table rules are divided into high-heat and low-heat rules, realizing a two-level cache mechanism that keeps the most frequently used flow table rules in the first cache module so that the corresponding data flows obtain the highest forwarding performance, and that provides the basis for deleting low-heat flow table rules in time to save space and resources, effectively improving data forwarding efficiency. The software control module determines which cache module stores each flow table rule, determines the heat of the current flow table rule by periodic polling, and switches rules between the caches according to their heat, which raises the hardware hit rate, benefits the forwarding of high-heat data, and improves the data forwarding performance of the whole system.
The application also provides a terminal, and the terminal also has the corresponding technical effects of the intelligent network card data forwarding method and system based on the FPGA, and the description is omitted here.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a schematic flowchart of a method for forwarding data of an intelligent network card based on an FPGA according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of an intelligent network card data forwarding system based on an FPGA according to an embodiment of the present application;
fig. 3 is a schematic diagram illustrating a connection relationship between an intelligent network card data forwarding system based on an FPGA and an intelligent network card in the embodiment of the present application.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
For a better understanding of the present application, embodiments of the present application are explained in detail below with reference to the accompanying drawings.
Example one
Referring to fig. 1, fig. 1 is a schematic flowchart of a method for forwarding data of an intelligent network card based on an FPGA according to an embodiment of the present application.
As shown in fig. 1, the method for forwarding data of the intelligent network card based on the FPGA in this embodiment mainly includes the following steps:
s2: a module used for storing the high-heat flow table rule on the FPGA of the intelligent network card is defined as a first cache module, and a hardware module used for storing the low-heat flow table rule on the intelligent network card is defined as a second cache module.
In this embodiment, the high-heat flow table rule is a flow table rule hit by a data packet within a set time, and the low-heat flow table rule is a flow table rule not hit by a data packet within a set time. Two-level caching is realized by defining two caching modules, and the most common flow table rule is ensured to be stored in the first caching module, so that the corresponding data flow obtains the highest forwarding performance.
The first cache module may be implemented with a TCAM on the FPGA, and the second cache module may be implemented with DDR memory. The TCAM on the FPGA is faster than DDR memory, so using it as the first cache module effectively improves data forwarding efficiency. DDR memory is slower than the FPGA's TCAM but has a much larger capacity; using it to cache the low-heat flow table rules relieves the storage pressure on the first cache module and also helps improve data forwarding efficiency.
Further, before step S2, the method further includes step S1: storing all flow table rules of the OVS and recording the flow table rule attributes of each flow table rule. The flow table rule attributes include: a hardware offload flag, a hit count, and an aging time. The hardware offload flag indicates whether the current flow table rule is hardware-accelerated and, if so, which level of cache module stores and accelerates it. The hit count is the number of times the hardware table entry has been hit. The aging time reflects how long the hardware flow table rule has been idle; whenever a data message hits the rule in either cache module, the aging time is reset to its maximum value, indicating that the rule should not yet be aged out.
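For concreteness, the per-rule bookkeeping described above might be represented roughly as in the following C sketch. The type, field names and threshold values are illustrative assumptions for this explanation and are not specified by this embodiment.

```c
#include <stdint.h>

/* Which hardware cache level, if any, currently holds the rule. */
enum offload_level {
    OFFLOAD_NONE = 0,    /* not offloaded to hardware              */
    OFFLOAD_L1_TCAM,     /* first cache module (TCAM on the FPGA)  */
    OFFLOAD_L2_DDR       /* second cache module (DDR memory)       */
};

/* Per-rule bookkeeping kept by the software control module. */
struct flow_rule_attr {
    enum offload_level offload_flag; /* hardware offload flag               */
    uint64_t hit_count;              /* hardware table entry hit counter    */
    uint64_t prev_hit_count;         /* snapshot taken at the previous poll */
    int32_t  aging_time;             /* remaining idle budget; reset to     */
                                     /* AGING_MAX whenever a hit is seen    */
};

/* Illustrative constants; real values depend on hardware and traffic. */
#define AGING_MAX       32   /* preset aging-time maximum (polling periods) */
#define AGING_THRESHOLD  8   /* at or below this, demote to second cache    */
#define HIT_THRESHOLD   16   /* hit-count increase that promotes to first   */
```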
With continued reference to fig. 1, after the first cache module and the second cache module are defined, step S4 is executed: when a data message in the data stream enters the intelligent network card, the first cache module is queried to determine whether the data message hits a flow table rule in the first cache module.
In this embodiment, the high-heat flow table rules are stored in the first cache module, so the first cache module is queried first to check whether the data packet hits a flow table rule there; this query order helps further improve data forwarding performance.
If the data packet hits the flow table rule in the first cache module, step S5 is executed: the specified action is performed and the hit counter in the first cache module is incremented by 1.
The specified actions include any one of forwarding, discarding, mirroring, and adding encapsulation. The flow table cache module arranged on the FPGA of the intelligent network card records the match keys, actions and hit counters of the flow table. When a packet hits a record, the specified action is performed and the hit counter is incremented by 1.
If the data packet does not hit the flow table rule in the first cache module, step S6 is executed: and judging whether the data message hits the flow table rule in the second cache module or not by inquiring the second cache module.
If the data packet hits the flow table rule in the second cache module, step S7 is executed: the specified action is performed and the hit counter in the second cache module is incremented by 1.
If the data packet does not hit the flow table rule in the second cache module, step S8 is executed: the SOC CPU of the intelligent network card determines the action executed by the data message by inquiring the flow table rule in the SOC CPU, and issues the flow table rule hit by the data message to the first cache module.
That is to say, if the data packet hits neither the flow table rules in the first cache module nor those in the second cache module, it must be handed to the CPU of the intelligent network card for a decision. The SOC CPU of the intelligent network card determines the action to be executed on the data packet by querying its own flow table rules, and issues the flow table rule hit by the data packet to the first cache module.
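The lookup path of steps S4 to S8 can be pictured with the following sketch. The packet structure and the helper functions (tcam_lookup, ddr_lookup, soc_cpu_decide, offload_to_l1, and so on) are hypothetical names standing in for the FPGA and SOC interfaces, which this embodiment does not spell out; the point of the sketch is the query order and the miss fallback.

```c
struct packet;                          /* opaque packet descriptor        */
struct flow_rule;                       /* opaque rule handle              */

/* Actions a rule may specify. */
enum flow_action { ACT_FORWARD, ACT_DROP, ACT_MIRROR, ACT_ENCAP };

/* Hypothetical FPGA / SOC interfaces. */
struct flow_rule *tcam_lookup(const struct packet *pkt);  /* first cache  */
struct flow_rule *ddr_lookup(const struct packet *pkt);   /* second cache */
enum flow_action  soc_cpu_decide(const struct packet *pkt,
                                 struct flow_rule **rule_out);
void offload_to_l1(struct flow_rule *rule);   /* issue rule to first cache */
void rule_hit(struct flow_rule *rule);        /* hit counter += 1          */
enum flow_action rule_action(const struct flow_rule *rule);
void execute(enum flow_action act, const struct packet *pkt);

/* Datapath decision for one packet entering the intelligent network card. */
void handle_packet(const struct packet *pkt)
{
    struct flow_rule *rule = tcam_lookup(pkt);   /* S4: query first cache  */
    if (!rule)
        rule = ddr_lookup(pkt);                  /* S6: query second cache */

    if (rule) {                                  /* S5 / S7: hardware hit  */
        rule_hit(rule);
        execute(rule_action(rule), pkt);
        return;
    }

    /* S8: miss in both caches; hand the packet to the SOC CPU, which
     * consults its own flow table, executes the action, and offloads the
     * rule hit by this packet into the first cache module. */
    enum flow_action act = soc_cpu_decide(pkt, &rule);
    execute(act, pkt);
    if (rule)
        offload_to_l1(rule);
}
```

Because the TCAM is tried first, the hottest flows never pay the DDR or SOC CPU latency, which is the effect the query order in this embodiment aims for.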
Further, before the step S4, the method further includes S3: when the intelligent network card is initialized, the first cache module and the second cache module are initialized.
Specifically, step S3 includes the following processes:
s31: and adding a default flow table rule in the first cache module, and forwarding the data message to the second cache module for query.
S32: and adding a default flow table rule in the second cache module, and forwarding the default flow table rule to the SOC CPU for query.
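A minimal sketch of such an initialization is given below, assuming hypothetical driver hooks for installing the catch-all rules; the key point is that each level's default rule simply forwards unmatched packets to the next lookup stage, so a miss naturally falls through from the first cache to the second cache to the SOC CPU.

```c
/* Where a miss on the default rule should fall through to. */
enum fallthrough_target { TO_L2_LOOKUP, TO_SOC_CPU };

/* Hypothetical driver hooks for installing the catch-all rules. */
void tcam_add_default_rule(enum fallthrough_target target);
void ddr_add_default_rule(enum fallthrough_target target);

/* S3: initialize both cache modules when the intelligent NIC comes up. */
void init_cache_modules(void)
{
    /* S31: first-cache default rule forwards misses to the second cache. */
    tcam_add_default_rule(TO_L2_LOOKUP);
    /* S32: second-cache default rule forwards misses to the SOC CPU.     */
    ddr_add_default_rule(TO_SOC_CPU);
}
```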
With continued reference to fig. 1, the method of this embodiment further includes step S9: determining the heat of the current flow table rule by a periodic polling method, and controlling the current flow table rule to switch between the first cache module and the second cache module according to its heat.
The periodic polling method of this embodiment mainly reads the hit counts of the flow table rules in the intelligent network card periodically, covering the rules in both the first cache module and the second cache module. If the hit-count increment read for a rule is 0, the aging time of that flow table rule is decreased by one period value; if the increment is not 0, the aging time is reset to the aging maximum.
Specifically, step S9 includes the following process:
s91: when the data message triggers flow table unloading, the current flow table rule is unloaded to the first cache module, the hardware unloading mark of the first cache module is updated to be a first-level cache, and the hit times are reset to be aging time.
S92: querying the intelligent network card for the hit count of the flow table rule according to the set query period, and updating the flow table rule attributes according to the query result.
S93: and judging whether the aging time of the current flow table rule is less than or equal to a set aging time threshold value.
If the aging time of the current flow table rule is less than or equal to the set aging-time threshold, step S94 is executed: the current flow table rule is migrated from the first cache module to the second cache module, and its hardware offload flag is updated to second-level cache. In other words, the migration is started once the aging time of the current flow table rule falls to the set aging-time threshold. The aging-time threshold in this embodiment is chosen according to the particular hardware and the traffic model of the data packets.
If the aging time of the current flow table rule is greater than the set aging-time threshold, the flow table rule is not migrated.
When the aging time of the current flow table rule reaches 0, step S95 is executed: the current flow table rule is deleted from the second cache module, its hardware offload flag is updated to null, its hit count is reset, and its aging time is set to an invalid value. An aging time of 0 indicates that the current flow table rule is hit so rarely that it is essentially never hit; step S95 removes it from the second cache module and updates its flow table rule attributes. This further saves CPU resources and space, and improves the data forwarding performance of the intelligent network card.
Further, step S9 of this embodiment further includes step S96: judging whether the increase in the hit count of any flow table rule in the second cache module within the set time is greater than or equal to the set hit-count threshold.
If the increase in the hit count of a flow table rule in the second cache module within the set time is greater than or equal to the set hit-count threshold, step S97 is executed: that flow table rule is migrated from the second cache module back to the first cache module, its hardware offload flag is updated to first-level cache, its hit counter is reset, and its aging time is set to the preset aging-time maximum.
If the increase in the hit count of the flow table rule in the second cache module is less than the set hit-count threshold, the flow table rule is not migrated.
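Putting steps S91 to S97 together, one polling pass over all offloaded rules might look like the sketch below. It reuses the flow_rule_attr structure and the threshold constants assumed earlier; read_hw_hits, migrate_l1_to_l2, migrate_l2_to_l1 and delete_from_l2 are hypothetical driver calls, not interfaces defined by this embodiment.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical driver calls for the two hardware cache levels. */
uint64_t read_hw_hits(size_t rule_idx);      /* read hit counter from NIC */
void migrate_l1_to_l2(size_t rule_idx);      /* TCAM -> DDR               */
void migrate_l2_to_l1(size_t rule_idx);      /* DDR  -> TCAM              */
void delete_from_l2(size_t rule_idx);        /* drop rule from DDR cache  */

/* One polling pass (steps S91-S97): read hit counts, age idle rules,
 * demote cold rules to the second cache, promote hot ones back to the
 * first cache, and delete rules whose aging time has run out. */
void poll_offloaded_rules(struct flow_rule_attr *rules, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        struct flow_rule_attr *r = &rules[i];
        if (r->offload_flag == OFFLOAD_NONE)
            continue;

        uint64_t hits  = read_hw_hits(i);            /* S92: query NIC    */
        uint64_t delta = hits - r->prev_hit_count;
        r->prev_hit_count = hits;
        r->hit_count      = hits;

        if (delta == 0)
            r->aging_time -= 1;                      /* idle this period  */
        else
            r->aging_time = AGING_MAX;               /* hit: do not age   */

        if (r->offload_flag == OFFLOAD_L1_TCAM) {
            if (r->aging_time <= AGING_THRESHOLD) {  /* S93/S94: demote   */
                migrate_l1_to_l2(i);
                r->offload_flag   = OFFLOAD_L2_DDR;
                r->hit_count      = 0;
                r->prev_hit_count = 0;   /* assume HW counter restarts    */
                r->aging_time     = AGING_MAX;
            }
        } else if (r->offload_flag == OFFLOAD_L2_DDR) {
            if (r->aging_time <= 0) {                /* S95: fully aged   */
                delete_from_l2(i);
                r->offload_flag = OFFLOAD_NONE;
                r->hit_count    = 0;
                r->aging_time   = -1;                /* invalid value     */
            } else if (delta >= HIT_THRESHOLD) {     /* S96/S97: promote  */
                migrate_l2_to_l1(i);
                r->offload_flag   = OFFLOAD_L1_TCAM;
                r->hit_count      = 0;
                r->prev_hit_count = 0;
                r->aging_time     = AGING_MAX;
            }
        }
    }
}
```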
In summary, in practical applications, the operation process using the method of this embodiment is as follows:
1) When the first data packet of a data stream enters the network card and needs to be forwarded, the first cache module is queried first. Because the first cache module is empty, no flow table rule is hit; the second cache module is then queried and its default flow table rule is hit, so the packet is sent to the SOC CPU for processing. The SOC CPU forwards the packet to its destination according to its flow table rules and, at the same time, offloads the hit flow table rule into the first cache module, updates the rule's hardware offload flag to first-level cache, resets the hit counter to 0, and sets the aging time to the preset aging maximum.
2) When a subsequent data packet of the data stream enters the intelligent network card, the first cache module is queried and the flow table rule cached in the previous step is hit, so the specified action is executed and the packet is forwarded to its destination. Each time the flow table rule cached in the first cache module is hit, the rule's hit count is increased by 1.
3) Low-heat flow table rules are cached in the second cache module. If the periodic query mechanism detects that the aging time of a flow table rule has fallen below the preset aging threshold, the rule is considered to have become a low-heat rule. The FPGA is then notified to delete the rule from the first cache module, the rule is offloaded into the second cache module, the rule's hardware offload flag is updated to second-level cache, the hit count is reset to 0, and the aging time is set to the preset aging maximum.
4) If a data packet matching this rule is received, the first cache module is queried first; after a miss there, the second cache module is queried and the rule cached in the previous step is hit, so the specified action is executed and the packet is forwarded to its destination. Each time the rule in the second cache module is hit, its hit count is increased by 1.
5) If the periodic query mechanism detects that the aging time of the flow table rule is above the preset aging threshold and its hit count exceeds the preset threshold, the rule is considered to have become a high-heat rule again. The rule is then deleted from the second cache module and offloaded back into the first cache module, its hardware offload flag is updated to first-level cache, the hit count is reset to 0, and the aging time is set to the preset aging maximum.
6) If the periodic query mechanism detects that the aging time of a flow table rule has reached 0, the rule is considered to have timed out; the rule is deleted from the second cache module, its hardware offload flag is set to null, and its hit count and aging time are set to invalid values.
Example two
Referring to fig. 2 on the basis of the embodiment shown in fig. 1, fig. 2 is a schematic structural diagram of an intelligent network card data forwarding system based on an FPGA according to an embodiment of the present application. As can be seen from fig. 2, the intelligent network card data forwarding system based on the FPGA in this embodiment mainly includes: the device comprises a first cache module, a second cache module and a software control module.
The first cache module is used for storing high-heat flow table rules, the first cache module is arranged on an FPGA of the intelligent network card, and the high-heat flow table rules are flow table rules hit by data messages within set time; and the second cache module is used for storing the low-heat flow table rule, the second cache module is arranged on the intelligent network card, and the low-heat flow table rule is a flow table rule which is not hit by the data message within set time.
The first cache module may adopt a TCAM on the FPGA, and the second cache module may adopt a DDR memory.
The software control module is used for determining the action to be executed on a data message by querying the flow table rules in the software control module when the data message in the data flow enters the intelligent network card, and issuing the flow table rule to the first cache module; it is also used for determining the heat of the current flow table rule by periodic polling, and controlling the current flow table rule to switch between the first cache module and the second cache module according to its heat.
Further, the software control module in this embodiment is further configured to store all flow table rules of the OVS and to record the flow table rule attributes of each flow table rule, where the flow table rule attributes include: a hardware offload flag, a hit count, and an aging time.
The software control module is also used for initializing the first cache module and the second cache module when the intelligent network card is initialized.
Specifically, the software control module of this embodiment includes: the device comprises a first judging unit, a second judging unit, a cache switching unit, a storage unit and an initialization unit.
The first judging unit is used for querying the first cache module when a data packet in the data flow enters the intelligent network card and judging whether the data packet hits a flow table rule in the first cache module; if so, the specified action is executed and the hit counter in the first cache module is increased by 1, the specified action including any one of forwarding, discarding, mirroring and adding encapsulation; if not, the second judging unit is started. The second judging unit is used for querying the second cache module and judging whether the data packet hits a flow table rule in the second cache module; if so, the specified action is executed and the hit counter in the second cache module is increased by 1. The cache switching unit is used for determining the heat of the current flow table rule by periodic polling and controlling the current flow table rule to switch between the first cache module and the second cache module according to its heat. The storage unit is used for storing all flow table rules of the OVS and recording the flow table rule attributes of each flow table rule. The initialization unit is configured to initialize the first cache module and the second cache module; specifically, it adds a default flow table rule in the first cache module that forwards the data packet to the second cache module for query, and adds a default flow table rule in the second cache module that forwards the data packet to the SOC CPU for query.
Further, the cache switching unit includes: an offload subunit, a query subunit, a first judging subunit, a first migration subunit, a deletion subunit, a second judging subunit and a second migration subunit.
The offload subunit is used for offloading the current flow table rule into the first cache module when the data message triggers flow table offload, updating the hardware offload flag of the current flow table rule to first-level cache, and resetting the hit count and the aging time. The query subunit is used for querying the intelligent network card for the hit count of the flow table rule according to a set query period and updating the flow table rule attributes according to the query result. The first judging subunit is used for judging whether the aging time of the current flow table rule is less than or equal to the set aging-time threshold. The first migration subunit is used for migrating the current flow table rule from the first cache module to the second cache module when its aging time is less than or equal to the set aging-time threshold, and updating its hardware offload flag to second-level cache. The deletion subunit is used for deleting the current flow table rule from the second cache module when its aging time reaches 0, updating its hardware offload flag to null, resetting its hit count, and setting its aging time to an invalid value. The second judging subunit is used for judging whether the increase in the hit count of any flow table rule in the second cache module within the set time is greater than or equal to the set hit-count threshold. The second migration subunit is used for migrating that flow table rule from the second cache module back to the first cache module when the increase in its hit count is greater than or equal to the set hit-count threshold, updating its hardware offload flag to first-level cache, resetting its hit counter, and setting its aging time to the preset aging-time maximum.
In the embodiment of the present application, a schematic diagram of a connection relationship between an intelligent network card data forwarding system based on an FPGA and an intelligent network card can be seen from fig. 3.
The working principle and the working method of the intelligent network card data forwarding system based on the FPGA in this embodiment have been elaborated in detail in the embodiment shown in fig. 1, and the two embodiments may refer to each other, which is not described herein again.
Example three
The present application further provides a terminal, including: a processor, and a memory communicatively coupled to the processor, wherein the memory stores instructions executable by the processor, and the instructions are executed by the processor so that the processor can execute the FPGA-based intelligent network card data forwarding method described above.
The FPGA-based intelligent network card data forwarding method executed by the processor is as follows:
1) saving all flow table rules of the OVS, and recording the flow table rule attributes of each flow table rule, wherein the flow table rule attributes include: a hardware offload flag, a hit count, and an aging time;
2) defining a module used for storing high-heat flow table rules on an FPGA of an intelligent network card as a first cache module, and defining a hardware module used for storing low-heat flow table rules on the intelligent network card as a second cache module, wherein the high-heat flow table rules are flow table rules hit by data messages within set time, and the low-heat flow table rules are flow table rules not hit by the data messages within the set time;
3) when the intelligent network card is initialized, initializing a first cache module and a second cache module;
4) when a data message in the data stream enters the intelligent network card, querying the first cache module to determine whether the data message hits a flow table rule in the first cache module;
5) if the data message hits the flow table rule in the first cache module, executing a specified action, and adding 1 to a hit counter in the first cache module, wherein the specified action comprises: any one of forwarding, discarding, mirroring and adding encapsulation;
6) if the data message does not hit a flow table rule in the first cache module, querying the second cache module to determine whether the data message hits a flow table rule in the second cache module;
7) if the data message hits the flow table rule in the second cache module, executing the specified action, and adding 1 to a hit counter in the second cache module;
8) if the data message does not hit the flow table rule in the second cache module, the SOC CPU of the intelligent network card determines the action executed by the data message by inquiring the flow table rule in the SOC CPU, and sends the flow table rule hit by the data message to the first cache module;
9) determining the heat of the current flow table rule by a periodic polling method, and controlling the current flow table rule to switch between the first cache module and the second cache module according to its heat.
The above description is merely exemplary of the present application and is presented to enable those skilled in the art to understand and practice the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An intelligent network card data forwarding method based on FPGA is characterized by comprising the following steps:
defining a module used for storing high-heat flow table rules on an FPGA of an intelligent network card as a first cache module, and defining a hardware module used for storing low-heat flow table rules on the intelligent network card as a second cache module, wherein the high-heat flow table rules are flow table rules hit by data messages within set time, and the low-heat flow table rules are flow table rules not hit by the data messages within the set time;
when a data message in a data stream enters the intelligent network card, querying the first cache module to determine whether the data message hits a flow table rule in the first cache module;
if so, executing a specified action, and adding 1 to a hit counter in the first cache module, wherein the specified action comprises: any one of forwarding, discarding, mirroring and adding encapsulation;
if not, judging whether the data message hits a flow table rule in the second cache module by inquiring the second cache module;
if yes, executing the specified action, and adding 1 to a hit counter in the second cache module;
if not, the SOC CPU of the intelligent network card determines the action executed by the data message by inquiring the flow table rule in the SOC CPU, and issues the flow table rule hit by the data message to the first cache module;
the method comprises the steps of determining the heat degree of a current flow table rule through a periodic polling method, and controlling the current flow table rule to be switched between a first cache module and a second cache module according to the heat degree of the current flow table rule.
2. The method according to claim 1, wherein before defining the hardware module on the FPGA of the intelligent network card used for storing the high-heat flow table rules as the first cache module and the hardware module on the intelligent network card used for storing the low-heat flow table rules as the second cache module, the method further comprises:
saving all flow table rules of the OVS, and recording the flow table rule attributes of each flow table rule, wherein the flow table rule attributes comprise: a hardware offload flag, a hit count, and an aging time.
3. The FPGA-based intelligent network card data forwarding method according to claim 2, wherein determining the heat of the current flow table rule by the periodic polling method, and controlling the current flow table rule to switch between the first cache module and the second cache module according to its heat, comprises:
when the data message triggers flow table offload, offloading the current flow table rule into the first cache module, updating the hardware offload flag of the current flow table rule to first-level cache, and resetting the hit count and the aging time;
querying the intelligent network card for the hit count of the flow table rule according to a set query period, and updating the flow table rule attributes according to the query result;
judging whether the aging time of the current flow table rule is less than or equal to a set aging-time threshold;
if yes, migrating the current flow table rule from the first cache module to the second cache module, and updating the hardware offload flag of the current flow table rule to second-level cache;
and when the aging time of the current flow table rule reaches 0, deleting the current flow table rule from the second cache module, updating the hardware offload flag of the current flow table rule to null, resetting the hit count, and setting the aging time to an invalid value.
4. The FPGA-based intelligent network card data forwarding method according to claim 3, wherein determining the heat of the current flow table rule by the periodic polling method, and controlling the current flow table rule to switch between the first cache module and the second cache module according to its heat, further comprises:
judging whether the increase in the hit count of any flow table rule in the second cache module within the set time is greater than or equal to a set hit-count threshold;
if yes, migrating that flow table rule from the second cache module back to the first cache module, updating its hardware offload flag to first-level cache, resetting its hit counter, and setting its aging time to a preset aging-time maximum.
5. The method according to claim 1, wherein when a data packet in a data stream enters the intelligent network card, before querying the first cache module to determine whether the data packet hits a flow table rule in the first cache module, the method further comprises:
and when the intelligent network card is initialized, initializing the first cache module and the second cache module.
6. The method for forwarding the data of the intelligent network card based on the FPGA of claim 5, wherein the method for initializing the first cache module and the second cache module comprises:
adding a default flow table rule in the first cache module that forwards the data message to the second cache module for query;
and adding a default flow table rule in the second cache module that forwards the data message to the SOC CPU for query.
7. An intelligent network card data forwarding system based on FPGA is characterized by comprising:
the first cache module is used for storing high-heat flow table rules, the first cache module is arranged on an FPGA of the intelligent network card, and the high-heat flow table rules are flow table rules hit by data messages within set time;
the second cache module is used for storing the low-heat flow table rule, the second cache module is arranged on the intelligent network card, and the low-heat flow table rule is a flow table rule which is not hit by the data message within set time;
the software control module is used for determining the action executed by the data message by inquiring the flow table rule in the software control module when the data message in the data flow enters the intelligent network card, and issuing the flow table rule to the first cache module;
the software control module is also used for determining the heat of the current flow table rule by a periodic polling method, and controlling the current flow table rule to switch between the first cache module and the second cache module according to its heat.
8. The FPGA-based intelligent network card data forwarding system according to claim 7, wherein the software control module is further configured to store all flow table rules of the OVS and record the flow table rule attributes of each flow table rule, and the flow table rule attributes comprise: a hardware offload flag, a hit count, and an aging time.
9. The FPGA-based intelligent network card data forwarding system of claim 7, wherein the software control module is further configured to initialize the first cache module and the second cache module when the intelligent network card is initialized.
10. A terminal, characterized in that the terminal comprises: a processor, and a memory communicatively coupled to the processor, wherein,
the memory stores instructions executable by the processor, and the instructions are executed by the processor to enable the processor to execute the FPGA-based intelligent network card data forwarding method according to any one of claims 1 to 6.
CN202110806562.5A 2021-07-16 2021-07-16 FPGA-based intelligent network card data forwarding method, system and terminal Active CN113746893B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110806562.5A CN113746893B (en) 2021-07-16 2021-07-16 FPGA-based intelligent network card data forwarding method, system and terminal


Publications (2)

Publication Number Publication Date
CN113746893A true CN113746893A (en) 2021-12-03
CN113746893B CN113746893B (en) 2023-07-14

Family

ID=78728707

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110806562.5A Active CN113746893B (en) 2021-07-16 2021-07-16 FPGA-based intelligent network card data forwarding method, system and terminal

Country Status (1)

Country Link
CN (1) CN113746893B (en)



Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112838989A (en) * 2019-11-25 2021-05-25 中兴通讯股份有限公司 Data stream management method, network equipment and storage medium
CN112134806A (en) * 2020-09-30 2020-12-25 新华三大数据技术有限公司 Flow table aging time adjusting method and device and storage medium
CN112565090A (en) * 2020-11-09 2021-03-26 烽火通信科技股份有限公司 High-speed forwarding method and device
CN112929299A (en) * 2021-01-27 2021-06-08 广州市品高软件股份有限公司 SDN cloud network implementation method, device and equipment based on FPGA accelerator card

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023024799A1 (en) * 2021-08-24 2023-03-02 苏州盛科通信股份有限公司 Packet forwarding method, network forwarding device and computer storage medium
CN115002028A (en) * 2022-04-30 2022-09-02 济南浪潮数据技术有限公司 Message processing method, device and medium
CN115002028B (en) * 2022-04-30 2024-02-13 济南浪潮数据技术有限公司 Message processing method, device and medium
CN115277582A (en) * 2022-07-13 2022-11-01 清华大学 Software data packet classification acceleration method, device, equipment and storage medium
CN115277582B (en) * 2022-07-13 2024-03-26 清华大学 Software data packet classification acceleration method, device, equipment and storage medium
WO2024037366A1 (en) * 2022-08-15 2024-02-22 阿里云计算有限公司 Forwarding rule issuing method, and intelligent network interface card and storage medium
CN115622959A (en) * 2022-11-07 2023-01-17 浪潮电子信息产业股份有限公司 Switch control method, device, equipment, storage medium and SDN (software defined network)
CN115499312A (en) * 2022-11-11 2022-12-20 之江实验室 Integration configuration method based on FPGA (field programmable Gate array) back-end P4 multi-mode intelligent network card
CN116185886A (en) * 2022-12-13 2023-05-30 中国科学院声学研究所 Matching table system
CN116185886B (en) * 2022-12-13 2023-10-13 中国科学院声学研究所 Matching table system
CN116684358A (en) * 2023-07-31 2023-09-01 之江实验室 Flow table management system and method for programmable network element equipment
CN116684358B (en) * 2023-07-31 2023-12-12 之江实验室 Flow table management system and method for programmable network element equipment

Also Published As

Publication number Publication date
CN113746893B (en) 2023-07-14

Similar Documents

Publication Publication Date Title
CN113746893A (en) Intelligent network card data forwarding method, system and terminal based on FPGA
WO2018107681A1 (en) Processing method, device, and computer storage medium for queue operation
JP3957570B2 (en) Router device
KR102364332B1 (en) Non-Volatile Memory Persistence Method and Computing Device
US9213501B2 (en) Efficient storage of small random changes to data on disk
US20200364080A1 (en) Interrupt processing method and apparatus and server
WO2020199760A1 (en) Data storage method, memory and server
CN108089825B (en) Storage system based on distributed cluster
WO2017219867A1 (en) Short message retry processing method, apparatus and system
US20170005953A1 (en) Hierarchical Packet Buffer System
CN113111033A (en) Method and system for dynamically redistributing bucket indexes in distributed object storage system
CN114513472A (en) Network congestion control method and device
CN105302493A (en) Swap-in and swap-out control method and system for SSD cache in mixed storage array
CN113572582B (en) Data transmission and retransmission control method and system, storage medium and electronic device
CN113645140B (en) Message statistical method, device, storage medium and network equipment
CN107517266A (en) A kind of instant communication method based on distributed caching
EP3920475A1 (en) Memory management method and apparatus
CN111522506B (en) Data reading method and device
CN112711564A (en) Merging processing method and related equipment
CN101610477B (en) Multimedia messaging service processing system and method
CN116016313A (en) Flow table aging control method, system, equipment and readable storage medium
CN103491124A (en) Method for processing multimedia message data and distributed cache system
US9641437B2 (en) Packet relay device and packet relay method
CN114615219B (en) Network interface device, electronic device, and method of operating network interface device
CN114900456B (en) MAC address management device and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant