CN116094840A - Intelligent network card and convergence and distribution system - Google Patents

Intelligent network card and convergence and distribution system

Info

Publication number
CN116094840A
Authority
CN
China
Prior art keywords
network card
intelligent network
convergence
distribution
processing engine
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310361386.8A
Other languages
Chinese (zh)
Other versions
CN116094840B (en)
Inventor
赵齐昆
黄祥祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Xingyun Zhilian Technology Co Ltd
Original Assignee
Zhuhai Xingyun Zhilian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Xingyun Zhilian Technology Co Ltd filed Critical Zhuhai Xingyun Zhilian Technology Co Ltd
Priority to CN202310361386.8A priority Critical patent/CN116094840B/en
Publication of CN116094840A publication Critical patent/CN116094840A/en
Application granted granted Critical
Publication of CN116094840B publication Critical patent/CN116094840B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00: Network architectures or network communication protocols for network security
    • H04L63/04: Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks
    • H04L63/0428: Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload
    • H04L63/0442: Network architectures or network communication protocols for network security for providing a confidential data exchange among entities communicating through data packet networks wherein the data content is protected, e.g. by encrypting or encapsulating the payload wherein the sending and receiving network entities apply asymmetric encryption, i.e. different keys for encryption and decryption
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00: Arrangements for monitoring or testing data switching networks
    • H04L43/04: Processing captured monitoring data, e.g. for logfile generation
    • H04L43/045: Processing captured monitoring data, e.g. for logfile generation for graphical visualisation of monitoring data
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00: Arrangements for monitoring or testing data switching networks
    • H04L43/08: Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0876: Network utilisation, e.g. volume of load or congestion level
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00: Traffic control in data switching networks
    • H04L47/10: Flow control; Congestion control
    • H04L47/12: Avoiding congestion; Recovering from congestion
    • H04L47/125: Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/50: Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Hardware Design (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Environmental & Geological Engineering (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides an intelligent network card and a convergence and distribution system. The intelligent network card comprises: a packet processing engine, configured to parse a first message data packet to obtain a message parsing result and to forward the first message data packet to a network card service port or to a convergence and distribution processing engine according to the parsing result and a flow table issued to the packet processing engine; the convergence and distribution processing engine, configured to match the first message data packet against a first convergence and distribution service rule issued to it, obtain a matching result, and then forward the packet to a convergence and distribution port; the network card service port, configured to output network card traffic; and the convergence and distribution port, configured to provide, together with the convergence and distribution processing engine, the convergence and distribution function based on the matching result. In this way, the loss caused by repeated parsing and repeated decryption is avoided, traffic pressure is reduced, integration into existing data links and rule configuration is facilitated, and overall efficiency is improved.

Description

Intelligent network card and convergence and distribution system
Technical Field
The application relates to the field of computer technology, and in particular to an intelligent network card and a convergence and distribution system.
Background
With the development of cloud platforms, cloud computing, cloud services, and similar offerings, more and more services are deployed in cloud environments and cloud applications, and data centers, virtual machine platforms, and the like face ever-increasing traffic. In a cloud environment, for example, most of the traffic runs on a virtual machine platform. It is therefore necessary to perform visual analysis of the service traffic between virtual machines on a host in combination with traffic collection, that is, to extract and analyze the virtualized-environment traffic while it is being collected. This requires a device with a convergence and distribution function, such as a convergence and distribution device. Such a device processes access signals and outputs the result: it generally aggregates each stream of access data, transmits the data over a convergence link to the convergence and distribution device, and the device then distributes the data to different analysis devices, such as visual analysis tools and applications of deep packet inspection.
However, in the prior art, the convergence and distribution device faces enormous traffic pressure, for example the service traffic between virtual machines on a host and the virtualized-environment traffic, and further suffers from problems such as a large volume of useless traffic, repeated message parsing, and repeated message decryption.
Therefore, the application provides an intelligent network card and a convergence and distribution system to solve the above technical problems in the prior art.
Disclosure of Invention
In a first aspect, the present application provides an intelligent network card. The intelligent network card comprises: a packet processing engine, configured to parse a first message data packet to obtain a message parsing result and to forward the packet to a network card service port or to a convergence and distribution processing engine according to the parsing result and a flow table issued to the packet processing engine; the convergence and distribution processing engine, configured to match the first message data packet against a first convergence and distribution service rule issued to it, obtain a matching result, and then forward the packet to a convergence and distribution port; the network card service port, configured to output the traffic of the intelligent network card; and the convergence and distribution port, configured to provide, together with the convergence and distribution processing engine, the convergence and distribution function of the intelligent network card based on the matching result.
According to the first aspect of the application, loss caused by repeated parsing and repeated decryption is avoided, the traffic pressure on back-end devices is reduced, integration into existing data links and rule configuration is facilitated, and overall efficiency is improved.
In a possible implementation manner of the first aspect of the present application, the intelligent network card further includes: the decryption module is used for decrypting the second message data packet received by the intelligent network card to obtain a decrypted second message data packet, wherein the decrypted second message data packet is the first message data packet; the encryption module is used for encrypting the first message data packet to obtain an encrypted first message data packet, and the encrypted first message data packet is used for the convergence and distribution function of the intelligent network card.
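The decrypt-once, re-encrypt-once flow of the decryption and encryption modules described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the XOR keystream is a labeled placeholder for the card's real cipher (which may be asymmetric or hardware-offloaded), and the class and key names are hypothetical.

```python
# Illustrative sketch of the decryption/encryption modules described above.
# The XOR keystream is a placeholder standing in for the card's real cipher;
# class, method, and key names are hypothetical, not taken from the patent.

def _xor_stream(data: bytes, key: bytes) -> bytes:
    """Toy symmetric transform standing in for real encryption."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

class CryptoModules:
    def __init__(self, key: bytes):
        self.key = key

    def decrypt(self, second_packet: bytes) -> bytes:
        # "Second message data packet" (ciphertext) becomes the "first
        # message data packet" (plaintext) handed to the packet processing engine.
        return _xor_stream(second_packet, self.key)

    def encrypt(self, first_packet: bytes) -> bytes:
        # Re-encrypt once, after parsing and matching, for the
        # convergence and distribution path.
        return _xor_stream(first_packet, self.key)

modules = CryptoModules(key=b"demo-key")
wire = modules.encrypt(b"payload")          # ciphertext on the wire
assert modules.decrypt(wire) == b"payload"  # single decryption on the card
```

The point of the sketch is the single round trip: one decryption on ingress and one encryption on egress, rather than repeated decryption at every downstream device.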
In a possible implementation manner of the first aspect of the present application, the traffic output by the intelligent network card through the network card service port is not used for the convergence and distribution function of the intelligent network card.
In a possible implementation manner of the first aspect of the present application, the convergence and distribution function of the intelligent network card includes one or more of the following: traffic convergence, traffic filtering, traffic distribution, traffic forwarding, mobile network signaling analysis, replication output, and load balancing.
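The load-balancing function listed above can be illustrated with a per-flow hash: packets of the same flow always reach the same back-end analysis device. This is a hedged sketch; the device names and the choice of CRC32 are illustrative, not specified by the patent.

```python
# Sketch of the "load balancing" function listed above: a stable per-flow
# hash spreads flows across back-end analysis devices while keeping each
# flow pinned to one device. Device names are illustrative assumptions.
import zlib

DEVICES = ["dpi-0", "dpi-1", "dpi-2"]

def pick_device(five_tuple: tuple) -> str:
    # CRC32 gives a fast, deterministic hash of the flow identity.
    key = "|".join(map(str, five_tuple)).encode()
    return DEVICES[zlib.crc32(key) % len(DEVICES)]

flow = ("10.0.0.1", 1234, "10.0.0.2", 80, "TCP")
assert pick_device(flow) == pick_device(flow)  # same flow, same device
```

Pinning a flow to one device matters because stateful analysis (e.g. session reassembly in a DPI application) needs to see all packets of the flow.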
In a possible implementation manner of the first aspect of the present application, the flow table issued to the packet processing engine corresponds to lower-layer information in the flow rules, where the lower-layer information includes a five-tuple comprising a source address, a source port, a destination address, a destination port, and a transport-layer protocol.
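A minimal sketch of such a five-tuple flow table follows. The entries, action names, and default behavior are illustrative assumptions, not taken from the patent.

```python
# Sketch of the lower-layer flow table described above: entries keyed by
# the five-tuple (source address, source port, destination address,
# destination port, transport-layer protocol). Action names are assumed.
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen -> hashable, usable as a dict key
class FiveTuple:
    src_addr: str
    src_port: int
    dst_addr: str
    dst_port: int
    proto: str

flow_table = {
    FiveTuple("192.168.1.10", 5001, "10.0.0.5", 443, "TCP"): "to_aggregation_engine",
    FiveTuple("192.168.1.11", 5002, "10.0.0.6", 22, "TCP"): "to_service_port",
}

def lookup(ft: FiveTuple) -> str:
    # Unmatched flows fall through to ordinary network card forwarding.
    return flow_table.get(ft, "to_service_port")

assert lookup(FiveTuple("192.168.1.10", 5001, "10.0.0.5", 443, "TCP")) == "to_aggregation_engine"
```

The lookup result models the packet processing engine's choice between the network card service port and the convergence and distribution processing engine.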
In a possible implementation manner of the first aspect of the present application, the first convergence and distribution service rule issued to the convergence and distribution processing engine corresponds to higher-layer information in the flow rules, where the higher-layer information includes a regular-expression rule, a five-tuple filtering rule, an application-layer keyword filtering rule, and an application-layer uniform resource locator (URL) filtering rule.
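The higher-layer rule kinds listed above can be sketched together: a regular-expression rule, a keyword filter, and a URL-prefix filter applied to a decrypted payload. All patterns below are illustrative assumptions.

```python
# Sketch of the higher-layer rule matching described above. The regex,
# keyword, and URL-prefix patterns are illustrative, not a real rule base.
import re

REGEX_RULE = re.compile(rb"Host: .*\.example\.com")   # regular-expression rule
KEYWORDS = [b"login", b"password"]                     # keyword filtering rule
URL_PREFIXES = [b"/api/", b"/admin/"]                  # URL filtering rule

def matches_rules(payload: bytes, url: bytes) -> bool:
    if REGEX_RULE.search(payload):
        return True
    if any(k in payload for k in KEYWORDS):
        return True
    return any(url.startswith(p) for p in URL_PREFIXES)

assert matches_rules(b"GET /x HTTP/1.1\r\nHost: a.example.com", b"/x")
assert not matches_rules(b"hello", b"/static/img.png")
```

A match result like this would feed the convergence and distribution processing engine's decision of where to forward the packet on the convergence and distribution port.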
In a possible implementation manner of the first aspect of the present application, the convergence and distribution port is connected to one or more deep packet inspection applications to implement traffic analysis.
In a possible implementation manner of the first aspect of the present application, the convergence and distribution port is connected to a second convergence and distribution device, where the second convergence and distribution device is configured to converge and distribute the traffic received from the convergence and distribution port according to a second convergence and distribution service rule, the first and second convergence and distribution service rules together form a convergence and distribution policy, and the first convergence and distribution service rule is at least configured to screen out useless traffic according to the policy.
In a possible implementation manner of the first aspect of the present application, the distribution of workload between the intelligent network card and the second convergence and distribution device in executing the convergence and distribution policy is based on the division of labor between the first and second convergence and distribution service rules in constructing the policy.
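The two-stage division of labor described above can be sketched as a pipeline: the first rule on the card screens out useless traffic, and the second rule on the back-end device distributes what remains. The rule contents (port-based screening and distribution) are illustrative assumptions.

```python
# Sketch of the two-stage policy described above: first-rule screening on
# the intelligent network card, second-rule distribution on the back-end
# convergence and distribution device. Rule contents are illustrative.
def first_stage(packets):
    # On-card rule: screen out useless traffic (here, non-web ports).
    return [p for p in packets if p["dst_port"] in (80, 443)]

def second_stage(packets):
    # Back-end rule: distribute surviving traffic to analysis outputs.
    return {"web": [p for p in packets if p["dst_port"] == 80],
            "tls": [p for p in packets if p["dst_port"] == 443]}

pkts = [{"dst_port": 80}, {"dst_port": 53}, {"dst_port": 443}]
out = second_stage(first_stage(pkts))
assert len(out["web"]) == 1 and len(out["tls"]) == 1
```

Because the first stage runs on the card, useless traffic never crosses the convergence link, which is the workload-distribution benefit the passage describes.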
In a possible implementation manner of the first aspect of the present application, when the convergence and distribution policy is used in a privacy-computing service scenario, the division of labor between the first convergence and distribution service rule and the second convergence and distribution service rule includes that the first convergence and distribution service rule contains the encryption and decryption rules, and the distribution of workload between the intelligent network card and the second convergence and distribution device in executing the policy includes that the intelligent network card performs the encryption and decryption computation.
In a second aspect, an embodiment of the present application further provides a convergence and distribution system. The system comprises: a plurality of intelligent network cards, each comprising a packet processing engine, a convergence and distribution processing engine, a network card service port, and a convergence and distribution port; and a second convergence and distribution device connected to the convergence and distribution ports of the plurality of intelligent network cards, configured to converge and distribute the traffic received from those ports according to a second convergence and distribution service rule. For each of the plurality of intelligent network cards: the packet processing engine parses a first message data packet received by the card to obtain a message parsing result and forwards the packet to the card's network card service port or to its convergence and distribution processing engine according to the parsing result and the flow table issued to the packet processing engine; the convergence and distribution processing engine matches the first message data packet against the first convergence and distribution service rule issued to it, obtains a matching result, and forwards the packet to the card's convergence and distribution port; the network card service port outputs the card's traffic; and the convergence and distribution port, together with the convergence and distribution processing engine, provides the card's convergence and distribution function based on the matching result.
According to the second aspect of the present application, loss caused by repeated parsing and repeated decryption is avoided, the traffic pressure on back-end devices is reduced, integration into existing data links and rule configuration is facilitated, and overall efficiency is improved.
In a possible implementation manner of the second aspect of the present application, for each of the plurality of intelligent network cards, the first convergence and distribution service rule issued to the convergence and distribution processing engine of the intelligent network card indicates screening out useless traffic.
In a possible implementation manner of the second aspect of the present application, for each of the plurality of intelligent network cards, the traffic output through the network card service port of the intelligent network card is not used for the convergence and distribution function of the intelligent network card.
In a possible implementation manner of the second aspect of the present application, the first convergence and distribution service rules issued to the convergence and distribution processing engines of the plurality of intelligent network cards and the second convergence and distribution service rule together form the convergence and distribution policy of the convergence and distribution system.
In a possible implementation manner of the second aspect of the present application, the division of labor on the convergence and distribution policy between the first convergence and distribution service rules issued to the convergence and distribution processing engines of the plurality of intelligent network cards and the second convergence and distribution service rule is based on the business application scenario of the convergence and distribution system.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of an application scenario of a convergence and distribution device;
fig. 2 is a schematic diagram of an intelligent network card according to an embodiment of the present application;
fig. 3 is a schematic diagram of a convergence and distribution system according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that in the description of this application, "at least one" means one or more than one, and "a plurality" means two or more than two. In addition, the words "first," "second," and the like, unless otherwise indicated, are used solely for the purposes of description and are not to be construed as indicating or implying a relative importance or order.
Fig. 1 is a schematic diagram of an application scenario of a convergence and distribution device. As shown in fig. 1, signals from the financial production network 110 are directed to the flow collection platform 120, and messages output by the flow collection platform 120 are transmitted to the visual analysis tool 130. The flow collection platform 120 includes a management center 122 and a splitter 124. The splitter 124, as a convergence and distribution device, provides the related functions. In general, the convergence and distribution device processes access signals and outputs the result: it aggregates each stream of access data, transmits the data over a convergence link to the convergence and distribution device, which then distributes the data to different analysis devices, such as visual analysis tools and applications of deep packet inspection (Deep Packet Inspection, DPI). DPI is a packet-based deep inspection technology that inspects different network application-layer loads and determines the validity of a message by examining its payload. Through feature matching, DPI parses and extracts the header information of each layer added during packet encapsulation and matches it against the feature information in an existing rule base, thereby identifying the traffic. DPI technology can identify various malicious applications and their contents, and, combined with third-party application systems, it enables network traffic analysis, network optimization, and security management and control.
DPI technology is a core basic technology of network visualization; it sits in the data-processing link of the network visualization chain, connecting data collection with data application. As shown in fig. 1, the financial production network 110 provides access data to the flow collection platform 120 for convergence and distribution and subsequent DPI applications. The financial production network 110 may include various business zones, such as a switching core, a business area, an internet area, an office area, and a home area, representing various possible access data. The financial production network 110 outputs this access data to the splitter 124 under the flow collection platform 120 through port mirroring, link concatenation, and various traffic modes. The splitter 124 provides convergence and distribution functions such as filtering, convergence, replication, distribution, and load balancing, so that the access data from the financial production network 110, after being processed according to the relevant policies and rules, is sent to the visual analysis tool 130 as various messages. The visual analysis tool 130, in conjunction with the DPI applications described above, parses the messages from the flow collection platform 120 to provide applications such as network analysis, traffic auditing, network security, traffic backtracking, and situational awareness. The management center 122 under the flow collection platform 120 provides management functions such as cluster management, alerting, flow monitoring, and flow analysis, and, together with the splitter 124, forms a bridge from data collection to data application between the financial production network 110 and the visual analysis tool 130.
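The DPI-style feature matching described above can be illustrated with a tiny signature base: payload bytes are compared against known patterns to label the application protocol. The signatures below are illustrative, not a real DPI rule base.

```python
# Minimal illustration of the DPI feature matching described above:
# payload bytes are checked against a small signature base to label the
# application protocol. Signatures are illustrative assumptions.
SIGNATURES = {
    b"\x16\x03\x01": "TLS ClientHello",
    b"GET ": "HTTP request",
    b"SSH-2.0": "SSH banner",
}

def classify(payload: bytes) -> str:
    for magic, label in SIGNATURES.items():
        if payload.startswith(magic):
            return label
    return "unknown"

assert classify(b"GET /index.html HTTP/1.1") == "HTTP request"
assert classify(b"\x00\x01") == "unknown"
```

Real DPI engines combine such byte signatures with regular expressions and stateful protocol analysis, but the matching principle is the same.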
The application scenario shown in fig. 1, in which the splitter 124 under the flow collection platform 120 builds a bridge from data collection to data application between the financial production network 110 and the visual analysis tool 130 through the convergence and distribution functions, also applies to other service scenarios, such as data centers and cloud computing. The traffic of different virtual machines is generally converged through a front-end acquisition card, such as a network card, and then transmitted to the convergence and distribution device through a convergence and distribution port. The convergence and distribution device matches the received traffic (messages, access data, and the like) to the corresponding analysis device (such as a DPI application) according to preset matching rules. The convergence and distribution device therefore needs to cope with the traffic from front-end acquisition cards such as network cards as well as the virtualized-environment traffic and the traffic between the virtual machines on a host. In the face of this traffic pressure, convergence and distribution devices, such as the splitter 124 shown in fig. 1, also need to take data security and privacy requirements into account.
The traffic transmitted from a front-end acquisition card such as a network card, for example various message data packets, is generally encrypted, so the convergence and distribution device usually has to decrypt the encrypted messages before filtering and distributing them, and the messages transmitted from the convergence and distribution device to back ends such as DPI applications also need to be encrypted in transit. The convergence and distribution device may therefore consume a large amount of resources on decrypting, parsing, and encrypting message data packets while handling a large amount of traffic. In service scenarios with higher requirements on data security and privacy protection, such as privacy computing and federated learning, an asymmetric encryption algorithm may be adopted to encrypt the messages, which makes the convergence and distribution device consume still more resources on message decryption, message unpacking, and the like, and also introduces data processing delay. To this end, the present application provides an intelligent network card and a convergence and distribution system to solve the above problems, described in detail below in connection with other embodiments of the present application.
Fig. 2 is a schematic diagram of an intelligent network card according to an embodiment of the present application. As shown in fig. 2, the intelligent network card 200 receives access data from the outside; fig. 2 schematically shows the intelligent network card 200 receiving collected data 202. The intelligent network card 200 includes a packet processing engine 210, a convergence and distribution processing engine 220, a network card service port 230, and a convergence and distribution port 240. The packet processing engine 210 is configured to parse the first message data packet to obtain a message parsing result and to forward the first message data packet to the network card service port 230 or to the convergence and distribution processing engine 220 according to the parsing result and the flow table issued to the packet processing engine 210. The convergence and distribution processing engine 220 is configured to match the first message data packet against the first convergence and distribution service rule issued to it, obtain a matching result, and forward the packet to the convergence and distribution port 240. The network card service port 230 is configured to output the traffic of the intelligent network card 200. The convergence and distribution port 240 is configured to provide, together with the convergence and distribution processing engine 220, the convergence and distribution function of the intelligent network card 200 based on the matching result. The collected data 202 received from the outside by the intelligent network card 200 may be front-end collected data or access data obtained from a service scenario such as the financial production network 110 shown in fig. 1.
The packet processing engine 210 is configured to unpack and parse the messages received by the intelligent network card 200. In some cases, the intelligent network card 200 receives plaintext data, that is, unencrypted data; the packet processing engine 210 can then unpack and parse the messages without any decryption. For example, if the collected data 202 is plaintext, the packet processing engine 210 may directly unpack and parse it. In other cases, the intelligent network card 200 receives ciphertext data, that is, encrypted data; for example, it may acquire data from a production service, such as the financial production network 110 shown in fig. 1, in which case the collected data 202 is ciphertext and must be decrypted before the packet processing engine 210 can unpack and parse it. The first message data packet may therefore be defined as the message data packet received by the packet processing engine 210; that is, the packet processing engine 210 is configured to parse the first message data packet to obtain a message parsing result. When no decryption is needed, the first message data packet corresponds to the plaintext data received by the intelligent network card 200, such as the collected data 202; when decryption is needed, it corresponds to the decrypted message data packet. Parsing the first message data packet means parsing it and extracting the information used for flow table matching. The packet processing engine 210 then forwards the first message data packet to the network card service port 230 or to the convergence and distribution processing engine 220 according to the parsing result and the flow table issued to the packet processing engine 210.
Here, the packet processing engine 210 may determine, through flow table matching based on the parsing result of the first message data packet and the flow table, whether the packet is suited to the convergence and distribution functions. For example, the packet processing engine 210 may determine that the first message data packet is suited to convergence and distribution functions such as traffic filtering and traffic distribution and should therefore be forwarded to the convergence and distribution port 240, or that it is suited to conventional message forwarding and should therefore be forwarded to the network card service port 230. Thus, after the packet processing engine 210 parses a message data packet received by the intelligent network card 200, the packet is selectively forwarded to the network card service port 230 or to the convergence and distribution processing engine 220 within the intelligent network card 200, which helps implement convergence-and-distribution message processing alongside message parsing. Further, the convergence and distribution processing engine 220 is configured to match the first message data packet against the first convergence and distribution service rule issued to it, obtain a matching result, and forward the packet to the convergence and distribution port 240. In this way, the convergence and distribution processing engine 220 built into the intelligent network card 200 integrates convergence-and-distribution message processing into the card itself, in particular by matching the first message data packet against the issued first convergence and distribution service rule to obtain the matching result.
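The parse-then-match step described above can be sketched with the standard library: extracting the five-tuple from a minimal IPv4/TCP header, which is the kind of information the packet processing engine feeds into flow table matching. The hand-built header below is illustrative; real engines handle options, fragments, and many more protocols.

```python
# Sketch of the parse step described above: extract the five-tuple from a
# minimal IPv4/TCP header. The hand-built header is an illustrative input.
import socket
import struct

def parse_five_tuple(packet: bytes) -> tuple:
    ihl = (packet[0] & 0x0F) * 4           # IPv4 header length in bytes
    proto = packet[9]                       # 6 = TCP
    src = socket.inet_ntoa(packet[12:16])   # source address
    dst = socket.inet_ntoa(packet[16:20])   # destination address
    sport, dport = struct.unpack("!HH", packet[ihl:ihl + 4])
    return (src, sport, dst, dport, proto)

# Hand-built 20-byte IPv4 header (protocol = TCP) + 4 bytes of TCP ports.
hdr = bytes([0x45, 0, 0, 24, 0, 0, 0, 0, 64, 6, 0, 0,
             10, 0, 0, 1, 10, 0, 0, 2]) + struct.pack("!HH", 1234, 80)
assert parse_five_tuple(hdr) == ("10.0.0.1", 1234, "10.0.0.2", 80, 6)
```

The extracted tuple would then be looked up in the flow table to decide between the network card service port and the convergence and distribution processing engine.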
Therefore, through the convergence and splitting processing engine 220 and the first convergence and splitting service rule issued to it, useless traffic can be screened out of the traffic received by the intelligent network card 200, or specific convergence and splitting rules can be applied to perform traffic screening and traffic splitting according to actual needs. Further, the network card service port 230 is configured to output the traffic of the intelligent network card 200, while the convergence and splitting port 240 is configured to provide, together with the convergence and splitting processing engine 220, the convergence and splitting function of the intelligent network card 200 based on the matching processing result. Thus, the network card service port 230 and the convergence and splitting port 240 serve as external ports of the intelligent network card 200: conventional forwarding of network traffic to downstream devices, that is, the output of the traffic of the intelligent network card 200, is provided through the network card service port 230, and the data for convergence and splitting, that is, the convergence and splitting function of the intelligent network card 200, is provided through the convergence and splitting port 240. Therefore, through the network card service port 230, other devices can, together with the intelligent network card 200, complete conventional network card traffic handling, forwarding of network messages, and the like; 
in contrast, through the convergence and splitting port 240, other devices may, together with the intelligent network card 200, form a convergence link. For example, a back-end convergence and splitting device may be connected to the convergence and splitting port 240 of the intelligent network card 200, so that the intelligent network card 200 undertakes part of the tasks of data packet parsing and filtering, tunnel message decapsulation, and the like.
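The dispatch performed by the packet processing engine described above can be sketched in a few lines of code. The sketch below is illustrative only, not the patent's implementation; all names such as `FiveTuple`, `forward`, and the two port labels are hypothetical. A parsed five-tuple is looked up in a flow table, and the packet is routed either to the conventional service port or to the convergence and splitting processing engine.

```python
# Illustrative sketch (hypothetical names): flow-table-based dispatch between
# the network card service port and the convergence/splitting engine.
from dataclasses import dataclass

@dataclass(frozen=True)
class FiveTuple:
    src_addr: str
    src_port: int
    dst_addr: str
    dst_port: int
    protocol: str  # transport layer protocol, e.g. "tcp" or "udp"

SERVICE_PORT = "network_card_service_port"
CONVERGENCE_ENGINE = "convergence_splitting_engine"

def parse_packet(raw: dict) -> FiveTuple:
    """Stand-in for header parsing: extract the five-tuple from a packet."""
    return FiveTuple(raw["src_addr"], raw["src_port"],
                     raw["dst_addr"], raw["dst_port"], raw["protocol"])

def forward(raw: dict, flow_table: dict) -> str:
    """Flow-table match on the parse result; default to conventional forwarding."""
    key = parse_packet(raw)
    return flow_table.get(key, SERVICE_PORT)

flow_table = {
    # Traffic matching this entry is suitable for convergence/splitting handling.
    FiveTuple("10.0.0.1", 5000, "10.0.0.9", 80, "tcp"): CONVERGENCE_ENGINE,
}

pkt_a = {"src_addr": "10.0.0.1", "src_port": 5000,
         "dst_addr": "10.0.0.9", "dst_port": 80, "protocol": "tcp"}
pkt_b = {"src_addr": "10.0.0.2", "src_port": 6000,
         "dst_addr": "10.0.0.9", "dst_port": 443, "protocol": "tcp"}
```

A packet that matches a flow-table entry is handed to the convergence and splitting engine; everything else takes the conventional path, which matches the default-forwarding behavior described above.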
With continued reference to fig. 2, the intelligent network card 200 implements, through the built-in packet processing engine 210 and convergence and splitting processing engine 220, packet unpacking, packet parsing, and the packet processing that performs convergence and splitting related functions inside the network card. Because conventional network message processing also involves unpacking, decrypting, and parsing the message, the intelligent network card 200 shown in fig. 2, by integrating the convergence and splitting related functions into the traffic forwarding of a conventional network card, helps to realize message processing for convergence and splitting related functions while realizing message parsing. This means that only one message decryption and one message parsing are needed at the intelligent network card 200, so the loss caused by repeated parsing and repeated decryption is avoided. In some service scenarios with higher requirements on data security and privacy protection, such as privacy computing and federated learning, in which an asymmetric encryption algorithm may be adopted to encrypt the message, the intelligent network card 200 can effectively save the cost of message encryption and decryption and improve overall system efficiency. In addition, the intelligent network card 200 performs forwarding internally, that is, from the packet processing engine 210 to the convergence and splitting processing engine 220; compared with copying the full traffic and transmitting it to a back-end convergence and splitting device, there is no need to copy a large amount of useless traffic, and the processing pressure on the back-end device is reduced. The intelligent network card 200 shown in fig. 
2 provides, on one hand, conventional network card packet and traffic forwarding to the outside through the network card service port 230, and on the other hand, provides, together with the convergence and splitting processing engine 220, a convergence and splitting function of the intelligent network card 200 based on the matching processing result through the convergence and splitting port 240. External devices with respect to the intelligent network card 200 may form a conventional network data link or an aggregate link for aggregate-splitting related processing by connecting to the network card traffic port 230 or to the aggregate-splitting port 240, respectively, together with the intelligent network card 200. This means that the intelligent network card 200 can make full use of existing data links, such as existing convergence links and downstream convergence and offloading devices, without modifying the existing network architecture, and can also maintain the conventional network card functions of the intelligent network card 200, such as forwarding network card traffic without performing convergence and offloading processing. Further, the packet processing engine 210 in the intelligent network card 200 forwards the first packet data packet to the network card service port 230 or the convergence and distribution processing engine 220 according to the packet parsing result and the flow table issued to the packet processing engine 210, and the convergence and distribution processing engine 220 performs matching processing on the first packet data packet according to the first convergence and distribution service rule issued to the convergence and distribution processing engine 220 to obtain a matching processing result and forwards the first packet data packet to the convergence and distribution port 240. 
Therefore, by managing the flow table issued to the packet processing engine 210 and the first aggregate-split service rule issued to the aggregate-split processing engine 220, rule configuration, such as configuring the flow rule and the aggregate-split service rule, can be conveniently performed, and corresponding flow rules and aggregate-split service rules can also be designed in combination with specific communication protocols, filtering policies, and the like. For example, the intelligent network card 200 may be applied to the flow collection platform 120 shown in fig. 1 and used to replace the splitter 124, and the management center 122 may be used to define the flow rules and the aggregate-split service rules issued to the intelligent network card 200 in combination with the management requirements, so that the conventional network card flow and the flow that needs to be subjected to the aggregate-split processing can be managed conveniently. For example, a part of convergence and distribution related functions can be arranged to be borne by the intelligent network card 200 according to the needs of specific service scenes, and a convergence and distribution system is formed by combining downstream convergence and distribution equipment and the intelligent network card 200, so that better division of work is realized. A relatively simple aggregate-split traffic rule may be issued to the intelligent network card 200 as a first aggregate-split traffic rule, e.g., screening for unwanted traffic, etc., while a relatively complex aggregate-split traffic rule may be implemented by an otherwise provided aggregate-split device, which may better utilize limited storage resources and computational resources within the intelligent network card 200. In summary, the intelligent network card 200 shown in fig. 
2 avoids the loss caused by repeated parsing and repeated decryption, reduces the traffic pressure on the back-end device, facilitates integration into existing data links and rule configuration, and improves overall efficiency.
In one possible implementation, the intelligent network card 200 further includes: a decryption module (not shown), configured to decrypt the second packet data packet received by the intelligent network card 200 to obtain a decrypted second packet data packet, where the decrypted second packet data packet is the first packet data packet; and an encryption module (not shown), configured to encrypt the first packet data packet to obtain an encrypted first packet data packet, where the encrypted first packet data packet is used for the convergence and splitting function of the intelligent network card 200. In application scenarios related to convergence and splitting, the data output by the intelligent network card 200 through the convergence and splitting port 240 is generally transmitted to a back-end convergence and splitting device or to an analysis tool such as a DPI application, so the packet data packet is encrypted before transmission for data security and privacy protection. In addition, the intelligent network card 200 may also receive encrypted messages, that is, ciphertext data. Therefore, the intelligent network card 200 may include the decryption module for decrypting the received second packet data packet into the first packet data packet, that is, the packet received by the packet processing engine 210, and the encryption module for providing the encrypted first packet data packet for the convergence and splitting function of the intelligent network card 200. It should be understood that the decryption module only needs to decrypt the second packet data packet received by the intelligent network card 200 once. 
In this way, when the intelligent network card 200 performs the processing related to the convergence and distribution of the first packet data packet subsequently, the processing related to the convergence and distribution of the unencrypted packet can be performed without performing decryption operation again, and the data security and privacy protection are ensured because the traffic forwarding is performed inside the intelligent network card 200.
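The decrypt-once flow described above can be illustrated with a minimal sketch. This is purely hypothetical: a toy XOR "cipher" stands in for the real (possibly asymmetric) scheme, and the function names are invented. The point is that ciphertext is decrypted exactly once on ingress, internal processing works on plaintext, and the result is re-encrypted once on egress.

```python
# Hypothetical sketch of the decrypt-once / encrypt-once flow. The XOR cipher
# below is a placeholder for a real encryption scheme; it is its own inverse.
KEY = 0x5A

def xor_cipher(data: bytes, key: int = KEY) -> bytes:
    """Toy symmetric placeholder cipher: applying it twice restores the input."""
    return bytes(b ^ key for b in data)

def receive_and_process(second_packet: bytes) -> bytes:
    """Decrypt once on ingress, process plaintext, encrypt once on egress."""
    first_packet = xor_cipher(second_packet)  # decryption module: run exactly once
    processed = first_packet                  # internal processing sees plaintext
    return xor_cipher(processed)              # encryption module: run exactly once

plaintext = b"payload"
ciphertext = xor_cipher(plaintext)
```

Because forwarding happens inside the card, no intermediate hop ever sees the plaintext, matching the data-security point made above.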
In one possible implementation, the traffic of the intelligent network card 200 output through the network card service port 230 is not used for the convergence and splitting function of the intelligent network card 200. In this way, the network card service port 230 carries conventional network card traffic, which is thereby distinguished from the data for the convergence and splitting function output through the convergence and splitting port 240. This also means that devices external to the intelligent network card 200 may, by connecting to the network card service port 230 or to the convergence and splitting port 240, respectively form with the intelligent network card 200 a conventional network data link or a convergence link for convergence and splitting related processing. This is advantageous in fully utilizing existing data links, such as existing convergence links and downstream convergence and splitting devices, without altering the existing network architecture.
In one possible implementation, the convergence and splitting function of the intelligent network card 200 includes one or more of the following: traffic convergence, traffic filtering, traffic splitting, traffic forwarding, mobile network signaling analysis, replication output, and load balancing. The convergence and splitting processing engine 220 performs matching processing on the first packet data packet according to the first convergence and splitting service rule issued to the convergence and splitting processing engine 220 to obtain a matching processing result, and forwards the first packet data packet to the convergence and splitting port 240. Therefore, the first convergence and splitting service rule issued to the convergence and splitting processing engine 220 may be managed through rule configuration, so that the intelligent network card 200 can implement various convergence and splitting functions.
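Two of the functions listed above can be sketched concretely. The code below is an illustrative assumption, not the patent's implementation: traffic filtering drops packets destined for unwanted ports, and load balancing hashes a flow identifier onto one of several outputs so that all packets of one flow stay on the same output.

```python
# Hypothetical sketch of traffic filtering and hash-based load balancing.
import hashlib

def filter_traffic(packets, unwanted_dst_ports):
    """Traffic filtering: drop packets whose destination port is unwanted."""
    return [p for p in packets if p["dst_port"] not in unwanted_dst_ports]

def balance(packet, n_outputs):
    """Load balancing: hash the flow identifier onto one of n outputs, so
    every packet of the same flow lands on the same output."""
    flow_id = "{src_addr}:{src_port}->{dst_addr}:{dst_port}".format(**packet)
    digest = hashlib.sha256(flow_id.encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_outputs

packets = [
    {"src_addr": "10.0.0.1", "src_port": 1234, "dst_addr": "10.0.0.9", "dst_port": 80},
    {"src_addr": "10.0.0.2", "src_port": 1235, "dst_addr": "10.0.0.9", "dst_port": 9999},
]
kept = filter_traffic(packets, unwanted_dst_ports={9999})
```

Hashing on the five-tuple-style flow identifier (rather than round-robin) is what keeps a flow's packets together, which matters for downstream per-flow analysis.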
In one possible implementation, the flow table issued to the packet processing engine corresponds to lower-layer information in the flow rules, the lower-layer information including a five-tuple consisting of a source address, a source port, a destination address, a destination port, and a transport layer protocol. By managing the flow table issued to the packet processing engine 210 and the first convergence and splitting service rule issued to the convergence and splitting processing engine 220, rule configuration, such as configuring the flow rules and the convergence and splitting service rules, may be conveniently performed, and corresponding flow rules and convergence and splitting service rules may also be designed in combination with specific communication protocols, filtering policies, and the like. Here, lower-layer information in the flow rules, for example flow rules at layer four and below, may be issued as the flow table to the packet processing engine 210. The five-tuple, as a set of this information, can be used for flow table matching, packet forwarding, and so on. In some embodiments, the first convergence and splitting service rule issued to the convergence and splitting processing engine corresponds to higher-layer information in the flow rules, the higher-layer information including regular expression rules, five-tuple filtering rules, application-layer keyword filtering rules, and application-layer uniform resource locator (Uniform Resource Locator, URL) filtering rules. Here, higher-layer information in the flow rules, for example rules above layer four, may be used as the first convergence and splitting service rule issued to the convergence and splitting processing engine. 
Specifically, the first aggregate-split service rule may include, for example, a quintuple filtering rule for filtering based on quintuple information, an application layer keyword filtering rule for filtering traffic based on keywords, and an application layer URL filtering rule for filtering traffic based on URLs. It should be understood that the first aggregate-offload service rule issued to the aggregate-offload processing engine may be combined with the flow table issued to the packet processing engine 210, or that higher-level information in the flow rule may be combined with lower-level information in the flow rule, so as to embody an overall management policy, or may better target specific communication protocols and service scenario requirements. Therefore, the intelligent network card 200 can be used as a link for forwarding the traffic and the messages of the conventional network card on one hand, and can be used as a link in the convergence and diversion link on the other hand, and rule configuration and strategy design on the management level are provided based on the overall requirements such as communication protocols, filtering rules, service scenes and the like.
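The three higher-layer rule types named above can be sketched as predicates over a packet. This is an illustrative assumption (the helper names and packet fields are hypothetical): a regular expression rule, an application-layer keyword rule, and an application-layer URL rule, combined so a packet matches when any rule fires.

```python
# Hypothetical sketch of higher-layer filtering rules as composable predicates.
import re

def regex_rule(pattern):
    """Regular expression rule: match the pattern against the payload."""
    compiled = re.compile(pattern)
    return lambda pkt: bool(compiled.search(pkt.get("payload", "")))

def keyword_rule(keyword):
    """Application-layer keyword rule: look for a literal keyword in the payload."""
    return lambda pkt: keyword in pkt.get("payload", "")

def url_rule(prefix):
    """Application-layer URL rule: match on the request URL's prefix."""
    return lambda pkt: pkt.get("url", "").startswith(prefix)

def matches_any(pkt, rules):
    return any(rule(pkt) for rule in rules)

rules = [
    regex_rule(r"^GET /api/"),
    keyword_rule("session_token"),
    url_rule("https://example.com/login"),
]
```

In practice such predicates could be combined with the lower-layer five-tuple flow table, mirroring the layered division the text describes.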
In one possible implementation, the convergence and splitting port 240 interfaces with one or more deep packet inspection applications to enable traffic analysis. A deep packet inspection (DPI) application parses and extracts, through feature matching techniques, the layered header information added to a data packet during encapsulation, and then matches that header information against the feature information in an existing rule base, thereby identifying the traffic.
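The DPI feature-matching idea can be reduced to a small sketch. The signature base below is invented for illustration (real DPI rule bases are far richer): extracted header fields, plus an optional payload prefix, are compared against known signatures to label the traffic.

```python
# Hypothetical DPI sketch: match extracted header fields against a tiny
# signature base. (protocol, destination port, optional payload prefix).
SIGNATURES = [
    (("tcp", 80, "GET "), "http"),   # plain HTTP request
    (("tcp", 443, None), "tls"),     # TLS on the well-known port
    (("udp", 53, None), "dns"),      # DNS
]

def classify(pkt):
    """Return the label of the first signature the packet matches."""
    for (proto, dst_port, payload_prefix), label in SIGNATURES:
        if (pkt["protocol"] == proto and pkt["dst_port"] == dst_port and
                (payload_prefix is None or
                 pkt.get("payload", "").startswith(payload_prefix))):
            return label
    return "unknown"
```

A packet falling through every signature is labeled unknown, which is typically what gets handed to deeper analysis downstream.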
In a possible implementation manner, the aggregate and shunt port 240 is connected to a second aggregate and shunt device, where the second aggregate and shunt device is configured to aggregate and shunt the traffic received from the aggregate and shunt port 240 according to a second aggregate and shunt service rule, and the first aggregate and shunt service rule and the second aggregate and shunt service rule together form an aggregate and shunt policy, and the first aggregate and shunt service rule is at least configured to screen the useless traffic according to the aggregate and shunt policy. External devices with respect to the intelligent network card 200 may form a conventional network data link or an aggregate link for aggregate-splitting related processing by connecting to the network card traffic port 230 or to the aggregate-splitting port 240, respectively, together with the intelligent network card 200. Here, the second convergence and offloading device is connected to the convergence and offloading port 240, and performs convergence and offloading on the traffic received from the convergence and offloading port 240 according to a second convergence and offloading traffic rule. Therefore, the intelligent network card 200 and the second convergence and distribution device implement division according to the first convergence and distribution service rule and the second convergence and distribution service rule, where the first convergence and distribution service rule is at least used for screening useless traffic according to the convergence and distribution policy. This means that, at the intelligent network card 200, the flow pressure faced by the second convergence and diversion device can be effectively reduced by the convergence and diversion processing engine 220 of the intelligent network card 200 to screen the useless flow according to the first convergence and diversion service rule. 
Moreover, by managing the convergence and splitting policy and adjusting the first convergence and splitting service rule and the second convergence and splitting service rule, the workload between the intelligent network card 200 and the second convergence and splitting device can be arranged in combination with the needs of specific service scenarios. In some embodiments, the workload distribution between the intelligent network card 200 and the second convergence and splitting device in executing the convergence and splitting policy is based on the division of labor between the first convergence and splitting service rule and the second convergence and splitting service rule in constructing the convergence and splitting policy. For example, a relatively simple convergence and splitting service rule, such as screening useless traffic, may be issued to the intelligent network card 200 as the first convergence and splitting service rule, while a relatively complex convergence and splitting service rule may be executed by the second convergence and splitting device; this makes better use of the limited storage and computational resources within the intelligent network card 200. In some embodiments, when the convergence and splitting policy is used for a privacy computing service scenario, the division of labor between the first convergence and splitting service rule and the second convergence and splitting service rule includes that the first convergence and splitting service rule includes encryption and decryption rules, and the workload distribution between the intelligent network card 200 and the second convergence and splitting device in executing the convergence and splitting policy includes that the intelligent network card 200 is used for encryption and decryption computation. 
Here, in a privacy computing service scenario or a similar service scenario with higher requirements on data security and privacy protection, an asymmetric encryption algorithm is generally adopted to encrypt the message, and a corresponding algorithm is required to decrypt a message so encrypted, so resources must be devoted to message encryption and decryption. By targeting the convergence and splitting policy at privacy computing or similar service scenarios, the first convergence and splitting service rule includes encryption and decryption rules, which means that the workload distribution between the intelligent network card 200 and the second convergence and splitting device in executing the convergence and splitting policy includes using the intelligent network card 200 for encryption and decryption computation. Therefore, repeated encryption and decryption of the message can be avoided, that is, the message is encrypted and decrypted only once at the intelligent network card 200, thereby effectively saving resources.
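The division of labor just described, where the card runs the simple first rule and the back-end device runs the more complex second rule, can be sketched as a two-stage pipeline. The stages and rule contents below are assumptions for illustration only: the card screens useless traffic, and a keyword filter stands in for the back-end's more complex processing.

```python
# Hypothetical two-stage sketch of the division of labor between the
# intelligent network card and the second convergence/splitting device.
def nic_stage(packets, useless_ports):
    """First convergence/splitting service rule (simple): screen out useless
    traffic before it ever leaves the intelligent network card."""
    return [p for p in packets if p["dst_port"] not in useless_ports]

def backend_stage(packets, keyword):
    """Second convergence/splitting service rule (stand-in for the more
    complex back-end processing): keep packets whose payload matches."""
    return [p for p in packets if keyword in p.get("payload", "")]

def apply_policy(packets):
    """The two rules together realize the overall convergence/splitting policy."""
    return backend_stage(nic_stage(packets, useless_ports={9999}), keyword="alert")

traffic = [
    {"dst_port": 9999, "payload": "alert: noise on a screened port"},
    {"dst_port": 80, "payload": "alert: suspicious login"},
    {"dst_port": 80, "payload": "ordinary page view"},
]
```

Note how the first stage shrinks what the second stage ever sees, which is exactly the traffic-pressure reduction claimed for the back-end device.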
Fig. 3 is a schematic diagram of a convergence and splitting system according to an embodiment of the present application. The convergence and splitting system includes a plurality of intelligent network cards, each of which includes: a packet processing engine, a convergence and splitting processing engine, a network card service port, and a convergence and splitting port. The plurality of intelligent network cards is illustratively shown in fig. 3 as including intelligent network card A 310, intelligent network card B 320, and intelligent network card C 330. It should be appreciated that the convergence and splitting system may include any number of intelligent network cards. The intelligent network card A 310 includes: packet processing engine A 312, convergence and splitting processing engine A 316, network card service port A 314, and convergence and splitting port A 318. The intelligent network card B 320 includes: packet processing engine B 322, convergence and splitting processing engine B 326, network card service port B 324, and convergence and splitting port B 328. The intelligent network card C 330 includes: packet processing engine C 332, convergence and splitting processing engine C 336, network card service port C 334, and convergence and splitting port C 338. The convergence and splitting system further includes a second convergence and splitting device 340, connected to the convergence and splitting port of each of the plurality of intelligent network cards and configured to perform convergence and splitting on the traffic received from those ports according to a second convergence and splitting service rule. As shown in fig. 
3, the second convergence and splitting device 340 is connected to the convergence and splitting port a 318 of the intelligent network card a 310, the convergence and splitting port B328 of the intelligent network card B320, and the convergence and splitting port C338 of the intelligent network card C330. For each intelligent network card of the plurality of intelligent network cards, the packet processing engine of the intelligent network card is used for analyzing a first packet data packet received by the intelligent network card to obtain a packet analysis result, forwarding the first packet data packet to a network card service port of the intelligent network card or an aggregation and distribution processing engine of the intelligent network card according to the packet analysis result and a flow table of the packet processing engine issued to the intelligent network card, wherein the aggregation and distribution processing engine of the intelligent network card is used for carrying out matching processing on the first packet data packet according to a first aggregation and distribution service rule issued to the aggregation and distribution processing engine of the intelligent network card to obtain a matching processing result and then forwarding the first packet data packet to the aggregation and distribution port of the intelligent network card, the network card service port of the intelligent network card is used for outputting the flow of the intelligent network card, and the aggregation and distribution port of the intelligent network card is used for providing the aggregation and distribution function of the intelligent network card together with the aggregation and distribution processing engine of the intelligent network card based on the matching processing result. 
Taking the intelligent network card a 310 as an example, the packet processing engine a 312 of the intelligent network card a 310 is configured to parse a first packet data packet received by the intelligent network card a 310 to obtain a packet parsing result, and forward the first packet data packet to the network card service port a 314 of the intelligent network card a 310 or the converging and diverging processing engine a 316 of the intelligent network card a 310 according to the packet parsing result and a flow table of the packet processing engine a 312 issued to the intelligent network card a 310, where the converging and diverging processing engine a 316 of the intelligent network card a 310 is configured to perform a matching process on the first packet data packet according to a first converging and diverging service rule issued to the converging and diverging processing engine a 316 of the intelligent network card a 310 to obtain a matching process result, and then forward the first packet data packet to the converging and diverging port a 318 of the intelligent network card a 310, where the network card service port a 314 of the intelligent network card a 310 is configured to output a flow of the intelligent network card a 310, and the converging and diverging port a 318 of the intelligent network card a 310 is configured to provide a matching and diverging function of the intelligent network card a 310 based on the converging and diverging processing engine a 316 of the intelligent network card a 310. Intelligent network card B320 and intelligent network card C330 are similar to intelligent network card a 310 and reference is also made to the details regarding intelligent network card 200 shown in fig. 2.
With continued reference to fig. 3, each of the plurality of intelligent network cards, similar to the intelligent network card 200 shown in fig. 2, implements packet unpacking, packet parsing, and packet processing for performing functions related to convergence and distribution in the network card through a built-in packet processing engine and convergence and distribution processing engine, and implements rule configuration and policy division through a first convergence and distribution service rule issued to the respective convergence and distribution processing engine. That is, by combining the first aggregate-split service rule issued to the aggregate-split processing engine of each of the plurality of intelligent network cards (e.g., the aggregate-split processing engine a 316 issued to the intelligent network card a 310) with the second aggregate-split service rule of the second aggregate-split device 340, the aggregate-split policy of the aggregate-split system may be implemented, for example, to provide rule configuration and policy design on a management level based on overall requirements such as a communication protocol, filtering rules, and service scenarios. And, the labor division and burden between the plurality of intelligent network cards and the second convergence and distribution device 340 can be regulated by managing the first convergence and distribution business rule and the second convergence and distribution business rule issued to the respective convergence and distribution processing engines, thereby better utilizing resources.
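The fan-in topology of fig. 3 can be modeled in a few lines. This is a hypothetical sketch, not the patent's implementation: each card pre-screens its own traffic under its first rule, and the second device converges the streams from all the cards' convergence and splitting ports before applying its second rule (a keyword filter stands in for that rule here).

```python
# Hypothetical model of the Fig. 3 arrangement: N cards feed one device.
class SmartNic:
    """One intelligent network card: its first convergence and splitting
    service rule screens useless traffic by destination port."""
    def __init__(self, name, useless_ports):
        self.name = name
        self.useless_ports = useless_ports

    def offload(self, packets):
        return [p for p in packets if p["dst_port"] not in self.useless_ports]

class SecondConvergenceDevice:
    """Device 340: converge the streams arriving from every card's
    convergence/splitting port, then apply the second service rule."""
    def __init__(self, keyword):
        self.keyword = keyword

    def converge(self, streams):
        merged = [p for stream in streams for p in stream]  # traffic convergence
        return [p for p in merged if self.keyword in p.get("payload", "")]

nic_a = SmartNic("A", useless_ports={9999})
nic_b = SmartNic("B", useless_ports={9999})
device = SecondConvergenceDevice(keyword="alert")
result = device.converge([
    nic_a.offload([{"dst_port": 80, "payload": "alert from A"}]),
    nic_b.offload([{"dst_port": 9999, "payload": "alert but useless"},
                   {"dst_port": 443, "payload": "alert from B"}]),
])
```

Because every card drops its useless traffic locally, the converged stream the device processes is already reduced, which is the resource division the text emphasizes.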
In one possible implementation, for each of the plurality of intelligent network cards, a first aggregate-split traffic rule issued to an aggregate-split processing engine of the intelligent network card indicates to screen unwanted traffic. The aggregate and shunt processing engine of each intelligent network card in the plurality of intelligent network cards is used for screening useless traffic according to the first aggregate and shunt service rule, so that the traffic pressure faced by the second aggregate and shunt device 340 can be effectively reduced.
In one possible implementation, for each of the plurality of intelligent network cards, the flow of the intelligent network card output through the network card service port of the intelligent network card is not used for the convergence and diversion function of the intelligent network card. In this way, the network card service port is used for conventional network card traffic, and thus is distinguished from data for the aggregate-split function output through the aggregate-split port. This is advantageous in that existing data links, such as existing aggregate links and downstream aggregate and offload devices, are fully utilized without altering the existing network architecture.
In one possible implementation manner, a first convergence and diversion service rule issued to the convergence and diversion processing engine of each of the plurality of intelligent network cards and the second convergence and diversion service rule together form a convergence and diversion policy of the convergence and diversion system. By combining the first aggregate-split service rule issued to the aggregate-split processing engine of each of the plurality of intelligent network cards (e.g., the aggregate-split processing engine a 316 issued to the intelligent network card a 310) with the second aggregate-split service rule of the second aggregate-split device 340, the aggregate-split policy of the aggregate-split system can be implemented, for example, rule configuration and policy design on the management level can be provided based on overall requirements, such as a communication protocol, filtering rules, service scenarios, and the like. And, the labor division and burden between the plurality of intelligent network cards and the second convergence and distribution device 340 can be regulated by managing the first convergence and distribution business rule and the second convergence and distribution business rule issued to the respective convergence and distribution processing engines, thereby better utilizing resources.
In one possible implementation manner, the division of labor in the convergence and distribution strategy between the first convergence and distribution business rule and the second convergence and distribution business rule issued to the convergence and distribution processing engine of each intelligent network card in the plurality of intelligent network cards is based on the business application scenario of the convergence and distribution system. Thus, the method is favorable for carrying out division of labor and strategy design in combination with specific business application scenes. For example, in a privacy computing service scenario or a similar service scenario with higher requirements on data security and privacy protection, an asymmetric encryption algorithm is generally adopted to encrypt a message, and a corresponding algorithm is also required to decrypt the message encrypted by the asymmetric encryption algorithm, so that resources are required to be occupied in the aspect of encrypting and decrypting the message. By optimizing the convergence and distribution policy for the private computing service scenario or the similar service scenario, a plurality of intelligent network cards can be arranged to bear the encryption and decryption computing work, so that the burden of repeated decryption by the second convergence and distribution device 340 can be reduced as much as possible.
Fig. 4 is a schematic structural diagram of a computing device provided in an embodiment of the present application, where the computing device 400 includes: one or more processors 410, a communication interface 420, and a memory 430. The processor 410, communication interface 420, and memory 430 are interconnected by a bus 440. Optionally, the computing device 400 may further include an input/output interface 450, where the input/output interface 450 is connected to an input/output device for receiving parameters set by a user, etc. The computing device 400 can be used to implement some or all of the functionality of the device embodiments or system embodiments described above in the embodiments of the present application; the processor 410 can also be used to implement some or all of the operational steps of the method embodiments described above in the embodiments of the present application. For example, specific implementations of the computing device 400 performing various operations may refer to specific details in the above-described embodiments, such as the processor 410 being configured to perform some or all of the steps of the above-described method embodiments or some or all of the operations of the above-described method embodiments. For another example, in the present embodiment, the computing device 400 may be configured to implement some or all of the functions of one or more components of the apparatus embodiments described above, and the communication interface 420 may be configured to implement communication functions and the like necessary for the functions of the apparatuses, components, and the processor 410 may be configured to implement processing functions and the like necessary for the functions of the apparatuses, components.
It should be appreciated that the computing device 400 of fig. 4 may include one or more processors 410, and that the processors 410 may cooperatively provide processing power in a parallelized connection, a serialized connection, a serial-parallel connection, or any connection, or that the processors 410 may constitute a processor sequence or processor array, or that the processors 410 may be separated into primary and secondary processors, or that the processors 410 may have different architectures such as heterogeneous computing architectures. In addition, the computing device 400 shown in FIG. 4, the associated structural and functional descriptions are exemplary and not limiting. In some example embodiments, computing device 400 may include more or fewer components than shown in fig. 4, or combine certain components, or split certain components, or have a different arrangement of components.
The processor 410 may take various specific forms. For example, the processor 410 may include one or more of a central processing unit (CPU), a graphics processing unit (GPU), a neural-network processing unit (NPU), a tensor processing unit (TPU), or a data processing unit (DPU), which is not limited in this embodiment. The processor 410 may be a single-core or multi-core processor, or may consist of a CPU combined with a hardware chip. The hardware chip may be an application-specific integrated circuit (ASIC), a programmable logic device (PLD), or a combination thereof. The PLD may be a complex programmable logic device (CPLD), a field-programmable gate array (FPGA), generic array logic (GAL), or any combination thereof. The processor 410 may also be implemented solely with logic devices incorporating processing logic, such as an FPGA or a digital signal processor (DSP). The communication interface 420 may be a wired interface, such as an Ethernet interface or a local interconnect network (LIN) interface, or a wireless interface, such as a cellular network interface or a wireless local area network interface, for communicating with other modules or devices.
The memory 430 may be a nonvolatile memory, such as a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically erasable PROM (EEPROM), or a flash memory. The memory 430 may also be a volatile memory, such as a random access memory (RAM) used as an external cache. By way of example and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM). The memory 430 may also be used to store program code and data, so that the processor 410 may invoke the program code stored in the memory 430 to perform some or all of the operational steps of the above method embodiments, or the corresponding functions in the above apparatus embodiments.
The bus 440 may be a peripheral component interconnect express (PCIe) bus, an extended industry standard architecture (EISA) bus, a unified bus (Ubus or UB), a compute express link (CXL), a cache coherent interconnect for accelerators (CCIX) bus, or the like. The bus 440 may be divided into an address bus, a data bus, a control bus, and so on; in addition to a data bus, it may include a power bus, a control bus, a status signal bus, and the like. For clarity of illustration, the bus is shown as only one bold line in Fig. 4, but this does not mean there is only one bus or one type of bus.
The method and the device provided in the embodiments of the present application are based on the same inventive concept. Because the principles by which the method and the device solve the problem are similar, the embodiments, implementations, and examples of the method and the device may refer to each other, and repeated description is omitted. Embodiments of the present application also provide a system that includes a plurality of computing devices, each of which may be structured as described above. For the functions or operations that the system may implement, reference may be made to the specific implementation steps of the above method embodiments and/or the specific functions described in the above apparatus embodiments, which are not described herein again.
Embodiments of the present application also provide a computer-readable storage medium in which computer instructions are stored; when the computer instructions are executed on a computer device (e.g., one or more processors), the method steps in the above method embodiments may be implemented. For the specific implementation of executing the above method steps, reference may be made to the specific operations described in the above method embodiments and/or the specific functions described in the above apparatus embodiments, which are not described herein again.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Embodiments of the present application may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above embodiments may be implemented in whole or in part in the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied therein. The computer program product includes one or more computer instructions; when the computer instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of the present application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wire (e.g., coaxial cable, optical fiber, digital subscriber line) or wirelessly (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that contains one or more collections of available media. The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tape), optical media, or semiconductor media.
The semiconductor medium may be a solid state disk, or may be a random access memory, flash memory, read only memory, erasable programmable read only memory, electrically erasable programmable read only memory, register, or any other form of suitable storage medium.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. Each flow and/or block of the flowchart and/or block diagrams, and combinations of flows and/or blocks in the flowchart and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments. It will be apparent to those skilled in the art that various modifications and variations can be made to the embodiments of the present application without departing from the spirit and scope of the embodiments of the present application. The steps in the method of the embodiment of the application can be sequentially adjusted, combined or deleted according to actual needs; the modules in the system of the embodiment of the application can be divided, combined or deleted according to actual needs. Such modifications and variations of the embodiments of the present application are intended to be included herein, if they fall within the scope of the claims and their equivalents.

Claims (15)

1. An intelligent network card, characterized in that the intelligent network card comprises:
the packet processing engine is used for parsing a first message data packet to obtain a message parsing result, and forwarding the first message data packet to a network card service port or a convergence and distribution processing engine according to the message parsing result and a flow table issued to the packet processing engine;
the convergence and distribution processing engine is used for performing matching processing on the first message data packet according to a first convergence and distribution service rule issued to the convergence and distribution processing engine to obtain a matching processing result, and then forwarding the first message data packet to a convergence and distribution port;
the network card service port is used for outputting traffic of the intelligent network card; and
the convergence and distribution port is used for providing, together with the convergence and distribution processing engine and based on the matching processing result, the convergence and distribution function of the intelligent network card.
2. The intelligent network card of claim 1, wherein the intelligent network card further comprises:
the decryption module is used for decrypting the second message data packet received by the intelligent network card to obtain a decrypted second message data packet, wherein the decrypted second message data packet is the first message data packet;
the encryption module is used for encrypting the first message data packet to obtain an encrypted first message data packet, and the encrypted first message data packet is used for the convergence and distribution function of the intelligent network card.
3. The intelligent network card of claim 1, wherein the traffic of the intelligent network card output through the network card service port is not used for the convergence and distribution function of the intelligent network card.
4. The intelligent network card of claim 1, wherein the convergence and distribution function of the intelligent network card comprises one or more of: traffic convergence, traffic filtering, traffic distribution, traffic forwarding, mobile network signaling analysis, replication output, and load balancing.
5. The intelligent network card of claim 1, wherein the flow table issued to the packet processing engine corresponds to lower-layer information in a flow rule, the lower-layer information in the flow rule comprising a five-tuple, the five-tuple comprising a source address, a source port, a destination address, a destination port, and a transport layer protocol.
6. The intelligent network card of claim 5, wherein the first convergence and distribution service rule issued to the convergence and distribution processing engine corresponds to higher-layer information in the flow rule, the higher-layer information in the flow rule including a regular expression rule, a five-tuple filter rule, an application-layer keyword filter rule, and an application-layer uniform resource locator filter rule.
7. The intelligent network card of claim 1, wherein the convergence and distribution port interfaces with one or more deep packet inspection applications to perform traffic analysis.
8. The intelligent network card of claim 1, wherein the convergence and distribution port is connected to a second convergence and distribution device, the second convergence and distribution device is configured to perform convergence and distribution on the traffic received from the convergence and distribution port according to a second convergence and distribution service rule, the first convergence and distribution service rule and the second convergence and distribution service rule together form a convergence and distribution policy, and the first convergence and distribution service rule is at least configured to screen out useless traffic according to the convergence and distribution policy.
9. The intelligent network card of claim 8, wherein the distribution of workload between the intelligent network card and the second convergence and distribution device in executing the convergence and distribution policy is based on the division of labor between the first convergence and distribution service rule and the second convergence and distribution service rule in constructing the convergence and distribution policy.
10. The intelligent network card of claim 9, wherein, when the convergence and distribution policy is used for a privacy computing service scenario, the division of labor by which the first convergence and distribution service rule and the second convergence and distribution service rule form the convergence and distribution policy includes the first convergence and distribution service rule including encryption and decryption rules, and the distribution of workload between the intelligent network card and the second convergence and distribution device in implementing the convergence and distribution policy includes the intelligent network card being used for encryption and decryption computation.
11. A converging and diverging system, comprising:
a plurality of intelligent network cards, each of the plurality of intelligent network cards comprising: the system comprises a packet processing engine, a convergence and distribution processing engine, a network card service port and a convergence and distribution port; and
a second convergence and distribution device connected to the convergence and distribution ports of the plurality of intelligent network cards and configured to perform convergence and distribution on the traffic received from the convergence and distribution ports of the plurality of intelligent network cards according to a second convergence and distribution service rule,
the method comprises the steps that for each intelligent network card in the plurality of intelligent network cards, a packet processing engine of the intelligent network card is used for analyzing a first message data packet received by the intelligent network card to obtain a message analysis result, the first message data packet is forwarded to a network card service port of the intelligent network card or a converging and shunting processing engine of the intelligent network card according to the message analysis result and a flow table of the packet processing engine issued to the intelligent network card, the converging and shunting processing engine of the intelligent network card is used for carrying out matching processing on the first message data packet according to a first converging and shunting service rule issued to the converging and shunting processing engine of the intelligent network card to obtain a matching processing result and then forwarding the first message data packet to a converging and shunting port of the intelligent network card, the network card service port of the intelligent network card is used for outputting flow of the intelligent network card, and the converging and shunting port of the intelligent network card is used for providing a converging and shunting function of the intelligent network card together based on the matching processing result.
12. The convergence and distribution system of claim 11, wherein, for each of the plurality of intelligent network cards, the first convergence and distribution service rule issued to the convergence and distribution processing engine of that intelligent network card indicates screening out useless traffic.
13. The convergence and distribution system of claim 11, wherein, for each of the plurality of intelligent network cards, the traffic of the intelligent network card output through the network card service port of the intelligent network card is not used for the convergence and distribution function of the intelligent network card.
14. The convergence and distribution system of claim 11, wherein the first convergence and distribution service rule issued to the convergence and distribution processing engine of each of the plurality of intelligent network cards, together with the second convergence and distribution service rule, forms the convergence and distribution policy of the convergence and distribution system.
15. The convergence and distribution system of claim 14, wherein the division of labor between the first convergence and distribution service rule issued to the convergence and distribution processing engine of each of the plurality of intelligent network cards and the second convergence and distribution service rule is based on the service application scenario of the convergence and distribution system.
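Although the patent specifies no implementation, the two-stage dispatch described in claims 1 and 5 — parse the packet, extract the five-tuple, and forward according to a flow table issued to the packet processing engine — can be sketched as follows. All names, the flow-table representation, and the hit/miss forwarding choice are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch of the dispatch in claims 1 and 5; names and the
# hit/miss policy are illustrative assumptions.

SERVICE_PORT = "service_port"
CONV_DIST_ENGINE = "convergence_distribution_engine"

def parse_five_tuple(packet: dict) -> tuple:
    """Extract (source address, source port, destination address,
    destination port, transport-layer protocol) from a parsed packet."""
    return (packet["src_ip"], packet["src_port"],
            packet["dst_ip"], packet["dst_port"], packet["proto"])

def dispatch(packet: dict, flow_table: set) -> str:
    """Forward to the network card service port on a flow-table hit,
    otherwise hand the packet to the convergence and distribution
    processing engine (assumed policy)."""
    if parse_five_tuple(packet) in flow_table:
        return SERVICE_PORT
    return CONV_DIST_ENGINE

# Example flow table holding one five-tuple entry.
flow_table = {("10.0.0.1", 1234, "10.0.0.2", 80, "TCP")}
pkt_hit = {"src_ip": "10.0.0.1", "src_port": 1234,
           "dst_ip": "10.0.0.2", "dst_port": 80, "proto": "TCP"}
pkt_miss = dict(pkt_hit, dst_port=443)
```

A real packet processing engine would operate on raw frames in hardware; the dictionary form here only stands in for the result of packet parsing.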
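The higher-layer matching of claim 6 (regular expression rules, application-layer keyword filters, and uniform resource locator filters), combined with the screening of useless traffic in claims 8 and 12, could look roughly like the following sketch; the rule structure and the match-any semantics are assumptions for illustration only:

```python
import re

# Illustrative first convergence and distribution service rule (claims 6,
# 8, 12): packets matching none of the sub-rules are treated as useless
# traffic and screened out. All names are assumptions.

class ConvDistRule:
    def __init__(self, regex: str, keywords: list, url_prefixes: list):
        self.regex = re.compile(regex)        # regular expression rule
        self.keywords = keywords              # application-layer keywords
        self.url_prefixes = url_prefixes      # URL filter rule

    def matches(self, payload: str, url: str) -> bool:
        # Assumed semantics: a hit on any sub-rule keeps the packet.
        return (bool(self.regex.search(payload))
                or any(k in payload for k in self.keywords)
                or any(url.startswith(p) for p in self.url_prefixes))

def screen(packets, rule):
    """Keep only packets some sub-rule matches; the rest are screened out."""
    return [p for p in packets if rule.matches(p["payload"], p["url"])]

rule = ConvDistRule(r"HTTP/1\.[01]", ["login"], ["/api/"])
packets = [
    {"payload": "GET / HTTP/1.1", "url": "/index"},   # regex hit
    {"payload": "user login form", "url": "/home"},   # keyword hit
    {"payload": "binary junk", "url": "/api/v1/x"},   # URL-prefix hit
    {"payload": "noise", "url": "/static"},           # screened out
]
kept = screen(packets, rule)
```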
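One way the load-balancing function of claim 4 might distribute flows from the convergence and distribution port across the deep packet inspection applications of claim 7 is a stable hash of the five-tuple, so that every packet of a flow reaches the same instance. The CRC32 choice and the instance-indexing scheme are assumptions, not specified by the patent:

```python
import zlib

# Assumed flow-affine load balancing: hash the five-tuple so all packets
# of one flow land on the same deep packet inspection instance.

def pick_dpi_instance(five_tuple: tuple, n_instances: int) -> int:
    """Map a flow's five-tuple to a stable DPI instance index."""
    key = "|".join(map(str, five_tuple)).encode()
    return zlib.crc32(key) % n_instances

flow = ("10.0.0.1", 1234, "10.0.0.2", 80, "TCP")
idx = pick_dpi_instance(flow, 4)  # same flow always maps to the same index
```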
CN202310361386.8A 2023-04-07 2023-04-07 Intelligent network card and convergence and distribution system Active CN116094840B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310361386.8A CN116094840B (en) 2023-04-07 2023-04-07 Intelligent network card and convergence and distribution system

Publications (2)

Publication Number Publication Date
CN116094840A 2023-05-09
CN116094840B CN116094840B (en) 2023-06-16


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102387219A (en) * 2011-12-13 2012-03-21 曙光信息产业(北京)有限公司 Multi-network-card load balancing system and method
CN102497430A (en) * 2011-12-13 2012-06-13 曙光信息产业(北京)有限公司 System and method for implementing splitting equipment
US20160026592A1 (en) * 2014-07-25 2016-01-28 StorNetware Systems Pvt. Ltd. Unified Converged Network, Storage And Compute System
US20180270162A1 (en) * 2017-03-20 2018-09-20 Diamanti Inc Distributed Flexible Scheduler for Converged Traffic
CN111277517A (en) * 2020-01-19 2020-06-12 长沙星融元数据技术有限公司 Programmable switching chip-based convergence and shunt method and device, storage medium and electronic equipment
CN115174676A (en) * 2022-07-04 2022-10-11 深圳星云智联科技有限公司 Convergence and shunt method and related equipment thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Chen Xunxun, Fang Binxing, Li Lei: "Research on the architecture of intrusion detection systems in high-speed network environments", Journal of Computer Research and Development, no. 09 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant