CN117527654B - Method and system for analyzing network traffic packet - Google Patents

Info

Publication number
CN117527654B
Authority
CN
China
Prior art keywords
memory access
direct memory
remote direct
traffic
application
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410016891.3A
Other languages
Chinese (zh)
Other versions
CN117527654A
Inventor
黎立印
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Xingyun Zhilian Technology Co Ltd
Original Assignee
Zhuhai Xingyun Zhilian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Xingyun Zhilian Technology Co Ltd filed Critical Zhuhai Xingyun Zhilian Technology Co Ltd
Priority to CN202410016891.3A priority Critical patent/CN117527654B/en
Publication of CN117527654A publication Critical patent/CN117527654A/en
Application granted granted Critical
Publication of CN117527654B publication Critical patent/CN117527654B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/12Network monitoring probes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00Arrangements for monitoring or testing data switching networks
    • H04L43/50Testing arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/22Parsing or analysis of headers

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides a method and a system for network traffic packet-capture analysis, relating to the field of computer technology. The method comprises the following steps: receiving remote direct memory access uplink and downlink traffic and judging, by a message forwarding processing engine, whether the remote direct memory access uplink and downlink traffic conforms to a matching rule; when the remote direct memory access uplink and downlink traffic conforms to the matching rule, transmitting the remote direct memory access uplink and downlink traffic to a remote direct memory access protocol processing engine through the message forwarding processing engine, and then uploading a first mirror traffic corresponding to the remote direct memory access uplink and downlink traffic to a second application through the remote direct memory access protocol processing engine; and performing unpacking and parsing on the first mirror traffic through the second application so as to execute network traffic packet-capture analysis associated with the remote direct memory access uplink and downlink traffic. The method thereby helps to improve packet-capture analysis performance and system efficiency.

Description

Method and system for analyzing network traffic packet
Technical Field
The present disclosure relates to the field of computer technology, and in particular, to a method and system for packet-capture analysis of network traffic.
Background
To locate network problems, it is often necessary to capture network traffic and then analyze the captured network data. A remote direct memory access network adopts remote direct memory access technology: without intervention of the central processing unit, data is transmitted directly and rapidly from one system to the computer storage area of another remote host with the participation of the network card, so the network data does not pass through the system kernel. In the prior art, for packet-capture analysis of the network traffic of a remote direct memory access network, one way is to copy the remote direct memory access traffic requiring packet-capture analysis to a monitoring device, such as a dedicated packet-capture instrument, by means of a switch; however, this requires additional ports and devices, which is costly and inconvenient. Another approach is to use a conventional protocol processing engine and network queues to handle the remote direct memory access traffic and invoke the host kernel for packet-capture analysis; however, this necessarily passes through the system kernel, increasing the processing burden and requiring additional network interfaces.
To this end, the present application provides a method and system for network traffic packet-capture analysis.
Disclosure of Invention
In a first aspect, the present application provides a method for network traffic packet-capture analysis. The method comprises the following steps: receiving remote direct memory access uplink and downlink traffic and determining whether the remote direct memory access uplink and downlink traffic meets a matching rule by a message forwarding processing engine, wherein a storage operation associated with the remote direct memory access uplink and downlink traffic is performed by a first application, the matching rule being based at least on an unpacking and parsing capability of a second application different from the first application and indicating at least one network interface; when the remote direct memory access uplink and downlink traffic meets the matching rule, transmitting the remote direct memory access uplink and downlink traffic to a remote direct memory access protocol processing engine through the message forwarding processing engine, and then uploading a first mirror traffic corresponding to the remote direct memory access uplink and downlink traffic to the second application through the remote direct memory access protocol processing engine; and performing unpacking and parsing on the first mirror traffic through the second application so as to execute network traffic packet-capture analysis associated with the remote direct memory access uplink and downlink traffic.
According to the first aspect of the application, no additional equipment or interfaces are needed; a flexible network traffic packet-capture analysis policy can be formulated; network traffic requiring packet-capture analysis can be distinguished efficiently; complex and changeable network environments and network traffic compositions can be handled; the original processing flow for the remote direct memory access uplink and downlink traffic is not disturbed; various existing software and hardware structures for remote direct memory access data transmission can be accommodated; the advantages of remote direct memory access technology in improving data transmission performance are fully utilized; the system kernel is bypassed; the cost of data copying is saved; and the packet-capture analysis performance and the system efficiency are improved.
In a possible implementation manner of the first aspect of the present application, the user-mode remote direct memory access queues include a remote direct memory access traffic sending queue, a remote direct memory access traffic receiving queue, a mirror traffic receiving queue, and a remote direct memory access completion queue, where the remote direct memory access protocol processing engine uses the remote direct memory access traffic sending queue and the remote direct memory access traffic receiving queue to exchange the remote direct memory access uplink and downlink traffic with the first application, the remote direct memory access protocol processing engine uses the mirror traffic receiving queue to transmit the first mirror traffic to the second application, and the remote direct memory access traffic sending queue, the remote direct memory access traffic receiving queue, and the mirror traffic receiving queue share the remote direct memory access completion queue.
In a possible implementation manner of the first aspect of the present application, the remote direct memory access uplink and downlink traffic is remote direct memory access uplink traffic or remote direct memory access downlink traffic, and the method further includes: when the remote direct memory access uplink traffic does not meet the matching rule, transmitting the remote direct memory access uplink traffic to a network protocol processing engine through the message forwarding processing engine, and then transmitting the remote direct memory access uplink traffic to a system kernel protocol processing layer through the network protocol processing engine; and performing unpacking and parsing on the remote direct memory access uplink traffic through the system kernel protocol processing layer to obtain an unpacking and parsing result, and then performing, through the second application and on the unpacking and parsing result, the network traffic packet-capture analysis associated with the remote direct memory access uplink traffic.
In a possible implementation manner of the first aspect of the present application, the system kernel protocol processing layer includes a network protocol stack, a network device layer, and a network driver, and the network protocol processing engine transmits the remote direct memory access uplink traffic to the system kernel protocol processing layer using a network queue applied to a non-remote direct memory access network.
In a possible implementation manner of the first aspect of the present application, the packet forwarding processing engine, the remote direct memory access protocol processing engine, and the network protocol processing engine are all deployed in a remote direct memory access network card, the system kernel protocol processing layer is deployed in a kernel space of a host system, the first application and the second application are both deployed in a user space of the host system, and the remote direct memory access network card is connected to the host system through a peripheral component interconnect express (PCIe) interface.
In a possible implementation manner of the first aspect of the present application, when the remote direct memory access downlink traffic does not meet the matching rule, the method includes: transmitting, by the packet forwarding processing engine, the remote direct memory access downlink traffic to the network protocol processing engine, then transmitting, by the network protocol processing engine, the remote direct memory access downlink traffic to the system kernel protocol processing layer for unpacking and parsing, and performing, by the second application, the network traffic packet-capture analysis associated with the remote direct memory access downlink traffic; or updating the second application, then updating the matching rule based on the unpacking and parsing capability of the updated second application so that the remote direct memory access downlink traffic meets the updated matching rule, and performing, by the updated second application, the network traffic packet-capture analysis associated with the remote direct memory access downlink traffic.
In a possible implementation manner of the first aspect of the present application, the method further includes: receiving non-remote direct memory access uplink traffic by the message forwarding processing engine and judging whether the non-remote direct memory access uplink traffic meets the matching rule; when the non-remote direct memory access uplink traffic meets the matching rule, transmitting the non-remote direct memory access uplink traffic to the remote direct memory access protocol processing engine through the message forwarding processing engine, and then uploading a second mirror traffic corresponding to the non-remote direct memory access uplink traffic to the second application through the remote direct memory access protocol processing engine; and performing unpacking and parsing on the second mirror traffic through the second application so as to execute the network traffic packet-capture analysis associated with the non-remote direct memory access uplink traffic.
In a possible implementation manner of the first aspect of the present application, the remote direct memory access protocol processing engine transmits the second mirror traffic to the second application using the mirror traffic receiving queue, and the remote direct memory access protocol processing engine adds a completion queue entry to the remote direct memory access completion queue in response to completion of transmission of the first mirror traffic or completion of transmission of the second mirror traffic.
In a possible implementation manner of the first aspect of the present application, the remote direct memory access traffic sending queue and the remote direct memory access traffic receiving queue belong to the same queue pair, and the first application manages the remote direct memory access traffic sending queue by submitting a sending queue work request to the remote direct memory access traffic sending queue and manages the remote direct memory access traffic receiving queue by submitting a receiving queue work request to the remote direct memory access traffic receiving queue, and the second application manages the mirror traffic receiving queue by submitting a mirror traffic receiving queue work request to the mirror traffic receiving queue.
In a possible implementation manner of the first aspect of the present application, the remote direct memory access uplink and downlink traffic conforming to the matching rule includes: the at least one network interface indicated by the matching rule includes a network interface for transceiving the remote direct memory access uplink and downlink traffic.
In a possible implementation manner of the first aspect of the present application, the unpacking and parsing capability of the second application includes at least one communication standard and at least one network protocol supported by the second application, where the remote direct memory access uplink and downlink traffic conforming to the matching rule includes: the at least one communication standard supported by the second application includes a communication standard associated with the remote direct memory access uplink and downlink traffic, and the at least one network protocol supported by the second application includes a network protocol associated with the remote direct memory access uplink and downlink traffic.
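The conformance conditions described in the implementations above — the rule's network interfaces cover the traffic's interface, and the second application supports the traffic's communication standard and network protocol — can be sketched as follows. This is a hypothetical illustration, not the patent's implementation; all field names and example values (`rdma0`, `RoCEv2`, `RC`) are assumptions.

```python
def conforms(traffic, rule):
    """Hypothetical conformance check: the traffic's interface must be among
    the interfaces the rule indicates, and its communication standard and
    network protocol must be within the second application's parsing capability."""
    return (traffic["interface"] in rule["interfaces"]
            and traffic["standard"] in rule["standards"]
            and traffic["protocol"] in rule["protocols"])

# A rule derived from the second application's (assumed) parsing capability.
rule = {"interfaces": {"rdma0"},
        "standards": {"InfiniBand", "RoCEv2"},
        "protocols": {"RC", "UD"}}

print(conforms({"interface": "rdma0", "standard": "RoCEv2", "protocol": "RC"}, rule))  # True
print(conforms({"interface": "eth0", "standard": "RoCEv2", "protocol": "RC"}, rule))   # False
```

Because every condition references the second application's capability, traffic that conforms is by construction traffic the second application can unpack and parse.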
In a second aspect, embodiments of the present application further provide a computer device, where the computer device includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements a method according to any implementation manner of any one of the foregoing aspects when the computer program is executed.
In a third aspect, embodiments of the present application also provide a computer-readable storage medium storing computer instructions that, when run on a computer device, cause the computer device to perform a method according to any one of the implementations of any one of the above aspects.
In a fourth aspect, embodiments of the present application also provide a computer program product comprising instructions stored on a computer-readable storage medium, which when run on a computer device, cause the computer device to perform a method according to any one of the implementations of any one of the above aspects.
In a fifth aspect, embodiments of the present application further provide a system for network traffic packet-capture analysis. The system comprises: a message forwarding processing engine, configured to receive remote direct memory access uplink and downlink traffic and determine whether the remote direct memory access uplink and downlink traffic meets a matching rule, where a storage operation associated with the remote direct memory access uplink and downlink traffic is performed by a first application, and the matching rule is based at least on an unpacking and parsing capability of a second application different from the first application and indicates at least one network interface; a remote direct memory access protocol processing engine, where the message forwarding processing engine transmits the remote direct memory access uplink and downlink traffic to the remote direct memory access protocol processing engine when the remote direct memory access uplink and downlink traffic meets the matching rule, and the remote direct memory access protocol processing engine then uploads a first mirror traffic corresponding to the remote direct memory access uplink and downlink traffic to the second application; and the second application, configured to perform unpacking and parsing on the first mirror traffic to execute the network traffic packet-capture analysis associated with the remote direct memory access uplink and downlink traffic.
According to the fifth aspect of the application, no additional equipment or interfaces are needed; a flexible network traffic packet-capture analysis policy can be formulated; network traffic requiring packet-capture analysis can be distinguished efficiently; complex and changeable network environments and network traffic compositions can be handled; the original processing flow for the remote direct memory access uplink and downlink traffic is not disturbed; various existing software and hardware structures for remote direct memory access data transmission can be accommodated; the advantages of remote direct memory access technology in improving data transmission performance are fully utilized; the system kernel is bypassed; the cost of data copying is saved; and the packet-capture analysis performance and the system efficiency are improved.
In a possible implementation manner of the fifth aspect of the present application, the user-mode remote direct memory access queues include a remote direct memory access traffic sending queue, a remote direct memory access traffic receiving queue, a mirror traffic receiving queue, and a remote direct memory access completion queue, where the remote direct memory access protocol processing engine uses the remote direct memory access traffic sending queue and the remote direct memory access traffic receiving queue to exchange the remote direct memory access uplink and downlink traffic with the first application, the remote direct memory access protocol processing engine uses the mirror traffic receiving queue to transmit the first mirror traffic to the second application, and the remote direct memory access traffic sending queue, the remote direct memory access traffic receiving queue, and the mirror traffic receiving queue share the remote direct memory access completion queue.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of a method for packet-capture analysis of network traffic according to an embodiment of the present application;
fig. 2 is a schematic diagram of a system for network traffic packet-capture analysis according to a first embodiment provided in the examples of the present application;
fig. 3 is a schematic diagram of a system for network traffic packet-capture analysis according to a second embodiment provided in the examples of the present application;
fig. 4 is a schematic structural diagram of a computing device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
It should be understood that in the description of this application, "at least one" means one or more than one, and "a plurality" means two or more than two. In addition, the words "first," "second," and the like, unless otherwise indicated, are used solely for the purposes of description and are not to be construed as indicating or implying a relative importance or order.
Fig. 1 is a flow chart of a method for packet-capture analysis of network traffic according to an embodiment of the present application. As shown in fig. 1, the method includes the following steps.
Step S110: receiving, by a message forwarding processing engine, remote direct memory access uplink and downlink traffic and determining whether the remote direct memory access uplink and downlink traffic meets a matching rule, wherein a storage operation associated with the remote direct memory access uplink and downlink traffic is performed by a first application, the matching rule being based at least on an unpacking and parsing capability of a second application different from the first application and indicating at least one network interface.
Step S120: when the remote direct memory access uplink and downlink traffic meets the matching rule, transmitting the remote direct memory access uplink and downlink traffic to a remote direct memory access protocol processing engine through the message forwarding processing engine, and then uploading a first mirror traffic corresponding to the remote direct memory access uplink and downlink traffic to the second application through the remote direct memory access protocol processing engine.
Step S130: performing unpacking and parsing on the first mirror traffic through the second application so as to execute network traffic packet-capture analysis associated with the remote direct memory access uplink and downlink traffic.
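The three steps above can be sketched as a dispatch flow: classify the traffic against the matching rule (S110), mirror conforming traffic to the second application (S120), and let the second application analyze the mirror (S130). This is a minimal hypothetical sketch; the class names, traffic fields, and return strings are illustrative assumptions, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class Traffic:
    """Simplified stand-in for remote direct memory access uplink/downlink traffic."""
    interface: str   # network interface the traffic passes through
    protocol: str    # network protocol carried by the packet

@dataclass
class MatchingRule:
    """Rule issued by the second application to the message forwarding engine."""
    interfaces: set  # at least one network interface indicated by the rule
    parsable: set    # protocols within the second application's parsing capability

    def matches(self, t: Traffic) -> bool:
        return t.interface in self.interfaces and t.protocol in self.parsable

def forward(traffic: Traffic, rule: MatchingRule, capture_log: list) -> str:
    """S110: classify; S120: mirror conforming traffic; S130: second app analyzes."""
    if rule.matches(traffic):
        mirror = Traffic(traffic.interface, traffic.protocol)  # "first mirror traffic"
        capture_log.append(mirror)  # handed to the second application for unpacking
        return "mirrored-to-second-app"
    return "normal-path"  # original processing flow is left undisturbed

log = []
rule = MatchingRule(interfaces={"rdma0"}, parsable={"RoCEv2"})
print(forward(Traffic("rdma0", "RoCEv2"), rule, log))  # mirrored-to-second-app
print(forward(Traffic("eth0", "RoCEv2"), rule, log))   # normal-path
```

Note that the original traffic continues on its normal path in both cases; only a mirror copy reaches the second application, which is why the existing processing flow is not disturbed.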
Network nodes, data centers, servers, and the like constantly need to perform packet-capture analysis on network traffic, which provides a basis for locating network problems and implementing network monitoring. Network traffic typically consists of packets or other forms of network data, so the packets must be unpacked and parsed. Considering the various communication standards, network protocols, security protocols, packet lengths, data transmission mechanisms, and the like, network traffic packet-capture analysis must cope with complex and changeable network environments and network traffic compositions. With the development of high-speed digital communication technology and the growth of data transmission scale, remote direct memory access technology is widely applied to improve data transmission performance. In particular, in a remote direct memory access network, data is transmitted directly and rapidly from one system to the computer storage area of another remote host with the participation of the network card and without the intervention of the central processing unit. Therefore, in application scenarios based on remote direct memory access technology, such as a network node or host on a remote direct memory access network, network traffic received through the remote direct memory access network does not pass through the system kernel but is transferred directly from a remote direct memory access device, such as a remote direct memory access network card, to memory on the host side.
In contrast to network traffic received over a remote direct memory access network, network traffic received over a non-remote direct memory access network, such as a transmission control protocol/internet protocol (Transmission Control Protocol/Internet Protocol, TCP/IP) network, undergoes multiple levels of data movement and copying by the system kernel. This consumes processing capacity of the central processor and also incurs overhead for the operating system to perform copies to external memory and memory space switching. The method and system for network traffic packet-capture analysis provided by the embodiments of the present application can not only cope with complex and changeable network environments and network traffic compositions, but also make full use of the advantages of remote direct memory access technology in improving data transmission performance, as described in further detail below with reference to fig. 1.
Referring to fig. 1, the remote direct memory access uplink and downlink traffic refers to network traffic generated in an application scenario based on remote direct memory access technology, for example, a network node or host on a remote direct memory access network, where data transmission is based on the remote direct memory access protocol and uses remote direct memory access hardware. According to the flow direction of the network traffic, that is, according to message sending or message receiving, the remote direct memory access uplink and downlink traffic can be remote direct memory access uplink traffic or remote direct memory access downlink traffic. The remote direct memory access uplink traffic is network traffic received by the host, such as packets the host receives from the remote direct memory access network. The remote direct memory access downlink traffic is network traffic the host initiates to send out, such as packets the host initiates to send out through the remote direct memory access network. The message forwarding processing engine is a module on the remote direct memory access hardware connected to the host, such as a remote direct memory access network card, and is used for receiving and sending network message packets. The remote direct memory access protocol processing engine is also a module on the remote direct memory access hardware and enables remote direct memory access, such as transferring network data directly to the memory of the host or reading network data directly from the memory of the host, bypassing the kernel and operating system of the host.
The first application performs the storage operations associated with the remote direct memory access uplink and downlink traffic, and may be a conventional business application that uses the remote direct memory access transfer functionality, such as high-performance communication business software or storage business software. The second application is different from the first application, and the matching rule is based at least on the unpacking and parsing capability of the second application. Here, the second application may be developed with any suitable development tooling, such as an open-source network packet-capture tool application or an open-source function library, so long as the second application can provide the unpacking and parsing capability corresponding to the matching rule. The unpacking and parsing capability of the second application may be understood as the capability to unpack and parse a packet. In some embodiments, the second application may support unpacking and parsing of network packets having combinations of communication standards, network protocols, and security protocols within a given range. The matching rule is issued by the second application to the message forwarding processing engine and is managed by the second application. After the message forwarding processing engine receives the remote direct memory access uplink and downlink traffic, it judges, based on the matching rule, whether the received remote direct memory access uplink and downlink traffic conforms to the matching rule. Here, the matching rule indicates at least one network interface, and thus the scope of the message forwarding processing engine's subsequent processing can be delimited by the at least one network interface indicated by the matching rule.
For example, a matching rule may specify that only remote direct memory access uplink and downlink traffic received over a particular interface at a particular time conforms to the rule. In this way, network traffic that needs packet-capture analysis can be effectively distinguished through the isolation characteristics between different network interfaces. In some embodiments, where network interface resources are limited, some network interfaces may be allocated for data transfers based on remote direct memory access and other network interfaces for data transfers not based on remote direct memory access, such as network traffic received over a transmission control protocol/internet protocol network, so that the network interfaces indicated by the matching rule restrict conformance to only the network traffic passing through the interfaces allocated to remote direct memory access data transfers. In other embodiments, depending on the particular packet-capture analysis requirements, only a portion of the network interfaces allocated to remote direct memory access data transmission may need packet-capture analysis, for example analysis of traffic through a particular network interface (e.g., one that accesses a particular remote direct memory access network or connects a particular network link); this, too, may be implemented through the network interfaces indicated by the matching rule. The matching rule is based at least on the unpacking and parsing capability of the second application, which, as mentioned above, represents the capability to unpack and parse network packets with combinations of communication standards, network protocols, and security protocols within a given range. Thus, the matching rule may be defined such that only packets the second application can unpack and parse conform to the matching rule.
In this way, the second application may be used to manage the matching rule, and the matching rule may be used to formulate a flexible network traffic packet-capture analysis policy, for example targeting packets received within a specified period of time that meet a specific requirement such as a specific packet length or a specific network address; and because the matching rule is based at least on the unpacking and parsing capability of the second application, any network packet that conforms to the matching rule is guaranteed to be unpackable and parsable by the second application.
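A flexible capture policy of the kind just described — combining an interface list, a time window, a packet-length limit, and the second application's parsing capability — could be modeled as follows. This is a hypothetical sketch; the field names, units, and example values are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class CaptureRule:
    """Hypothetical flexible capture policy managed by the second application."""
    interfaces: frozenset  # network interfaces the rule indicates
    window: tuple          # (start, end) capture time window, in seconds
    max_len: int           # only packets up to this length are captured
    parsable: frozenset    # protocols the second application can unpack and parse

    def admits(self, iface: str, t: float, length: int, proto: str) -> bool:
        # A packet conforms only if every criterion holds, including that the
        # second application can actually parse it.
        return (iface in self.interfaces
                and self.window[0] <= t <= self.window[1]
                and length <= self.max_len
                and proto in self.parsable)

rule = CaptureRule(frozenset({"rdma0"}), (100.0, 200.0), 1500, frozenset({"RoCEv2"}))
print(rule.admits("rdma0", 150.0, 512, "RoCEv2"))  # True
print(rule.admits("rdma0", 250.0, 512, "RoCEv2"))  # False: outside the time window
```

Since the second application both issues and updates such a rule, extending its parsing capability (the `parsable` set) directly widens the set of traffic that can conform.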
With continued reference to fig. 1, the storage operation associated with the remote direct memory access uplink and downlink traffic is performed by a first application, while the network traffic packet-capture analysis associated with that traffic is performed, when the traffic meets the matching rule, by a second application different from the first application. Therefore, the conventional remote direct memory access data transmission requirements can be met through the first application, while the separately provided second application independently supplies the unpacking and parsing function for the remote direct memory access uplink and downlink traffic, which helps to adapt to various software and hardware structures for remote direct memory access data transmission. Further, through the network interface indicated by the matching rule, and because the matching rule is based at least on the unpacking and parsing capability of the second application, a flexible network traffic packet-capture analysis policy can be formulated on the premise that the remote direct memory access uplink and downlink traffic conforming to the matching rule can be unpacked and parsed by the second application, and network traffic that needs packet-capture analysis can be effectively distinguished using the isolation characteristics between different network interfaces.
Further, the first mirror image flow corresponding to the remote direct memory access uplink and downlink flow meeting the matching rule is sent to the second application through the remote direct memory access protocol processing engine, and then the second application unpacks and analyzes the first mirror image flow to execute network flow packet grabbing analysis related to the remote direct memory access uplink and downlink flow, so that the original processing flow aiming at the remote direct memory access uplink and downlink flow is not interfered, an equivalent network flow packet grabbing analysis effect is realized through generating the first mirror image flow and processing the first mirror image flow, and the first mirror image flow is sent depending on the remote direct memory access protocol processing engine, thereby being beneficial to popularization and application to the existing software and hardware structure for remote direct memory access data transmission. In summary, the method for packet analysis of network traffic shown in fig. 1 does not need to rely on extra equipment and interfaces, can formulate a flexible packet analysis strategy of network traffic and efficiently distinguish network traffic needing to be subjected to packet analysis, can cope with complex and changeable network environments and network traffic compositions, does not interfere with the original processing flow aiming at remote direct memory access uplink and downlink traffic, can adapt to various existing software and hardware structures for remote direct memory access data transmission, fully utilizes the advantages of the remote direct memory access technology in terms of improving data transmission performance, bypasses system kernels, saves the cost of data copying, and is beneficial to improving the packet analysis performance and improving the system efficiency.
In one possible implementation, the user-mode remote direct memory access queue includes a remote direct memory access traffic send queue, a remote direct memory access traffic receive queue, a mirror traffic receive queue, and a remote direct memory access completion queue. The remote direct memory access protocol processing engine uses the remote direct memory access traffic send queue and the remote direct memory access traffic receive queue to transmit the remote direct memory access uplink and downlink traffic between itself and the first application, and uses the mirror traffic receive queue to transmit the first mirror traffic to the second application; the remote direct memory access traffic send queue, the remote direct memory access traffic receive queue, and the mirror traffic receive queue share the remote direct memory access completion queue. Here, the user-mode remote direct memory access queue refers to a queue through which software interacts with remote direct memory access hardware in a software and hardware structure for remote direct memory access data transmission. For example, a conventional business application using the remote direct memory access transport function may use a user-mode remote direct memory access queue to interact with the remote direct memory access protocol processing engine in a remote direct memory access network card. The remote direct memory access traffic send queue serves downlink traffic, that is, message sending: for example, the first application may invoke a specific interface in a remote direct memory access user-mode library to submit a work request to the remote direct memory access traffic send queue so as to initiate message transmission.
The remote direct memory access traffic receive queue serves uplink traffic, that is, messages sent from a remote end: for example, the first application may call a specific interface in a remote direct memory access user-mode library to submit a work request to the remote direct memory access traffic receive queue so as to receive a message. The mirror traffic receive queue is a special type of receive queue dedicated to receiving mirror traffic, including the first mirror traffic corresponding to the remote direct memory access uplink and downlink traffic. It should be appreciated that the mirror traffic receive queue still belongs to the user-mode remote direct memory access queue, and thus utilizes the existing queue resources in the software and hardware structure for remote direct memory access data transmission. For the original processing flow of the remote direct memory access uplink and downlink traffic, the remote direct memory access protocol processing engine uses the remote direct memory access traffic send queue and the remote direct memory access traffic receive queue to transmit the remote direct memory access uplink and downlink traffic between itself and the first application, so that the remote direct memory access downlink traffic therein is transmitted from the first application to the remote direct memory access protocol processing engine through the remote direct memory access traffic send queue, and the remote direct memory access uplink traffic therein is transmitted to the first application through the remote direct memory access traffic receive queue, which facilitates the first application performing the storage operation associated with the remote direct memory access uplink and downlink traffic.
For the network traffic packet-grabbing analysis of the remote direct memory access uplink and downlink traffic, the mirror traffic receive queue is used to transmit the first mirror traffic from the remote direct memory access protocol processing engine to the second application; for example, the second application may submit a work request to the mirror traffic receive queue so as to receive the first mirror traffic, which facilitates the second application unpacking and parsing the first mirror traffic to perform the network traffic packet-grabbing analysis associated with the remote direct memory access uplink and downlink traffic. Therefore, by using the specially provided mirror traffic receive queue, the existing software and hardware structure for remote direct memory access data transmission can be reused, the original processing flow for the remote direct memory access uplink and downlink traffic is not interfered with, and no additional equipment or interfaces are needed. Further, because the remote direct memory access completion queue is shared, completion of transmission of the remote direct memory access uplink and downlink traffic and completion of transmission of the first mirror traffic can both be recorded by the remote direct memory access protocol processing engine adding a completion queue entry to the remote direct memory access completion queue, thereby providing a record that message transmission has completed. Thus, the application layer can poll the remote direct memory access completion queue to check whether a newly added completion queue entry exists, thereby realizing control-plane interaction.
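The shared completion queue described above can be sketched in a few lines; this is a hypothetical model for illustration, not the patent's implementation, and the queue labels are invented:

```python
from collections import deque

class CompletionQueue:
    # Hypothetical model of the completion queue shared by the send queue,
    # the receive queue, and the mirror traffic receive queue.
    def __init__(self):
        self._entries = deque()

    def add_entry(self, wr_id: int, source_queue: str) -> None:
        # The protocol processing engine appends one completion queue entry
        # per finished transfer, tagged with the submitting queue.
        self._entries.append({"wr_id": wr_id, "queue": source_queue})

    def poll(self):
        # The application layer polls instead of blocking, matching the
        # control-plane interaction described above; returns None when empty.
        return self._entries.popleft() if self._entries else None

cq = CompletionQueue()
cq.add_entry(wr_id=0x10, source_queue="send_queue")         # downlink transfer done
cq.add_entry(wr_id=0x20, source_queue="mirror_recv_queue")  # mirror transfer done
print(cq.poll()["queue"])  # send_queue
```

Because all three work queues report into the one completion queue, a single polling loop in user space observes both ordinary traffic completions and mirror-traffic completions.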
In addition, each completion queue entry may include submit-request identification information, such as a 64-bit submit-request identification code, which may correspond to a work request in the aforementioned remote direct memory access traffic send queue or remote direct memory access traffic receive queue, or to a work request in the aforementioned mirror traffic receive queue.
In a possible implementation manner, the remote direct memory access uplink and downlink traffic is remote direct memory access uplink traffic, and the method further includes: when the remote direct memory access uplink traffic does not conform to the matching rule, transmitting the remote direct memory access uplink traffic to a network protocol processing engine through the message forwarding processing engine, and then transmitting the remote direct memory access uplink traffic to a system kernel protocol processing layer through the network protocol processing engine; and unpacking and parsing the remote direct memory access uplink traffic through the system kernel protocol processing layer to obtain an unpacking and parsing result, and then performing, on the unpacking and parsing result, the network traffic packet-grabbing analysis associated with the remote direct memory access uplink traffic through the second application. The matching rule is based at least on the unpacking and parsing capability of the second application, thus ensuring that network data packets conforming to the matching rule can be unpacked and parsed by the second application. Therefore, for remote direct memory access uplink traffic that does not conform to the matching rule, the system kernel protocol processing layer can be invoked and its unpacking and parsing capability used to obtain the corresponding unpacking and parsing result, thereby compensating for insufficient unpacking and parsing capability of the second application.
In this way, the matching rule can be utilized so that remote direct memory access uplink traffic conforming to the matching rule is sent to the second application for unpacking and parsing through the remote direct memory access protocol processing engine, while remote direct memory access uplink traffic not conforming to the matching rule is first processed through the system kernel protocol processing layer and then subjected to network traffic packet-grabbing analysis by the second application, thereby improving the overall efficiency of the system. Here, the network protocol processing engine refers to a processing engine for processing network traffic received through a non-remote direct memory access network, such as network traffic received through a transmission control protocol/internet protocol network. The network protocol processing engine may be a transmission control protocol/internet protocol processing engine.
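The two uplink paths just described amount to a simple dispatch decision. The sketch below is an illustration only; the predicate and the component names in the returned paths are hypothetical labels, not APIs from the patent:

```python
def dispatch_rdma_uplink(pkt: dict, matches_rule) -> list:
    # matches_rule is a predicate encoding the matching rule (hypothetical).
    if matches_rule(pkt):
        # Conforming traffic: mirrored by the RDMA protocol processing
        # engine straight to the second application, bypassing the kernel.
        return ["message_forwarding_engine", "rdma_protocol_engine",
                "mirror_traffic_receive_queue", "second_application"]
    # Non-conforming traffic: handed to the network protocol processing
    # engine, unpacked and parsed by the kernel protocol layer, and only
    # then analyzed by the second application.
    return ["message_forwarding_engine", "network_protocol_engine",
            "kernel_protocol_layer", "second_application"]

is_roce = lambda p: p.get("proto") == "RoCEv2"
print(dispatch_rdma_uplink({"proto": "RoCEv2"}, is_roce)[1])  # rdma_protocol_engine
```

The efficiency claim follows from the branch: the common, parseable case takes the short kernel-bypass path, and the kernel stack is touched only for the exceptional case.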
In some embodiments, the system kernel protocol processing layer includes a network protocol stack, a network device layer, and a network driver, and the network protocol processing engine transmits the remote direct memory access uplink traffic to the system kernel protocol processing layer using a network queue applied to a non-remote direct memory access network. In this way, the matching rule can be utilized so that remote direct memory access uplink traffic conforming to the matching rule is sent to the second application for unpacking and parsing through the remote direct memory access protocol processing engine, while remote direct memory access uplink traffic not conforming to the matching rule is first processed through the system kernel protocol processing layer and then subjected to network traffic packet-grabbing analysis by the second application, thereby improving the overall efficiency of the system.
In some embodiments, the message forwarding processing engine, the remote direct memory access protocol processing engine, and the network protocol processing engine are all deployed in a remote direct memory access network card, the system kernel protocol processing layer is deployed in a kernel space of a host system, the first application and the second application are both deployed in a user space of the host system, and the remote direct memory access network card is connected to the host system through a peripheral component interconnect express interface. Therefore, the advantages of the remote direct memory access technology in improving data transmission performance are fully utilized: the system kernel is bypassed, the cost of data copying is saved, and packet-grabbing analysis performance and system efficiency are improved. Moreover, the matching rule can be utilized so that remote direct memory access uplink traffic conforming to the matching rule is sent to the second application for unpacking and parsing through the remote direct memory access protocol processing engine, while remote direct memory access uplink traffic not conforming to the matching rule is first processed through the system kernel protocol processing layer and then subjected to network traffic packet-grabbing analysis by the second application, thereby improving the overall efficiency of the system.
In some embodiments, when the remote direct memory access downlink traffic does not conform to the matching rule, the remote direct memory access downlink traffic is transmitted to the network protocol processing engine by the message forwarding processing engine and then transmitted to the system kernel protocol processing layer for unpacking and parsing by the network protocol processing engine, after which the network traffic packet-grabbing analysis associated with the remote direct memory access downlink traffic is performed by the second application; alternatively, the second application is updated and the matching rule is then updated based on the unpacking and parsing capability of the updated second application, such that the remote direct memory access downlink traffic conforms to the updated matching rule, and the network traffic packet-grabbing analysis associated with the remote direct memory access downlink traffic is performed by the updated second application. The matching rule is based at least on the unpacking and parsing capability of the second application, thus ensuring that network data packets conforming to the matching rule can be unpacked and parsed by the second application. Therefore, for remote direct memory access downlink traffic that does not conform to the matching rule, because the message is initiated from the host, one may choose either to invoke the system kernel protocol processing layer for processing or to update the second application and the corresponding matching rule, thereby providing higher flexibility.
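The second option above (upgrading the second application and regenerating the matching rule from its new capability) can be sketched as follows; the function names and protocol labels are illustrative assumptions:

```python
def build_matching_rule(parsing_capability: set) -> dict:
    # The rule is derived from the second application's unpacking/parsing
    # capability, so packets it cannot parse never conform to the rule.
    return {"protocols": set(parsing_capability)}

def conforms(pkt: dict, rule: dict) -> bool:
    return pkt["proto"] in rule["protocols"]

old_rule = build_matching_rule({"RoCEv2"})
downlink_pkt = {"proto": "iWARP"}
print(conforms(downlink_pkt, old_rule))  # False: would take the kernel path

# After updating the second application so it can also parse iWARP, the
# rule is rebuilt and the same downlink traffic now conforms.
new_rule = build_matching_rule({"RoCEv2", "iWARP"})
print(conforms(downlink_pkt, new_rule))  # True: mirrored to the second app
```

Rebuilding the rule from the updated capability keeps the guarantee intact: conforming traffic is always traffic the (updated) second application can parse.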
In one possible embodiment, the method further comprises: receiving non-remote direct memory access uplink traffic by the message forwarding processing engine and determining whether the non-remote direct memory access uplink traffic conforms to the matching rule; when the non-remote direct memory access uplink traffic conforms to the matching rule, transmitting the non-remote direct memory access uplink traffic to the remote direct memory access protocol processing engine through the message forwarding processing engine, and then uploading a second mirror traffic corresponding to the non-remote direct memory access uplink traffic to the second application through the remote direct memory access protocol processing engine; and unpacking and parsing the second mirror traffic through the second application so as to perform the network traffic packet-grabbing analysis associated with the non-remote direct memory access uplink traffic. As described above, the storage operation associated with the remote direct memory access uplink and downlink traffic is performed by the first application, and the network traffic packet-grabbing analysis associated with the remote direct memory access uplink and downlink traffic is performed, when that traffic conforms to the matching rule, by the second application different from the first application. Therefore, the conventional remote direct memory access data transmission requirement can be met through the first application, while the unpacking and parsing function is provided independently through the separately provided second application, which is beneficial for adapting to various software and hardware structures for remote direct memory access data transmission.
Further, considering that network traffic packet-grabbing analysis may also be needed for non-remote direct memory access uplink traffic, the non-remote direct memory access uplink traffic that needs packet-grabbing analysis can be treated like remote direct memory access uplink traffic: the corresponding second mirror traffic is generated and uploaded through the remote direct memory access protocol processing engine, so that the system kernel is bypassed, the cost of data copying is saved, and packet-grabbing analysis performance and system efficiency are improved.
In some embodiments, the remote direct memory access protocol processing engine transmits the second mirror traffic to the second application using the mirror traffic receive queue, and adds a completion queue entry to the remote direct memory access completion queue in response to completion of transmission of the first mirror traffic or of the second mirror traffic. For the network traffic packet-grabbing analysis of the remote direct memory access uplink and downlink traffic, the mirror traffic receive queue is used to transmit the first mirror traffic from the remote direct memory access protocol processing engine to the second application; for example, the second application may submit a work request to the mirror traffic receive queue so as to receive the first mirror traffic, which facilitates the second application unpacking and parsing the first mirror traffic to perform the associated network traffic packet-grabbing analysis. Similarly, non-remote direct memory access uplink traffic requiring packet-grabbing analysis may be treated like remote direct memory access uplink traffic, and the second mirror traffic may thus be handled in the same way as the first mirror traffic, that is, the second application submits a work request to the mirror traffic receive queue to receive the second mirror traffic, which facilitates the second application unpacking and parsing the second mirror traffic to perform the network traffic packet-grabbing analysis associated with the non-remote direct memory access uplink traffic.
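As an illustrative sketch, both kinds of mirror traffic can share one receive path in the second application; the `mirror_kind` tag and the function names below are hypothetical, introduced only for this example:

```python
def make_mirror(pkt: dict, is_rdma: bool) -> dict:
    # The original packet keeps flowing through its normal path; the
    # engine emits an independent copy for the second application.
    mirror = dict(pkt)
    mirror["mirror_kind"] = "first" if is_rdma else "second"
    return mirror

def receive_mirrors(mirror_queue: list) -> list:
    # The second application drains the mirror traffic receive queue the
    # same way regardless of whether a copy is "first" (RDMA) or "second"
    # (non-RDMA) mirror traffic, so one unpacking/parsing path serves both.
    return [m["mirror_kind"] for m in mirror_queue]

queue = [make_mirror({"proto": "RoCEv2"}, is_rdma=True),
         make_mirror({"proto": "TCP"}, is_rdma=False)]
print(receive_mirrors(queue))  # ['first', 'second']
```

Copying the packet before tagging it is what keeps the original processing flow undisturbed: the mirror is analyzed independently while the source packet proceeds unchanged.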
In one possible implementation, the remote direct memory access traffic send queue and the remote direct memory access traffic receive queue belong to the same queue pair; the first application manages the remote direct memory access traffic send queue by submitting send queue work requests to it and manages the remote direct memory access traffic receive queue by submitting receive queue work requests to it, while the second application manages the mirror traffic receive queue by submitting mirror traffic receive queue work requests to it. Therefore, the specially provided mirror traffic receive queue can utilize the existing software and hardware structure for remote direct memory access data transmission, does not interfere with the original processing flow for the remote direct memory access uplink and downlink traffic, and does not need to rely on additional equipment or interfaces.
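The ownership split above (queue pair managed by the first application, mirror receive queue managed by the second) can be modeled minimally; this is a sketch under the assumption that "managing" a queue means being the only application allowed to post work requests to it, and all names are invented:

```python
class UserModeQueue:
    # Hypothetical user-mode queue: an application manages a queue purely
    # by submitting work requests to it.
    def __init__(self, name: str, owner: str):
        self.name, self.owner, self.pending = name, owner, []

    def submit(self, app: str, wr_id: int) -> bool:
        # Only the owning application may post to this queue.
        if app != self.owner:
            return False
        self.pending.append(wr_id)
        return True

# The first application owns the send/receive queue pair; the second
# application owns only the mirror traffic receive queue.
send_q = UserModeQueue("send_queue", owner="first_app")
recv_q = UserModeQueue("recv_queue", owner="first_app")
mirror_q = UserModeQueue("mirror_recv_queue", owner="second_app")
print(send_q.submit("first_app", 0x1), mirror_q.submit("first_app", 0x2))  # True False
```

Keeping the mirror receive queue under a separate owner is what isolates packet-capture work from the first application's ordinary data path.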
In a possible implementation manner, the remote direct memory access uplink and downlink traffic conforms to the matching rule, and includes: the at least one network interface indicated by the matching rule includes a network interface for transceiving the remote direct memory access upstream and downstream traffic. In this way, the second application may be utilized to manage the matching rules, and the matching rules may be utilized to formulate a flexible network traffic packet-grabbing analysis strategy.
In a possible implementation manner, the unpacking and parsing capability of the second application includes at least one communication standard and at least one network protocol supported by the second application, and the remote direct memory access uplink and downlink traffic conforming to the matching rule includes: the at least one communication standard supported by the second application includes a communication standard associated with the remote direct memory access uplink and downlink traffic, and the at least one network protocol supported by the second application includes a network protocol associated with the remote direct memory access uplink and downlink traffic. In this way, the second application may be utilized to manage the matching rule, and the matching rule may be utilized to formulate a flexible network traffic packet-grabbing analysis strategy.
Fig. 2 is a schematic diagram of a system for network traffic packet-grabbing analysis according to a first embodiment provided in the examples of the present application. As shown in fig. 2, the system includes: a message forwarding processing engine 244, a remote direct memory access protocol processing engine 242, a first application 202, and a second application 204. The message forwarding processing engine 244 is configured to receive the remote direct memory access uplink and downlink traffic 230 and determine whether the remote direct memory access uplink and downlink traffic 230 conforms to a matching rule. The storage operation associated with the remote direct memory access uplink and downlink traffic 230 is performed by the first application 202. The matching rule is based at least on the unpacking and parsing capability of a second application 204 different from the first application 202 and indicates at least one network interface. When the remote direct memory access uplink and downlink traffic 230 conforms to the matching rule, the message forwarding processing engine 244 transmits the remote direct memory access uplink and downlink traffic 230 to the remote direct memory access protocol processing engine 242, and then the remote direct memory access protocol processing engine 242 uploads a first mirror traffic (represented by mirror traffic 232 in fig. 2) corresponding to the remote direct memory access uplink and downlink traffic 230 to the second application 204. The second application 204 is configured to unpack and parse the first mirror traffic (represented by mirror traffic 232 in fig. 2) so as to perform the network traffic packet-grabbing analysis associated with the remote direct memory access uplink and downlink traffic 230.
With continued reference to fig. 2, the user-mode remote direct memory access queue 210 includes a remote direct memory access traffic send queue 212, a remote direct memory access traffic receive queue 214, a mirror traffic receive queue 218, and a remote direct memory access completion queue 216. The remote direct memory access protocol processing engine 242 uses the remote direct memory access traffic send queue 212 and the remote direct memory access traffic receive queue 214 to transmit the remote direct memory access uplink and downlink traffic 230 between itself and the first application 202. The remote direct memory access protocol processing engine 242 uses the mirror traffic receive queue 218 to transmit the first mirror traffic (represented by mirror traffic 232 in fig. 2) to the second application 204. The remote direct memory access traffic send queue 212, the remote direct memory access traffic receive queue 214, and the mirror traffic receive queue 218 share the remote direct memory access completion queue 216. The remote direct memory access traffic send queue 212 and the remote direct memory access traffic receive queue 214 belong to the same queue pair; the first application 202 manages the remote direct memory access traffic send queue 212 by submitting send queue work requests 220 to it and manages the remote direct memory access traffic receive queue 214 by submitting receive queue work requests 222 to it. The second application 204 manages the mirror traffic receive queue 218 by submitting mirror traffic receive queue work requests 224 to the mirror traffic receive queue 218.
The system for network traffic packet-grabbing analysis according to the first embodiment shown in fig. 2 does not need to rely on additional equipment or interfaces, can formulate a flexible network traffic packet-grabbing analysis strategy and efficiently distinguish network traffic that needs packet-grabbing analysis, can cope with complex and changeable network environments and network traffic compositions, does not interfere with the original processing flow for the remote direct memory access uplink and downlink traffic, can adapt to various existing software and hardware structures for remote direct memory access data transmission, and fully utilizes the advantages of the remote direct memory access technology in improving data transmission performance: the system kernel is bypassed and the cost of data copying is saved, which is beneficial for improving packet-grabbing analysis performance and system efficiency.
Fig. 3 is a schematic diagram of a system for network traffic packet-grabbing analysis according to a second embodiment provided in the examples of the present application. As shown in fig. 3, the system includes: a message forwarding processing engine 244, a remote direct memory access protocol processing engine 242, a first application 202, and a second application 204. The message forwarding processing engine 244 is configured to receive the remote direct memory access uplink and downlink traffic 230 and determine whether the remote direct memory access uplink and downlink traffic 230 conforms to a matching rule. The storage operation associated with the remote direct memory access uplink and downlink traffic 230 is performed by the first application 202. The matching rule is based at least on the unpacking and parsing capability of a second application 204 different from the first application 202 and indicates at least one network interface. When the remote direct memory access uplink and downlink traffic 230 conforms to the matching rule, the message forwarding processing engine 244 transmits the remote direct memory access uplink and downlink traffic 230 to the remote direct memory access protocol processing engine 242, and then the remote direct memory access protocol processing engine 242 uploads a first mirror traffic (represented by mirror traffic 232 in fig. 3) corresponding to the remote direct memory access uplink and downlink traffic 230 to the second application 204. The second application 204 is configured to unpack and parse the first mirror traffic so as to perform the network traffic packet-grabbing analysis associated with the remote direct memory access uplink and downlink traffic 230.
With continued reference to fig. 3, the user-mode remote direct memory access queue 210 includes a remote direct memory access traffic send queue 212, a remote direct memory access traffic receive queue 214, a mirror traffic receive queue 218, and a remote direct memory access completion queue 216. The remote direct memory access protocol processing engine 242 uses the remote direct memory access traffic send queue 212 and the remote direct memory access traffic receive queue 214 to transmit the remote direct memory access uplink and downlink traffic 230 between itself and the first application 202. The remote direct memory access protocol processing engine 242 uses the mirror traffic receive queue 218 to transmit the first mirror traffic (represented by mirror traffic 232 in fig. 3) to the second application 204. The remote direct memory access traffic send queue 212, the remote direct memory access traffic receive queue 214, and the mirror traffic receive queue 218 share the remote direct memory access completion queue 216. The remote direct memory access traffic send queue 212 and the remote direct memory access traffic receive queue 214 belong to the same queue pair; the first application 202 manages the remote direct memory access traffic send queue 212 by submitting send queue work requests 220 to it and manages the remote direct memory access traffic receive queue 214 by submitting receive queue work requests 222 to it. The second application 204 manages the mirror traffic receive queue 218 by submitting mirror traffic receive queue work requests 224 to the mirror traffic receive queue 218.
The system for network traffic packet-grabbing analysis according to the second embodiment shown in fig. 3 does not need to rely on additional equipment or interfaces, can formulate a flexible network traffic packet-grabbing analysis strategy and efficiently distinguish network traffic that needs packet-grabbing analysis, can cope with complex and changeable network environments and network traffic compositions, does not interfere with the original processing flow for the remote direct memory access uplink and downlink traffic, can adapt to various existing software and hardware structures for remote direct memory access data transmission, and fully utilizes the advantages of the remote direct memory access technology in improving data transmission performance: the system kernel is bypassed and the cost of data copying is saved, which is beneficial for improving packet-grabbing analysis performance and system efficiency.
With continued reference to fig. 3, the system for network traffic packet-grabbing analysis according to the second embodiment shown in fig. 3 may further perform network traffic packet-grabbing analysis with respect to non-remote direct memory access uplink traffic. Specifically, the message forwarding processing engine 244 receives the non-remote direct memory access uplink traffic and determines whether it conforms to the matching rule; when the non-remote direct memory access uplink traffic conforms to the matching rule, the message forwarding processing engine 244 transmits it to the remote direct memory access protocol processing engine 242, which then uploads a second mirror traffic (represented by mirror traffic 232 in fig. 3) corresponding to the non-remote direct memory access uplink traffic to the second application 204; the second application 204 unpacks and parses the second mirror traffic (represented by mirror traffic 232 in fig. 3) so as to perform the network traffic packet-grabbing analysis associated with the non-remote direct memory access uplink traffic. The remote direct memory access protocol processing engine 242 transmits the second mirror traffic (represented by mirror traffic 232 in fig. 3) to the second application 204 using the mirror traffic receive queue 218, and adds a completion queue entry 234 to the remote direct memory access completion queue 216 in response to completion of transmission of the first mirror traffic or of the second mirror traffic.
The message forwarding processing engine 244, the remote direct memory access protocol processing engine 242, and the network protocol processing engine 246 are all deployed on the remote direct memory access network card 240, the system kernel protocol processing layer 260 is deployed in a kernel space of a host system, the first application 202 and the second application 204 are both deployed in a user space of the host system, and the remote direct memory access network card 240 is connected to the host system through a peripheral component interconnect express interface. It should be appreciated that for the remote direct memory access uplink and downlink traffic 230, the remote direct memory access protocol processing engine 242 transmits the first mirror traffic to the second application 204; for non-remote direct memory access uplink traffic, the remote direct memory access protocol processing engine 242 transmits the second mirror traffic to the second application 204. The first and/or second mirror traffic transmitted by the remote direct memory access protocol processing engine 242 to the second application 204 is represented in fig. 3 by mirror traffic 232.
With continued reference to fig. 3, upstream traffic 236 that does not meet the matching rule, such as remote direct memory access upstream traffic that does not meet the matching rule, may be processed by invoking the system kernel protocol processing layer 260. The system kernel protocol processing layer 260 includes a network protocol stack 262, a network device layer 264, and a network driver 266. The network protocol processing engine 246 transmits the upstream traffic 236 that does not meet the matching rule to the system kernel protocol processing layer 260 using a network queue applied to a non-remote direct memory access network. In this way, the matching rule may be utilized so that remote direct memory access uplink traffic conforming to the matching rule is sent, through the remote direct memory access protocol processing engine 242, to the second application 204 for unpacking and parsing, while remote direct memory access uplink traffic not conforming to the matching rule (represented by the uplink traffic 236 in fig. 3) is first processed by the system kernel protocol processing layer 260 and then subjected to network traffic packet-grabbing analysis by the second application 204, thereby improving overall system efficiency.
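The rule-based dispatch above can be sketched as a small routing function. This is an illustrative sketch only: the function names and the use of interface indices for the matching rule are assumptions made for the example (the patent states the rule indicates at least one network interface), not the actual engine logic.

```python
def matches_rule(packet_ifindex, rule_interfaces):
    # The matching rule indicates at least one network interface; here we
    # assume a packet conforms when it arrived on one of those interfaces.
    return packet_ifindex in rule_interfaces

def dispatch(packet_ifindex, rule_interfaces):
    # Conforming traffic is mirrored to the capture application via the
    # RDMA protocol processing engine; non-conforming traffic falls back
    # to the network protocol engine and the host kernel protocol stack.
    if matches_rule(packet_ifindex, rule_interfaces):
        return "rdma_protocol_engine"   # mirrored to the second application
    return "kernel_protocol_stack"      # processed by the kernel layer first

# Hypothetical interface indices covered by the matching rule.
rule = {2, 3}
path_hit = dispatch(2, rule)    # conforming traffic
path_miss = dispatch(7, rule)   # non-conforming traffic
```

The design point is that the fast path (hardware mirroring straight to user space) is reserved for traffic the second application can actually unpack and parse, while everything else takes the slower but fully general kernel path.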
Fig. 4 is a schematic structural diagram of a computing device provided in an embodiment of the present application. The computing device 400 includes: one or more processors 410, a communication interface 420, and a memory 430. The processor 410, the communication interface 420, and the memory 430 are interconnected by a bus 440. Optionally, the computing device 400 may further include an input/output interface 450 connected to an input/output device for receiving parameters set by a user, etc. The computing device 400 can be used to implement some or all of the functionality of the device embodiments or system embodiments described above, and the processor 410 can also be used to implement some or all of the operational steps of the method embodiments described above. For example, specific implementations of the computing device 400 performing various operations may refer to specific details in the above-described embodiments, such as the processor 410 being configured to perform some or all of the steps or operations of the above-described method embodiments. For another example, the computing device 400 may be configured to implement some or all of the functions of one or more components of the apparatus embodiments described above, with the communication interface 420 implementing the communication functions necessary for those apparatuses or components and the processor 410 implementing the processing functions necessary for those apparatuses or components.
It should be appreciated that the computing device 400 of fig. 4 may include one or more processors 410, which may cooperatively provide processing power when connected in parallel, in series, in a series-parallel combination, or in any other arrangement; the processors 410 may constitute a processor sequence or processor array, may be separated into primary and secondary processors, or may have different architectures such as heterogeneous computing architectures. In addition, the structure and functions described for the computing device 400 shown in fig. 4 are exemplary, not limiting. In some example embodiments, the computing device 400 may include more or fewer components than shown in fig. 4, combine certain components, split certain components, or have a different arrangement of components.
The processor 410 may have various specific implementations. For example, the processor 410 may include one or more of a central processing unit (central processing unit, CPU), a graphics processing unit (graphic processing unit, GPU), a neural-network processing unit (neural-network processing unit, NPU), a tensor processing unit (tensor processing unit, TPU), or a data processing unit (data processing unit, DPU), which is not limited in this embodiment. The processor 410 may be a single-core processor or a multi-core processor. The processor 410 may be composed of a combination of a CPU and a hardware chip. The hardware chip may be an application-specific integrated circuit (application-specific integrated circuit, ASIC), a programmable logic device (programmable logic device, PLD), or a combination thereof. The PLD may be a complex programmable logic device (complex programmable logic device, CPLD), a field-programmable gate array (field-programmable gate array, FPGA), generic array logic (generic array logic, GAL), or any combination thereof. The processor 410 may also be implemented solely with logic devices incorporating processing logic, such as an FPGA or a digital signal processor (digital signal processor, DSP). The communication interface 420 may be a wired interface, such as an ethernet interface or a local interconnect network (local interconnect network, LIN) interface, or a wireless interface, such as a cellular network interface or a wireless local area network interface, for communicating with other modules or devices.
The memory 430 may be a nonvolatile memory such as a read-only memory (read-only memory, ROM), a programmable ROM (programmable ROM, PROM), an erasable PROM (erasable PROM, EPROM), an electrically erasable PROM (electrically EPROM, EEPROM), or a flash memory. The memory 430 may also be a volatile memory, which may be a random access memory (random access memory, RAM) used as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (static RAM, SRAM), dynamic RAM (dynamic RAM, DRAM), synchronous DRAM (synchronous DRAM, SDRAM), double data rate SDRAM (double data rate SDRAM, DDR SDRAM), enhanced SDRAM (enhanced SDRAM, ESDRAM), synchlink DRAM (synchlink DRAM, SLDRAM), and direct rambus RAM (direct rambus RAM, DR RAM). The memory 430 may also be used to store program code and data such that the processor 410 invokes the program code stored in the memory 430 to perform some or all of the operational steps of the method embodiments described above, or to perform corresponding functions in the apparatus embodiments described above. Moreover, the computing device 400 may contain more or fewer components than shown in fig. 4, or may have a different configuration of components.
The bus 440 may be a peripheral component interconnect express (peripheral component interconnect express, PCIe) bus, an extended industry standard architecture (extended industry standard architecture, EISA) bus, a unified bus (Ubus or UB), a compute express link (compute express link, CXL), a cache coherent interconnect for accelerators (cache coherent interconnect for accelerators, CCIX), or the like. The bus 440 may be divided into an address bus, a data bus, a control bus, and the like. The bus 440 may include a power bus, a control bus, a status signal bus, and the like in addition to a data bus. For clarity of illustration, the bus is shown with only one bold line in fig. 4, but this does not mean that there is only one bus or only one type of bus.
The method and the device provided in the embodiments of the present application are based on the same inventive concept, and because the principles by which the method and the device solve the problem are similar, the embodiments, implementations, and examples of the method and the device may refer to each other, and repeated descriptions are omitted. Embodiments of the present application also provide a system that includes a plurality of computing devices, each of which may be structured as described above. The functions or operations that may be implemented by the system may refer to the specific implementation steps in the above method embodiments and/or the specific functions described in the above apparatus embodiments, which are not described herein again.
Embodiments of the present application also provide a computer-readable storage medium having stored therein computer instructions which, when executed on a computer device (e.g., one or more processors), may implement the method steps in the above-described method embodiments. The specific implementation of the processor of the computer readable storage medium in executing the above method steps may refer to specific operations described in the above method embodiments and/or specific functions described in the above apparatus embodiments, which are not described herein again.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. The present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Embodiments of the present application may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, the above-described embodiments may be implemented in whole or in part in the form of a computer program product. The present application may take the form of a computer program product embodied on one or more computer-usable storage media having computer-usable program code embodied therein. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, they produce, in whole or in part, the flows or functions according to embodiments of the present application. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (e.g., coaxial cable, optical fiber, digital subscriber line) or a wireless manner (e.g., infrared, radio, microwave). The computer-readable storage medium can be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that contains one or more collections of available media. The available media may be magnetic media (e.g., floppy disks, hard disks, tape), optical media, or semiconductor media.
The semiconductor medium may be a solid state disk, or may be a random access memory, flash memory, read only memory, erasable programmable read only memory, electrically erasable programmable read only memory, register, or any other form of suitable storage medium.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. Each flow and/or block of the flowchart and/or block diagrams, and combinations of flows and/or blocks in the flowchart and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments. It will be apparent to those skilled in the art that various modifications and variations can be made to the embodiments of the present application without departing from the spirit and scope of the embodiments of the present application. The steps in the method of the embodiment of the application can be sequentially adjusted, combined or deleted according to actual needs; the modules in the system of the embodiment of the application can be divided, combined or deleted according to actual needs. Such modifications and variations of the embodiments of the present application are intended to be included herein, if they fall within the scope of the claims and their equivalents.

Claims (15)

1. A method for network traffic packet-grabbing analysis, the method comprising:
receiving, by a message forwarding processing engine, remote direct memory access uplink and downlink traffic and determining whether the remote direct memory access uplink and downlink traffic meets a matching rule, wherein a storage operation associated with the remote direct memory access uplink and downlink traffic is performed by a first application, the matching rule being based at least on an unpacking-and-parsing capability of a second application different from the first application and indicating at least one network interface;
When the remote direct memory access uplink and downlink flows accord with the matching rule, transmitting the remote direct memory access uplink and downlink flows to a remote direct memory access protocol processing engine through the message forwarding processing engine, and then uploading first mirror image flows corresponding to the remote direct memory access uplink and downlink flows to the second application through the remote direct memory access protocol processing engine;
and carrying out unpacking analysis on the first mirror image traffic through the second application so as to execute network traffic packet grabbing analysis associated with the remote direct memory access uplink and downlink traffic.
2. The method of claim 1, wherein a user-mode remote direct memory access queue comprises a remote direct memory access traffic send queue, a remote direct memory access traffic receive queue, a mirror traffic receive queue, and a remote direct memory access completion queue, wherein the remote direct memory access protocol processing engine uses the remote direct memory access traffic send queue and the remote direct memory access traffic receive queue to transmit the remote direct memory access upstream and downstream traffic between the remote direct memory access traffic receive queue and the first application, wherein the remote direct memory access protocol processing engine uses the mirror traffic receive queue to transmit the first mirror traffic to the second application, and wherein the remote direct memory access traffic send queue, the remote direct memory access traffic receive queue, and the mirror traffic receive queue share the remote direct memory access completion queue.
3. The method of claim 1, wherein the remote direct memory access uplink and downlink traffic is remote direct memory access uplink traffic or remote direct memory access downlink traffic, the method further comprising:
when the remote direct memory access uplink traffic does not accord with the matching rule, transmitting the remote direct memory access uplink traffic to a network protocol processing engine through the message forwarding processing engine, and then transmitting the remote direct memory access uplink traffic to a system kernel protocol processing layer through the network protocol processing engine;
and performing unpacking analysis on the remote direct memory access uplink traffic through the system kernel protocol processing layer to obtain an unpacking analysis result, and then performing, by the second application based on the unpacking analysis result, network traffic packet-grabbing analysis associated with the remote direct memory access uplink traffic.
4. The method of claim 3, wherein the system kernel protocol processing layer comprises a network protocol stack, a network device layer, and a network driver, the network protocol processing engine transmitting the remote direct memory access upstream traffic to the system kernel protocol processing layer using a network queue applied to a non-remote direct memory access network.
5. The method of claim 3, wherein the message forwarding processing engine, the remote direct memory access protocol processing engine, and the network protocol processing engine are all deployed on a remote direct memory access network card, the system kernel protocol processing layer is deployed in a kernel space of a host system, the first application and the second application are both deployed in a user space of the host system, and the remote direct memory access network card is connected to the host system through a peripheral component interconnect express interface.
6. The method of claim 3, wherein when the remote direct memory access downstream traffic does not meet the matching rule,
transmitting, by the message forwarding processing engine, the remote direct memory access downstream traffic to the network protocol processing engine, then transmitting, by the network protocol processing engine, the remote direct memory access downstream traffic to the system kernel protocol processing layer for unpacking and parsing, and performing, by the second application, network traffic packet-grabbing analysis associated with the remote direct memory access downstream traffic,
or,
updating the matching rule based on the updated unpacking-and-parsing capability of the second application obtained by updating the second application, so that the remote direct memory access downlink traffic conforms to the updated matching rule, and executing, by the updated second application, network traffic packet-grabbing analysis associated with the remote direct memory access downlink traffic.
7. The method according to claim 2, wherein the method further comprises:
receiving non-remote direct memory access uplink traffic by a message forwarding processing engine and judging whether the non-remote direct memory access uplink traffic accords with the matching rule;
when the uplink flow of the non-remote direct memory access accords with the matching rule, transmitting the uplink flow of the non-remote direct memory access to a remote direct memory access protocol processing engine through the message forwarding processing engine, and then uploading a second mirror flow corresponding to the uplink flow of the non-remote direct memory access to the second application through the remote direct memory access protocol processing engine;
and carrying out unpacking analysis on the second mirrored traffic through the second application so as to execute network traffic packet-grabbing analysis associated with the non-remote direct memory access uplink traffic.
8. The method of claim 7, wherein the remote direct memory access protocol processing engine transmits the second mirrored traffic to the second application using the mirrored traffic receive queue, the remote direct memory access protocol processing engine adding a completion queue entry to the remote direct memory access completion queue in response to completion of transmission of the first mirrored traffic or completion of transmission of the second mirrored traffic.
9. The method of claim 2, wherein the remote direct memory access traffic send queue and the remote direct memory access traffic receive queue belong to the same queue pair, and wherein the first application manages the remote direct memory access traffic send queue by submitting a send queue work request to the remote direct memory access traffic send queue and manages the remote direct memory access traffic receive queue by submitting a receive queue work request to the remote direct memory access traffic receive queue, and wherein the second application manages the mirror traffic receive queue by submitting a mirror traffic receive queue work request to the mirror traffic receive queue.
10. The method of claim 1, wherein the remote direct memory access upstream and downstream traffic conforms to the matching rule, comprising: the at least one network interface indicated by the matching rule includes a network interface for transceiving the remote direct memory access upstream and downstream traffic.
11. The method of claim 1, wherein the unpacking-and-parsing capability of the second application includes at least one communication standard and at least one network protocol supported by the second application, and wherein the remote direct memory access upstream and downstream traffic conforming to the matching rule comprises: the at least one communication standard supported by the second application includes a communication standard associated with the remote direct memory access upstream and downstream traffic, and the at least one network protocol supported by the second application includes a network protocol associated with the remote direct memory access upstream and downstream traffic.
12. A computer device, characterized in that it comprises a memory, a processor and a computer program stored on the memory and executable on the processor, which processor implements the method according to any of claims 1 to 11 when executing the computer program.
13. A computer readable storage medium storing computer instructions which, when run on a computer device, cause the computer device to perform the method of any one of claims 1 to 11.
14. A system for network traffic packet-grabbing analysis, the system comprising:
a message forwarding processing engine, configured to receive remote direct memory access uplink and downlink traffic and determine whether the remote direct memory access uplink and downlink traffic meets a matching rule, where a storage operation associated with the remote direct memory access uplink and downlink traffic is performed by a first application, and the matching rule is based at least on an unpacking-and-parsing capability of a second application different from the first application and indicates at least one network interface;
a remote direct memory access protocol processing engine, wherein when the remote direct memory access uplink and downlink traffic meets the matching rule, the message forwarding processing engine transmits the remote direct memory access uplink and downlink traffic to the remote direct memory access protocol processing engine, and the remote direct memory access protocol processing engine then uploads a first mirrored traffic corresponding to the remote direct memory access uplink and downlink traffic to the second application; and
the second application is configured to perform unpacking analysis on the first mirrored traffic to perform network traffic packet-grabbing analysis associated with the remote direct memory access upstream and downstream traffic.
15. The system of claim 14, wherein a user-mode remote direct memory access queue comprises a remote direct memory access traffic send queue, a remote direct memory access traffic receive queue, a mirror traffic receive queue, and a remote direct memory access completion queue, wherein the remote direct memory access protocol processing engine uses the remote direct memory access traffic send queue and the remote direct memory access traffic receive queue to transmit the remote direct memory access upstream and downstream traffic between the remote direct memory access traffic receive queue and the first application, wherein the remote direct memory access protocol processing engine uses the mirror traffic receive queue to transmit the first mirror traffic to the second application, and wherein the remote direct memory access traffic send queue, the remote direct memory access traffic receive queue, and the mirror traffic receive queue share the remote direct memory access completion queue.
CN202410016891.3A 2024-01-05 2024-01-05 Method and system for analyzing network traffic packet Active CN117527654B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410016891.3A CN117527654B (en) 2024-01-05 2024-01-05 Method and system for analyzing network traffic packet


Publications (2)

Publication Number Publication Date
CN117527654A CN117527654A (en) 2024-02-06
CN117527654B true CN117527654B (en) 2024-04-09

Family

ID=89753538

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410016891.3A Active CN117527654B (en) 2024-01-05 2024-01-05 Method and system for analyzing network traffic packet

Country Status (1)

Country Link
CN (1) CN117527654B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106953774A (en) * 2016-01-07 2017-07-14 无锡聚云科技有限公司 One kind is based on user-defined network packet snapping system
CN110222503A (en) * 2019-04-26 2019-09-10 西安交大捷普网络科技有限公司 Database audit method, system and equipment under a kind of load of high amount of traffic
CN113708990A (en) * 2021-08-06 2021-11-26 上海龙旗科技股份有限公司 Method and equipment for packet grabbing and unpacking of data packet
CN115934623A (en) * 2023-02-09 2023-04-07 珠海星云智联科技有限公司 Data processing method, device and medium based on remote direct memory access

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US10331613B2 (en) * 2015-10-30 2019-06-25 Netapp, Inc. Methods for enabling direct memory access (DMA) capable devices for remote DMA (RDMA) usage and devices therof
US10523540B2 (en) * 2017-03-29 2019-12-31 Ca, Inc. Display method of exchanging messages among users in a group



Similar Documents

Publication Publication Date Title
CN115134245B (en) Network apparatus, method, computerized system, and machine readable storage medium
US8676917B2 (en) Administering an epoch initiated for remote memory access
US8325633B2 (en) Remote direct memory access
EP3837604B1 (en) In situ triggered function as a service within a service mesh
US11922304B2 (en) Remote artificial intelligence (AI) acceleration system
US8266630B2 (en) High-performance XML processing in a common event infrastructure
CN109992405A (en) A method and network card for processing data messages
CN114153778A (en) Cross-network bridging
US20050091334A1 (en) System and method for high performance message passing
CN115858103B (en) Method, device and medium for virtual machine hot migration of open stack architecture
CN115934623B (en) Data processing method, device and medium based on remote direct memory access
CN112787999A (en) Cross-chain calling method, device, system and computer readable storage medium
CN115202573A (en) Data storage system and method
CN116049085A (en) Data processing system and method
Eran et al. Flexdriver: A network driver for your accelerator
CN117527654B (en) Method and system for analyzing network traffic packet
CN113157445B (en) Bidirectional message symmetric RSS processing method and system based on Hash operation and index value comparison
CN114595080A (en) Data processing method and device, electronic equipment and computer readable storage medium
Chen et al. High‐performance user plane function (UPF) for the next generation core networks
Ekane et al. Networking in next generation disaggregated datacenters
CN116340246B (en) Data pre-reading method and medium for direct memory access read operation
Yamamoto et al. Tinet+ tecs: Component-based tcp/ip protocol stack for embedded systems
CN117573602B (en) Method and computer device for remote direct memory access message transmission
KR20190041954A (en) Method for processing input and output on multi kernel system and apparatus for the same
CN115604198B (en) Network card controller, network card control method, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant