CN113824706B - Message parsing method and network equipment


Info

Publication number: CN113824706B
Authority: CN (China)
Prior art keywords: message, module, layer, data, protocol
Legal status: Active (granted)
Application number: CN202111062263.1A
Other languages: Chinese (zh)
Other versions: CN113824706A
Inventors: 刘彦静, 王明超, 唐世光
Current Assignee: Hangzhou DPtech Information Technology Co Ltd
Original Assignee: Hangzhou DPtech Information Technology Co Ltd
Application filed by Hangzhou DPtech Information Technology Co Ltd
Priority to CN202111062263.1A
Publication of CN113824706A
Publication of CN113824706B (grant)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L69/00: Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/22: Parsing or analysis of headers
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/50: Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Abstract

The disclosure provides a message parsing method and a network device. The method includes the following steps: an interface module receives a target message, where the target message includes the message headers of a plurality of protocol layers nested in sequence; a data storage module caches the target message output by the interface module and provides the starting storage address of the target message to a parsing calculation module; and the parsing calculation module parses at least one layer of message header of the target message.

Description

Message parsing method and network equipment
Technical Field
The disclosure relates to the technical field of Internet communication, and in particular to a message parsing method and a network device.
Background
In the related art, a processing module is formed by configuring a programmable logic device built into a network device, and this processing module performs the entire parsing of a message. A message is generally encapsulated with multiple protocol layers; the processing module determines the protocol type of each layer in turn and parses the corresponding message content through the logic resources dedicated to that protocol type, until all layers have been parsed.
Because the logic resources for each protocol type are independent, adding protocol types requires adding corresponding logic resources to the processing module. As a result, the monolithic parsing approach in the related art causes the processing module to occupy a large amount of logic resources, while the utilization rate of the logic resources dedicated to each protocol type remains low.
Disclosure of Invention
In view of this, the disclosure provides a method for parsing a message and a network device, so as to solve the deficiencies in the related art.
Specifically, the present disclosure is implemented by the following technical solutions:
according to a first aspect of the present disclosure, there is provided a method for parsing a message, the method being applied to a network device provided with a programmable logic device, where the programmable logic device is configured to form an interface module, a parsing calculation module and a data storage module; the method comprises the following steps:
the interface module receives a target message, wherein the target message comprises message heads of a plurality of protocol layers which are nested in sequence;
the data storage module caches the target message output by the interface module and provides the initial storage address of the target message to the analysis and calculation module;
the analysis and calculation module is used for analyzing at least one layer of message header of the target message; the process of analyzing the message header of any layer by the analysis calculation module comprises the following steps:
sending the sum of the initial storage address and the length of the message header before any layer to the data storage module, so that the data storage module reads the message data of any layer according to the received sum of the initial storage address and the length of the message header and returns the message data to the analysis and calculation module;
analyzing the message data of any layer to determine the length of the message header of any layer.
According to a second aspect of the present disclosure, there is provided a network device, in which a programmable logic device is disposed, and the programmable logic device is configured to form an interface module, an analysis calculation module, and a data storage module; wherein:
the interface module is used for receiving a target message, wherein the target message comprises message heads of a plurality of protocol layers which are nested in sequence;
the data storage module is used for caching the target message output by the interface module and providing a starting storage address of the target message to the analysis and calculation module;
the analysis and calculation module is used for analyzing at least one layer of message header of the target message; the process of analyzing the message header of any layer by the analysis calculation module comprises the following steps:
sending the sum of the initial storage address and the length of the message header before any layer to the data storage module, so that the data storage module reads the message data of any layer according to the received sum of the initial storage address and the length of the message header and returns the message data to the analysis and calculation module;
analyzing the message data of any layer to determine the length of the message header of any layer.
According to a third aspect of the present disclosure, there is provided a network device comprising, a programmable logic device;
a memory for storing configuration files;
wherein the programmable logic device implements the steps of the method of the first aspect by running the configuration file.
According to a fourth aspect of the present disclosure, there is provided a computer readable storage medium having stored therein a computer program which, when executed by a processor, implements the steps of the method of the first aspect.
The technical solutions provided by the embodiments of the disclosure may have the following beneficial effects:
In the embodiments of the disclosure, unified processing logic corresponding to each protocol type is abstracted and implemented through the interactive cooperation of a plurality of modules configured on the programmable logic device. In this way, the message data of each layer can be parsed sequentially and iteratively, and because the processing logic is the same for every layer, the same logic resources on the programmable logic device can be reused. The logic resources occupied therefore do not grow as protocol types are added, and the utilization rate of the logic resources of the programmable logic device is greatly improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a block diagram of a programmable logic device shown in an embodiment of the present disclosure;
FIG. 2 is a flow chart of a message parsing method according to an embodiment of the disclosure;
FIG. 3 is a schematic diagram of a starting memory address and header length according to an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a module interface interaction shown in an embodiment of the present disclosure;
FIG. 5 is a block diagram of a programmable logic device incorporating a data preprocessing module according to an embodiment of the present disclosure;
FIG. 6 is a flow chart of a parse computation module sub-module shown in an embodiment of the disclosure;
FIG. 7 is a schematic diagram of sub-module pipelined parallel processing shown in an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a network device according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
The terminology used in the present disclosure is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used in this disclosure and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in this disclosure to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present disclosure. The word "if" as used herein may be interpreted as "when", "upon", or "in response to determining", depending on the context.
The OSI (Open Systems Interconnection) model divides network communication protocols into seven layers: the physical layer, data link layer, network layer, transport layer, session layer, presentation layer, and application layer, and each layer includes a plurality of protocol types. Before a message is sent, its message headers must be encapsulated in order from the higher layers to the lower layers according to this hierarchical relationship, so that the header of the highest-layer protocol is encapsulated innermost and the header of the lowest-layer protocol sits outermost. Because of this hierarchical relationship between protocols, except for the outermost protocol header, the protocol of any inner layer is unknown until the header of the preceding (outer) layer has been parsed, and can only be confirmed after that header is parsed. Therefore, after receiving a message, the network device needs to parse the header of each protocol layer in sequence from the outside inward.
As shown in Table 1 below, assuming that the data to be transmitted is a payload, the packet encapsulation process is as follows. The TCP (Transmission Control Protocol) header of the highest layer, protocol layer 5, is encapsulated first to form TCP message data, which includes the TCP header and the payload. Then, with the TCP message data as the new payload, the IPv6 (Internet Protocol version 6) header of protocol layer 4 is encapsulated to form IPv6 message data, which includes the IPv6 header and the TCP message data as its payload. Further, with the IPv6 message data as the new payload, the GRE (Generic Routing Encapsulation) header of protocol layer 3 is encapsulated to form GRE message data, which includes the GRE header and the IPv6 message data as its payload. Similarly, with the GRE message data as the new payload, the IPv4 (Internet Protocol version 4) header of protocol layer 2 is encapsulated to form IPv4 message data, which includes the IPv4 header and the GRE message data as its payload. Finally, with the IPv4 message data as the new payload, the Ethernet header of protocol layer 1 is encapsulated to form Ethernet message data, which includes the Ethernet header and the IPv4 message data as its payload.
TABLE 1
Protocol layer 1 (outermost): Ethernet header
Protocol layer 2: IPv4 header
Protocol layer 3: GRE header
Protocol layer 4: IPv6 header
Protocol layer 5 (innermost): TCP header
Payload: application data
(each header treats everything nested inside it as its message payload)
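To make the nesting in Table 1 concrete, the following is a minimal Python sketch of the encapsulation order; the byte-string header stand-ins and the helper name are illustrative assumptions, not real protocol encoders from the patent.

    # Illustrative sketch only: builds the nested structure of Table 1 with
    # simplified placeholder "headers" rather than real protocol encoders.
    def encapsulate(payload: bytes) -> bytes:
        tcp_data  = b"[TCP hdr]"  + payload     # protocol layer 5 (innermost)
        ipv6_data = b"[IPv6 hdr]" + tcp_data    # protocol layer 4
        gre_data  = b"[GRE hdr]"  + ipv6_data   # protocol layer 3
        ipv4_data = b"[IPv4 hdr]" + gre_data    # protocol layer 2
        eth_data  = b"[Eth hdr]"  + ipv4_data   # protocol layer 1 (outermost)
        return eth_data

    message = encapsulate(b"application data")
    # Parsing must therefore proceed from the outside inward: Ethernet, IPv4, GRE, IPv6, TCP.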
The network device implements message parsing through a programmable logic device. The programmable logic device may be, for example, an FPGA (Field Programmable Gate Array) or another similar device. For ease of understanding, an FPGA is used as an example below. Unlike a CPU or GPU, an FPGA does not follow the von Neumann architecture; it is an architecture without instructions and without shared memory. The function of each logic unit of the FPGA is determined at programming time and requires no instructions; meanwhile, the registers and the on-chip block RAM (BRAM) belong to their respective control logic, so no unnecessary arbitration or buffering is needed. The FPGA therefore has strong computing power together with sufficient flexibility.
In the related art, an interface module and a processing module are formed on the FPGA through configuration. The processing module includes logic resources corresponding to each protocol type, such as the Ethernet, IPv4, GRE, IPv6, and TCP protocols mentioned above. After receiving the target message, the interface module passes it to the processing module, which parses the headers of each layer in sequence from the outside inward. For each layer of header, the processing module must determine the protocol type from the type identifier and then perform the parsing operation through the logic resources corresponding to that protocol type. Taking Table 1 as an example, the processing module first determines that the outermost layer is the Ethernet protocol, parses the Ethernet header through the logic resources corresponding to the Ethernet protocol, and thereby determines that the next layer is the IPv4 protocol; it then parses the IPv4 header through the logic resources corresponding to the IPv4 protocol, and so on until the TCP header is parsed. As the number of message protocol types increases, corresponding logic resources must be added to the processing module to complete the overall parsing of the message, so the processing module needs to occupy a large amount of logic resources.
Therefore, the present disclosure proposes a new message parsing scheme. The parsing process is divided among different modules, each module handles a different task, and the results are finally integrated and output. Message parsing is completed through the direct interaction and cooperation of the different modules configured on the programmable logic device. This module-based, iterative way of classifying and processing messages reuses the logic resources and storage resources of the programmable logic device, improves the utilization rate of its storage and logic resources, and avoids calling up additional logic and storage resources.
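The core idea can be summarized as one loop that never changes with the protocol type: keep a running offset equal to the sum of the header lengths already parsed, read the next header at the starting storage address plus that offset, determine this header's length and the next protocol, and repeat. The Python sketch below is a hedged illustration of that unified logic; the parser table, its field decoding, and the stop condition are simplifying assumptions, not the patent's exact implementation.

    # Hedged sketch of the unified, protocol-independent parsing loop.
    # Each per-protocol entry returns (header_length, next_protocol, extracted_fields);
    # only these small entries differ per protocol -- the loop itself is shared.
    HEADER_PARSERS = {
        "ethernet": lambda h: (14, "ipv4", {}),            # assumed: next layer is IPv4
        "ipv4": lambda h: ((h[0] & 0x0F) * 4, "gre",
                           {"src_ip": h[12:16], "dst_ip": h[16:20]}),
    }

    def parse(message: bytes, first_protocol: str = "ethernet"):
        offset, proto, layers = 0, first_protocol, []
        while proto in HEADER_PARSERS:                 # stop when no parser is registered
            header = message[offset:]                  # read at start address + offset
            length, next_proto, fields = HEADER_PARSERS[proto](header)
            layers.append((proto, offset, length, fields))
            offset += length                           # accumulate parsed header lengths
            proto = next_proto
        return layers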
Embodiments of a message parsing method and a network device of the present disclosure are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic block diagram of a programmable logic device according to an embodiment of the disclosure. The programmable logic device is configured on a network device to form the plurality of functional modules shown in fig. 1, including: an interface module 110, a data storage module 120, and a parsing calculation module 130. The programmable logic device of the present disclosure may be, for example, the FPGA described above, but other types of chips may also be used, which is not limited by the present disclosure. The functional modules on the programmable logic device cooperate with each other to implement the message parsing scheme of the disclosure. The cooperation among the above functional modules is described below with reference to fig. 2, taking an FPGA as an example.
Fig. 2 is a flowchart of a message parsing method according to an embodiment of the disclosure. As shown in fig. 2, the method may include the steps of:
in step 210, the interface module 110 receives a target message.
The target message may be any message received by the network device that can be parsed using the technical solution of the present disclosure, which the present disclosure does not limit. For example, the target message may be a GRE message with the structure shown in Table 1; the target message is encapsulated with 5 protocol layers, that is, it sequentially contains the 5 headers shown in Table 1.
In step 220, the data storage module 120 caches the target message output by the interface module 110, and provides the initial storage address of the target message to the parsing calculation module 130.
The cache architecture employed by the data storage module 120, such as RAM (Random Access Memory) or another type of cache architecture, is not limited in this disclosure. RAM can be divided into single-port RAM, pseudo dual-port RAM, and dual-port RAM. A single-port RAM has only one set of data lines and address lines, so it cannot be read and written at the same time. A dual-port RAM has two sets of data lines and can be read and written simultaneously. In a pseudo dual-port RAM, one port is read-only and the other is write-only, and each of them can address the memory cells.
The starting storage address refers to the initial storage address of the target message in the data storage module 120; the protocol layers of the same message share the same starting storage address. Fig. 3 shows a specific example of the starting storage address and the header lengths.
In step 230, the parsing calculation module 130 is configured to parse at least one layer of header of the target message. The process by which the parsing calculation module 130 parses any layer of header includes:
sending the sum of the initial storage address and the lengths of the headers before that layer to the data storage module 120, so that the data storage module 120 reads the message data of that layer according to the received sum of the initial storage address and header lengths and returns it to the parsing calculation module 130; and
parsing the message data of that layer to determine the length of the header of that layer.
the parsing and calculating module 130 parses the at least one layer of header according to the technical solution of the present disclosure, where the at least one layer of header may be a header of a portion of a protocol of the target message, or may be a header of all protocols of the target message. When the header of at least one layer is the header of a part of the protocols of the target message, the headers of the rest protocols can be analyzed in a mode in related technology.
Parsing any layer of header involves dynamic interaction between the data storage module 120 and the parsing calculation module 130. In the embodiment shown in fig. 4, for example, the data storage module 120 includes a storage address output interface, an offset input interface, an extraction identifier input interface, and a message data output interface, and the parsing calculation module 130 includes a storage address input interface, an offset output interface, an extraction identifier output interface, and a message data input interface. The storage address output interface of the data storage module 120 is connected to the storage address input interface of the parsing calculation module 130; the offset output interface of the parsing calculation module 130 is connected to the offset input interface of the data storage module 120; the extraction identifier output interface of the parsing calculation module 130 is connected to the extraction identifier input interface of the data storage module 120; and the message data output interface of the data storage module 120 is connected to the message data input interface of the parsing calculation module 130. Through these interfaces, the data storage module 120 and the parsing calculation module 130 can interact dynamically to parse the message data.
When the parsing calculation module 130 sends the sum of the initial storage address and the lengths of the headers before any layer of the target message to the data storage module 120, it may splice the initial storage address together with the accumulated header length and send the result through its offset output interface to the offset input interface of the data storage module 120. After the data storage module 120 reads the message data of that layer according to the received starting storage address and header-length sum, it sends the message data of that layer through its message data output interface to the message data input interface of the parsing calculation module 130.
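As a hedged illustration of this interaction, the sketch below packs the starting storage address and the accumulated header length into a single word for the offset interface and shows how the data storage module could unpack it to locate the current layer; the 16-bit field widths and the 64-byte read size are assumptions for illustration only, since the patent does not fix them.

    # Hedged sketch: one possible "splice" of starting address and offset into one word.
    ADDR_BITS = 16                                    # assumed field width, not from the patent

    def splice(start_addr: int, offset: int) -> int:
        return (start_addr << ADDR_BITS) | offset     # word sent over the offset output interface

    def read_layer(ram: bytes, spliced_word: int, read_size: int = 64) -> bytes:
        start_addr = spliced_word >> ADDR_BITS        # unpacked on the data storage module side
        offset = spliced_word & ((1 << ADDR_BITS) - 1)
        pos = start_addr + offset                     # where the current layer's header begins
        return ram[pos:pos + read_size]               # returned via the message data interface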
In an embodiment, when the protocol type adopted by any layer is a first type of protocol that contains five-tuple data, the parsing calculation module 130 may generate an extraction identifier and send it to the data storage module 120, so that the data storage module 120 processes and stores the message data of that layer based on the extraction identifier. The extraction identifier output interface and the extraction identifier input interface can be used to transmit the extraction identifier, which is not limited in this disclosure. The first type of protocol here may include the IPv4, IPv6, and TCP protocols, among others, and the present disclosure is not limited thereto. Taking the IPv4 protocol as an example: when determining that the layer to be parsed is the IPv4 layer, the parsing calculation module 130 generates an extraction identifier Flag for the IPv4 protocol and sends it through the extraction identifier output interface to the extraction identifier input interface, so that after reading the IPv4 message data, the data storage module 120 extracts the IP address data from the IPv4 message data and stores it in the RAM.
In another embodiment, when the protocol type adopted by any layer is a second type of protocol without five-tuple data, the parsing calculation module 130 may generate a type identifier and store it in the general data register. The second type of protocol here may include the GRE, VXLAN, and GTP protocols, among others, although the disclosure is not limited in this regard. Taking the GRE protocol as an example: when determining that the layer to be parsed is the GRE layer, the parsing calculation module 130 does not generate an extraction flag for the GRE protocol data; instead, it generates GRE identification information, stores it in the general data register, and outputs the general data register once the entire target message has been parsed.
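A hedged sketch of these two handling paths is shown below; the protocol grouping follows the examples in the text, while the function shape and names are illustrative assumptions.

    # Hedged sketch of per-layer handling: extraction flag for five-tuple protocols,
    # type identifier in the general data register otherwise.
    FIVE_TUPLE_PROTOCOLS = {"ipv4", "ipv6", "tcp"}     # first type: carry five-tuple data
    NO_FIVE_TUPLE_PROTOCOLS = {"gre", "vxlan", "gtp"}  # second type: tunnel/other headers

    def handle_layer(proto, general_register):
        if proto in FIVE_TUPLE_PROTOCOLS:
            # generate an extraction identifier; the data storage module uses it to pull
            # the five-tuple fields (addresses / ports) out of this layer and keep them in RAM
            return f"flag_{proto}"
        if proto in NO_FIVE_TUPLE_PROTOCOLS:
            # no extraction identifier; just record the type identifier until parsing ends
            general_register[proto] = True
        return None

    reg = {}
    handle_layer("ipv4", reg)   # -> "flag_ipv4"
    handle_layer("gre", reg)    # -> None, reg becomes {"gre": True}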
Of course, since the message data obtained by the parsing calculation module 130 is ultimately the target message itself, the interface module 110 may instead send a copy of the target message directly to the parsing calculation module 130. In that case, in addition to the processing logic described in step 230 above, the parsing calculation module 130 needs to be configured with extra processing logic for receiving the target message and parsing the outermost header.
In order to simplify the processing logic of the parsing calculation module 130 and improve the parsing efficiency for the outermost headers, the present disclosure may further configure a data preprocessing module 540, as shown in fig. 5, on the FPGA. The data preprocessing module 540 preprocesses the target message output by the interface module so as to parse and determine the outer-layer header length and protocol type of the target message. When any layer is the first layer of the inner-layer message data, the header length of the preceding (outer) layer of message data, obtained by the data preprocessing module 540 through this preprocessing, is provided to the parsing calculation module 130.
The data preprocessing module 540 is configured to preprocess the target message output by the interface module 110, so as to parse and determine the outer-layer protocol types of the target message and the header length of the data link layer. The outer layer of a message comprises the data link layer or the MPLS layer, and may contain a single-layer protocol or a mixture of multiple layers; the protocol types include, but are not limited to, the Ethernet protocol, the VLAN (Virtual Local Area Network) protocol, and the MPLS (Multi-Protocol Label Switching) routing protocol. Because the data link layer or MPLS layer is simpler than the network-layer protocols, the data preprocessing module 540 can process these protocols in advance to increase the parsing rate of the message. The result of the data preprocessing is sent to the parsing calculation module 130.
When the parsing calculation module 130 processes the inner-layer message data of the target message, the data preprocessing module 540 provides the header length of the outer-layer message data. In this way, the data preprocessing module 540 handles the simpler data link layer protocols and uses only a small amount of logic resources to parse these simple protocol types. This reduces the logic resources used inside the parsing calculation module 130, avoids parsing simple protocol types with complex logic operations, reduces the waste of logic resources, and increases the message parsing rate.
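The following is a minimal sketch of what such a preprocessing step could look like in software, assuming standard EtherType values; the VLAN/MPLS handling and the assumption that IPv4 follows the MPLS bottom-of-stack label are simplifications for illustration and are not taken from the patent.

    # Hedged sketch of data preprocessing for the simpler outer layers (Ethernet / VLAN / MPLS).
    ETHERTYPE = {0x0800: "ipv4", 0x86DD: "ipv6", 0x8100: "vlan", 0x8847: "mpls"}

    def preprocess(frame: bytes) -> dict:
        offset = 12                                          # skip destination + source MAC
        etype = int.from_bytes(frame[offset:offset + 2], "big")
        offset += 2
        while ETHERTYPE.get(etype) == "vlan":                # peel 802.1Q tags
            etype = int.from_bytes(frame[offset + 2:offset + 4], "big")
            offset += 4
        if ETHERTYPE.get(etype) == "mpls":                   # walk the MPLS label stack
            bottom = False
            while not bottom:
                label = int.from_bytes(frame[offset:offset + 4], "big")
                bottom = bool(label & 0x100)                 # bottom-of-stack bit
                offset += 4
            etype = 0x0800                                   # simplification: assume IPv4 follows
        # result handed to the parsing calculation module:
        return {"outer_header_length": offset, "inner_first_protocol": ETHERTYPE.get(etype)}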
The parsing calculation module 130 may adopt processing logic similar to a pipeline operation. Because this pipelined processing logic is applicable to the parsing of every layer of header, the parsing process for each layer can reuse it, which greatly improves the logic resource utilization of the parsing calculation module 130. The parsing calculation module 130 may include N sub-modules connected in series in sequence, where each sub-module occupies part of the logic resources on the FPGA to perform a predefined pipeline function, and the N sub-modules jointly parse any layer of header; N is an integer greater than 1. For example, as shown in fig. 6, in an embodiment where N is 6, the parsing of any layer of the message is completed through the cooperation of 6 sub-modules; of course, N may in theory be any integer greater than 1, and the present disclosure does not limit the specific value of N.
The processing logic implemented by the parsing calculation module 130 may be divided into several parts in any manner, with each part formed as one of the sub-modules described above. The number of sub-modules and the logic function implemented by each sub-module therefore vary with the division adopted, which the present disclosure cannot exhaust and does not limit. For ease of understanding, fig. 6 provides an exemplary division in which the parsing calculation module 130 is split into 6 sub-modules, A-F, connected in series: sub-module A receives the target message; sub-module B outputs an offset and a storage identifier, where the offset is the sum of the lengths of the headers already parsed, and the offset output by sub-module B contains data obtained by splicing the offset with the initial storage address; sub-module C caches the next layer of message data; sub-module D calculates the header length; sub-module E identifies the protocol type of the next layer; and sub-module F outputs the parsing result. Each sub-module can complete its pipeline function within one clock cycle, and the present disclosure does not limit the time each sub-module takes.
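As a hedged illustration of this division, the sketch below models the six stages as Python functions operating on a shared state record; the state fields, the placeholder decoders, and the 64-byte read are illustrative assumptions, and on the FPGA each stage would be hardware logic completing in one clock cycle rather than a function call.

    # Hedged sketch of the A-F pipeline stages from Fig. 6 as plain functions.
    def header_length(header, proto):      # placeholder per-protocol length decoding
        return (header[0] & 0x0F) * 4 if proto == "ipv4" else 8

    def next_protocol(header, proto):      # placeholder per-protocol "next type" decoding
        return {"ipv4": "gre", "gre": "ipv6", "ipv6": "tcp", "tcp": None}.get(proto)

    def stage_a(state):                    # decide whether a new message starts this pass
        return state

    def stage_b(state):                    # emit start address + accumulated offset (and any flag)
        state["read_request"] = state["start_addr"] + state["offset"]
        return state

    def stage_c(state, ram):               # buffer the bytes returned by the data storage module
        pos = state["read_request"]
        state["header"] = ram[pos:pos + 64]
        return state

    def stage_d(state):                    # compute this layer's header length
        state["hdr_len"] = header_length(state["header"], state["proto"])
        return state

    def stage_e(state):                    # identify the next layer's protocol type
        state["next_proto"] = next_protocol(state["header"], state["proto"])
        return state

    def stage_f(state):                    # last layer? output; otherwise feed back to stage A
        state["offset"] += state["hdr_len"]
        state["proto"] = state["next_proto"]
        state["done"] = state["next_proto"] is None
        return state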
The process of implementing message parsing by the parsing calculation module 130 of the present disclosure through the processing logic of the pipeline operation will be described below with reference to fig. 5-6 by taking the target message with the structure shown in table 1 as an example.
First, the interface module receives the GRE target message and sends it to the data storage module 120 and the data preprocessing module 540, respectively. The GRE target message contains 5 protocol layers, i.e., 5 message headers. In the embodiment of the disclosure, the 5 protocol layers are: protocol layer 1 is the Ethernet protocol, protocol layer 2 is the IPv4 protocol, protocol layer 3 is the GRE protocol, protocol layer 4 is the IPv6 protocol, and protocol layer 5 is the TCP protocol.
Resolution procedure for protocol layer 1:
the data preprocessing module 540 preprocesses the GRE target message, recognizes that the data link layer uses the Ethernet protocol and parses it, determines that protocol layer 2 is the IPv4 protocol, and calculates the length of the Ethernet header, i.e., offset 1. The data preprocessing module 540 sends the parsing result to the parsing calculation module 130.
The data storage module 120 caches the received GRE target message, marks the initial storage address as a1, and outputs the storage address a1 to the parsing calculation module 130.
Resolution procedure for protocol layer 2:
2-1, sub-module A judges whether the parsing of the previous target message has finished; if so, it receives the starting storage address and the preprocessing result of the new target message and stores them in the general data register, and if not, the general data register is simply buffered for one beat. For the GRE message in this example, it is judged that parsing has not finished, so the data in the general data register is buffered for one beat.
2-2, sub-module B processes the data received from sub-module A: since the protocol type of protocol layer 2 is the IPv4 protocol, sub-module B generates IP address extraction identification information flag1 and sends flag1, the starting storage address a1, and offset 1 to the data storage module 120.
Accordingly, the data storage module 120 extracts the message data of the IPv4 layer according to the storage address a1 and the message header length (offset 1) provided by the sub-module B, and returns the extracted message data to the parsing calculation module 130. And the data storage module 120 extracts the IP address data in the IPv4 layer message data according to the identification information flag1 provided by the sub-module B, and stores the IP address data in the RAM.
2-3, the sub-module C receives the IPv4 layer packet data provided by the data storage module 120 and caches one beat.
2-4, sub-module D calculates the length of the IPv4 layer header according to the value of the IHL (Internet Header Length) field in the IPv4 layer message data, and stores it in the general data register.
2-5, sub-module E recognizes that the next layer is the GRE protocol according to the Protocol field of the IPv4 layer message data and stores this in the general data register.
2-6, sub-module F judges whether the current protocol layer is the last layer; if so, it generates a parsing-end identifier and outputs the data stored in the general data register, and if not, it only buffers the general data register for one beat. In the embodiment of the disclosure, since the current protocol layer is IPv4 and is not the last layer, sub-module F buffers one beat and the flow returns to sub-module A to parse the GRE layer.
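For the IPv4 layer specifically, steps 2-4 and 2-5 rely on two standard fields: the IHL field gives the header length in 32-bit words, and the Protocol field (byte 9, where GRE is protocol number 47) gives the next layer's type. The sketch below shows that calculation; the surrounding function shape is an illustrative assumption.

    # Hedged sketch of the IPv4-layer work of sub-modules D and E.
    IP_PROTO = {6: "tcp", 17: "udp", 41: "ipv6", 47: "gre"}   # standard IANA protocol numbers

    def ipv4_header_info(ipv4_bytes: bytes):
        ihl_words = ipv4_bytes[0] & 0x0F          # IHL is the low nibble of the first byte
        header_length = ihl_words * 4             # IHL counts 32-bit words
        next_proto = IP_PROTO.get(ipv4_bytes[9])  # Protocol field identifies the next layer
        return header_length, next_proto

    # A minimal 20-byte header (IHL = 5) with Protocol = 47 yields (20, "gre").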
Resolution procedure for protocol layer 3:
3-1, sub-module A judges whether the parsing of the previous message has finished; if so, it receives the address and the preprocessing result of the new message and stores them in the general data register, and if not, the general data register is buffered for one beat. For the GRE message in this example, it is judged that parsing has not finished, so the general data register is buffered for one beat.
3-2, sub-module B processes the data received from sub-module A: it extracts the data in the general data register and finds that the protocol type of protocol layer 3 is the GRE protocol, so sub-module B does not generate extraction identification information. The sum of the header lengths of protocol layer 1 and protocol layer 2 is recorded as offset 2, and sub-module B sends the starting storage address a1 and offset 2 to the data storage module 120, while the parsing calculation module 130 adds GRE identification information to the general data register.
Accordingly, the data storage module 120 extracts the message data of the protocol layer 3 according to the storage address a1 and the message header length (offset 2) provided by the sub-module B, and returns the extracted message data to the parsing calculation module 130.
3-3, the sub-module C receives the GRE layer message data provided by the data storage module 120 and caches one beat.
3-4, the submodule D analyzes and calculates the length of the GRE layer message header according to the GRE layer message data, and stores the GRE layer message header length in a general data register.
3-5, the submodule E recognizes that the next layer is an IPv6 protocol according to GRE layer message data and stores the next layer in a general data register.
3-6, the submodule F judges whether the current protocol layer is the last layer, if so, the analysis ending mark is generated, and meanwhile, the data stored in the general data register is output. If not the last layer, only the general data register is cached for one beat. In the embodiment of the disclosure, since the current protocol layer is the GRE protocol and is not the last layer, the sub-module F caches one beat and returns to the sub-module a to perform message parsing of the IPv6 layer.
Resolution procedure for protocol layer 4:
4-1, sub-module A judges whether the parsing of the previous message has finished; if so, it receives the address and the preprocessing result of the new message and stores them in the general data register, and if not, the general data register is buffered for one beat. For the GRE message in this example, it is judged that parsing has not finished, so the general data register is buffered for one beat.
4-2, sub-module B processes the data received from sub-module A: it extracts the data in the general data register, finds that the protocol type of protocol layer 4 is the IPv6 protocol, and generates inner-layer IP address extraction identification information flag2. Sub-module B records the sum of the header lengths of protocol layer 1, protocol layer 2, and protocol layer 3 as offset 3, and sends flag2, the starting storage address a1, and offset 3 to the data storage module 120, while the parsing calculation module 130 adds the IPv6 layer length and IP type data to the general data register.
Accordingly, the data storage module 120 extracts the IPv6 layer packet data according to the storage address a1 and the packet header length (offset 3) provided by the sub-module B, and returns the extracted IPv6 layer packet data to the parsing calculation module 130. And, the data storage module 120 extracts the IP data in the IPv6 layer packet data according to the identification information flag2 provided by the sub-module B, and stores the IP data in the RAM.
4-3, the sub-module C receives the IPv6 layer packet data provided by the data storage module 120 and caches one beat.
4-4, the submodule D analyzes and calculates the length of the IPv6 layer message header according to the IPv6 layer message data and stores the length in a general data register.
4-5, the submodule E recognizes that the next layer is TCP protocol according to the IPv6 layer message data and stores the TCP protocol in a general data register.
4-6, the submodule F judges whether the current protocol layer is the last layer, if yes, generates the analysis end mark, and outputs the data stored in the general data register, and the data storage module 120 extracts and stores the useful information of each protocol layer. If not the last layer, only the general data register is cached for one beat. In the embodiment of the disclosure, since the current protocol layer is IPv6 and is not the last layer, the sub-module F caches one beat and returns to the sub-module a to perform message parsing of the TCP layer.
Resolution procedure for protocol layer 5:
5-1, sub-module A judges whether the parsing of the previous message has finished; if so, it receives the address and the preprocessing result of the new message and stores them in the general data register, and if not, the general data register is buffered for one beat. For the GRE message in this example, it is judged that parsing has not finished, so the general data register is buffered for one beat.
5-2, sub-module B processes the data received from sub-module A: it extracts the data in the general data register, finds that the protocol type of protocol layer 5 is the TCP protocol, and generates port data extraction identification information flag3. Sub-module B records the sum of the header lengths of protocol layer 1, protocol layer 2, protocol layer 3, and protocol layer 4 as offset 4, and sends flag3, the starting storage address a1, and offset 4 to the data storage module 120.
Accordingly, the data storage module 120 extracts the TCP packet data according to the storage address a1 and the packet header length (offset 4) provided by the sub-module B, and returns the TCP packet data to the parsing calculation module 130. And, the data storage module 120 adds port data of the TCP layer in the RAM according to the identification information flag3 provided by the sub-module B.
5-3, the sub-module C receives the TCP layer packet data provided by the data storage module 120 and caches one beat.
5-4, the submodule D analyzes and calculates the length of the TCP layer message header according to the TCP layer message data and stores the length in a general data register.
5-5, sub-module E recognizes from the TCP layer message data that the current layer is the last layer.
5-6, the submodule F judges whether the current protocol layer is the last layer, if so, the analysis ending mark is generated, meanwhile, the data stored in the general data register is output, and the data storage module 120 extracts and stores the useful information of each protocol layer. If not the last layer, only the general data register is cached for one beat. In the embodiment of the disclosure, since the TCP layer is the last layer, the submodule F generates the analysis end identifier, and outputs the data stored in the general data register, and the data storage module 120 extracts and stores the useful information of each protocol layer.
Thus, the parsing of the GRE target message is completed, and the data storage module 120 outputs the data stored in the general data register, and in the embodiment of the present disclosure, the output data is five-tuple data of the GRE message.
When the above N sub-modules are connected in series to implement the pipeline operation, because they work serially, the N sub-modules never process the same message data packet at the same time while parsing the same layer of header. Therefore, during the parsing of any layer of header of the target message, once a given sub-module has completed its pipeline function and before the next layer of header of the target message is parsed, that sub-module can be used to parse other messages without affecting the parsing of the target message. This allows multiple messages to be parsed in parallel and improves parsing efficiency. For example, when each sub-module completes its pipeline function in M clock cycles, the N sub-modules can process N target messages received by the interface module in parallel. When the same sub-module processes the N target messages in turn, the processing times of consecutive target messages differ by M clock cycles, where M is a non-negative integer.
Take N=6 and M=1 as an example. As shown in fig. 7:
in the 1st clock cycle, sub-module A parses the a1 layer of target message 1.
In the 2nd clock cycle, sub-module B parses the a1 layer of target message 1, and sub-module A parses the b1 layer of target message 2.
In the 3rd clock cycle, sub-module C parses the a1 layer of target message 1, sub-module B parses the b1 layer of target message 2, and sub-module A parses the c1 layer of target message 3.
In the 4th clock cycle, sub-module D parses the a1 layer of target message 1, sub-module C parses the b1 layer of target message 2, sub-module B parses the c1 layer of target message 3, and sub-module A parses the d1 layer of target message 4.
In the 5th clock cycle, sub-module E parses the a1 layer of target message 1, sub-module D parses the b1 layer of target message 2, sub-module C parses the c1 layer of target message 3, sub-module B parses the d1 layer of target message 4, and sub-module A parses the e1 layer of target message 5.
In the 6th clock cycle, sub-module F parses the a1 layer of target message 1, sub-module E parses the b1 layer of target message 2, sub-module D parses the c1 layer of target message 3, sub-module C parses the d1 layer of target message 4, sub-module B parses the e1 layer of target message 5, and sub-module A parses the f1 layer of target message 6.
In the 7th clock cycle, sub-module A parses the a2 layer of target message 1, sub-module F parses the b1 layer of target message 2, sub-module E parses the c1 layer of target message 3, sub-module D parses the d1 layer of target message 4, sub-module C parses the e1 layer of target message 5, and sub-module B parses the f1 layer of target message 6.
Similarly, the processing procedure of the subsequent clock cycle is not repeated.
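The schedule in fig. 7 can be stated compactly: with N = 6 sub-modules and M = 1 clock cycle per stage, in clock cycle t sub-module number i (counting A as 0) works on the target message that entered the pipeline at cycle t - i. The sketch below reproduces that schedule; the representation as a Python function is purely illustrative.

    # Hedged sketch of the Fig. 7 schedule for N = 6, M = 1.
    SUBMODULES = ["A", "B", "C", "D", "E", "F"]

    def schedule(clock_cycle: int) -> dict:
        """Which target message each sub-module works on in a given clock cycle."""
        plan = {}
        for i, name in enumerate(SUBMODULES):
            msg = clock_cycle - i                  # the message that entered i cycles earlier
            if msg >= 1:
                plan[name] = f"target message {msg}"
        return plan

    # schedule(3) -> {"A": "target message 3", "B": "target message 2", "C": "target message 1"}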
Fig. 8 is a schematic structural diagram of a network device according to the present specification. Referring to fig. 8, at the hardware level, the network device includes a programmable logic device 810, a network interface 820, a nonvolatile memory 830 storing a configuration file 831, and an internal bus 840, and may also include hardware required by other services. The programmable logic device 810 reads the corresponding configuration file 831 from the nonvolatile memory 830 and runs it, so that an interface module, a parsing calculation module, and a data storage module (and optionally a data preprocessing module) are formed on the programmable logic device 810 and cooperate with each other to implement the foregoing embodiments; the detailed implementation of the corresponding steps has been described in the foregoing methods and is not repeated here.
The disclosure also provides a computer readable storage medium having instructions stored therein, which when run on a computer, cause the computer to perform the method of message parsing according to any of the above embodiments.
According to the technical solution provided by the present disclosure, unified processing logic corresponding to each protocol type is abstracted and implemented through the interactive cooperation of a plurality of modules configured on the programmable logic device. In this way, the message data of each layer can be parsed sequentially and iteratively, and because the processing logic is consistent, the same logic resources on the programmable logic device can be reused, so the occupation of logic resources does not change as protocol types increase, and the utilization rate of the logic resources of the programmable logic device is greatly improved.
In the embodiments of the disclosure, message parsing is completed through the direct interaction of the different modules configured on the programmable logic device. This module-based, iterative way of classifying and processing messages reuses the logic resources and storage resources of the programmable logic device, improves the utilization rate of its storage and logic resources, and avoids calling up additional logic and storage resources.
It should be noted that although several units/modules or sub-units/modules of an electronic device are mentioned in the above detailed description, such a division is merely exemplary and not mandatory. Indeed, the features and functionality of two or more units/modules described above may be embodied in one unit/module in accordance with embodiments of the present disclosure. Conversely, the features and functions of one unit/module described above may be further divided into ones that are embodied by a plurality of units/modules.
The foregoing description of the preferred embodiments of the present disclosure is not intended to limit the disclosure, but rather to cover all modifications, equivalents, improvements and alternatives falling within the spirit and principles of the present disclosure.

Claims (9)

1. The message analysis method is characterized in that the method is applied to network equipment provided with a programmable logic device, and an interface module, an analysis calculation module and a data storage module are formed on the programmable logic device; the method comprises the following steps:
the interface module receives a target message, wherein the target message comprises message heads of a plurality of protocol layers which are nested in sequence;
the data storage module caches the target message output by the interface module and provides the initial storage address of the target message to the analysis and calculation module;
the analysis and calculation module is used for analyzing at least one layer of message header of the target message; the process of analyzing the message header of any layer by the analysis calculation module comprises the following steps:
sending the sum of the initial storage address and the length of the message header before any layer to the data storage module, so that the data storage module reads the message data of any layer according to the received sum of the initial storage address and the length of the message header and returns the message data to the analysis and calculation module;
analyzing the message data of any layer to determine the length of the message header of any layer;
the programmable logic device is also provided with a data preprocessing module;
the data preprocessing module preprocesses the target message output by the interface module to analyze and determine the length and protocol type of the outer layer message header of the target message and the protocol type of the first layer message header of the inner layer;
when any layer is the first layer of the inner layer, the sum of the lengths of the message heads is the length of the message head of the outer layer, and the data preprocessing module provides the sum of the lengths of the message heads to the analysis and calculation module.
2. The method of claim 1, wherein the process of the parsing calculation module parsing any layer of header further comprises:
under the condition that the protocol type adopted by any layer is a first type of protocol containing quintuple data, generating an extraction identifier and sending the extraction identifier to the data storage module, so that the data storage module processes and stores the message quintuple data of any layer based on the extraction identifier;
and generating and storing a type identifier under the condition that the protocol type adopted by any layer is a second type of protocol without quintuple data.
3. The method of claim 2, wherein,
the first type of protocol includes one or more of the following: IPv4, IPv6, TCP protocols;
the second type of protocol includes one or more of the following: GRE, VXLAN, GTP protocol.
4. The method of claim 1, wherein the data storage module is configured with a storage address output interface and an offset input interface, and the parsing calculation module is configured with a storage address input interface and an offset output interface;
the data storage module providing the initial storage address of the target message to the parsing calculation module includes: the data storage module sends the initial storage address of the target message to the storage address input interface through the storage address output interface so as to provide the initial storage address to the analysis and calculation module;
the parsing and calculating module sends the sum of the initial storage address and the length of the message header before any layer to the data storage module, including: and the analysis and calculation module is used for splicing the sum of the initial storage address and the message header length and then sending the spliced message to the offset input interface through the offset output interface so as to provide the spliced message to the data storage module.
5. The method of claim 1, wherein,
the analysis and calculation module comprises N sub-modules which are sequentially connected in series, each sub-module is used for executing a predefined pipeline function, and the N sub-modules are used for jointly realizing the analysis of the message header of any layer; wherein N is an integer greater than 1.
6. The method of claim 5, wherein,
each submodule is used for completing the corresponding pipeline operation function in M clock cycles; the N sub-modules are used for processing N target messages received by the interface module in parallel; when the same sub-module processes the N target messages in turn, the processing time corresponding to each target message differs by M clock cycles, and M is a non-negative integer.
7. The network device is characterized in that a programmable logic device is arranged in the network device, and an interface module, an analysis calculation module and a data storage module are formed on the programmable logic device through configuration; wherein:
the interface module is used for receiving a target message, wherein the target message comprises message heads of a plurality of protocol layers which are nested in sequence;
the data storage module is used for caching the target message output by the interface module and providing a starting storage address of the target message to the analysis and calculation module;
the analysis and calculation module is used for analyzing at least one layer of message header of the target message; the process of analyzing the message header of any layer by the analysis calculation module comprises the following steps:
sending the sum of the initial storage address and the length of the message header before any layer to the data storage module, so that the data storage module reads the message data of any layer according to the received sum of the initial storage address and the length of the message header and returns the message data to the analysis and calculation module;
analyzing the message data of any layer to determine the length of the message header of any layer;
the programmable logic device is also provided with a data preprocessing module;
the data preprocessing module preprocesses the target message output by the interface module to analyze and determine the length and protocol type of the outer layer message header of the target message and the protocol type of the first layer message header of the inner layer;
when any layer is the first layer of the inner layer, the sum of the lengths of the message heads is the length of the message head of the outer layer, and the data preprocessing module provides the sum of the lengths of the message heads to the analysis and calculation module.
8. A network device, comprising:
a programmable logic device;
a memory for storing configuration files;
wherein the programmable logic device implements the steps of any of claims 1 to 6 by running the configuration file.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
CN202111062263.1A 2021-09-10 2021-09-10 Message parsing method and network equipment Active CN113824706B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111062263.1A CN113824706B (en) 2021-09-10 2021-09-10 Message parsing method and network equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111062263.1A CN113824706B (en) 2021-09-10 2021-09-10 Message parsing method and network equipment

Publications (2)

Publication Number Publication Date
CN113824706A CN113824706A (en) 2021-12-21
CN113824706B 2023-07-25

Family

ID=78922057

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111062263.1A Active CN113824706B (en) 2021-09-10 2021-09-10 Message parsing method and network equipment

Country Status (1)

Country Link
CN (1) CN113824706B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115277880B (en) * 2022-06-17 2024-04-19 奇安信科技集团股份有限公司 Network message analysis method and device
CN115460085A (en) * 2022-08-20 2022-12-09 西安翔腾微电子科技有限公司 Ethernet protocol acceleration circuit and method
CN117376179A (en) * 2023-12-04 2024-01-09 成都北中网芯科技有限公司 Method, system, equipment and medium for filtering GRE protocol message

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111131159A (en) * 2019-11-22 2020-05-08 中国人民解放军国防科技大学 Message parser and design method thereof

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9444914B2 (en) * 2013-09-16 2016-09-13 Annapurna Labs Ltd. Configurable parser and a method for parsing information units
CN106789388B (en) * 2016-03-25 2020-07-03 新华三技术有限公司 Method and device for determining message detection content
CN110958213B (en) * 2018-09-27 2021-10-22 华为技术有限公司 Method for processing TCP message, TOE component and network equipment

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111131159A (en) * 2019-11-22 2020-05-08 中国人民解放军国防科技大学 Message parser and design method thereof

Also Published As

Publication number Publication date
CN113824706A (en) 2021-12-21

Similar Documents

Publication Publication Date Title
CN113824706B (en) Message parsing method and network equipment
CN108809854B (en) Reconfigurable chip architecture for large-flow network processing
CN112491901B (en) Network flow fine screening device and method
CN101095310B (en) Packet parsing processor and the method for parsing grouping in the processor
CN108833299B (en) Large-scale network data processing method based on reconfigurable switching chip architecture
US8867395B2 (en) Accelerating data packet parsing
US7248585B2 (en) Method and apparatus for a packet classifier
US6771646B1 (en) Associative cache structure for lookups and updates of flow records in a network monitor
CN1593041B (en) Method, apparatus and computer program for the decapsulation and encapsulation of packets with multiple headers
US7069372B1 (en) Processor having systolic array pipeline for processing data packets
CN112561043B (en) Neural model splitting method of brain-like computer operating system
CN111935081B (en) Data packet desensitization method and device
US10944696B2 (en) Variable-length packet header vectors
US7937495B2 (en) System and method for modifying data transferred from a source to a destination
US11258707B1 (en) Systems for building data structures with highly scalable algorithms for a distributed LPM implementation
CN108762810B (en) Network message header processor based on parallel micro-engine
CN112136108A (en) Header analysis device and method
CN114296707A (en) Programmable hardware logic architecture realized based on P4 language and logic realization method
CN110324204A (en) A kind of high speed regular expression matching engine realized in FPGA and method
CN113411380B (en) Processing method, logic circuit and equipment based on FPGA (field programmable gate array) programmable session table
US20090285207A1 (en) System and method for routing packets using tags
James-Roxby et al. Time-critical software deceleration in a FCCM
CN114257560A (en) KNI-based switch network data caching implementation method
Vlachos et al. Design and performance evaluation of a Programmable Packet Processing Engine (PPE) suitable for high-speed network processors units
US11882039B1 (en) UDF-based traffic offloading methods and systems

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant