CN111131078A - Message hashing method and device, FPGA module and processor module - Google Patents


Info

Publication number
CN111131078A
Authority
CN
China
Prior art keywords
message
sent
fpga
hash
cpu
Prior art date
Legal status
Granted
Application number
CN201911352368.3A
Other languages
Chinese (zh)
Other versions
CN111131078B (en)
Inventor
张阿珍
张芦韦
Current Assignee
Beijing Topsec Technology Co Ltd
Beijing Topsec Network Security Technology Co Ltd
Beijing Topsec Software Co Ltd
Original Assignee
Beijing Topsec Technology Co Ltd
Beijing Topsec Network Security Technology Co Ltd
Beijing Topsec Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Topsec Technology Co Ltd, Beijing Topsec Network Security Technology Co Ltd and Beijing Topsec Software Co Ltd
Priority to CN201911352368.3A
Publication of CN111131078A
Application granted
Publication of CN111131078B
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 Packet switching elements
    • H04L49/90 Buffering arrangements
    • H04L47/00 Traffic control in data switching networks
    • H04L47/50 Queue scheduling
    • H04L69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L69/22 Parsing or analysis of headers


Abstract

The invention relates to a message hashing method and device, an FPGA module and a processor module, and belongs to the field of network communication. The method comprises the following steps: when hashing a message to be sent, the FPGA queries a preset table entry according to the protocol type of the message and determines the preset operation times corresponding to that protocol type; after the message to be sent is operated for the preset operation times, the hash queue corresponding to the operation result is determined; the message to be sent is then hashed to the corresponding hash queue. For a given message, the number of operations the FPGA performs when determining its hash queue is decided by the actual situation of the message, rather than by a scheme that uniformly performs 320 operations, so unnecessary operations can be reduced, thereby reducing the waste of FPGA operation resources and improving calculation efficiency.

Description

Message hashing method and device, FPGA module and processor module
Technical Field
The application belongs to the field of network communication, and particularly relates to a message hashing method and device, an FPGA module and a processor module.
Background
To guarantee network security as far as possible, the architecture of network security products has evolved from a traditional single-core CPU processing messages to multi-core CPUs processing messages concurrently. To match the message-processing capability of a multi-core CPU, the FPGA (field-programmable gate array), an important element in network security products, needs to support sending messages through multiple queues, so as to exchange messages with the multi-core CPU efficiently.
When the FPGA sends messages to the CPU through multiple hash queues, each message needs to be hashed so that messages from the same flow are distributed to the same hash queue.
In a conventional message hashing scheme, the FPGA performs a hash calculation on the five-tuple of each message using an existing hash algorithm (hash function) to obtain a hash value that identifies the hash queue corresponding to the message. Each message requires 320 operations, a number determined by the hash function itself. However, some messages do not actually need 320 operations, so unnecessary operations are performed for those messages and the operation resources of the FPGA are wasted.
Disclosure of Invention
In view of this, an object of the present application is to provide a message hashing method, device, FPGA module and processor module, so as to reduce resource waste of the FPGA.
The embodiment of the application is realized as follows:
in a first aspect, an embodiment of the present application provides a message hashing method, which is applied to a field programmable gate array FPGA, where the FPGA includes multiple hash queues, and the method includes: acquiring a message to be sent, and analyzing the protocol type of the message to be sent; querying a preset table entry according to the protocol type, and determining the preset operation times corresponding to the protocol type, wherein the table entry records the operation times corresponding to messages of various protocol types; after the message to be sent is operated for the preset operation times, determining a hash queue corresponding to the operation result; and hashing the message to be sent to the corresponding hash queue. For a given message, the number of operations the FPGA performs when determining its hash queue is decided by the actual situation of the message, rather than by a scheme that uniformly performs 320 operations. The unnecessary operations are avoided, the corresponding operation time is saved and the calculation efficiency is improved; at the same time, the waste of FPGA operation resources is reduced by the same amount.
With reference to the embodiment of the first aspect, in a possible implementation manner, the FPGA includes a plurality of hash calculation units, each hash calculation unit stores the table entry, and before the message to be sent is calculated for the preset calculation times, the method further includes: distributing the message to be sent to a hash calculation unit in an idle state; correspondingly, after the preset operation times of the message to be sent are operated, determining a hash queue corresponding to an operation result, including: and determining a hash queue corresponding to an operation result after the preset operation times of the message to be sent is operated by the hash calculation unit in the idle state.
With reference to the embodiment of the first aspect, in a possible implementation manner, the multiple hash calculation units work in parallel, and before the message to be sent is distributed to one hash calculation unit in an idle state, the method further includes: according to the acquisition sequence of the messages to be sent, message IDs are sequentially set for each message to be sent, and the numerical value of each message ID is gradually increased; correspondingly, the hashing the message to be sent to the corresponding hash queue includes: waiting for a preset time length; and sequentially hashing the messages to be sent acquired within the preset time length to respective corresponding hash queues according to the sequence of the message IDs from small to large.
With reference to the embodiment of the first aspect, in a possible implementation manner, the FPGA is in network communication with a CPU, and the method further includes: and sending the message to be sent to the CPU through the corresponding hash queue.
With reference to the embodiment of the first aspect, in a possible implementation manner, the CPU includes a plurality of processing cores, the number of the hash queues is the same as the number of the processing cores, the hash queues correspond to the processing cores one to one, and the sending the packet to be sent to the CPU through the corresponding hash queues includes: and sending the message to be sent to a processing core corresponding to the corresponding hash queue in the CPU through the corresponding hash queue.
With reference to the first aspect, in a possible implementation manner, an FPGA pointer is disposed in the FPGA, a CPU pointer is disposed in the CPU, an initial value of the FPGA pointer and an initial value of the CPU pointer are both preset values, and after the message to be sent is obtained, the method further includes: every time one message to be sent is obtained, the current numerical value of the FPGA pointer is reduced by one, and meanwhile, a descriptor DD bit is written back; correspondingly, the sending the message to be sent to the CPU through the corresponding hash queue includes: and when the current numerical value of the FPGA pointer is determined to be not equal to the current numerical value of the CPU pointer, sending the message to be sent to the CPU through the corresponding hash queue.
In a second aspect, an embodiment of the present application provides a packet hashing apparatus, which is applied to a field programmable gate array FPGA, where the FPGA includes multiple hash queues, and the apparatus includes: the device comprises an acquisition module, a control module, an operation module and a hash module. The acquisition module is used for acquiring a message to be sent and analyzing the protocol type of the message to be sent; the control module is used for inquiring a preset table entry according to the protocol type, determining the preset operation times corresponding to the protocol type, and recording the operation times corresponding to messages of various protocol types in the table entry; the operation module is used for determining a hash queue corresponding to an operation result after the preset operation times are operated on the message to be sent; and the hashing module is used for hashing the message to be sent to the corresponding hashing queue.
With reference to the second aspect, in a possible implementation manner, the FPGA includes a plurality of hash calculation units, each hash calculation unit stores the entry, and the apparatus further includes an allocation module, configured to allocate the message to be sent to one hash calculation unit in an idle state; correspondingly, the operation module is configured to determine a hash queue corresponding to an operation result after the preset operation times of the message to be sent is operated by the idle hash calculation unit.
With reference to the second aspect, in a possible implementation manner, the multiple hash calculation units work in parallel, and the apparatus further includes a setting module, configured to sequentially set a packet ID for each of the packets to be sent according to an acquisition sequence of the packets to be sent, where a numerical value of the packet ID is gradually increased; correspondingly, the hash module is used for waiting for a preset time length; and sequentially hashing the messages to be sent acquired within the preset time length to respective corresponding hash queues according to the sequence of the message IDs from small to large.
With reference to the second aspect, in a possible implementation manner, the FPGA is in network communication with a CPU, and the apparatus further includes a sending module, configured to send the message to be sent to the CPU through the corresponding hash queue.
With reference to the second aspect embodiment, in a possible implementation manner, the CPU includes a plurality of processing cores, the number of the hash queues is the same as the number of the processing cores, the hash queues correspond to the processing cores one to one, and the sending module is configured to send the packet to be sent to the processing core, corresponding to the hash queue, in the CPU through the corresponding hash queue.
With reference to the second aspect embodiment, in a possible implementation manner, an FPGA pointer is disposed in the FPGA, a CPU pointer is disposed in the CPU, an initial value of the FPGA pointer and an initial value of the CPU pointer are both preset values, and the operation module is further configured to subtract one from a current value of the FPGA pointer and write back a descriptor DD bit when obtaining one to-be-sent message; correspondingly, the sending module is configured to send the message to be sent to the CPU through the corresponding hash queue when it is determined that the current value of the FPGA pointer is not equal to the current value of the CPU pointer.
In a third aspect, an embodiment of the present application further provides an FPGA module, including an off-chip flash memory, a memory and a processor, wherein the memory is connected with the processor, and the processor is connected with the off-chip flash memory; the off-chip flash memory is used for storing programs, and the memory is used for storing messages; the processor loads a program stored in the off-chip flash memory to perform the method according to the first aspect and/or any possible implementation manner of the first aspect.
In a fourth aspect, an embodiment of the present application further provides a non-volatile computer-readable storage medium (hereinafter, referred to as a computer-readable storage medium), on which a computer program is stored, where the computer program is executed by an FPGA module to perform the method provided in the foregoing first aspect and/or any possible implementation manner of the first aspect.
In a fifth aspect, an embodiment of the present application further provides a processor module, which includes an FPGA module and a CPU, where the FPGA module is in communication connection with the CPU, and a program is stored in the FPGA module, and when the program is called, the FPGA module executes the method provided in the first aspect and/or any possible implementation manner in combination with the first aspect.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and drawings.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort. The foregoing and other objects, features and advantages of the application will be apparent from the accompanying drawings. Like reference numerals refer to like parts throughout the drawings. The drawings are not necessarily drawn to scale; emphasis is instead placed upon illustrating the subject matter of the present application.
Fig. 1 shows a schematic structural diagram of an FPGA module according to an embodiment of the present application.
Fig. 2 shows a flowchart of a message hashing method according to an embodiment of the present application.
Fig. 3 shows a first schematic diagram of internal resource allocation of an FPGA according to an embodiment of the present application.
Fig. 4 shows a second schematic diagram of internal resource allocation of an FPGA according to the embodiment of the present application.
Fig. 5 is a block diagram illustrating a structure of a message hashing apparatus according to an embodiment of the present application.
Reference numbers: 100-FPGA module; 110-processor; 120-memory; 130-off-chip flash memory; 400-message hashing apparatus; 410-obtaining module; 420-control module; 430-operation module; 440-hashing module.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It should be noted that like reference numbers and letters refer to like items in the following figures; thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures. Meanwhile, relational terms such as "first" and "second" may be used herein solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a/an ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Further, the term "and/or" in the present application merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone.
In addition, the defects of the prior-art message hashing scheme (the waste of operation resources) were identified only after careful practice and study by the applicant; therefore, both the discovery of these defects and the solutions proposed for them in the following embodiments of the present application should be regarded as contributions of the applicant made in the course of the present application.
To address the above defects, embodiments of the present application provide a message hashing method and apparatus, an FPGA module and a processor module, which reduce the resource waste of the FPGA during message hashing. The technology can be implemented in software, in hardware, or in a combination of software and hardware. The following describes embodiments of the present application in detail.
First, abbreviations of terms in the art to which the embodiments of the present application relate will be described.
IPV 4: internet Protocol Version4, an Internet Protocol.
IPV 6: internet Protocol Version6, an Internet Protocol.
TCP: transmission Control Protocol, Transmission Control Protocol.
UDP: user Datagram Protocol, User Datagram Protocol.
Next, an FPGA module 100 for implementing the message hashing method and apparatus according to the embodiment of the present application is described with reference to fig. 1. The FPGA module 100 may include: a processor 110, a memory 120, and an off-chip flash memory 130.
It should be noted that the components and structure of the FPGA module 100 shown in fig. 1 are exemplary only, and not limiting, and that the FPGA module 100 may have other components and structures as desired.
The processor 110, the memory 120, the off-chip flash memory 130, and other components that may be present in the FPGA module 100 are electrically connected to each other, directly or indirectly, to enable the transfer or interaction of data. For example, the processor 110, the memory 120, the off-chip flash memory 130, and other components that may be present may be electrically connected to each other via one or more communication buses or signal lines.
The memory 120 is used to store messages during the hash calculation process.
The off-chip flash memory 130 is used to store a program, for example, a program corresponding to the message hashing method or the message hashing apparatus described later. Optionally, when the message hashing apparatus is stored in the off-chip flash memory 130, it includes at least one software function module that can be stored in the off-chip flash memory 130 in the form of software or firmware. Optionally, the software function modules included in the message hashing apparatus may also be solidified in the programmable logic of the FPGA module 100.
In addition, the off-chip flash memory 130 stores logic binary codes, when the power is turned on, the off-chip flash memory 130 automatically loads the stored programs to the processor 110 of the FPGA module through configuration pins, and the processor 110 is configured to execute executable modules stored in the off-chip flash memory 130, for example, software function modules or computer programs included in the message hashing device. When the processor 110 receives the execution instruction, it may execute the computer program, for example, to perform: acquiring a message to be sent, and analyzing the protocol type of the message to be sent; inquiring a preset table entry according to the protocol type, and determining the preset operation times corresponding to the protocol type, wherein the table entry records the operation times corresponding to messages of various protocol types; after the preset operation times of the message to be sent are operated, determining a hash queue corresponding to an operation result; and hashing the message to be sent to the corresponding hash queue.
Of course, the method disclosed in any of the embodiments of the present application can be applied to the processor 110, or implemented by the processor 110.
The following description will be directed to a message hashing method provided in the present application.
Referring to fig. 2, an embodiment of the present invention provides a message hashing method applied to the FPGA module 100 (hereinafter, referred to as FPGA).
The steps involved will be described below in conjunction with fig. 2.
Step S110: acquiring a message to be sent, and analyzing the protocol type of the message to be sent.
Optionally, the protocol types may include different protocol type combinations such as IPv4+TCP, IPv4+UDP, IPv6+TCP, IPv6+UDP, as well as other special protocols.
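By way of illustration only, the protocol-type classification of step S110 could be sketched in C as follows. The function name, the enum labels and the assumption of untagged Ethernet II framing are inventions of this sketch, not details disclosed in the application; the application only requires that the FPGA parse the protocol type of the message to be sent.

    #include <stdint.h>
    #include <stddef.h>

    /* Protocol-type combinations used later for the table-entry lookup. */
    typedef enum {
        PROTO_IPV4_TCP, PROTO_IPV4_UDP,
        PROTO_IPV6_TCP, PROTO_IPV6_UDP,
        PROTO_SPECIAL
    } proto_type_t;

    /* Classify a frame by IP version and layer-4 protocol.  Offsets assume
     * untagged Ethernet II framing; anything unrecognised is treated as a
     * "special" protocol type. */
    static proto_type_t classify(const uint8_t *frame, size_t len)
    {
        if (len < 34)
            return PROTO_SPECIAL;

        uint16_t ethertype = (uint16_t)((frame[12] << 8) | frame[13]);

        if (ethertype == 0x0800) {                 /* IPv4 */
            uint8_t l4 = frame[14 + 9];            /* IPv4 "protocol" field */
            if (l4 == 6)  return PROTO_IPV4_TCP;
            if (l4 == 17) return PROTO_IPV4_UDP;
        } else if (ethertype == 0x86DD && len >= 54) {  /* IPv6 */
            uint8_t l4 = frame[14 + 6];            /* IPv6 "next header" field */
            if (l4 == 6)  return PROTO_IPV6_TCP;
            if (l4 == 17) return PROTO_IPV6_UDP;
        }
        return PROTO_SPECIAL;
    }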
Step S120: and querying a preset table entry according to the protocol type, and determining the preset operation times corresponding to the protocol type.
In the embodiment of the present application, an entry is pre-stored in the FPGA.
The table entry records the number of operations corresponding to the messages of various protocol types, and is used for representing the number of operations required when the message with a certain protocol type is hashed.
For a given message, the operation process is as follows: the five-tuple of the message is taken as the input of the hash function, and multiple shift operations are carried out to obtain a hash value.
For example, if the table entry records that the number of operations corresponding to the IPv4+TCP protocol type is M, then when a message whose protocol type is IPv4+TCP is subsequently acquired and needs to be hashed, M shift operations are performed and the corresponding hash value is obtained.
Therefore, the FPGA can determine the preset operation times required by the message to be sent by analyzing the protocol type of the message to be sent.
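A minimal C sketch of the table-entry lookup of step S120 is given below, again purely illustrative. The concrete operation counts are assumptions chosen so that each count matches the bit width of the corresponding five-tuple; the application only states that each protocol type maps to its own preset operation count, and (as described later) that a count of 0 can be reserved for types that bypass hashing entirely.

    #include <stdint.h>

    /* Same protocol-type enum as in the parsing sketch above, repeated so
     * that this listing is self-contained. */
    typedef enum {
        PROTO_IPV4_TCP = 0,
        PROTO_IPV4_UDP,
        PROTO_IPV6_TCP,
        PROTO_IPV6_UDP,
        PROTO_SPECIAL,
        PROTO_TYPE_COUNT
    } proto_type_t;

    /* Pre-stored table entry: preset operation (shift) count per protocol
     * type.  The numeric values are placeholders assumed for this sketch,
     * not counts disclosed in the application. */
    static const uint16_t op_count_table[PROTO_TYPE_COUNT] = {
        [PROTO_IPV4_TCP] = 104,   /* assumed: IPv4 five-tuple width in bits */
        [PROTO_IPV4_UDP] = 104,   /* assumed */
        [PROTO_IPV6_TCP] = 296,   /* assumed: IPv6 five-tuple width in bits */
        [PROTO_IPV6_UDP] = 296,   /* assumed */
        [PROTO_SPECIAL]  = 0,     /* hash skipped, fixed queue assigned */
    };

    static inline uint16_t lookup_op_count(proto_type_t type)
    {
        return op_count_table[type];
    }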
Step S130: and after the preset operation times of the message to be sent are operated, determining a hash queue corresponding to an operation result.
For a certain message, when the hash queue corresponding to the certain message is determined, the number of times of operation required by the FPGA is determined according to the actual situation of the message, instead of adopting a scheme of uniformly operating 320 times, so that unnecessary operation times can be reduced, thereby reducing the waste of FPGA operation resources and improving the calculation efficiency.
As an alternative embodiment, please refer to fig. 3: in the FPGA, a part of the operation resources of the FPGA may be allocated as one hash calculation unit and the plurality of hash queues.
The FPGA forms a one-to-one correspondence table between each hash queue and an address with a unique numerical value, for example, the address 1 corresponds to the hash queue 1, the address 2 corresponds to the hash queue 2, and the address 3 corresponds to the hash queue 3.
The hash calculation unit is used to execute the number of shift operations determined for the message to be sent (the preset operation times), so as to obtain a hash value, and then to determine the hash queue corresponding to the message to be sent by using the hash value as an address to query the relation table. For example, if the hash value calculated for a certain message is 32'h1234_5178 and the value stored at that address in the relation table is 1, then the hash queue corresponding to the message is hash queue 1.
It is worth pointing out that the five-tuples of messages belonging to the same flow are identical, so the hash values obtained for them are identical, and accordingly messages of the same flow correspond to the same hash queue.
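The shift operation and the address-based queue lookup of step S130 might look like the following sketch. The application does not disclose the actual hash function, so a CRC-style bit-serial shift register is used here only to illustrate that the preset operation count (one shift per input bit) is what the table entry controls; the relation-table size, the polynomial and the five-tuple packing are all assumptions made for this sketch.

    #include <stdint.h>
    #include <string.h>

    #define RELATION_TABLE_SIZE 65536u            /* assumed table size */

    /* Relation table: indexed by (part of) the hash value, its content is
     * the hash queue number.  Contents would be filled in at initialization
     * and are omitted here. */
    static uint8_t relation_table[RELATION_TABLE_SIZE];

    /* Five-tuple packed into a byte buffer (IPv6-sized worst case). */
    struct five_tuple {
        uint8_t  src_ip[16];
        uint8_t  dst_ip[16];
        uint16_t src_port;
        uint16_t dst_port;
        uint8_t  proto;
    };

    /* Bit-serial shift hash: consumes one key bit per operation, so the
     * preset operation count from the table entry bounds how much of the
     * five-tuple is shifted in.  The polynomial is the standard CRC-32 one
     * and is an assumption. */
    static uint32_t shift_hash(const struct five_tuple *t, uint16_t op_count)
    {
        uint8_t key[40] = {0};
        memcpy(key,      t->src_ip, 16);
        memcpy(key + 16, t->dst_ip, 16);
        memcpy(key + 32, &t->src_port, 2);
        memcpy(key + 34, &t->dst_port, 2);
        key[36] = t->proto;

        if (op_count > 320)                      /* never read past the key */
            op_count = 320;

        uint32_t h = 0xFFFFFFFFu;
        for (uint16_t i = 0; i < op_count; i++) {
            uint32_t bit = (key[i >> 3] >> (7 - (i & 7))) & 1u;
            uint32_t fb  = ((h >> 31) ^ bit) & 1u;
            h <<= 1;
            if (fb)
                h ^= 0x04C11DB7u;
        }
        return h;
    }

    /* Queue lookup: the hash value is used as an address whose stored value
     * names the hash queue (cf. the 32'h1234_5178 -> queue 1 example). */
    static uint8_t queue_for(const struct five_tuple *t, uint16_t op_count)
    {
        return relation_table[shift_hash(t, op_count) % RELATION_TABLE_SIZE];
    }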
Because this approach reduces the waste of FPGA operation resources, the saved operation resources can be used to implement further hash calculation units and hash queues in the FPGA. Therefore, as another alternative, please refer to fig. 4: the operation resources of the FPGA may be allocated as a plurality of hash calculation units and a plurality of hash queues, where each hash calculation unit stores the above table entry.
In this embodiment, when the FPGA needs to calculate the hash value of the message to be sent, the FPGA may allocate the message to be sent to a hash calculation unit in an idle state, so that the hash calculation unit in the idle state calculates the message to be sent for the preset operation times and then determines the corresponding hash queue.
It should be noted that, for some messages (e.g., control messages) with a specific protocol type, if it is expected that the messages are hashed to an assigned hash queue, at this time, the preset operation number corresponding to the specific protocol type may be set to 0, and a specific hash queue is assigned to the messages with the specific protocol type, so that when the FPGA acquires the message with the specific protocol type, the message is directly hashed to the assigned specific hash queue.
Step S140: and hashing the message to be sent to the corresponding hash queue.
After determining the hash queue corresponding to the message to be sent through the steps, the FPGA can hash the message to be sent to the corresponding hash queue.
When multiple hash calculation units exist in the FPGA, they can operate in parallel. Because the preset operation times differ between messages to be sent, the time each message needs for its hash value to be calculated may differ, and the instants at which the corresponding hash values become available differ accordingly. The FPGA therefore cannot guarantee that the messages to be sent are hashed to their corresponding hash queues in the order in which they arrived, which may cause messages to go out of order.
To solve the above problem, in an embodiment of the present application, before a message to be sent is distributed to a hash calculation unit in an idle state, a message ID may be set for each message to be sent in the order in which the messages are acquired, with the value of the message ID increasing gradually. For example, the message ID of the first acquired message to be sent is set to 1, the message ID of the second acquired message to be sent is set to 2, ..., and the message ID of the Nth acquired message to be sent is set to N. Correspondingly, to ensure that messages do not go out of order, when hashing the messages to be sent to their corresponding hash queues the FPGA may wait for a preset time period, use the lower 6 bits of the message IDs of all messages to be sent acquired within that period as addresses, wait for the hash value calculation result of the message corresponding to each message ID and write it to the corresponding address, and then hash the messages to their corresponding hash queues in increasing order of message ID, which guarantees that the messages stay in order. Meanwhile, during the waiting period the FPGA can buffer 64 messages, which satisfies the requirement of calculating hashes at line speed.
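Assuming a 64-slot ring indexed by the lower 6 bits of the message ID, the reordering described above could be sketched as follows. The function names and the callback interface are invented for this sketch, and the preset waiting period is omitted for brevity.

    #include <stdbool.h>
    #include <stdint.h>

    #define REORDER_SLOTS 64              /* lower 6 bits of the message ID */

    struct reorder_slot {
        bool     ready;                   /* hash result written back yet? */
        uint32_t hash;                    /* result from the parallel unit */
        void    *msg;                     /* message awaiting dispatch */
    };

    static struct reorder_slot ring[REORDER_SLOTS];
    static uint32_t next_dispatch_id;     /* smallest ID not yet dispatched */

    /* Called by whichever hash calculation unit finishes, in any order. */
    void hash_done(uint32_t msg_id, void *msg, uint32_t hash)
    {
        struct reorder_slot *s = &ring[msg_id & (REORDER_SLOTS - 1)];
        s->msg   = msg;
        s->hash  = hash;
        s->ready = true;
    }

    /* Called after the preset waiting period: drain in strictly increasing
     * message-ID order so messages of one flow stay in sequence. */
    void dispatch_in_order(void (*enqueue)(void *msg, uint32_t hash))
    {
        for (;;) {
            struct reorder_slot *s = &ring[next_dispatch_id & (REORDER_SLOTS - 1)];
            if (!s->ready)
                break;                    /* next-in-order result not back yet */
            enqueue(s->msg, s->hash);
            s->ready = false;
            next_dispatch_id++;
        }
    }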
Further, in one embodiment, the FPGA may be used for network communication with the CPU. In this embodiment, the FPGA sends the message to be sent to the CPU through the corresponding hash queue, thereby implementing sending of the message.
Further, the CPU may include a plurality of processing cores.
In the prior art, the number of hash queues in the FPGA is usually smaller than the number of processing cores of the CPU because of limited operation resources. In addition, since messages of the same flow need to be processed by the same processing core for reasons of system performance, the FPGA may be unable to send a message directly to the designated processing core; a core-crossing phenomenon then occurs on the CPU side, which wastes the computing resources of the CPU.
As described above, the message hashing method reduces the waste of FPGA operation resources, so the saved operation resources can be used to provide more hash queues, making the number of hash queues equal to the number of CPU processing cores, with a one-to-one correspondence between hash queues and processing cores. In this embodiment, the FPGA can send a message to be sent, through its corresponding hash queue, to the processing core of the CPU that corresponds to that hash queue, thereby avoiding the loss of CPU computing resources.
In addition, as an optional implementation manner, an FPGA pointer may be disposed in the FPGA and a CPU pointer may be disposed in the CPU, where the initial value of the FPGA pointer and the initial value of the CPU pointer are both the same preset value. In this embodiment, every time the FPGA acquires a message to be sent, it reduces the current value of the FPGA pointer by one and writes back the DD bit of the descriptor. Correspondingly, when the FPGA determines that the current value of the FPGA pointer is not equal to the current value of the CPU pointer, it sends the message to be sent to the CPU through the hash queue that received the message. Compared with the prior-art approach of triggering the sending process by generating an interrupt, triggering it by comparing the pointers improves the efficiency of sending messages.
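A rough sketch of this pointer-comparison trigger is shown below. The initial pointer value, the descriptor layout and the use of plain variables in place of hardware registers are assumptions; the application only specifies decrementing the FPGA pointer, writing back the descriptor DD bit, and sending when the two pointers differ.

    #include <stdbool.h>
    #include <stdint.h>

    /* Pointers shared between the FPGA logic and the CPU driver.  In
     * hardware these would be registers of a descriptor ring; plain
     * variables here only sketch the comparison-based trigger. */
    static volatile uint32_t fpga_ptr = 256;   /* preset initial value, assumed */
    static volatile uint32_t cpu_ptr  = 256;   /* same preset initial value */

    struct descriptor {
        uint64_t buf_addr;
        uint16_t length;
        uint8_t  dd;        /* descriptor-done bit written back by the FPGA */
    };

    /* FPGA side: on each message acquired, decrement the FPGA pointer and
     * write back the DD bit of the descriptor it filled. */
    void fpga_on_message(struct descriptor *desc)
    {
        desc->dd = 1;
        fpga_ptr--;
    }

    /* Sending is triggered by comparing pointers instead of raising an
     * interrupt: whenever the two pointers differ, there are messages to
     * hand to the CPU through their hash queues. */
    bool fpga_should_send(void)
    {
        return fpga_ptr != cpu_ptr;
    }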
In addition, each hash queue may be given a priority, and the priority of the specific hash queue mentioned above may be set to the maximum. When the FPGA subsequently sends messages to be sent to the CPU through the corresponding hash queues, it can send them in descending order of queue priority, so that messages of special protocol types are not lost when network traffic is excessive.
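The priority-ordered draining of the hash queues could be sketched as a strict-priority scheduler like the one below; the queue count and the structure fields are assumptions made for illustration only.

    #include <stddef.h>
    #include <stdint.h>

    #define NUM_QUEUES 16                 /* assumed, one per CPU core */

    struct hash_queue {
        uint8_t priority;                 /* larger value = served first */
        size_t  depth;                    /* messages currently queued */
        /* descriptor ring, buffers, etc. omitted */
    };

    static struct hash_queue queues[NUM_QUEUES];

    /* Strict-priority scheduling: always drain the highest-priority
     * non-empty queue first, so the designated queue for special protocol
     * types (set to the maximum priority) is not starved under heavy
     * traffic. */
    static struct hash_queue *pick_next_queue(void)
    {
        struct hash_queue *best = NULL;
        for (size_t i = 0; i < NUM_QUEUES; i++) {
            if (queues[i].depth == 0)
                continue;
            if (best == NULL || queues[i].priority > best->priority)
                best = &queues[i];
        }
        return best;                      /* NULL when all queues are empty */
    }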
According to the message hashing method provided by the embodiment of the present application, when hashing a message to be sent, the FPGA queries a preset table entry according to the protocol type of the message, determines the preset operation times corresponding to that protocol type, operates on the message to be sent for the preset operation times, and determines the hash queue corresponding to the operation result; the message to be sent is then hashed to the corresponding hash queue. For a given message, the number of operations the FPGA performs when determining its hash queue is decided by the actual situation of the message, rather than by a scheme that uniformly performs 320 operations, so unnecessary operations can be reduced, thereby reducing the waste of FPGA operation resources and improving calculation efficiency.
As shown in fig. 5, an embodiment of the present application further provides a message hashing apparatus 400, where the message hashing apparatus 400 may include: an acquisition module 410, a control module 420, a calculation module 430, and a hashing module 440.
An obtaining module 410, configured to obtain a message to be sent, and analyze a protocol type of the message to be sent;
a control module 420, configured to query a preset entry according to the protocol type, and determine a preset operation number corresponding to the protocol type, where the entry records operation numbers corresponding to messages of various protocol types;
the operation module 430 is configured to determine a hash queue corresponding to an operation result after the preset operation times of the message to be sent is operated;
a hashing module 440, configured to hash the message to be sent to the corresponding hash queue.
Optionally, in a possible implementation manner, the FPGA includes a plurality of hash calculation units, each hash calculation unit stores the table entry, and the apparatus further includes an allocation module, configured to allocate the message to be sent to one hash calculation unit in an idle state;
correspondingly, the operation module 430 is configured to determine a hash queue corresponding to an operation result after the preset operation times of the message to be sent is operated by the idle hash calculation unit.
Optionally, in a possible implementation manner, the multiple hash calculation units work in parallel, and the apparatus further includes a setting module, configured to sequentially set a message ID for each message to be sent according to an acquisition sequence of the messages to be sent, where a value of the message ID is gradually increased;
correspondingly, the hash module 440 is configured to wait for a preset duration; and sequentially hashing the messages to be sent acquired within the preset time length to respective corresponding hash queues according to the sequence of the message IDs from small to large.
Optionally, in a possible implementation manner, the FPGA is in network communication with a CPU, and the apparatus further includes a sending module, configured to send the message to be sent to the CPU through the corresponding hash queue.
Optionally, in a possible implementation manner, the CPU includes a plurality of processing cores, the number of the hash queues is the same as the number of the processing cores, the hash queues correspond to the processing cores one to one, and the sending module is configured to send the packet to be sent to the processing core corresponding to the hash queue in the CPU through the corresponding hash queue.
Optionally, in a possible implementation manner, an FPGA pointer is disposed in the FPGA, a CPU pointer is disposed in the CPU, an initial value of the FPGA pointer and an initial value of the CPU pointer are both preset values, and the operation module 430 is further configured to, every time a message to be sent is obtained, subtract one from a current value of the FPGA pointer and write back a descriptor DD bit;
correspondingly, the sending module is configured to send the message to be sent to the CPU through the corresponding hash queue when it is determined that the current value of the FPGA pointer is not equal to the current value of the CPU pointer.
The message hashing apparatus 400 provided in the embodiment of the present application has the same implementation principle and technical effect as those of the foregoing method embodiments, and for brief description, reference may be made to corresponding contents in the foregoing method embodiments for parts that are not mentioned in the apparatus embodiments.
In addition, an embodiment of the present application further provides a computer-readable storage medium, such as a flash memory, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by the FPGA module 100, the steps included in the above message hashing method are performed.
In addition, an embodiment of the present application further provides a processor module, which includes an FPGA module 100 and a CPU, where the FPGA module 100 is in communication connection with the CPU, a program is stored in the FPGA module 100, and when the program is called, the FPGA module 100 executes the method provided in the foregoing first aspect embodiment and/or any possible implementation manner in combination with the first aspect embodiment.
In summary, according to the message hashing method and apparatus, the FPGA module and the processor module provided in the embodiments of the present application, when hashing a message to be sent, the FPGA queries a preset table entry according to the protocol type of the message, determines the preset operation times corresponding to that protocol type, operates on the message to be sent for the preset operation times, and determines the hash queue corresponding to the operation result; the message to be sent is then hashed to the corresponding hash queue. For a given message, the number of operations the FPGA performs when determining its hash queue is decided by the actual situation of the message, rather than by a scheme that uniformly performs 320 operations, so unnecessary operations can be reduced, thereby reducing the waste of FPGA operation resources and improving calculation efficiency.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present application, and shall be covered by the scope of the present application.

Claims (10)

1. A message hashing method is applied to a Field Programmable Gate Array (FPGA), and a plurality of hash queues are included in the FPGA, and the method comprises the following steps:
acquiring a message to be sent, and analyzing the protocol type of the message to be sent;
inquiring a preset table entry according to the protocol type, and determining the preset operation times corresponding to the protocol type, wherein the table entry records the operation times corresponding to messages of various protocol types;
after the preset operation times of the message to be sent are operated, determining a hash queue corresponding to an operation result;
and hashing the message to be sent to the corresponding hash queue.
2. The method according to claim 1, wherein the FPGA includes a plurality of hash calculation units, each hash calculation unit stores the table entry therein, and before the message to be sent is operated for the preset operation times, the method further includes:
distributing the message to be sent to a hash calculation unit in an idle state;
correspondingly, after the preset operation times of the message to be sent are operated, determining a hash queue corresponding to an operation result, including:
and determining a hash queue corresponding to an operation result after the preset operation times of the message to be sent is operated by the hash calculation unit in the idle state.
3. The method of claim 2, wherein the plurality of hash calculation units operate in parallel, and wherein before the assigning the message to be sent to one of the hash calculation units in an idle state, the method further comprises:
according to the acquisition sequence of the messages to be sent, message IDs are sequentially set for each message to be sent, and the numerical value of each message ID is gradually increased;
correspondingly, the hashing the message to be sent to the corresponding hash queue includes:
waiting for a preset time length;
and sequentially hashing the messages to be sent acquired within the preset time length to respective corresponding hash queues according to the sequence of the message IDs from small to large.
4. The method of claim 1, wherein the FPGA is in network communication with a CPU, the method further comprising:
and sending the message to be sent to the CPU through the corresponding hash queue.
5. The method according to claim 4, wherein the CPU includes a plurality of processing cores, the number of the hash queues is the same as the number of the processing cores, the hash queues correspond to the processing cores one to one, and the sending the packet to be sent to the CPU through the corresponding hash queues includes:
and sending the message to be sent to a processing core corresponding to the corresponding hash queue in the CPU through the corresponding hash queue.
6. The method according to claim 4, wherein an FPGA pointer is provided in the FPGA, a CPU pointer is provided in the CPU, an initial value of the FPGA pointer and an initial value of the CPU pointer are both preset values, and after the message to be sent is obtained, the method further comprises:
every time one message to be sent is obtained, the current numerical value of the FPGA pointer is reduced by one, and meanwhile, a descriptor DD bit is written back;
correspondingly, the sending the message to be sent to the CPU through the corresponding hash queue includes:
and when the current numerical value of the FPGA pointer is determined to be not equal to the current numerical value of the CPU pointer, sending the message to be sent to the CPU through the corresponding hash queue.
7. A message hashing apparatus for application to a field programmable gate array FPGA including a plurality of hash queues within the FPGA, the apparatus comprising:
the acquisition module is used for acquiring a message to be sent and analyzing the protocol type of the message to be sent;
the control module is used for inquiring a preset table entry according to the protocol type, determining the preset operation times corresponding to the protocol type, and recording the operation times corresponding to messages of various protocol types in the table entry;
the operation module is used for determining a hash queue corresponding to an operation result after the preset operation times are operated on the message to be sent;
and the hashing module is used for hashing the message to be sent to the corresponding hashing queue.
8. An FPGA module is characterized by comprising an off-chip flash memory, a memory and a processor, wherein the memory is connected with the processor, and the processor is connected with the off-chip flash memory;
the off-chip flash memory is used for storing programs;
the memory is used for storing messages;
the processor loads a program stored in the off-chip flash memory to perform the method of any of claims 1-6.
9. A processor module comprising an FPGA module and a CPU, the FPGA module being in communicative connection with the CPU, the FPGA module having a program stored therein, the FPGA module performing the method of any one of claims 1-6 when the program is invoked.
10. A computer-readable storage medium, on which a computer program is stored which, when executed by an FPGA module, performs the method according to any one of claims 1-6.
CN201911352368.3A 2019-12-24 2019-12-24 Message hashing method and device, FPGA module and processor module Active CN111131078B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911352368.3A CN111131078B (en) 2019-12-24 2019-12-24 Message hashing method and device, FPGA module and processor module


Publications (2)

Publication Number Publication Date
CN111131078A true CN111131078A (en) 2020-05-08
CN111131078B CN111131078B (en) 2022-09-16

Family

ID=70502309


Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114615355A (en) * 2022-05-13 2022-06-10 恒生电子股份有限公司 Message processing method and message analysis module

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3143738A1 (en) * 2014-05-15 2017-03-22 Samsung Electronics Co., Ltd. Method of distributing data and device supporting the same
CN107659515A (en) * 2017-09-29 2018-02-02 曙光信息产业(北京)有限公司 Message processing method, device, message processing chip and server
CN107832149A (en) * 2017-11-01 2018-03-23 西安微电子技术研究所 A kind of Receive side Scaling circuits for polycaryon processor Dynamic Packet management
CN109714292A (en) * 2017-10-25 2019-05-03 华为技术有限公司 The method and apparatus of transmitting message
CN110505161A (en) * 2019-09-24 2019-11-26 杭州迪普科技股份有限公司 A kind of message processing method and equipment


Also Published As

Publication number Publication date
CN111131078B (en) 2022-09-16


Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant