CN116032861A - Message processing method and device

- Publication number: CN116032861A
- Application number: CN202310028982.4A
- Authority: CN (China)
- Prior art keywords: packet, receiving, descriptor, memory, information
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The application provides a message processing method and device. The method includes: when the information receiving and transmitting module receives a packet, if the packet receiving queue is in an uncongested state, the information receiving and transmitting module writes the message header of the packet into a first target address segment and writes the message body of the packet into a second target address segment; the information receiving and transmitting module writes first address information and second address information into a first target packet receiving descriptor; the information receiving and transmitting module marks the first target packet receiving descriptor as an occupied state; after processing, in a preset order, the message headers corresponding to the packet receiving descriptors in the occupied state, the processor updates the packet sending queue based on the first address information and second address information corresponding to each packet receiving descriptor so as to complete message forwarding. The latency of the processor accessing the first memory is lower than the latency of the processor accessing the second memory, so the access delay of the processor can be reduced and the overall forwarding efficiency improved.
Description
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method and an apparatus for processing a message.
Background
With the rapid development of communication technology, the demands placed on receiving, processing, and forwarding data messages keep growing. Communication networks are expected to offer ever higher speeds, giving users a fast upload and download experience, and they carry increasingly rich services. This inevitably generates large volumes of data traffic and places higher requirements and challenges on the forwarding performance of the network.
Therefore, how to improve the network forwarding performance becomes a problem of concern to those skilled in the art.
Disclosure of Invention
An objective of the present application is to provide a message processing method and apparatus, so as to at least partially address the above problems.
In order to achieve the above purpose, the technical solution adopted in the embodiment of the present application is as follows:
in a first aspect, an embodiment of the present application provides a packet processing method, which is applied to a packet processing device, where the packet processing device includes an information transceiver module, a processor, a first memory, and a second memory, where a packet receiving queue is provided in the second memory, where the packet receiving queue includes a preconfigured packet receiving descriptor, and the method includes:
when the information receiving and transmitting module receives a packet, if the packet receiving queue is in an uncongested state, the information receiving and transmitting module writes a message header of the packet into a first target address segment, and writes a message body of the packet into a second target address segment, wherein the first target address segment belongs to the first memory, and the second target address segment belongs to the second memory;
the information receiving and transmitting module writes first address information and second address information into a first target packet receiving descriptor, wherein the first target packet receiving descriptor is any packet receiving descriptor currently in an idle state in a packet receiving queue, the first address information is identification information of a writing address of the message header, and the second address information is identification information of the writing address of the message body;
The information receiving and transmitting module marks the first target packet receiving descriptor as an occupied state;
and after processing the message header corresponding to the packet receiving descriptor in the occupied state according to a preset sequence, the processor updates the packet sending queue based on the first address information and the second address information corresponding to the packet receiving descriptor so as to complete message forwarding.
In a second aspect, an embodiment of the present application provides a packet processing method, which is applied to a packet processing device, where the packet processing device includes an information transceiver module, a processor, a first memory, and a second memory, where a packet sending queue is provided in the second memory, where the packet sending queue includes a pre-configured packet sending descriptor, and the method includes:
after the processor processes the message header of the packet receiving descriptor in the occupied state, writing first address information and second address information corresponding to the packet receiving descriptor into a first target packet sending descriptor;
the first address information initially written into the first target packet sending descriptor belongs to the first memory, the first address information is identification information of a writing address of a message header corresponding to the packet receiving descriptor, the second address information is identification information of a writing address of a message body corresponding to the packet receiving descriptor, and the first target packet sending descriptor is any packet sending descriptor currently in an idle state in a packet sending queue;
The processor adjusts the first target packet sending descriptor to an occupied state;
when the packet sending queue is in a crowded state, the information receiving and transmitting module moves the message headers corresponding to a difference number of second target packet sending descriptors to the second memory, wherein the second target packet sending descriptors are packet sending descriptors which are in an occupied state and are ranked later;
the information receiving and transmitting module updates the first address information in the second target packet sending descriptor, and the first address information in the updated second target packet sending descriptor belongs to the second memory;
and the information receiving and transmitting module performs message forwarding for the packet sending descriptors in the occupied state in the packet sending queue based on a preset order.
In a third aspect, an embodiment of the present application provides a message processing apparatus, the apparatus comprising an information transceiver module, a processor, a first memory, and a second memory, and the apparatus is capable of executing the above message processing method.
Compared with the prior art, the message processing method and device provided by the embodiment of the application comprise the following steps: when the information receiving and transmitting module receives a packet, if the packet receiving queue is in an uncongested state, the information receiving and transmitting module writes a message header of the packet into a first target address segment, and writes a message body of the packet into a second target address segment, wherein the first target address segment belongs to a first memory, and the second target address segment belongs to a second memory; the information receiving and transmitting module writes first address information and second address information into a first target packet receiving descriptor, wherein the first target packet receiving descriptor is any packet receiving descriptor currently in an idle state in a packet receiving queue, the first address information is identification information of a writing address of a message header, and the second address information is identification information of the writing address of a message body; the information receiving and transmitting module marks the first target packet receiving descriptor as an occupied state; after processing the message header corresponding to the packet receiving descriptor in the occupied state according to the preset sequence, the processor updates the packet sending queue based on the first address information and the second address information corresponding to the packet receiving descriptor so as to complete message forwarding. The latency of the processor accessing the first memory is less than the latency of the processor accessing the second memory. The access delay of the processor can be reduced, and the overall forwarding efficiency is further improved.
In order to make the above objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments will be briefly described below, it being understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered limiting in scope, and that other related drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of a message processing apparatus according to an embodiment of the present application;
Fig. 2 is a first schematic flowchart of a message processing method according to an embodiment of the present application;
Fig. 3 is a second schematic flowchart of a message processing method according to an embodiment of the present application;
Fig. 4 is a third schematic flowchart of a message processing method according to an embodiment of the present application;
Fig. 5 is a fourth schematic flowchart of a message processing method according to an embodiment of the present application;
Fig. 6 is a fifth schematic flowchart of a message processing method according to an embodiment of the present application;
Fig. 7 is a sixth schematic flowchart of a message processing method according to an embodiment of the present application.
In the figures: 10 - information transceiver module; 20 - processor; 30 - first memory; 40 - second memory.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, which are generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, as provided in the accompanying drawings, is not intended to limit the scope of the application, as claimed, but is merely representative of selected embodiments of the application. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without making any inventive effort, are intended to be within the scope of the present application.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further definition or explanation thereof is necessary in the following figures. Meanwhile, in the description of the present application, the terms "first", "second", and the like are used only to distinguish the description, and are not to be construed as indicating or implying relative importance.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
In the description of the present application, it should be noted that, the terms "upper," "lower," "inner," "outer," and the like indicate an orientation or a positional relationship based on the orientation or the positional relationship shown in the drawings, or an orientation or a positional relationship conventionally put in use of the product of the application, merely for convenience of description and simplification of the description, and do not indicate or imply that the apparatus or element to be referred to must have a specific orientation, be configured and operated in a specific orientation, and therefore should not be construed as limiting the present application.
In the description of the present application, it should also be noted that, unless explicitly specified and limited otherwise, the terms "disposed," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the terms in this application will be understood by those of ordinary skill in the art in a specific context.
Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The following embodiments and features of the embodiments may be combined with each other without conflict.
The embodiment of the application provides a message processing device, such as a router device or an intelligent network card device. Referring to fig. 1, a schematic structure of a message processing apparatus is shown. The message processing device comprises an information transceiver module 10, a processor 20, a first memory 30 and a second memory 40, wherein the processor 20 is respectively in communication connection with the information transceiver module 10, the first memory 30 and the second memory 40, and the information transceiver module 10 is respectively in communication connection with the first memory 30 and the second memory 40. Optionally, the information transceiver module 10, the processor 20, the first memory 30 and the second memory 40 are connected by a bus. The first memory 30 is, for example, a Static Random Access Memory (SRAM), the second memory 40 is, for example, a dynamic random access memory (also referred to as a memory or DRAM), and the second memory 40 may specifically be a DDR. The latency of the processor 20 accessing the first memory 30 is less than the latency of the processor 20 accessing the second memory 40.
The processor 20 may be a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), etc.; but also digital signal processors (Digital Signal Processor, DSP for short), application specific integrated circuits (Application Specific Integrated Circuit, ASIC for short), field-programmable gate arrays (Field-Programmable Gate Array, FPGA for short) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
The information transceiver module 10 may be a hardware device such as a Field programmable gate array (Field-Programmable Gate Array, FPGA) or a network card. It should be appreciated that the information transceiver module 10 may directly access the first memory 30 and the second memory 40.
In an optional scenario, the workflow of the message processing apparatus provided in the embodiment of the present application is as follows.
The first step, the information transceiver module 10 completes the receiving and uploading of the line side message;
the second step, the information receiving and transmitting module 10 stores the message into DDR;
third, the processor 20 perceives the message reception through software;
fourth, the processor 20 parses the message through software;
fifth, the processor 20 performs service lookup by software;
Sixth, the processor 20 performs service processing (such as stream learning, statistics, deep parsing, security policy, encryption and decryption, etc.) through software;
seventh, the processor 20 makes forwarding decisions (e.g. determines the port to forward) by software;
eighth, the processor 20 edits the message (e.g. edits the header) by software;
and ninth, the processor 20 issues the processed message through software, and message sending is completed through the information transceiver module 10 (a sketch of this loop is given below).
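The nine steps above form the conventional soft-forwarding loop that the remainder of this description sets out to accelerate. A minimal C sketch of the CPU-side portion (steps 3 to 9) follows; the struct layout and every function name (parse_packet, lookup_service, and so on) are illustrative placeholders rather than interfaces defined by this application.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical per-packet context; field names are illustrative only. */
struct pkt {
    uint8_t *data;      /* packet stored in DDR by the transceiver module */
    size_t   len;
    int      out_port;
};

/* Placeholder hooks for steps 4-9; real implementations would do the table work. */
static bool parse_packet(struct pkt *p)        { return p && p->len > 0; }
static bool lookup_service(struct pkt *p)      { (void)p; return true; }
static void apply_services(struct pkt *p)      { (void)p; }  /* flow learning, statistics, ... */
static int  forwarding_decision(struct pkt *p) { (void)p; return 0; }
static void edit_header(struct pkt *p)         { (void)p; }
static void hand_to_transceiver(struct pkt *p) { (void)p; }  /* step 9: hardware sends it */

/* Steps 3-9: the CPU side of the conventional soft-forwarding flow. */
void soft_forward(struct pkt *p)
{
    if (!parse_packet(p))                  /* step 4: parse */
        return;
    if (!lookup_service(p))                /* step 5: service lookup */
        return;
    apply_services(p);                     /* step 6: service processing */
    p->out_port = forwarding_decision(p);  /* step 7: forwarding decision */
    edit_header(p);                        /* step 8: header editing */
    hand_to_transceiver(p);                /* step 9: send via the transceiver module */
}
```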
In the above flow, after a message is received it must be identified based on its header information, and forwarding is then done through a software table lookup. The header is typically stored in a memory device such as DDR, which is one type of DRAM. The processor 20 (CPU) incurs a delay when accessing DRAM, and if the message rate to be processed is high, for example at the Mpps level, this CPU access delay becomes a key factor limiting forwarding performance.
In an optional scenario, in order to improve forwarding efficiency, a part of the critical services may be offloaded to hardware (the information transceiver module 10) for execution, where the information transceiver module 10 completes parsing, table lookup, message editing, and other tasks.
However, router service specifications are large, and the hardware table entries are usually stored in SRAM, resulting in a higher hardware cost. Moreover, this approach can only offload part of the traffic: for example, the hardware tables of an access router are usually below the MB (megabyte) level, while a large number of service table entries remain in DDR, where the tables can easily reach the M or even G level, so most of the traffic can still only be sent to the CPU for processing, and the overall forwarding efficiency remains low.
In order to overcome the above problems, the embodiments of the present application provide a message processing method based on a two-level packet forwarding architecture that uses the first memory 30 as the front stage and the second memory 40 as the back stage, in cooperation with a dedicated CPU core that distributes packet receiving or packet sending tasks, so as to effectively improve the overall performance of soft forwarding.
In this embodiment of the present application, a packet receiving queue and an packet sending queue are set in the second memory 40, where the packet receiving queue includes a preconfigured packet receiving descriptor, and the packet sending queue includes a preconfigured packet sending descriptor.
Alternatively, a queue space may be provided in the first memory 30, which may be a circular queue (Ring Buffer) for storing a header (header) of a portion of the packet.
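One plausible organization for that queue space is a circular queue of fixed-size header slots; the sketch below makes that assumption explicit. The slot size, slot count, and helper names are illustrative and are not prescribed by the application.

```c
#include <stdint.h>
#include <stddef.h>

#define HDR_SLOT_SIZE 256u   /* illustrative: room for one header plus headroom */
#define HDR_SLOT_CNT  1024u  /* illustrative slot count */

/* Ring buffer of header slots carved out of the first memory (SRAM). */
struct hdr_ring {
    uint8_t  *base;   /* start of the SRAM region reserved for headers */
    uint32_t  head;   /* next slot to hand out */
    uint32_t  tail;   /* oldest slot still in use */
};

/* Returns the address of a free sub-segment, or NULL when the ring is full. */
static uint8_t *hdr_ring_alloc(struct hdr_ring *r)
{
    if (r->head - r->tail >= HDR_SLOT_CNT)
        return NULL;                                         /* no free sub-segment */
    return r->base + (r->head++ % HDR_SLOT_CNT) * HDR_SLOT_SIZE;
}

/* Called when the oldest slot's packet has been forwarded and its space is returned. */
static void hdr_ring_free_oldest(struct hdr_ring *r)
{
    if (r->tail != r->head)
        r->tail++;
}
```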
It should be understood that the structure shown in fig. 1 is only a schematic structural diagram of a portion of a message processing apparatus, and that the message processing apparatus may further include more or fewer components than those shown in fig. 1, or have a different configuration than that shown in fig. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
The method for processing a message provided in the embodiment of the present application may be, but is not limited to, applied to the message processing apparatus shown in fig. 1, and referring to fig. 2, the method for processing a message includes: s115, S117, S118, and S211 are specifically described below.
S115, when the information receiving and transmitting module receives a packet, if the packet receiving queue is in an uncongested state, the information receiving and transmitting module writes the message header of the packet into the first target address segment and writes the message body of the packet into the second target address segment.
The first target address segment belongs to the first memory, and the second target address segment belongs to the second memory.
It should be noted that, when the processor 20 performs service processing, only the header in the packet needs to be edited. The information transceiver module 10 writes the header of the packet into a first destination address field in the first memory 30 and the body of the packet into a second destination address field in the second memory 40. The processor 20 may access the first memory 30 directly while processing, without accessing the second memory 40, because the latency of the processor 20 accessing the first memory 30 is less than the latency of the processor 20 accessing the second memory 40. Therefore, the access delay of the processor 20 can be reduced through S115, so as to improve the overall forwarding efficiency.
In one possible scenario, when the packet is short, only the corresponding message header needs to be stored; in this case the second target address segment may be configured as empty and no message body is stored.
S117, the information receiving and transmitting module writes the first address information and the second address information into the first target packet receiving descriptor.
The first target packet receiving descriptor is any one of the packet receiving descriptors currently in an idle state in the packet receiving queue, the first address information is identification information of a writing address of a message header, and the second address information is identification information of the writing address of a message body.
Optionally, a certain number of packet receiving descriptors are preconfigured in the packet receiving queue, and each packet receiving descriptor is provided with a first field and a second field, where the first field is used for filling in the first address information and the second field is used for filling in the second address information. In this way, the processor 20 can obtain the message header corresponding to a packet receiving descriptor through the first address information when performing service processing, and the second address information is used to acquire the message body at forwarding time.
Alternatively, the first address information may be a start address of a storage address segment for storing the header, and the second address information may be a start address of a storage address segment for storing the body of the message. The first address information written belongs to the first memory 30 in case the packet receiving queue is in an uncongested state, and the first address information written may belong to the second memory 40 in case the packet receiving queue is in a congested state, see in particular below. Because the processor 20 does not need to use the message body during the traffic processing, the written second address information belongs to the second memory 40 no matter what state the packet receiving queue is in.
Alternatively, the information transceiver module 10 may access the packet receiving queue and thereby learn the current state of each packet receiving descriptor in it. Optionally, after determining the packet receiving descriptor that was most recently switched to the occupied state in the packet receiving queue, the information transceiver module 10 determines the first packet receiving descriptor in the idle state arranged after it as the first target packet receiving descriptor.
Optionally, each packet receiving descriptor is further provided with a third field for storing an offset, i.e. the distance between the actual write address of the message header and the start address of the storage address segment that stores the header. It will be appreciated that the length of the header may change, e.g. become longer or shorter, when the header is edited by the processor 20, so a certain amount of space, i.e. the offset, needs to be reserved in the storage address segment where the header is stored.
By writing the offset into the first target packet reception descriptor, the CPU can conveniently and quickly determine the beginning part of the message header during processing.
Optionally, the first offset represents an offset between an actual write address of the header and the first start address; the second offset represents an offset between the actual write address of the processed header and the first start address.
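Putting S117, S118, and the offset field together, a packet receiving descriptor could be laid out roughly as in the sketch below. The field widths and the single state flag are assumptions for illustration; the actual descriptor format is not fixed at this level of detail.

```c
#include <stdint.h>

enum desc_state { DESC_IDLE = 0, DESC_OCCUPIED = 1 };

/* Sketch of one packet receiving descriptor in the receive queue (held in the second memory). */
struct rx_desc {
    uint64_t hdr_addr;    /* first field: identification of the header write address
                             (first memory when uncongested, second memory when congested) */
    uint64_t body_addr;   /* second field: identification of the body write address
                             (always the second memory; 0 here when the packet has no body) */
    uint16_t hdr_offset;  /* third field: offset of the actual header start from the start
                             of its storage segment (the reserved headroom)                 */
    uint16_t state;       /* DESC_IDLE or DESC_OCCUPIED                                      */
};

/* S117/S118: the transceiver module fills a free descriptor and marks it occupied. */
static void rx_desc_fill(struct rx_desc *d,
                         uint64_t hdr_addr, uint64_t body_addr, uint16_t headroom)
{
    d->hdr_addr   = hdr_addr;
    d->body_addr  = body_addr;
    d->hdr_offset = headroom;
    d->state      = DESC_OCCUPIED;   /* S118: visible to the processor from now on */
}
```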
S118, the information transceiver module marks the first target packet receiving descriptor as an occupied state.
Optionally, the purpose of marking the first target packet reception descriptor as an occupied state is to facilitate informing the processor 20: the information stored in the address field in the first target packet reception descriptor needs to be subjected to packet reception processing.
S211, after processing the message header corresponding to the packet receiving descriptor in the occupied state according to the preset sequence, the processor updates the packet sending queue based on the first address information and the second address information corresponding to the packet receiving descriptor so as to complete message forwarding.
Optionally, the information transceiver module 10 may send a first type trigger instruction to the processor 20 after updating the state of the batch of packet descriptors. The processor 20 (CPU) may determine, when a trigger instruction of the first type is received or according to a cycle, whether there is a packet receiving descriptor of an occupied state in the packet receiving queue, and if so, perform processing.
It should be appreciated that the processor 20 may access the first memory 30 directly while processing, and that no access to the second memory 40 is required, as the latency of the processor 20 accessing the first memory 30 is less than the latency of the processor 20 accessing the second memory 40. The access delay of the processor 20 can be reduced, and the overall forwarding efficiency can be further improved.
Optionally, updating the packet sending queue based on the first address information, the second address information and the second offset corresponding to the packet receiving descriptor to complete the message forwarding.
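A minimal sketch of the processor side of S211 is shown below: the receive queue is scanned in the preset order, every occupied descriptor is processed, and its address information is handed to the send queue. The in-order scan and the helper names (process_header, enqueue_for_tx) are assumptions, not part of the claimed method.

```c
#include <stdint.h>
#include <stddef.h>

enum { DESC_IDLE = 0, DESC_OCCUPIED = 1 };

struct rx_desc {                       /* same fields as sketched earlier */
    uint64_t hdr_addr, body_addr;
    uint16_t hdr_offset, state;
};

/* Placeholder hooks; real implementations would edit the header and fill a TX descriptor. */
static uint16_t process_header(uint64_t hdr, uint16_t off) { (void)hdr; return off; }
static void enqueue_for_tx(uint64_t hdr, uint64_t body, uint16_t off)
{ (void)hdr; (void)body; (void)off; }

/* S211: walk the receive queue in the preset order and service occupied descriptors. */
void rx_poll(struct rx_desc *ring, size_t n, size_t *next)
{
    while (ring[*next].state == DESC_OCCUPIED) {
        struct rx_desc *d = &ring[*next];

        /* The header normally sits in the first memory (SRAM), so this access is cheap. */
        uint16_t new_off = process_header(d->hdr_addr, d->hdr_offset);

        /* Hand the same address information (plus the updated offset) to the send queue. */
        enqueue_for_tx(d->hdr_addr, d->body_addr, new_off);

        d->state = DESC_IDLE;          /* return the descriptor to the idle state */
        *next = (*next + 1) % n;       /* preset (in-order) processing */
    }
}
```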
In summary, the embodiment of the present application provides a method for processing a message, including: when the information receiving and transmitting module receives a packet, if the packet receiving queue is in an uncongested state, the information receiving and transmitting module writes a message header of the packet into a first target address segment, and writes a message body of the packet into a second target address segment, wherein the first target address segment belongs to a first memory, and the second target address segment belongs to a second memory; the information receiving and transmitting module writes first address information and second address information into a first target packet receiving descriptor, wherein the first target packet receiving descriptor is any packet receiving descriptor currently in an idle state in a packet receiving queue, the first address information is identification information of a writing address of a message header, and the second address information is identification information of the writing address of a message body; the information receiving and transmitting module marks the first target packet receiving descriptor as an occupied state; after processing the message header corresponding to the packet receiving descriptor in the occupied state according to the preset sequence, the processor updates the packet sending queue based on the first address information and the second address information corresponding to the packet receiving descriptor so as to complete message forwarding. The latency of the processor accessing the first memory is less than the latency of the processor accessing the second memory. The access delay of the processor can be reduced, and the overall forwarding efficiency is further improved.
Optionally, with respect to how writing of the message information is completed when the message is received, an embodiment of the present application further provides an optional implementation, referring to fig. 3, and the message processing method further includes: s111, S112, S113, S114, and S116 are specifically described below.
S111, when the information receiving and transmitting module receives the packet, the information receiving and transmitting module determines whether the packet receiving queue is in a crowded state. If yes, executing S113; if not, S112 is performed.
Optionally, a status flag may be set on the packet receiving queue, to indicate whether the packet receiving queue is in a congestion state, and the information transceiver module 10 may determine whether the packet receiving queue is in a congestion state by reading the status flag, without repeatedly calculating the threshold comparison.
If the queue is in a congested state, a large amount of header information is already stored in the first memory 30 (SRAM); to avoid excessively occupying the resources of the first memory 30, the header needs to be stored in the second memory 40 (memory, DRAM), so S113 is executed. If the queue is in an uncongested state, then in order to reduce the access delay of the processor 20 during service processing, the message header may be stored directly in the first memory 30, i.e. S112 is executed.
S112, the information receiving and transmitting module determines a first target address segment in the first memory as a writing address of the message head.
Optionally, an address manager is further provided in the packet processing device, where the address manager may query the packet receiving queue and the packet sending queue, determine whether the occupied address is returned based on the states of the packet receiving descriptor and the packet sending descriptor in the packet receiving queue and the packet sending queue, and further determine the free address segment in the first memory 30, and use the free address segment as the first target address.
S113, the information receiving and transmitting module determines a third target address segment in the second memory as a writing address of the message head.
Alternatively, the address manager may query the packet receiving queue and the packet sending queue, determine whether the occupied address is returned based on the states of the packet receiving descriptor and the packet sending descriptor therein, and further determine the free address segment in the second memory 40 as the third target address.
S114, the information receiving and transmitting module determines a second target address segment in the second memory as a writing address of the message body.
Alternatively, the address manager may send the corresponding first, second and third destination address segments to the information transceiver module 10 upon receiving an address distribution request corresponding thereto.
Step S114 may be performed directly when the packet is received; it may be performed before step S112 or step S113, or in parallel with them, and the order of execution is not limited. After S112 and S114 are performed, S115 is performed; after S113 and S114 are performed, S116 is performed.
S116, the information receiving and transmitting module writes the message header of the message packet into a third target address segment, and writes the message body of the message packet into a second target address segment.
It should be appreciated that, when the packet receiving queue is in a congested state, the information transceiver module 10 writes the message header of the packet into the third target address segment and the message body into the second target address segment, so as to avoid taking up too much space in the first memory 30.
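The branch described in S111 to S114 reduces to choosing which memory receives the message header. The sketch below assumes the queue exposes a congestion flag and that a hypothetical address_manager_alloc helper hands out free segments from either memory.

```c
#include <stdbool.h>
#include <stdint.h>

enum mem_kind { MEM_FIRST = 0 /* SRAM */, MEM_SECOND = 1 /* DRAM / DDR */ };

/* Hypothetical address-manager hook: returns a free segment in the requested memory. */
static uint64_t address_manager_alloc(enum mem_kind where) { (void)where; return 0; }

/* S111-S114: pick the write addresses for an incoming packet. */
static void pick_write_addresses(bool rx_queue_congested,
                                 uint64_t *hdr_addr, uint64_t *body_addr)
{
    if (!rx_queue_congested)
        *hdr_addr = address_manager_alloc(MEM_FIRST);   /* S112: header goes to the first memory  */
    else
        *hdr_addr = address_manager_alloc(MEM_SECOND);  /* S113: header goes to the second memory */

    /* S114: the body always goes to the second memory, whatever the queue state. */
    *body_addr = address_manager_alloc(MEM_SECOND);
}
```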
The embodiment of the present application further provides an optional implementation manner regarding how to further reduce the delay when the packet receiving queue is in a congestion state, referring to fig. 4, where the packet receiving queue is in a congestion state, the method further includes: s121 and S122 are specifically described below.
S121, if the processor updates the packet receiving descriptor in the occupied state to the idle state, the information receiving and transmitting module moves the message header corresponding to the second target packet receiving descriptor from the second memory to the first memory.
The second target packet receiving descriptor is a packet receiving descriptor whose corresponding message header is stored in the second memory.
Optionally, the processor 20 updates the packet receiving descriptor in the occupied state to the idle state, that is, the processor 20 returns any packet receiving descriptor when the packet receiving queue is in the crowded state, and at this time, the header corresponding to the second target packet receiving descriptor may be moved from the second memory 40 to the first memory 30. Specifically, the movement destination address may be obtained by the information transceiver module 10 requesting the address manager.
It should be understood that, after the move, the address originally occupied by the message header corresponding to the second target packet receiving descriptor in the second memory 40 is released and returned to the idle state.
S122, the information receiving and transmitting module updates the first address information in the second target packet receiving descriptor.
Optionally, the first address information in the second target packet reception descriptor is changed from the identification information of the original occupied address to the identification information of the moving target address.
It should be understood that, as the processor 20 keeps updating packet receiving descriptors from the occupied state to the idle state, the information transceiver module 10 will gradually move all the message headers corresponding to second target packet receiving descriptors from the second memory 40 to the first memory 30. Therefore, when the processor 20 performs service processing, it always accesses the first memory 30, which reduces time delay and improves overall forwarding efficiency.
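A sketch of S121 and S122, under the assumption that the move is an ordinary copy followed by a descriptor update; the sram_alloc and dram_free helpers stand in for the address manager and are not real APIs, and pointer fields are used here for brevity.

```c
#include <stdint.h>
#include <string.h>

struct rx_desc {
    uint8_t *hdr_addr;     /* currently points into the second memory (DRAM) */
    uint16_t hdr_len;
    uint16_t hdr_offset;
};

/* Hypothetical address-manager hooks; the SRAM one just hands back a scratch area. */
static uint8_t *sram_alloc(size_t len) { static uint8_t slab[2048]; (void)len; return slab; }
static void     dram_free(uint8_t *p)  { (void)p; }

/* S121/S122: once the processor has freed a descriptor, pull one backlogged header
 * out of the second memory into the first memory and repoint the second target
 * packet receiving descriptor at the new location. */
static void migrate_header_to_sram(struct rx_desc *d)
{
    size_t   span = (size_t)d->hdr_len + d->hdr_offset;
    uint8_t *dst  = sram_alloc(span);
    uint8_t *old  = d->hdr_addr;

    memcpy(dst, old, span);   /* move the header together with its headroom        */
    d->hdr_addr = dst;        /* S122: update the first address information        */
    dram_free(old);           /* return the segment in the second memory           */
}
```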
It should be understood that, at the initial time, the packet receiving queue is in an uncongested state. Regarding how the state of the packet receiving queue is switched, a possible implementation manner is further provided in the embodiments of the present application; referring to fig. 5, the message processing method further includes S131 and S132, which are specifically described below.
S131, when the packet receiving queue is in an uncongested state, the information receiving and transmitting module marks the packet receiving queue as a crowded state when the monitoring amount is larger than a preset first threshold value.
The monitoring amount is the number of packet receiving descriptors in the occupied state in the packet receiving queue, or the number of message headers stored in the first memory, or the number of storage units occupied by the message packets. The first threshold may be a global configuration or a per-queue configuration.
It should be appreciated that the header is written to the first memory 30 with the packet receive queue in an uncongested state.
S132, when the packet receiving queue is in a crowded state, the information receiving and transmitting module marks the packet receiving queue as a non-crowded state when the message header corresponding to the packet receiving descriptor is not stored in the second memory.
Optionally, when the packet receiving queue is in a congestion state, the newly received packet headers are written into the second memory 40, and after the successive movement in S121, when the packet header corresponding to the packet receiving descriptor is not stored in the second memory 40, the packet receiving queue is marked as an uncongested state.
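The switching of S131 and S132 amounts to a simple hysteresis on the monitoring amount. The sketch below assumes the status flag lives next to the queue and that the monitoring amount is tracked as a pair of counters; the field names and the choice of counters are illustrative.

```c
#include <stdbool.h>
#include <stdint.h>

struct rx_queue_state {
    bool     congested;        /* status flag read by the transceiver module in S111      */
    uint32_t hdrs_in_sram;     /* monitoring amount: headers stored in the first memory   */
    uint32_t hdrs_in_dram;     /* headers that had to spill into the second memory        */
    uint32_t first_threshold;  /* preset first threshold (global or per queue)            */
};

/* S131: while uncongested, mark congested once the monitoring amount exceeds the threshold. */
static void rx_mark_congested_if_needed(struct rx_queue_state *q)
{
    if (!q->congested && q->hdrs_in_sram > q->first_threshold)
        q->congested = true;
}

/* S132: while congested, mark uncongested once no header remains in the second memory. */
static void rx_mark_uncongested_if_drained(struct rx_queue_state *q)
{
    if (q->congested && q->hdrs_in_dram == 0)
        q->congested = false;
}
```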
It should be noted that, in the embodiment of the present application, the message body may be stored in a continuous space or in a multi-segment space (a link-list of addresses) in the second memory 40. When the message body is stored in a multi-segment space, the second address information corresponding to the packet receiving descriptor may include a plurality of pieces of sub-information, each piece of sub-information identifying one segment of space.
Optionally, corresponding Ring Buffers are provided in the first memory 30 and the second memory 40 for each packet receiving queue to store message headers. A Ring Buffer may be divided into a predetermined number of sub-segments; a free sub-segment in the first memory 30 may be used as the first target address segment and a free sub-segment in the second memory 40 may be used as the third target address segment. In the embodiment of the application, the message header and the message body are stored separately. Each sub-segment is provided with a reserved space (Headroom), i.e. the offset, and when a message header is written on packet reception, writing starts after the headroom.
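Concretely, if every sub-segment reserves a fixed Headroom, the write position and the offset recorded in the descriptor's third field can be derived as in this sketch; the SLOT_SIZE and HEADROOM values are assumptions.

```c
#include <stdint.h>
#include <string.h>

#define SLOT_SIZE 256u   /* illustrative sub-segment size                                   */
#define HEADROOM   64u   /* illustrative reserved space so the header may grow when edited  */

/* Write an incoming header into a sub-segment, leaving the reserved space in front.
 * Returns the offset that S117 would record in the descriptor's third field,
 * or 0xFFFF if the header does not fit in this simplified sketch. */
static uint16_t write_header_with_headroom(uint8_t *slot_base,
                                           const uint8_t *hdr, uint16_t hdr_len)
{
    if (HEADROOM + hdr_len > SLOT_SIZE)
        return 0xFFFF;                           /* too long for one sub-segment   */
    memcpy(slot_base + HEADROOM, hdr, hdr_len);  /* header starts after the headroom */
    return (uint16_t)HEADROOM;
}
```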
Optionally, the embodiment of the present application further provides a message processing method, referring to fig. 6, the message processing method further includes: s221, S222, S143, S144, and S146 are specifically described below.
S221, after processing the message header of the packet receiving descriptor in the occupied state, the processor writes the first address information and the second address information corresponding to the packet receiving descriptor into the first target packet sending descriptor.
The first address information in the first target packet sending descriptor is the identification information of the writing address of the message header corresponding to the packet receiving descriptor, the second address information is the identification information of the writing address of the message body corresponding to the packet receiving descriptor, and the first target packet sending descriptor is any one of the packet sending descriptors currently in an idle state in the packet sending queue.
Alternatively, the processor 20 may access the packet sending queue and thereby learn the current state of each packet sending descriptor in it. Optionally, after determining the packet sending descriptor that was most recently switched to the occupied state in the packet sending queue, the processor 20 determines the first packet sending descriptor in the idle state arranged after it as the first target packet sending descriptor.
Optionally, the packet sending descriptor is also provided with a first field, a second field and a third field, which function in the same way as the first field, the second field and the third field in the packet receiving descriptor.
It should be appreciated that the first address information originally written by the processor 20 in the first field of the first targeted packet descriptor belongs to the first memory. I.e. the initial header is stored in the first memory 30.
In one possible implementation, the processor 20 also writes the second offset into a third field of the first targeted packet descriptor.
Optionally, the processor 20 also writes a forwarding decision (e.g. the forwarding port) into the fourth field of the first target packet sending descriptor.
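Mirroring the receive side, the packet sending descriptor filled in S221 (and switched to the occupied state in S222 below) could carry the same address fields plus the forwarding decision; the field widths in this sketch are chosen arbitrarily.

```c
#include <stdint.h>

enum desc_state { DESC_IDLE = 0, DESC_OCCUPIED = 1 };

/* Sketch of one packet sending descriptor in the send queue (held in the second memory). */
struct tx_desc {
    uint64_t hdr_addr;    /* first field: header address (initially in the first memory)        */
    uint64_t body_addr;   /* second field: body address (in the second memory)                   */
    uint16_t hdr_offset;  /* third field: the second offset after the processor edited the header */
    uint16_t out_port;    /* fourth field: forwarding decision (e.g. egress port)                 */
    uint16_t state;       /* DESC_IDLE or DESC_OCCUPIED                                           */
};

/* S221/S222: the processor copies the addresses over and hands the descriptor to hardware. */
static void tx_desc_fill(struct tx_desc *d, uint64_t hdr_addr, uint64_t body_addr,
                         uint16_t offset, uint16_t out_port)
{
    d->hdr_addr   = hdr_addr;
    d->body_addr  = body_addr;
    d->hdr_offset = offset;
    d->out_port   = out_port;
    d->state      = DESC_OCCUPIED;   /* S222: now visible to the transceiver module */
}
```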
S222, the processor adjusts the first target packet sending descriptor to be in an occupied state.
Optionally, the purpose of adjusting the first target packet sending descriptor to the occupied state is to facilitate informing the information transceiver module 10: the information stored in the address field in the first target packet descriptor needs to be subjected to packet forwarding processing.
It should be understood that the information transceiver module 10 performs packet sending processing on the packet sending descriptors in the occupied state according to their arrangement order in the packet sending queue. The packet sending speed of the information transceiver module 10 may be lower than the speed at which the processor 20 enqueues packets, which may leave the packet sending queue in a congested state. If this state is ignored, the space in the first memory 30 may become excessively occupied, newly received messages cannot be written into the first memory 30, and the processor 20 cannot process subsequent messages. To overcome this problem, S143 needs to be performed.
S143, when the packet sending queue is in a congested state, the information receiving and transmitting module moves the message headers corresponding to a difference number of second target packet sending descriptors to the second memory.
It will be appreciated that, by moving the message headers corresponding to the difference number of second target packet sending descriptors from the first memory 30 to the second memory 40, space in the first memory 30 is freed up for receiving new message information, so that the processor 20 can continue processing subsequent messages.
Optionally, regarding under what conditions the information transceiver module 10 determines that the packet sending queue is in a congested state, the embodiment of the present application further provides a possible implementation manner: the information transceiver module 10 may query the packet sending queue according to a predetermined period and determine its state from the polled entries.
It should be understood that continuous polling occupies the hardware resources of the information transceiver module 10. To avoid this, the embodiments of the present application further provide a possible implementation: after the processor 20 executes S222 to update the packet sending descriptor, it sends a second type trigger instruction to the information transceiver module 10, so that the information transceiver module 10 determines whether the packet sending queue is in a congested state.
In one possible scenario, the second type trigger instruction also carries the number of packet sending descriptors updated this time; the information transceiver module 10 may then determine the difference number based on this update count and the number of packet sending descriptors that remain unsent, and move the message headers of the last difference-number descriptors in the ordering to the second memory 40.
It should be understood that the destination address of the move may be a differential number of destination addresses allocated by the address manager, as the information transceiver module 10 makes a request to the address manager.
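One plausible reading of S143, together with the counts carried by the second type trigger instruction, is sketched below: the backlog in excess of a drainable amount is treated as the difference number, and the later-ranked occupied descriptors have their headers spilled to the second memory. Both the arithmetic and the spill_tx_header_to_dram helper are assumptions, not the only way to realize the step.

```c
#include <stdint.h>

/* Hypothetical hook: move the header referenced by TX descriptor index i from the
 * first memory to the second memory and update its first address information (S144). */
static void spill_tx_header_to_dram(uint32_t desc_index) { (void)desc_index; }

/* S143, one plausible reading: if the backlog of occupied send descriptors exceeds
 * what the hardware is expected to drain (second_threshold), spill the headers of the
 * last 'diff' descriptors in the ordering so the first memory is not exhausted. */
static void tx_handle_congestion(uint32_t newly_updated, uint32_t still_unsent,
                                 uint32_t second_threshold,
                                 uint32_t oldest_index, uint32_t ring_size)
{
    uint32_t backlog = newly_updated + still_unsent;
    if (backlog <= second_threshold)
        return;                                   /* send queue not congested */

    uint32_t diff = backlog - second_threshold;   /* the "difference number"  */

    /* Spill the later-ranked occupied descriptors (the tail of the backlog). */
    for (uint32_t k = 0; k < diff; k++) {
        uint32_t idx = (oldest_index + backlog - 1u - k) % ring_size;
        spill_tx_header_to_dram(idx);
    }
}
```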
S144, the information receiving and transmitting module updates the first address information in the second target packet sending descriptor, and the updated first address information in the second target packet sending descriptor belongs to the second memory.
The second target packet sending descriptor is a packet sending descriptor which is in an occupied state and is ranked later.
It should be understood that, in order to keep the packet content consistent, the first address information in the second target packet sending descriptor needs to be updated: the original first address information in the first field is replaced by the newly allocated destination address, so that the first address information in the updated second target packet sending descriptor belongs to the second memory.
Alternatively, after the update is completed, the space of the first memory 30 may be released, the address space that is originally occupied becomes a free available state, and the address manager may allocate it again. For example, for receiving new message information so that the processor 20 can continue processing subsequent messages.
S146, the information receiving and transmitting module performs message forwarding, based on the preset order, for the packet sending descriptors in the occupied state in the packet sending queue.
Optionally, after the message forwarding is completed, the space occupied in the corresponding first memory 30 or second memory 40 may be released; the originally occupied address space becomes available again and can be re-allocated by the address manager.
On the basis of fig. 6, the embodiment of the present application further provides an alternative implementation, referring to fig. 7, the message processing method further includes: s141, S142, and S145 are specifically described below.
S141, the information transceiver module determines whether the number of third target packet sending descriptors in the packet sending queue is larger than a preset second threshold. If yes, executing S142; if not, S145 is performed.
The third target packet sending descriptor is a packet sending descriptor which is in an occupied state and whose corresponding message header is stored in the first memory. The second threshold may be a global configuration, a per-queue configuration, or simply a fixed value.
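S141 can be read as counting, among the occupied packet sending descriptors, those whose message header still resides in the first memory, and comparing that count against the second threshold; a sketch under that reading follows.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

enum { DESC_IDLE = 0, DESC_OCCUPIED = 1 };

struct tx_desc {
    uint16_t state;         /* DESC_IDLE or DESC_OCCUPIED                            */
    bool     hdr_in_sram;   /* true while the header still sits in the first memory  */
};

/* S141/S142/S145: the send queue is congested when too many occupied descriptors
 * still have their header parked in the first memory. */
static bool tx_queue_congested(const struct tx_desc *ring, size_t n,
                               uint32_t second_threshold)
{
    uint32_t third_target_count = 0;
    for (size_t i = 0; i < n; i++)
        if (ring[i].state == DESC_OCCUPIED && ring[i].hdr_in_sram)
            third_target_count++;
    return third_target_count > second_threshold;
}
```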
Alternatively, S141 is not necessarily performed after S222, but S141 may be performed when the information transceiver module 10 receives the second type trigger instruction.
S142, determining that the packet sending queue is in a crowded state.
S145, determining that the packet sending queue is in an uncongested state.
After S142, S143 is performed, and after S145, S146 may be directly performed.
Optionally, S146 is independent of the foregoing steps: the information transceiver module 10 performs message forwarding for the packet sending descriptors in the occupied state in the packet sending queue according to the predetermined order, regardless of whether their message headers have been moved.
It should be understood that, in the message processing method provided in the present application, with the assistance of application-specific integrated circuit (ASIC) hardware, the message header is already in the first memory 30 (SRAM) by the time the processor 20 (CPU) senses that a message has been received, and after the CPU completes filling the packet sending DMA TX descriptor, the hardware automatically releases the SRAM space in which the header is located. The access delay and waiting time of the CPU when processing messages are thus effectively reduced, so that the soft forwarding efficiency is greatly improved and the overall forwarding bandwidth is increased.
A message processing apparatus, such as a router device or an intelligent network card device, is provided below. The message processing device is shown in fig. 1, and can implement the message processing method; specifically, the message processing device comprises: the information transceiver module 10, the processor 20, the first memory 30 and the second memory 40. The processor 20 may be a CPU. The message processing apparatus may execute the message processing method of the above embodiment.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners as well. The apparatus embodiments described above are merely illustrative, for example, flow diagrams and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present application may be integrated together to form a single part, or each module may exist alone, or two or more modules may be integrated to form a single part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the same, but rather, various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application should be included in the protection scope of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Claims (10)
1. The message processing method is characterized by being applied to a message processing device, wherein the message processing device comprises an information receiving and transmitting module, a processor, a first memory and a second memory, a packet receiving queue is arranged in the second memory, the packet receiving queue comprises a preconfigured packet receiving descriptor, and the method comprises the following steps:
when the information receiving and transmitting module receives a packet, if the packet receiving queue is in an uncongested state, the information receiving and transmitting module writes a message header of the packet into a first target address segment, and writes a message body of the packet into a second target address segment, wherein the first target address segment belongs to the first memory, and the second target address segment belongs to the second memory;
The information receiving and transmitting module writes first address information and second address information into a first target packet receiving descriptor, wherein the first target packet receiving descriptor is any packet receiving descriptor currently in an idle state in a packet receiving queue, the first address information is identification information of a writing address of the message header, and the second address information is identification information of the writing address of the message body;
the information receiving and transmitting module marks the first target packet receiving descriptor as an occupied state;
and after processing the message header corresponding to the packet receiving descriptor in the occupied state according to a preset sequence, the processor updates the packet sending queue based on the first address information and the second address information corresponding to the packet receiving descriptor so as to complete message forwarding.
2. The message processing method of claim 1, wherein the method further comprises:
when the information receiving and transmitting module receives a packet, the information receiving and transmitting module determines whether the packet receiving queue is in a crowded state or not;
if the packet receiving queue is in a non-crowded state, the information receiving and transmitting module determines a first target address segment in the first memory as a writing address of the message head;
And the information receiving and transmitting module determines a second target address segment in the second memory as a writing address of the message body.
3. The message processing method according to claim 2, wherein the method further comprises:
if the packet receiving queue is in a crowded state, the information receiving and transmitting module determines a third target address segment in the second memory as a writing address of the message head;
the information receiving and transmitting module determines a second target address segment in the second memory as a writing address of the message body;
the information receiving and transmitting module writes the message header of the message packet into a third target address segment, and writes the message body of the message packet into a second target address segment.
4. The message processing method according to claim 3, wherein in the case where the packet receiving queue is in a congested state, the method further comprises:
if the processor updates a packet receiving descriptor in the occupied state to an idle state, the information receiving and transmitting module moves a message header corresponding to a second target packet receiving descriptor from the second memory to the first memory;
and the information receiving and transmitting module updates the first address information in the second target packet receiving descriptor.
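Claim 4's recovery step could look like the sketch below: once the processor frees descriptors and back-pressure eases, a header parked in the second memory is copied into the first memory and the descriptor's first address information is rewritten. The helper names and the software pools are assumptions for illustration.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

typedef struct {
    void    *hdr_addr;            /* first address information */
    void    *body_addr;
    uint16_t hdr_len, body_len;
    uint8_t  occupied;
} rx_desc_t;

/* Stand-ins for the two memories; a real device would manage fixed address
 * segments rather than these software pools. */
static uint8_t first_mem[64 * 1024];
static size_t  first_off;
static void *alloc_first_mem(size_t n) { void *p = &first_mem[first_off]; first_off += n; return p; }

static int addr_in_first_mem(const void *p)
{
    return (const uint8_t *)p >= first_mem && (const uint8_t *)p < first_mem + sizeof(first_mem);
}

static void free_second_mem(void *p, size_t n) { (void)p; (void)n; /* return the segment to its pool */ }

/* Claim 4: move the header of a second target packet receiving descriptor
 * back into the low-latency first memory and update its first address
 * information, so the processor no longer fetches it from the second memory. */
void rx_migrate_header_to_first_mem(rx_desc_t *d)
{
    if (!d->occupied || addr_in_first_mem(d->hdr_addr))
        return;                                   /* nothing to move */

    void *dst = alloc_first_mem(d->hdr_len);
    memcpy(dst, d->hdr_addr, d->hdr_len);         /* copy header into first memory    */
    free_second_mem(d->hdr_addr, d->hdr_len);     /* release the parked copy          */
    d->hdr_addr = dst;                            /* update first address information */
}
```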
5. The message processing method of claim 1, wherein in the case where the packet receiving queue is in an uncongested state, the method further comprises:
when a monitored quantity is larger than a preset first threshold value, the information receiving and transmitting module marks the packet receiving queue as being in a congested state, wherein the monitored quantity is the number of packet receiving descriptors in the occupied state in the packet receiving queue, or the number of message headers stored in the first memory, or the number of memory units occupied by the packets.
6. The message processing method of claim 5, wherein in the case where the packet receiving queue is in a congested state, the method further comprises:
when no message header corresponding to any packet receiving descriptor is stored in the second memory, the information receiving and transmitting module marks the packet receiving queue as being in an uncongested state.
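Claims 5 and 6 together describe a congestion flag with hysteresis. The sketch below picks the occupied-descriptor count as the monitored quantity; the claims equally allow counting headers stored in the first memory or the memory units they occupy, and the threshold value here is an arbitrary assumption.

```c
#include <stdint.h>

#define RX_CONGESTION_THRESHOLD 192u   /* "preset first threshold" (assumed value) */

typedef struct {
    uint32_t occupied_descs;          /* monitored quantity: occupied packet receiving descriptors */
    uint32_t headers_in_second_mem;   /* headers currently parked in the second memory             */
    int      congested;               /* state of the packet receiving queue                       */
} rx_congestion_state_t;

void rx_update_congestion(rx_congestion_state_t *q)
{
    /* Claim 5: mark the queue congested once the monitored quantity exceeds
     * the preset first threshold. */
    if (!q->congested && q->occupied_descs > RX_CONGESTION_THRESHOLD)
        q->congested = 1;

    /* Claim 6: clear the flag only after no parked header remains in the
     * second memory, i.e. everything has been drained or migrated back. */
    if (q->congested && q->headers_in_second_mem == 0)
        q->congested = 0;
}
```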
7. A message processing method, characterized in that the method is applied to a message processing device, wherein the message processing device comprises an information receiving and transmitting module, a processor, a first memory and a second memory, a packet sending queue is arranged in the second memory, and the packet sending queue comprises preconfigured packet sending descriptors; the method comprises the following steps:
after processing the message header corresponding to a packet receiving descriptor in the occupied state, the processor writes first address information and second address information corresponding to the packet receiving descriptor into a first target packet sending descriptor;
wherein the first address information initially written into the first target packet sending descriptor belongs to the first memory, the first address information is identification information of a writing address of the message header corresponding to the packet receiving descriptor, the second address information is identification information of a writing address of the message body corresponding to the packet receiving descriptor, and the first target packet sending descriptor is any packet sending descriptor currently in an idle state in the packet sending queue;
the processor marks the first target packet sending descriptor as being in an occupied state;
when the packet sending queue is in a congested state, the information receiving and transmitting module moves the message headers corresponding to a difference number of second target packet sending descriptors to the second memory, wherein a second target packet sending descriptor is a packet sending descriptor which is in the occupied state and is ranked later;
the information receiving and transmitting module updates the first address information in each second target packet sending descriptor, wherein the updated first address information in the second target packet sending descriptor belongs to the second memory;
and the information receiving and transmitting module forwards, based on a preset sequence, the messages corresponding to the packet sending descriptors in the occupied state in the packet sending queue.
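On the sending side, claim 7 splits the work between the processor (filling descriptors) and the information receiving and transmitting module (spilling later-ranked headers when the sending queue backs up). The C sketch below is one possible reading; the ring layout, the software memory helpers, and the way congestion is signalled are assumptions carried over from the earlier sketches.

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define TX_RING_SIZE 256

/* Hypothetical packet sending descriptor, mirroring the receive descriptor. */
typedef struct {
    void    *hdr_addr;   /* first address information  */
    void    *body_addr;  /* second address information */
    uint16_t hdr_len, body_len;
    uint8_t  occupied;
} tx_desc_t;

typedef struct {
    tx_desc_t desc[TX_RING_SIZE];  /* packet sending queue, held in the second memory */
    uint32_t  head;                /* oldest occupied descriptor (next to transmit)   */
    uint32_t  tail;                /* where the processor enqueues                    */
} tx_queue_t;

/* Stand-in helpers for the two memories. */
static uint8_t second_mem[1024 * 1024];
static size_t  second_off;
static void *alloc_second_mem(size_t n) { void *p = &second_mem[second_off]; second_off += n; return p; }
static void  free_first_mem(void *p, size_t n) { (void)p; (void)n; /* return segment to the first memory */ }

/* Processor side of claim 7: hand the processed header/body addresses to an
 * idle packet sending descriptor and mark it occupied. */
int tx_enqueue(tx_queue_t *q, void *hdr_addr, uint16_t hdr_len,
               void *body_addr, uint16_t body_len)
{
    tx_desc_t *d = &q->desc[q->tail % TX_RING_SIZE];
    if (d->occupied)
        return -1;                 /* no idle packet sending descriptor             */
    d->hdr_addr  = hdr_addr;       /* initially still points into the first memory */
    d->body_addr = body_addr;
    d->hdr_len   = hdr_len;
    d->body_len  = body_len;
    d->occupied  = 1;
    q->tail++;
    return 0;
}

/* Transceiver side of claim 7: when the sending queue is congested (the test
 * itself is sketched under claim 8), spill the headers of the later-ranked
 * occupied descriptors -- the "second target packet sending descriptors" --
 * from the first memory into the second memory and rewrite their first
 * address information, freeing the first memory for newly received packets. */
void tx_spill_trailing_headers(tx_queue_t *q, int queue_is_congested, uint32_t spill_count)
{
    if (!queue_is_congested)
        return;

    uint32_t occupied = q->tail - q->head;
    if (spill_count > occupied)
        spill_count = occupied;

    for (uint32_t i = 0; i < spill_count; i++) {
        tx_desc_t *d = &q->desc[(q->tail - 1 - i) % TX_RING_SIZE]; /* newest first */
        void *dst = alloc_second_mem(d->hdr_len);
        memcpy(dst, d->hdr_addr, d->hdr_len);
        free_first_mem(d->hdr_addr, d->hdr_len);
        d->hdr_addr = dst;         /* now belongs to the second memory */
    }
}
```

Spilling only the later-ranked descriptors means the headers that will be transmitted soonest stay in the low-latency memory, while the extra copy falls on packets that have to wait anyway.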
8. The message processing method of claim 7, wherein before moving the message headers corresponding to the difference number of second target packet sending descriptors to the second memory, the method further comprises:
when the number of third target packet sending descriptors in the packet sending queue is larger than a preset second threshold value, the information receiving and transmitting module determines that the packet sending queue is in a congested state;
wherein a third target packet sending descriptor is a packet sending descriptor which is in the occupied state and whose corresponding message header is stored in the first memory.
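The congestion test of claim 8 only needs a count of the occupied sending descriptors whose header still resides in the first memory (the "third target packet sending descriptors"). A small sketch follows; the counter name and the threshold value are assumptions.

```c
#include <stdint.h>

#define TX_CONGESTION_THRESHOLD 128u   /* "preset second threshold" (assumed value) */

/* Count of occupied packet sending descriptors whose message header is still
 * stored in the first memory; the transceiver module would keep this counter
 * up to date as descriptors are filled, spilled, and released. */
typedef struct {
    uint32_t third_target_desc_count;
} tx_queue_stats_t;

int tx_sending_queue_congested(const tx_queue_stats_t *s)
{
    return s->third_target_desc_count > TX_CONGESTION_THRESHOLD;
}
```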
9. The message processing method of claim 8, wherein the method further comprises:
when the packet sending queue is in an uncongested state, the information receiving and transmitting module directly forwards, based on a preset sequence, the messages corresponding to the packet sending descriptors in the occupied state in the packet sending queue.
10. A message processing apparatus, characterized in that the message processing apparatus is configured to perform the message processing method according to any one of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310028982.4A CN116032861A (en) | 2023-01-09 | 2023-01-09 | Message processing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116032861A true CN116032861A (en) | 2023-04-28 |
Family
ID=86070984
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310028982.4A Pending CN116032861A (en) | 2023-01-09 | 2023-01-09 | Message processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116032861A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4755986A (en) * | 1985-09-13 | 1988-07-05 | Nec Corporation | Packet switching system |
CN107547417A (en) * | 2016-06-29 | 2018-01-05 | 中兴通讯股份有限公司 | A kind of message processing method, device and base station |
US20200259766A1 (en) * | 2017-07-31 | 2020-08-13 | New H3C Technologies Co., Ltd. | Packet processing |
CN108055202A (en) * | 2017-12-07 | 2018-05-18 | 锐捷网络股份有限公司 | A kind of message processor and method |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117076353A (en) * | 2023-10-16 | 2023-11-17 | 苏州元脑智能科技有限公司 | Descriptor configuration method and descriptor configuration device |
CN117076353B (en) * | 2023-10-16 | 2024-02-02 | 苏州元脑智能科技有限公司 | Descriptor configuration method and descriptor configuration device |
CN117193669A (en) * | 2023-11-06 | 2023-12-08 | 格创通信(浙江)有限公司 | Discrete storage method, device and equipment for message descriptors and storage medium |
CN117193669B (en) * | 2023-11-06 | 2024-02-06 | 格创通信(浙江)有限公司 | Discrete storage method, device and equipment for message descriptors and storage medium |
CN118138547A (en) * | 2024-05-07 | 2024-06-04 | 珠海星云智联科技有限公司 | Method, computer device and medium for packet descriptor acquisition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |