CN114124854B - Message processing method and device, electronic equipment and readable storage medium - Google Patents


Info

Publication number
CN114124854B
CN114124854B (application CN202111436516.7A)
Authority
CN
China
Prior art keywords
message
data processing
network interface
information
messages
Prior art date
Legal status
Active
Application number
CN202111436516.7A
Other languages
Chinese (zh)
Other versions
CN114124854A (en)
Inventor
陈许蒙
Current Assignee
Tianrongxin Xiongan Network Security Technology Co ltd
Beijing Topsec Technology Co Ltd
Beijing Topsec Network Security Technology Co Ltd
Beijing Topsec Software Co Ltd
Original Assignee
Tianrongxin Xiongan Network Security Technology Co ltd
Beijing Topsec Technology Co Ltd
Beijing Topsec Network Security Technology Co Ltd
Beijing Topsec Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Tianrongxin Xiongan Network Security Technology Co ltd, Beijing Topsec Technology Co Ltd, Beijing Topsec Network Security Technology Co Ltd, Beijing Topsec Software Co Ltd filed Critical Tianrongxin Xiongan Network Security Technology Co ltd
Priority to CN202111436516.7A
Publication of CN114124854A
Application granted
Publication of CN114124854B
Legal status: Active


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/30 Peripheral units, e.g. input or output ports
    • H04L 49/3027 Output queuing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/90 Buffering arrangements
    • H04L 49/9057 Arrangements for supporting packet reassembly or resequencing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The application provides a message processing method and device, an electronic device, and a readable storage medium, relating to the field of communication technologies. The method processes messages split off from a network interface through at least two data processing channels, so that the traffic is shared across the channels; no single channel needs to meet a high processing-performance requirement, which keeps hardware cost low. After splitting, splitting information to be processed is acquired from a splitting-information queue, whose entries are stored in the order the messages were received. When the messages produced by the at least two data processing channels are output, the splitting information to be processed is matched against them and the matching target message is output, so that the messages leave in the order they were received. The scheme therefore combines low hardware cost with ordered message output after splitting.

Description

Message processing method and device, electronic equipment and readable storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a method and apparatus for processing a message, an electronic device, and a readable storage medium.
Background
As network communication bandwidth demands increase, ever greater pressure is placed on message processing speed; to meet high-bandwidth, high-performance processing requirements, multi-channel parallel acceleration structures have gradually been developed. In such a structure the allocation of hardware processing channels is usually fixed. For example, a network device may include 8 panel interfaces (such as 2 40G interfaces and interfaces of 10G, 2G and 1G), the main panel and the standby panel each carrying interfaces such as 40G, 10G and 1G; the messages of all main-panel interfaces share one processing channel and the messages of all standby-panel interfaces share another.
To improve system performance, messages can be split across different processing channels, so that each channel meets the system's bandwidth requirement without supporting very high processing performance on its own. However, the channels in this mode are mutually independent and the order in which each channel outputs messages is uncorrelated, so the output message order becomes disordered, which seriously affects subsequent message processing.
Disclosure of Invention
An object of the embodiments of the present application is to provide a message processing method and device, an electronic device, and a readable storage medium, so as to solve the problem in prior-art processing that the output message order becomes disordered, seriously affecting subsequent message processing.
In a first aspect, an embodiment of the present application provides a method for processing a message, where the method includes:
acquiring splitting information to be processed from a splitting-information queue, where the splitting-information queue stores the splitting information of each message in the order the messages were received from a network interface;
matching the splitting information to be processed against each cached message output by at least two data processing channels, and determining a target message that matches the splitting information to be processed, where the at least two data processing channels are used to process the messages split off from the network interface; and
outputting the target message.
In this implementation, the messages split off from the network interface are processed by at least two data processing channels, so the messages of each channel are not tied to fixed interfaces and the traffic is distributed across the channels; no single channel needs to meet a high processing-performance requirement or use extra logic, computation and storage resources, keeping hardware cost low. After splitting, splitting information to be processed is acquired from the splitting-information queue, whose entries are stored in the order the messages were received; when the messages output by the at least two channels are emitted, the splitting information is matched against them and the matching target message is output, so the messages leave in the order they were received. The scheme thus combines low hardware cost with ordered message output after splitting.
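The acquire-match-output flow above can be sketched in a few lines of Python. This is only an illustrative model, not the patented hardware implementation; the tuple form of the splitting information and all names are assumptions:

```python
from collections import deque

class ReorderStage:
    """Restores arrival order for messages emitted by parallel channels.

    Splitting information is enqueued in arrival order; a cached channel
    output is released only when it matches the head of the queue.
    """

    def __init__(self):
        self.split_queue = deque()  # splitting information, arrival order
        self.cache = []             # (split_info, message) from the channels
        self.output = []            # messages delivered to the next module

    def on_receive(self, split_info):
        # Called when a message arrives at the network interface.
        self.split_queue.append(split_info)

    def on_channel_output(self, split_info, message):
        # Called when any data processing channel finishes a message.
        self.cache.append((split_info, message))
        self._drain()

    def _drain(self):
        # Emit cached messages strictly in arrival order: only the message
        # matching the head entry of the FIFO queue may leave.
        progress = True
        while progress and self.split_queue:
            head = self.split_queue[0]
            progress = False
            for i, (info, msg) in enumerate(self.cache):
                if info == head:
                    self.output.append(msg)
                    self.split_queue.popleft()
                    del self.cache[i]
                    progress = True
                    break
```

Even when channel outputs arrive out of order, `output` preserves the order of the `on_receive` calls.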
Optionally, the splitting information of a message includes an identifier of the data processing channel corresponding to the message, an identifier of the message, and an identifier of the ingress network interface;
and the step of matching the splitting information to be processed against each cached message output by the at least two data processing channels and determining the matching target message includes:
matching the channel identifier, the message identifier and the ingress-interface identifier contained in the splitting information to be processed against the corresponding identifiers in the splitting information of each cached message output by the at least two data processing channels, and determining the matching target message.
In this implementation, matching each identifier in the splitting information to be processed against the corresponding identifiers of the cached messages ensures that the messages are output in the order they entered, i.e. the output remains ordered.
Optionally, before the splitting information to be processed is acquired from the splitting-information queue, the method further includes:
receiving a current message from the network interface;
judging whether messages of the network interface need to be split;
if so, acquiring a traffic statistic of each of the at least two data processing channels, generating splitting information of the current message, and storing the splitting information into the splitting-information queue; and
determining a target data processing channel corresponding to the current message according to the traffic statistics of the data processing channels, and inputting the current message into the target data processing channel.
In this implementation, the traffic statistic of each data processing channel is used to determine the destination of the current message, which ensures load balance among the channels.
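A minimal sketch of this ingress path, assuming the processed byte count is used as the traffic statistic (all function and field names are illustrative, not from the patent):

```python
def select_channel(stats):
    # Pick the data processing channel with the smallest traffic statistic.
    return min(stats, key=stats.get)

def dispatch(message, split_on, stats, split_queue, default_channel, seq):
    """Route one incoming message and record its splitting information.

    `stats` maps channel id -> processed byte count; `seq` is the message's
    arrival sequence number, used here as its identifier.
    """
    channel = select_channel(stats) if split_on else default_channel
    # Splitting information (channel id, message id, ingress interface) is
    # appended in arrival order so the egress side can restore ordering.
    split_queue.append((channel, seq, message["iface"]))
    stats[channel] = stats.get(channel, 0) + len(message["payload"])
    return channel
```

Picking the least-loaded channel at dispatch time is what keeps the per-channel load balanced.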
Optionally, the traffic statistic includes the number of processed messages or the total number of processed bytes, and determining the target data processing channel for the current message according to the traffic statistics includes:
comparing the number of processed messages, or the total number of processed bytes, across the data processing channels, and taking the channel with the smallest value as the target data processing channel for the current message.
In this implementation, the number of processed messages and the total byte count accurately reflect the processing load of each data processing channel.
Optionally, judging whether messages of the network interface need to be split includes:
detecting whether a splitting switch is turned on;
if it is on, determining that messages of the network interface need to be split; and
if it is not, determining that messages of the network interface do not need to be split.
In this implementation, checking whether the splitting switch is on gives a fast decision on whether to split the current message, improving the message processing rate.
Optionally, whether to turn on the splitting switch is determined as follows:
the splitting switch is turned on when a high-rate network interface and at least one low-rate network interface in the main panel are enabled and no high-rate network interface in the standby panel is enabled, or when a high-rate network interface and at least one low-rate network interface in the standby panel are enabled and no high-rate network interface in the main panel is enabled; and
the splitting switch is not turned on when only high-rate network interfaces in the main panel are enabled, or only high-rate network interfaces in the standby panel are enabled.
In this implementation, deciding the splitting switch from the working state of the network interfaces lets the traffic handling mode adapt flexibly to traffic demand, improving system performance.
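The enable rule above can be written down directly. In this sketch each panel maps interface names to a (rate class, enabled) pair, a representation assumed purely for illustration:

```python
def splitting_switch_on(main, backup):
    """Decide the splitting switch from the panels' interface states.

    `main` and `backup` map interface names to (rate_class, enabled),
    where rate_class is "high" or "low".
    """
    def high_enabled(panel):
        return any(en for rate, en in panel.values() if rate == "high")

    def low_enabled(panel):
        return any(en for rate, en in panel.values() if rate == "low")

    # Split when one panel runs a high-rate interface plus at least one
    # low-rate interface while the other panel's high-rate interface idles.
    main_case = high_enabled(main) and low_enabled(main) and not high_enabled(backup)
    backup_case = high_enabled(backup) and low_enabled(backup) and not high_enabled(main)
    return main_case or backup_case
```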
Optionally, after judging whether messages of the network interface need to be split, the method further includes:
if it is judged that the messages do not need to be split, inputting the current message into the default data processing channel corresponding to the network interface among the at least two channels, generating splitting information of the current message, and storing it into the splitting-information queue. Messages that are not split also have splitting information stored in the queue, which guarantees ordered output at emission time.
In a second aspect, an embodiment of the present application provides a message processing device, where the device includes:
a splitting-information acquisition module, configured to acquire splitting information to be processed from a splitting-information queue, where the queue stores the splitting information of each message in the order the messages were received from a network interface;
a splitting-information matching module, configured to match the splitting information to be processed against each cached message output by at least two data processing channels and determine a target message that matches it, where the at least two data processing channels are used to process the messages split off from the network interface; and
a message output module, configured to output the target message.
Optionally, the splitting information of a message includes an identifier of the data processing channel corresponding to the message, an identifier of the message, and an identifier of the ingress network interface; and the splitting-information matching module is configured to match these three identifiers in the splitting information to be processed against the corresponding identifiers in the splitting information of each cached message output by the at least two data processing channels, and to determine the matching target message.
Optionally, the apparatus further comprises:
a splitting processing module, configured to receive a current message from the network interface; judge whether messages of the network interface need to be split; if so, acquire a traffic statistic of each of the at least two data processing channels, generate splitting information of the current message and store it into the splitting-information queue; and determine a target data processing channel for the current message according to the traffic statistics and input the current message into it.
Optionally, the traffic statistic includes the number of processed messages or the total number of processed bytes, and the splitting processing module is configured to compare these values across the data processing channels and take the channel with the smallest value as the target data processing channel for the current message.
Optionally, the splitting processing module is configured to detect whether a splitting switch is turned on; if it is on, determine that messages of the network interface need to be split; and if it is not, determine that they do not.
Optionally, whether to turn on the splitting switch is determined as follows:
the splitting switch is turned on when a high-rate network interface and at least one low-rate network interface in the main panel are enabled and no high-rate network interface in the standby panel is enabled, or when a high-rate network interface and at least one low-rate network interface in the standby panel are enabled and no high-rate network interface in the main panel is enabled; and
the splitting switch is not turned on when only high-rate network interfaces in the main panel are enabled, or only high-rate network interfaces in the standby panel are enabled.
Optionally, the splitting processing module is further configured to, if it is judged that the messages of the network interface do not need to be split, input the current message into the default data processing channel corresponding to the network interface among the at least two channels, generate splitting information of the current message and store it into the splitting-information queue.
In a third aspect, embodiments of the present application provide an electronic device comprising a processor and a memory storing computer readable instructions that, when executed by the processor, perform the steps of the method as provided in the first aspect above.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method as provided in the first aspect above.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the embodiments of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments are briefly described below. The following drawings show only some embodiments of the application and should not be regarded as limiting its scope; a person skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a flowchart of a message processing method provided in an embodiment of the present application;
fig. 2 is a schematic diagram of interfaces on a main panel and a standby panel according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating parallel processing of multiple data processing channels according to an embodiment of the present disclosure;
FIG. 4 is a detailed flowchart of a message processing method according to an embodiment of the present application;
fig. 5 is a block diagram of a message processing apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device for executing a message processing method according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
It should be noted that the terms "system" and "network" in the embodiments of the present invention may be used interchangeably. "Plurality" means two or more, and may also be understood as "at least two" in these embodiments. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist together, or B exists alone. Unless otherwise specified, the character "/" generally indicates an "or" relationship between the associated objects.
The embodiment of the application provides a message processing method that processes messages split off from a network interface through at least two data processing channels, so that the messages of each channel are not tied to fixed interfaces and the traffic is distributed across the channels; no single channel needs to meet a high processing-performance requirement or use extra logic, computation and storage resources. After splitting, splitting information to be processed is acquired from a splitting-information queue whose entries are stored in the order the messages were received; when the messages output by the at least two channels are emitted, the splitting information is matched against them and the matching target message is output, so the messages leave in the order they were received. The scheme thus combines low hardware cost with ordered message output after splitting.
Referring to fig. 1, fig. 1 is a flowchart of a message processing method according to an embodiment of the present application, where the method includes the following steps:
step S110: and obtaining the to-be-processed shunting information from the shunting information queue.
It is understood that the message processing method of the present application may be applied to a processing chip in a network device, such as a CPU, a graphics processing unit (Graphics Processing Unit, GPU), a field-programmable gate array (Field Programmable Gate Array, FPGA) or a complex programmable logic device (Complex Programmable Logic Device, CPLD).
A network device generally includes a main panel and a standby panel, each containing a plurality of network interfaces (for example, one 40G high-rate network interface and two low-rate network interfaces of 10G and 1G); that is, the main and standby panels may carry the same types and numbers of network interfaces, as shown in Fig. 2. In practice the network interfaces of the main panel correspond to one data processing channel and those of the standby panel to another. By default, a message received by a main-panel interface is processed by the main panel's channel and a message received by a standby-panel interface by the standby panel's channel, the two channels processing received messages in parallel as shown in Fig. 3.
However, when the network interfaces of the standby panel are not enabled, the corresponding data processing channel sits idle while the channel of the main panel may be overloaded. To save resources and avoid waste, the messages received by the main panel's interfaces can therefore be split, the split messages entering both data processing channels for processing.
Because message lengths differ and the data processing channels process messages at different speeds, the message order after splitting becomes disordered, which seriously affects protocols such as the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). Messages of each interface could instead be kept on their own parallel channel to avoid disorder, but that raises the processing-performance requirement of each channel: if the system bandwidth requirement reaches, say, 61G, every channel must then sustain 61G, and meeting such a requirement inevitably increases hardware resources and cost.
Therefore, to keep hardware cost low while ensuring messages are not output out of order, the method splits the received messages and then reorders them before they are output to the next-stage processing module. To this end, the embodiment of the application provides a splitting-information queue that stores the splitting information of each message in the order the messages were received from the network interface.
The network interface here may be an interface on the main panel or on the standby panel. For example, if the 40G and 10G interfaces on the main panel are currently enabled, "the network interface" refers to those two interfaces, and splitting information is recorded in the queue for the messages received from both.
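The default interface-to-channel association described above amounts to a per-panel lookup. A toy sketch (the channel ids and the tuple representation of an interface are assumptions for illustration):

```python
# Every main-panel interface shares one default channel; every
# standby-panel interface shares the other.
PANEL_CHANNEL = {"main": 0, "standby": 1}

def default_channel(interface):
    """interface is e.g. ("main", "40G"); the rate does not affect the default."""
    panel, _rate = interface
    return PANEL_CHANNEL[panel]
```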
Step S120: matching the splitting information to be processed against each message output by the at least two data processing channels, and determining the target message that matches it.
There may be more than two data processing channels in the processing chip. In practice, if there are more network interfaces, more channels can be provided; by default, each channel processes the messages received by at least one corresponding interface. When splitting is required, at least two channels process the messages split off from a network interface; that is, the messages received from that interface are distributed across at least two channels for processing.
After splitting, the channels process messages at different speeds, so they do not emit messages in input order. To restore input order, the messages output by the channels are stored in a cache, and the splitting information to be processed, which is the entry at the head of the splitting-information queue, is fetched and matched against the cached messages; the matching message is the target message.
The splitting-information queue is first-in first-out, so its entries follow the order in which the messages were received. Fetching the head entry for each match means the matched target message is exactly the message that should be output next, which guarantees that messages are output in their input order.
Step S130: and outputting the target message.
The target message is the message that was received from the network interface earliest among those pending, and should therefore be delivered first; it can be output directly to the next processing module.
After the target message is output, the next splitting information to be processed is acquired from the queue and matching continues to determine the next target message, so the messages reach the next processing module in their input order and disorder is avoided.
In this implementation, splitting the messages avoids traffic congestion and shares the data processing load across multiple channels, so no single channel needs high processing performance or extra logic, computation and storage resources, while the overall data processing bandwidth of the system increases and hardware cost falls. After splitting, the splitting information to be processed is acquired from the splitting-information queue, whose entries follow the arrival order of the messages; matching it against the channel outputs and emitting the matched target messages keeps the output in arrival order and free of disorder. The scheme therefore meets a large system bandwidth requirement at low hardware cost while keeping the messages ordered after splitting.
Building on the above embodiment, the splitting information of a message may include the identifier of the data processing channel corresponding to the message, the identifier of the message, and the identifier of the ingress network interface. During matching, these three identifiers in the splitting information to be processed are matched against the corresponding identifiers in the splitting information of each cached message output by the at least two channels, which determines the matching target message.
For example, suppose the splitting information to be processed carries channel identifier A, message identifier 20 and network-interface identifier a1. Channel identifier A is matched against the channel identifier of each cached message, message identifier 20 against each cached message identifier, and interface identifier a1 against each cached interface identifier. If some cached message also has channel identifier A, message identifier 20 and interface identifier a1, the splitting information to be processed matches that message; it is taken as the matched target message and can then be output to the next processing module.
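The three-way comparison in this example reduces to an equality test over the identifier triple; a sketch with illustrative field names:

```python
def is_target(pending, cached):
    """True when a cached message's splitting information matches the
    splitting information to be processed on all three identifiers."""
    return (pending["channel"] == cached["channel"]
            and pending["msg_id"] == cached["msg_id"]
            and pending["iface"] == cached["iface"])
```

With the example values above, `is_target({"channel": "A", "msg_id": 20, "iface": "a1"}, cached)` is true only for the one cached message carrying the same triple.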
It will be appreciated that the identifier of a message may be unique: upon receiving a message from the network interface, the splitting information of the message is generated and stored in the splitting information queue. The identifier of the message may refer to its number; e.g. the messages are numbered in the order in which they are received, and each number is unique. Therefore, during matching, the message identifier in the splitting information to be processed can be matched with the identifier of each cached message, so that the target message uniquely matching the splitting information to be processed can be found.
In the implementation process, matching each identifier in the splitting information to be processed with the corresponding identifiers of the cached messages ensures that the messages are output according to the order in which they entered at the ingress, i.e. the messages are output in order.
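The three-identifier matching described above can be sketched in Python; the names `SplitInfo` and `find_target_message`, and the example identifiers, are illustrative only and not part of the patent:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SplitInfo:
    channel_id: str   # identifier of the data processing channel
    message_id: int   # unique message number, assigned in arrival order
    port_id: str      # identifier of the input network interface

def find_target_message(pending: SplitInfo, cached_messages: list):
    """Return the cached message whose split info matches all three identifiers."""
    for msg in cached_messages:
        info = msg["split_info"]
        if (info.channel_id == pending.channel_id
                and info.message_id == pending.message_id
                and info.port_id == pending.port_id):
            return msg
    return None  # not yet output by its data processing channel; retry later

# Cached messages already output by the channels, possibly out of order.
cache = [
    {"split_info": SplitInfo("B", 19, "a1"), "payload": b"..."},
    {"split_info": SplitInfo("A", 20, "a1"), "payload": b"..."},
]
target = find_target_message(SplitInfo("A", 20, "a1"), cache)
```

Returning `None` models the case where the head-of-queue message has not yet been produced by its channel, so the arbitration logic simply retries later.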
On the basis of the above embodiment, when the traffic of the network interface is small, the current message does not need to be split; splitting is only required when the traffic is large. Therefore, when the current message is received from the network interface, it is first judged whether the messages of this network interface need to be split. If so, the traffic statistic of each of the at least two data processing channels is acquired, splitting information of the current message is generated and stored in the splitting information queue, the target data processing channel corresponding to the current message is determined according to the traffic statistic of each data processing channel, and the current message is input into the target data processing channel.
For example, after a message is received from a network interface, it is taken as the current message. If it is determined that the traffic of this network interface needs to be split, the traffic statistic of each data processing channel is acquired, e.g. by counting the traffic each data processing channel has processed. To evaluate the processing load of each data processing channel more objectively, the traffic statistics may be counted only within the splitting period, and statistics from outside the splitting period are not used as a reference.
A data processing channel may process not only the messages of its own corresponding network interface but also messages split to it from other interfaces; its traffic is counted to obtain the traffic statistic. The traffic statistic is acquired periodically within the splitting period and the stored value is updated accordingly, so that when the current message is split, the up-to-date traffic statistic can be obtained directly.
When splitting the current message, the traffic statistics of all data processing channels are compared, and the data processing channel with the smallest traffic statistic, i.e. the channel considered to have the lightest processing load, is selected as the target data processing channel. During splitting, the load of the data processing channels should be kept as balanced as possible, so as to avoid overloading a channel by splitting all messages to the same one.
In the implementation process, the traffic statistic of each data processing channel is acquired to determine where the current message goes, so that load balance among the data processing channels can be ensured.
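A minimal ingress sketch of this decision, assuming a per-interface default channel table and a FIFO for split info; `on_message`, `DEFAULT_CHANNEL` and the port names are illustrative, not the patent's actual interfaces:

```python
from collections import deque

DEFAULT_CHANNEL = {"p_main": "A", "p_standby": "B"}   # preconfigured defaults (illustrative)
split_info_queue = deque()                            # FIFO of split info, in receive order
next_message_id = 0

def on_message(port_id, payload, split_enabled, traffic_stats, channels):
    """Ingress sketch: record split info, then dispatch the message either to the
    least-loaded channel (splitting on) or to the port's default channel."""
    global next_message_id
    if split_enabled:
        channel_id = min(traffic_stats, key=traffic_stats.get)  # smallest statistic = lightest load
    else:
        channel_id = DEFAULT_CHANNEL[port_id]
    info = (channel_id, next_message_id, port_id)     # (channel id, tag, port id)
    next_message_id += 1
    split_info_queue.append(info)                     # kept in receive order for reordering
    channels[channel_id].append((info, payload))
    traffic_stats[channel_id] += len(payload)         # update the channel's statistic
```

Note that split info is generated and enqueued on both paths, matching the embodiment in which even non-split messages contribute to the ordered output queue.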
Based on the above embodiments, the traffic statistic may include the number of processed messages or the total number of bytes of the processed messages. The data processing channel with the smallest number of processed messages, or the smallest total number of bytes of processed messages, is determined accordingly and taken as the target data processing channel corresponding to the current message.
The following description is divided into three cases.
1. The traffic statistic includes the number of messages processed.
In this case, the number of messages processed by each data processing channel is counted. When splitting the current message, the data processing channel with the smallest number of processed messages is selected as the channel with the lightest load and taken as the target data processing channel, and the current message is sent to it for processing. After the target data processing channel processes the current message, its count of processed messages is increased by 1.
2. The traffic statistic includes the total number of bytes of the processed message.
In this case, the total number of bytes of the messages processed by each data processing channel is counted. When splitting the current message, the data processing channel with the smallest total number of processed bytes is selected as the channel with the lightest load and taken as the target data processing channel, and the current message is sent to it for processing. After the target data processing channel processes the current message, its total byte count is increased accordingly.
3. The traffic statistics include the number of processed messages and the total number of bytes of the processed messages.
In this case, the number of processed messages and the total number of bytes of each data processing channel may be combined into an average statistic, e.g. the total number of bytes is divided by the number of messages to obtain an average byte count. When splitting the current message, the data processing channel with the smallest average byte count (the smallest average being regarded as representing the smallest number of processed messages and the smallest total bytes) is selected as the channel with the lightest load and taken as the target data processing channel, and the current message is sent to it for processing. After the target data processing channel processes the current message, its total byte count is increased by the byte count of the current message and its message count is increased by 1.
In the implementation process, the number of the processed messages and the total byte number can more accurately reflect the processing load of each data processing channel.
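The three statistic strategies above can be sketched as follows; the function names, the `"count"`/`"bytes"`/`"average"` mode labels and the statistics layout are illustrative assumptions:

```python
def least_loaded(stats: dict, mode: str) -> str:
    """Select the channel with the smallest statistic under one of the
    three strategies: message count, total bytes, or average bytes."""
    if mode == "count":
        key = lambda ch: stats[ch]["count"]
    elif mode == "bytes":
        key = lambda ch: stats[ch]["bytes"]
    else:  # "average": total bytes divided by message count
        key = lambda ch: stats[ch]["bytes"] / max(stats[ch]["count"], 1)
    return min(stats, key=key)

def account(stats: dict, channel: str, message_len: int) -> None:
    """After a channel processes a message, bump its counters."""
    stats[channel]["count"] += 1
    stats[channel]["bytes"] += message_len
```

With `stats = {"A": {"count": 3, "bytes": 4500}, "B": {"count": 5, "bytes": 2000}}`, channel A wins under `"count"` while channel B wins under `"bytes"` and `"average"`, illustrating that the three strategies can disagree.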
On the basis of the above embodiment, whether the messages of the network interface need to be split may also be judged by detecting whether the split switch is turned on. If the split switch is turned on, it is determined that the messages of the network interface need to be split; if not, it is determined that they do not need to be split.
The split switch can be understood as a flag in the system marking whether splitting is required. After the split switch is turned on, subsequently received messages are automatically processed according to the splitting flow, i.e. the data processing channel to which a message is sent is selected according to the traffic statistic of each data processing channel. When the split switch is not turned on, messages are not split and are sent to a preset default data processing channel.
Therefore, after the current message is received from the network interface, whether the split switch is turned on can be detected. If it is, splitting is performed directly according to the splitting flow. If not, it is judged that the messages of the network interface do not need to be split; in that case, the current message is input into the default data processing channel corresponding to the network interface among the at least two data processing channels, and the splitting information of the current message is generated and stored in the splitting information queue.
For example, the default data processing channel preconfigured for the network interfaces on the main panel is channel A, and the default data processing channel preconfigured for the network interfaces on the standby panel is channel B. If the current message is received from a network interface on the main panel and the split switch is not turned on, the current message is sent directly to channel A for processing; if it is received from a network interface on the standby panel and the split switch is not turned on, it is sent to channel B for processing.
If, however, the split switch is turned on, the current message is received from a network interface on the main panel, and channel B is determined to be the channel with the smallest traffic statistic according to the traffic statistics of channels A and B, then channel B is determined as the target data processing channel corresponding to the current message and the current message is sent to channel B for processing. That is, after the split switch is turned on, the current message is no longer sent according to the default data processing channel.
It should be noted that the traffic statistics of the data processing channels may also be counted with reference to the time when the split switch is turned on. For example, during the period when the split switch is off, the traffic statistics of the data processing channels may still be counted, but the values obtained in that period are not used as a splitting reference. Therefore, after the split switch is turned on, the traffic statistic of each data processing channel may be cleared and then counted anew.
For example, after the system starts running, the traffic is small and the split switch is not turned on, but the traffic statistic of each data processing channel is still counted as the system runs. When it is detected at some point that the split switch has been turned on, the previously counted traffic statistics are cleared and acquisition of the traffic statistics starts again; during the stage when the split switch is on, messages are split according to the newly acquired traffic statistics. If the traffic becomes smaller after a period of time, the split switch is turned off; the traffic statistics continue to be acquired and are cleared again the next time the switch is turned on.
In the implementation process, whether the current message should be split can be judged quickly by checking whether the split switch is turned on, which improves the message processing rate.
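The switch-plus-statistics-reset behavior can be sketched as below; `SplitSwitch` and `needs_splitting` are illustrative names, and the statistics are modeled as a simple per-channel dict:

```python
class SplitSwitch:
    """Illustrative split-switch flag. Turning it on clears the per-channel
    traffic statistics, so only split-period traffic is used as a reference."""
    def __init__(self, traffic_stats: dict):
        self.on = False
        self.traffic_stats = traffic_stats

    def turn_on(self) -> None:
        self.on = True
        for ch in self.traffic_stats:
            self.traffic_stats[ch] = 0   # re-count from the moment splitting starts

    def turn_off(self) -> None:
        self.on = False                  # stats keep accumulating but are not a reference

def needs_splitting(switch: SplitSwitch) -> bool:
    """A received message needs the splitting flow exactly when the switch is on."""
    return switch.on
```

Clearing the counters on `turn_on` is the key design choice: statistics accumulated while the switch was off would otherwise bias the load comparison.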
On the basis of the above embodiment, whether to turn on the split switch may be determined as follows: the split switch is turned on when it is detected that the high-rate network interface and at least one low-rate network interface in the main panel are enabled and the high-rate network interface in the standby panel is not enabled, or when it is detected that the high-rate network interface and at least one low-rate network interface in the standby panel are enabled and the high-rate network interface in the main panel is not enabled; the split switch is not turned on when it is detected that only the high-rate network interface in the main panel is enabled, or only the high-rate network interface in the standby panel is enabled.
The network device preferentially enables the network interfaces on the main panel by default; the network interfaces on the standby panel are typically enabled only when those on the main panel are unavailable. For example, the main panel and the standby panel have the same number and types of network interfaces, and the network device may enable the 40G network interface on the standby panel when the 40G network interface on the main panel is unavailable.
The high-rate network interface refers to the network interface with the highest rate, and the low-rate network interfaces are the other network interfaces. For example, the main panel and the standby panel each include one 40G interface, two 10G interfaces and one 1G interface; the 40G interface is the high-rate network interface, and the remaining 10G and 1G interfaces are low-rate network interfaces.
The high-rate network interface can receive larger traffic, while the low-rate interfaces cannot. Therefore, if it is detected that the high-rate network interface and at least one low-rate network interface in the main panel are enabled and the high-rate network interface in the standby panel is not enabled, or that the high-rate network interface and at least one low-rate network interface in the standby panel are enabled and the high-rate network interface in the main panel is not enabled, the traffic of the high-rate network interface is excessive and the processing performance of the processing chip would degrade. The split switch can then be turned on to split the traffic, which saves hardware resources such as logic, operation and storage while effectively improving the processing performance of the whole system.
It should be noted that, when there are a plurality of low-rate network interfaces, detecting their enable status means that the split switch is turned on when at least one low-rate network interface is detected to be enabled and the high-rate network interface is also enabled.
When it is detected that only the high-rate network interface is enabled (e.g. only the high-rate network interface in the main panel, or only that in the standby panel), the current traffic is not large, splitting is unnecessary, and the split switch is not turned on. Of course, after the traffic grows, a low-rate network interface is enabled and the split switch is turned on; if it is subsequently detected that no low-rate network interface is enabled and only one high-rate network interface remains enabled, the split switch is turned off, and if a low-rate network interface is detected to be enabled again, the split switch is turned on again.
In addition, during splitting, the traffic of the high-rate network interface can be split. For example, when the traffic of the high-rate network interface of the main panel, which by default goes to channel A corresponding to the main-panel network interfaces, is split, messages received from that interface can be sent to channel B corresponding to the standby-panel network interfaces for processing. Similarly, when the traffic of the high-rate network interface of the standby panel is split, messages received from it can be sent to channel A for processing. After splitting, the processing performance required of each data processing channel is not high; for instance, each data processing channel no longer needs to meet a processing performance of 61G. This splitting manner reduces the processing bandwidth required of each parallel processing channel, so a higher system bandwidth requirement can be met with lower-rate processing chips or fewer logic, operation and storage resources.
In other embodiments, if the high-rate network interface in the main panel is enabled, the low-rate network interfaces in the main panel need to be enabled as the data traffic increases, and once a low-rate network interface is enabled, the splitting flow needs to be triggered to reduce the load on the processing chip's resources. Messages received from the high-rate network interface of the main panel can be input into channel A for processing, while messages acquired from the low-rate network interfaces can be input into channel B for processing; channel B thus shares the processing pressure of channel A, all traffic does not flow into channel A, and the processing pressure of channel A is not increased.
In the implementation process, whether the split switch is turned on is judged by detecting the working state of the network interfaces, so that the traffic processing manner can be adjusted flexibly according to the traffic demand, improving system performance.
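The interface-state rule above can be sketched as a pure decision function; the panel dicts, the `"high"` key and the interface names are illustrative assumptions, not the device's actual configuration model:

```python
def decide_split_switch(main: dict, standby: dict) -> bool:
    """Decide whether to open the split switch from interface enable states.
    Each dict maps interface name -> enabled flag; 'high' is the highest-rate
    interface on that panel, and all other entries are low-rate interfaces."""
    def low_enabled(panel: dict) -> bool:
        return any(on for name, on in panel.items() if name != "high")

    # High-rate plus at least one low-rate on one panel, peer high-rate off: split.
    if main["high"] and low_enabled(main) and not standby["high"]:
        return True
    if standby["high"] and low_enabled(standby) and not main["high"]:
        return True
    # Only a high-rate interface enabled (either panel): no splitting needed.
    return False
```

Polling this function from the interface monitoring mechanism and feeding the result into the split switch reproduces the on/off behavior described in the embodiment.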
The above-described process is described below with a specific example.
As shown in fig. 4, fig. 4 is a flowchart of a complete message processing method. The following description takes splitting traffic away from data processing channel A as an example; the splitting manner of each channel follows the same principle and is not repeated here.
(1) The enable status of the network interfaces of the main panel and the standby panel can be monitored in real time through an interface monitoring mechanism, to judge whether to turn on the split switch. If it is detected that the high-rate network interface and at least one low-rate network interface of the main panel are enabled but the high-rate network interface of the standby panel is not (note that, in general, network interfaces of the same type in the main panel and the standby panel cannot be enabled at the same time; if the network interfaces of the main panel are enabled, those of the standby panel are not), the split switch is turned on, allowing the traffic of the main panel's high-rate network interface to be split into data processing channel B, thereby relieving the processing pressure of data processing channel A and improving the processing bandwidth of the whole system. Otherwise, when no network interface of the main panel other than the high-rate network interface is monitored as enabled, the split switch is not turned on.
(2) The channel traffic monitoring mechanism is responsible for monitoring the real-time traffic of each data processing channel, i.e. acquiring the traffic statistic of each data processing channel and feeding it back to the splitting arbitration module in the splitting module, which refers to the fed-back statistics to determine which data processing channel the current message is distributed to. In addition, when the split switch is off, the real-time traffic statistics returned by the channel traffic monitoring mechanism are not used as a splitting reference, so after the split switch is turned on, the channel traffic monitoring mechanism can zero the traffic statistic of each data processing channel and then count it anew.
(3) Traffic splitting: this includes the splitting arbitration module and the splitting information generation module.
When the split switch is turned on, the splitting arbitration module determines where the current message goes according to the traffic statistic of each data processing channel, i.e. the current message is sent to the data processing channel with the smallest traffic statistic.
The splitting information generation module mainly generates the splitting information corresponding to each received message, which includes the identifier of the network interface (i.e. a port id), the identifier of the data processing channel (i.e. a channel id) and the identifier of the message (i.e. a tag value).
In order to match the messages subsequently, after generating the splitting information of a message, the splitting information generation module stores it into the splitting information caching module within the caching module.
If the current message needs to be split, e.g. to data processing channel B, it is input into data processing channel B for processing; if it does not need to be split, it is input into the default data processing channel (i.e. data processing channel A) for processing. The processing a data processing channel performs on a message may include parsing the message, extracting message features, table lookup, and the like.
(4) Traffic caching: this includes a traffic caching module and a splitting information caching module. The traffic caching module stores the messages processed by the data processing channels together with their corresponding splitting information, while the splitting information caching module stores the splitting information of the messages; the splitting information caching module is the splitting information queue and can be understood as a FIFO module.
(5) Traffic sequencing: this includes the traffic sending arbitration module. The traffic sending arbitration module obtains the splitting information at the head of the queue from the splitting information caching module, matches it with the splitting information of the messages in the traffic caching module, determines the target message that matches it, outputs the target message to the next-stage processing module, and then matches the next piece of splitting information.
Since the splitting information in the splitting information caching module is generated and stored in the order the messages arrived at the ingress, the order of the messages output by the traffic sending arbitration module is the same as the order in which they were received at the ingress.
Therefore, the method can dynamically adjust traffic splitting according to the working state of the network interfaces, improve the utilization rate of hardware resources, and ensure that the messages are not out of order after splitting. Moreover, the low-bandwidth parallel processing channel design can meet a higher total system bandwidth requirement.
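The sequencing step (5) can be sketched end to end: the head of the FIFO gates the output, so the egress order equals the ingress order regardless of which channel finished first. The function name and the tuple layout of the split info are illustrative:

```python
from collections import deque

def drain_in_order(split_queue: deque, traffic_cache: list) -> list:
    """Pop split info in arrival order and emit the matching cached message.
    Stops when the head-of-queue message has not been produced yet."""
    out = []
    while split_queue:
        pending = split_queue[0]
        match = next((m for m in traffic_cache if m["split_info"] == pending), None)
        if match is None:
            break                      # head-of-line message not finished; wait
        split_queue.popleft()
        traffic_cache.remove(match)
        out.append(match["payload"])   # forward to the next-stage processing module
    return out
```

Even though the cache below holds the second message's result first, the drained output follows the receive order.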
Referring to fig. 5, fig. 5 is a block diagram illustrating a message processing apparatus 200 according to an embodiment of the present application, where the apparatus 200 may be a module, a program segment, or a code on an electronic device. It should be understood that the apparatus 200 corresponds to the above embodiment of the method of fig. 1, and is capable of performing the steps involved in the embodiment of the method of fig. 1, and specific functions of the apparatus 200 may be referred to in the above description, and detailed descriptions thereof are omitted herein as appropriate to avoid redundancy.
Optionally, the apparatus 200 includes:
the splitting information obtaining module 210 is configured to obtain splitting information to be processed from a splitting information queue, where the splitting information queue stores the splitting information of each message according to the order in which the messages were received from the network interface;
the splitting information matching module 220 is configured to match the splitting information to be processed with each cached message output by the at least two data processing channels and determine the target message matching the splitting information to be processed, where the at least two data processing channels are used to process the messages split from the network interface;
the message output module 230 is configured to output the target message.
Optionally, the shunting information of the message includes an identifier of a data processing channel corresponding to the message, an identifier of the message, and an identifier of an input network interface; the splitting information matching module 220 is configured to correspondingly match the identifier of the data processing channel, the identifier of the message, and the identifier of the input network interface corresponding to the message included in the splitting information to be processed with the identifier of the data processing channel, the identifier of the message, and the identifier of the network interface in the splitting information corresponding to each message output by the at least two data processing channels, and determine a matched target message.
Optionally, the apparatus 200 further includes:
the shunting processing module is used for receiving the current message from the network interface; judging whether the message of the network interface needs to be subjected to shunting treatment or not; if yes, acquiring flow statistics values of all the data processing channels in the at least two data processing channels, generating shunting information of the current message, and storing the shunting information into the shunting information queue; and determining a target data processing channel corresponding to the current message according to the flow statistic value of each data processing channel, and inputting the current message into the target data processing channel.
Optionally, the flow statistics value includes the number of processed messages or the total number of bytes of the processed messages, and the split processing module is configured to compare the number of processed messages or the total number of bytes of the processed messages of each data processing channel, determine a data processing channel with the smallest number of processed messages or the smallest total number of bytes of the processed messages, and use the data processing channel as the target data processing channel corresponding to the current message.
Optionally, the splitting processing module is configured to detect whether the split switch is turned on; if it is turned on, determine that the messages of the network interface need to be split; if not, determine that the messages of the network interface do not need to be split.
Optionally, whether to turn on the shunt switch is determined by:
opening the shunt switch upon detecting that a high-rate network interface and at least one low-rate network interface in a main panel are enabled and a high-rate network interface in a standby panel is not enabled, or upon detecting that a high-rate network interface and at least one low-rate network interface in the standby panel are enabled and a high-rate network interface in the main panel is not enabled;
The shunt switch is not turned on when only high-rate network interfaces in the primary panel are detected to be enabled, or when only high-rate network interfaces in the backup panel are detected to be enabled.
Optionally, the splitting processing module is further configured to, if it is determined that the packet of the network interface does not need to be split, input the current packet into a default data processing channel corresponding to the network interface in the at least two data processing channels, and generate splitting information of the current packet and store the splitting information into the splitting information queue.
It should be noted that, for convenience and brevity of description, the specific working process of the apparatus described above may refer to the corresponding process in the foregoing method embodiment, as a person skilled in the art will clearly understand, and is not repeated here.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an electronic device for executing a message processing method according to an embodiment of the present application, where the electronic device may include: at least one processor 310, such as a CPU, at least one communication interface 320, at least one memory 330, and at least one communication bus 340. Wherein the communication bus 340 is used to enable direct connection communication of these components. The communication interface 320 of the device in the embodiment of the present application is used for performing signaling or data communication with other node devices. The memory 330 may be a high-speed RAM memory or a nonvolatile memory (non-volatile memory), such as at least one disk memory. Memory 330 may also optionally be at least one storage device located remotely from the aforementioned processor. The memory 330 has stored therein computer readable instructions which, when executed by the processor 310, perform the method process described above in fig. 1.
It will be appreciated that the configuration shown in fig. 6 is merely illustrative, and that the electronic device may also include more or fewer components than shown in fig. 6, or have a different configuration than shown in fig. 6. The components shown in fig. 6 may be implemented in hardware, software, or a combination thereof.
Embodiments of the present application provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs a method process performed by an electronic device in the method embodiment shown in fig. 1.
The present embodiment discloses a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, are capable of performing the methods provided by the above-described method embodiments, for example, comprising: acquiring to-be-processed shunting information from a shunting information queue, wherein the shunting information queue is shunting information of each message stored according to a message sequence received from a network interface; matching the to-be-processed shunt information with each message output by at least two data processing channels of the cache, and determining a target message matched with the to-be-processed shunt information; the at least two data processing channels are used for processing the messages shunted by the network interface; and outputting the target message.
In summary, the embodiments of the present application provide a message processing method, apparatus, electronic device and readable storage medium, in which messages split from a network interface are processed through at least two data processing channels, so that the messages of each data processing channel do not originate from a fixed interface and the traffic is shared among the data processing channels. Each data processing channel therefore does not need to meet a higher processing performance or use more logic, operation and storage resources, which reduces hardware cost. After the splitting, the splitting information to be processed is acquired from the splitting information queue, and since the splitting information in the queue is stored in the order in which the messages were received, when outputting the messages produced by the at least two data processing channels, the splitting information to be processed is matched against them and the matched target message is output, so that the messages are output in the order in which they were received. The scheme thus achieves both low hardware cost and ordered message output after splitting.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. The apparatus embodiments described above are merely illustrative: the division into units is merely a logical functional division, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some communication interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
Further, units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Furthermore, the functional modules in the various embodiments of the present application may be integrated to form an independent part, each module may exist alone, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions.
The foregoing describes merely exemplary embodiments of the present application and is not intended to limit its scope; various modifications and variations will occur to those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of the present application shall fall within the protection scope of the present application.

Claims (9)

1. A method for processing a message, the method comprising:
obtaining to-be-processed splitting information from a splitting information queue, wherein the splitting information queue stores the splitting information of each message in the order in which the messages were received from a network interface, the splitting information queue is a first-in first-out queue, and the to-be-processed splitting information is the first splitting information at the outlet of the splitting information queue;
matching the to-be-processed splitting information against the cached messages output by at least two data processing channels, and determining a target message that matches the to-be-processed splitting information, wherein the at least two data processing channels are used for processing the messages split from the network interface;
outputting the target message;
wherein the splitting information of a message comprises an identifier of the data processing channel corresponding to the message, an identifier of the message, and an identifier of the input network interface;
wherein matching the to-be-processed splitting information against the cached messages output by the at least two data processing channels and determining the target message comprises:
matching the identifier of the data processing channel, the identifier of the message, and the identifier of the input network interface contained in the to-be-processed splitting information against the corresponding identifiers in the splitting information of each message output by the at least two data processing channels, and determining the matched target message.
2. The method of claim 1, wherein before obtaining the to-be-processed splitting information from the splitting information queue, the method further comprises:
receiving a current message from the network interface;
judging whether the message of the network interface needs splitting processing;
if so, acquiring a traffic statistic of each of the at least two data processing channels, generating splitting information for the current message, and storing the splitting information into the splitting information queue; and
determining, according to the traffic statistic of each data processing channel, a target data processing channel corresponding to the current message, and inputting the current message into the target data processing channel.
3. The method of claim 2, wherein the traffic statistic comprises a number of processed messages or a total number of bytes of processed messages, and determining, according to the traffic statistic of each data processing channel, the target data processing channel corresponding to the current message comprises:
comparing the number of processed messages, or the total number of bytes of processed messages, of each data processing channel, and taking the data processing channel with the smallest value as the target data processing channel corresponding to the current message.
4. The method of claim 2, wherein judging whether the message of the network interface needs splitting processing comprises:
detecting whether a splitting switch is turned on;
if it is turned on, determining that the message of the network interface needs splitting processing; and
if it is not turned on, determining that the message of the network interface does not need splitting processing.
5. The method of claim 4, wherein whether to turn on the splitting switch is determined as follows:
turning on the splitting switch when it is detected that a high-rate network interface and at least one low-rate network interface in a primary panel are enabled while a high-rate network interface in a standby panel is not enabled, or when it is detected that a high-rate network interface and at least one low-rate network interface in the standby panel are enabled while a high-rate network interface in the primary panel is not enabled; and
not turning on the splitting switch when only a high-rate network interface in the primary panel is detected to be enabled, or when only a high-rate network interface in the standby panel is detected to be enabled.
6. The method of claim 2, wherein after judging whether the message of the network interface needs splitting processing, the method further comprises:
if it is judged that the message of the network interface does not need splitting processing, inputting the current message into a default data processing channel, corresponding to the network interface, among the at least two data processing channels, generating splitting information for the current message, and storing the splitting information into the splitting information queue.
7. A message processing apparatus, comprising:
a splitting information acquisition module, configured to obtain to-be-processed splitting information from a splitting information queue, wherein the splitting information queue stores the splitting information of each message in the order in which the messages were received from a network interface, the splitting information queue is a first-in first-out queue, and the to-be-processed splitting information is the first splitting information at the outlet of the queue;
a splitting information matching module, configured to match the to-be-processed splitting information against the cached messages output by at least two data processing channels and determine a target message that matches the to-be-processed splitting information, wherein the at least two data processing channels are used for processing the messages split from the network interface; and
a message output module, configured to output the target message;
wherein the splitting information of a message comprises an identifier of the data processing channel corresponding to the message, an identifier of the message, and an identifier of the input network interface; and
the splitting information matching module is specifically configured to match the identifier of the data processing channel, the identifier of the message, and the identifier of the input network interface contained in the to-be-processed splitting information against the corresponding identifiers in the splitting information of each cached message output by the at least two data processing channels, and to determine the matched target message.
8. An electronic device, comprising a processor and a memory storing computer-readable instructions which, when executed by the processor, perform the method of any one of claims 1-6.
9. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the method of any one of claims 1-6.
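As an illustration only (not part of the claims), the splitting-switch decision of claims 4 and 5 can be sketched as a pure function. The boolean interface-state arguments and the function name are our own hypothetical model of the panel states the claims describe:

```python
def splitting_switch_on(primary_high, primary_lows, standby_high, standby_lows):
    """Return True if the splitting switch should be turned on.

    primary_high / standby_high: whether the high-rate network interface
    on the primary / standby panel is enabled.
    primary_lows / standby_lows: one boolean per low-rate network
    interface on the corresponding panel.
    """
    # Turn on: one panel has its high-rate interface and at least one
    # low-rate interface enabled, while the other panel's high-rate
    # interface is not enabled (claim 5, first branch).
    if primary_high and any(primary_lows) and not standby_high:
        return True
    if standby_high and any(standby_lows) and not primary_high:
        return True
    # Otherwise (e.g. only one panel's high-rate interface enabled),
    # the splitting switch stays off and messages go to the default
    # data processing channel (claims 5-6).
    return False
```

Messages arriving while the switch is off would, per claim 6, still get splitting information recorded in the queue, so the in-order output path works identically in both modes.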
CN202111436516.7A 2021-11-29 2021-11-29 Message processing method and device, electronic equipment and readable storage medium Active CN114124854B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111436516.7A CN114124854B (en) 2021-11-29 2021-11-29 Message processing method and device, electronic equipment and readable storage medium


Publications (2)

Publication Number Publication Date
CN114124854A CN114124854A (en) 2022-03-01
CN114124854B true CN114124854B (en) 2024-02-09

Family

ID=80367739

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111436516.7A Active CN114124854B (en) 2021-11-29 2021-11-29 Message processing method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN114124854B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101072176A (en) * 2007-04-02 2007-11-14 华为技术有限公司 Report processing method and system
CN101291194A (en) * 2008-05-20 2008-10-22 华为技术有限公司 Method and system for keeping sequence of report
CN105656804A (en) * 2014-11-20 2016-06-08 中兴通讯股份有限公司 Message processing method and device
CN108737296A (en) * 2017-09-27 2018-11-02 新华三技术有限公司 A kind of data transmission method, device and the network equipment
CN111464456A (en) * 2020-03-31 2020-07-28 杭州迪普科技股份有限公司 Flow control method and device
WO2020252635A1 (en) * 2019-06-17 2020-12-24 西门子股份公司 Method and apparatus for constructing network behavior model, and computer readable medium
CN112753198A (en) * 2018-09-30 2021-05-04 华为技术有限公司 Load balancing and message reordering method and device in network
WO2021159478A1 (en) * 2020-02-14 2021-08-19 华为技术有限公司 Message order preservation method and apparatus


Also Published As

Publication number Publication date
CN114124854A (en) 2022-03-01

Similar Documents

Publication Publication Date Title
US7889659B2 (en) Controlling a transmission rate of packet traffic
EP3588881A1 (en) Technologies for reordering network packets on egress
US20190190852A1 (en) Data processing method and physical machine
US8885480B2 (en) Packet priority in a network processor
US20230283578A1 (en) Method for forwarding data packet, electronic device, and storage medium for the same
CN106603409B (en) Data processing system, method and equipment
EP3588879A1 (en) Technologies for buffering received network packet data
CN116233018A (en) Message processing method and device, electronic equipment and storage medium
CN114124854B (en) Message processing method and device, electronic equipment and readable storage medium
CN116723162A (en) Network first packet processing method, system, device, medium and heterogeneous equipment
US7990987B2 (en) Network processor having bypass capability
CN110347518B (en) Message processing method and device
CN110297785A (en) A kind of finance data flow control apparatus and flow control method based on FPGA
CN114301812B (en) Method, device, equipment and storage medium for monitoring message processing result
US20190044872A1 (en) Technologies for targeted flow control recovery
US7283562B2 (en) Method and apparatus for scaling input bandwidth for bandwidth allocation technology
CN113378194B (en) Encryption and decryption operation acceleration method, system and storage medium
CN110336759B (en) RDMA (remote direct memory Access) -based protocol message forwarding method and device
CN114363379A (en) Vehicle data transmission method and device, electronic equipment and medium
CN108616461B (en) Policy switching method and device
US10951526B2 (en) Technologies for efficiently determining a root of congestion with a multi-stage network switch
US20210406093A1 (en) Computing machine, method and non-transitory computer-readable medium
CN111106977A (en) Data stream detection method, device and storage medium
CN111817906B (en) Data processing method, device, network equipment and storage medium
US20230112747A1 (en) Method for allocating resource for storing visualization information, apparatus, and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240106

Address after: 071800 Conference Center 1-184, South Section of Baojin Expressway, Xiong'an Area, Xiong'an New District, Baoding City, Hebei Province

Applicant after: Tianrongxin Xiongan Network Security Technology Co.,Ltd.

Applicant after: Beijing Topsec Network Security Technology Co.,Ltd.

Applicant after: Topsec Technologies Inc.

Applicant after: BEIJING TOPSEC SOFTWARE Co.,Ltd.

Address before: 100000 4th floor, building 3, yard 1, Shangdi East Road, Haidian District, Beijing

Applicant before: Beijing Topsec Network Security Technology Co.,Ltd.

Applicant before: Topsec Technologies Inc.

Applicant before: BEIJING TOPSEC SOFTWARE Co.,Ltd.

GR01 Patent grant