CN110661731B - Message processing method and device - Google Patents


Info

Publication number
CN110661731B
CN110661731B (application number CN201910918865.9A)
Authority
CN
China
Prior art keywords
message
processing
queue
processed
messages
Prior art date
Legal status
Active
Application number
CN201910918865.9A
Other languages
Chinese (zh)
Other versions
CN110661731A (en)
Inventor
刘南雁
范杰
赵强
Current Assignee
Everbright Xinglong Trust Co ltd
Original Assignee
Everbright Xinglong Trust Co ltd
Priority date
Filing date
Publication date
Application filed by Everbright Xinglong Trust Co ltd filed Critical Everbright Xinglong Trust Co ltd
Priority to CN201910918865.9A
Publication of CN110661731A
Application granted
Publication of CN110661731B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/70: Admission control; Resource allocation
    • H04L 47/72: Admission control; Resource allocation using reservation actions during connection setup
    • H04L 47/722: Admission control; Resource allocation using reservation actions during connection setup at the destination endpoint, e.g. reservation of terminal resources or buffer space
    • H04L 47/50: Queue scheduling

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention relates to a message processing method and a corresponding device. The method comprises: step S1: receiving messages and caching them on arrival; step S2: processing the messages with processing units; step S3: caching the processed messages for transmission to an external network. The invention sets up a preprocessing buffer queue to preprocess each message in advance, so that messages can be cached directionally according to factors such as their type and arrival time without slowing down processing, and each queue can be mapped directly to a set of processors. The size of each processor set is adjusted dynamically according to the number of messages waiting in the corresponding pending queue, which greatly improves processing efficiency.

Description

Message processing method and device
[ technical field ]
The present invention belongs to the field of communication technology, and in particular, to a message processing method and apparatus.
[ background of the invention ]
Modern communication networks contain many complex devices, such as routers, network managers, switches, firewalls, and various servers. These devices support a variety of network protocols in order to interconnect network elements. In a video network, establishing a network tunnel is completed under the control of signaling messages: a video networking terminal receives a signaling message from a video networking node server in kernel mode, passes it to user mode via a data copy and a socket interface, and parses it there. With the rapid development of network technology, the demand for information content and data volume keeps growing and transmission speeds keep increasing, so network devices often experience momentary congestion while data messages traverse the network. During transmission a device must not only forward messages but also perform deep processing such as checking, comparison, and lookup. This places higher demands on network equipment: congestion at a device can cause packet loss, which degrades transmission performance, reduces system throughput, and ultimately harms the user experience. Because system resources are limited, resource allocation must be made more reasonable and precise; an unoptimized allocation may overload a particular task and cause unnecessary message loss.
Because hardware resources are limited, resource allocation often needs to be more reasonable and precise, and an unoptimized allocation may overload a task and cause unnecessary message loss; in addition, conflicts can arise when different execution units access the I/O queues at the same time. With the gap between the pace of hardware development and users' demands on network data gradually widening, how to raise message processing speed under limited hardware resources is a problem that needs to be solved. The invention sets up a preprocessing buffer queue to preprocess messages in advance, so that messages are cached directionally according to factors such as their type and arrival time without slowing down processing, and each queue corresponds directly to a processor set. Through this queue-to-set correspondence, the size of each processor set can be adjusted dynamically according to the number of messages waiting in the corresponding pending queue, ensuring the processing efficiency of each message type while keeping the processors well utilized. Finally, a fast routing table maps messages to ports during processing, greatly improving sending efficiency.
[ summary of the invention ]
In order to solve the above problems in the prior art, the present invention provides a message processing method, the method comprising:
step S1: receiving a message and caching the received message;
step S2: the processing unit processes the message;
step S3: the processed message is buffered for transmission to an external network.
Furthermore, a first queue to be processed and a second queue to be processed perform the inflow caching: the first queue temporarily stores messages, and after preprocessing each message is stored in one of the second sub-queues of the second queue to be processed.
Further, each second sub-queue is associated with a processing type of the messages.
Further, step S1 specifically comprises: storing the received message into the first queue to be processed; preprocessing the message; and storing the preprocessed message into the second queue to be processed or into an exit queue.
Further, the preprocessing of the message specifically comprises: judging the type of the message; if the message does not need local processing, sending it directly to a port of the network node; otherwise, prejudging its processing type from the input parameters of a classification model and sending it to the corresponding second sub-queue of the second queue to be processed.
Further, the messages in the first queue to be processed are preprocessed one by one.
a message processing apparatus, the apparatus comprising: the system comprises a plurality of processing units, a main control unit, a first queue to be processed, a second queue to be processed, a fast routing unit, an exit queue and ports; wherein: the second queue to be processed comprises a plurality of second sub-queues;
the processing unit is used for processing the message; the processing units are divided into a plurality of processing unit sets, each second sub-queue corresponds to one processing unit set, and the processing unit set is used for processing the messages in the corresponding second sub-queue; the processing unit set has higher processing capacity on the message types in the corresponding second sub-queues than the processing capacity on the message types in other second sub-queues;
the exit queue is used for carrying out outflow caching on the processed messages, so that the processed messages do not occupy the internal or shared storage space of the processing unit;
the first queue to be processed is also used for being directly connected with the outlet queue so as to directly store the received message into the outlet queue;
the processing units are divided into a plurality of processing unit sets; the storage-sharing and communication overheads between processing units differ: they are relatively small between units within a set and relatively large between units in different sets.
Further, the division is dynamic: the number of processing units in each processing unit set is adjusted according to the number of messages waiting in the corresponding second sub-queue, so that the number of units matches the number of waiting messages.
Further, the length of the second sub-queues in the second queue to be processed is dynamically adjustable.
The beneficial effects of the invention include: a preprocessing buffer queue preprocesses messages in advance, so that messages are cached directionally according to factors such as their type and arrival time without slowing down processing, and each queue corresponds directly to a processor set; through this queue-to-set correspondence, the size of each processor set can be adjusted dynamically according to the number of messages waiting in the corresponding pending queue, ensuring the processing efficiency of each message type while keeping the processors well utilized; and a fast routing table maps messages to ports during processing, greatly improving sending efficiency.
[ description of the drawings ]
The accompanying drawings, which are included to provide a further understanding of the invention, are incorporated in and constitute a part of this application; they illustrate the invention and are not to be considered limiting of it:
fig. 1 is a schematic diagram of a message processing method according to the present invention.
Fig. 2 is a schematic diagram of a message processing apparatus according to the present invention.
[ detailed description of the embodiments ]
The present invention will now be described in detail with reference to the drawings and specific embodiments, wherein the exemplary embodiments and descriptions are provided only for the purpose of illustrating the present invention and are not to be construed as limiting the present invention.
A detailed description is now given of the message processing method of the present invention; as shown in fig. 1, the method comprises:
step S1: receiving a message and caching the received message; specifically, the method comprises the following steps: storing the received message into a first queue to be processed; preprocessing the message, and storing the preprocessed message into a second queue to be processed or an exit queue;
the network node receives the message and performs inflow cache, processing and outflow cache on the received message; the network node is provided with a plurality of processing units, a main control unit, a first queue to be processed, a second queue to be processed, a fast routing unit, an outlet queue and a port; wherein: the second queue to be processed comprises a plurality of second sub-queues;
preferably: the main control unit is one or more of the processing units and performs control operations such as message preprocessing;
the fast routing unit is used for sending the message from the exit queue to a port of the network node; the port is used for connecting with an external network and sending the message to the external network through the port;
the first queue to be processed and the second queue to be processed perform the inflow caching: the first queue temporarily stores messages, and after preprocessing each message is stored in one of the second sub-queues of the second queue to be processed; each second sub-queue is associated with a processing type and stores the messages of that type; the second sub-queues are coupled to the processing units so that messages reach the corresponding processor set;
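The two-stage inflow cache just described can be sketched as follows. This is a minimal Python illustration; the class name `InflowCache` and the processing-type keys are assumptions made for the example, not terms from the patent.

```python
from collections import deque

class InflowCache:
    """Two-stage inflow cache: a first queue holds raw messages until
    preprocessing assigns each one to a per-processing-type sub-queue."""

    def __init__(self, processing_types):
        self.first_queue = deque()                       # temporary storage
        self.second_queues = {t: deque() for t in processing_types}

    def receive(self, message):
        # Step S1: every received message first lands in the first queue.
        self.first_queue.append(message)

    def preprocess_one(self, classify):
        # Move one message from the first queue into the sub-queue
        # chosen by the caller-supplied classification function.
        if not self.first_queue:
            return None
        msg = self.first_queue.popleft()
        ptype = classify(msg)
        self.second_queues[ptype].append(msg)
        return ptype

cache = InflowCache(["compute", "storage", "stream"])
cache.receive({"type": "video", "size": 1500})
cache.preprocess_one(lambda m: "compute")
```

Because each second sub-queue is keyed by processing type, the later dispatch to a processor set is a plain dictionary lookup.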
the processing units are used for processing messages; they are divided into a plurality of processing unit sets, each second sub-queue corresponds to one processing unit set, and that set processes the messages in its sub-queue; a set's processing capability for the message types of its own second sub-queue is higher than its capability for the message types of other sub-queues; that is, the processing units in a set can also process other types of messages, just less efficiently than their own type; in the prior art, received messages are often simply handled in a k-in-k-out fashion with a fixed number of queues of fixed length, which cannot adapt to the message types or to the temporal pattern of their arrival, so the resource utilization of message processing is very low; with the present method, the degree of message congestion can be monitored directly from the available queue length so as to dynamically adjust the corresponding processing unit set; such dynamic adjustment may be based on a time period, and/or based on real-time parameters;
the exit queue performs outflow caching of the processed messages so that they no longer occupy the internal or shared storage space of the processing units, avoiding possible loss and congestion;
preferably: the first queue to be processed is also used for being directly connected with the outlet queue so as to directly store the received message into the outlet queue;
the preprocessing of the message specifically comprises the following steps: judging the type of the message; if the message does not need to be locally processed, the message is directly sent to a port of a network node, otherwise, the processing type is pre-judged based on the input parameters of the classification model, and the message is sent to a corresponding second sub-queue in a second queue to be processed; preferably, the input parameters are the type of the message, the size of the message, the sending time of the message and the like; the type of the network message and the occurrence time of the network message are closely related, but the factor is not considered in the message processing in the prior art, and the message processing efficiency is limited due to the fact that the judgment is carried out on the attributes of other messages, which are not time factors, of the message at the current moment; if the time information is taken into consideration, the efficiency of processing and classifying can be greatly improved; the message type comprises whether processing is needed or not, the processing is bias calculation, the processing is bias storage, the processing is streaming processing, the processing of the message is a processing flow of a specific type and the like; that is, the processing type is not directly determined according to the type of the packet, and only whether further processing is required can be determined according to the type of the packet;
preferably: the messages in the first queue to be processed are preprocessed one by one; the message identifiers are obtained in turn and the message type is judged from the identifier; message types that need no processing are stored directly into the exit queue; for the other message types the processing type is prejudged, and the message is stored in the second sub-queue of the second queue to be processed that corresponds to the prejudged processing type; the identifier of a message includes a portion indicating its type; that portion is extensible; message types and processing types are not in one-to-one correspondence;
the prejudging of the processing type from the input parameters specifically comprises: taking one or more of the type, size, and/or sending time of the message as input parameters and feeding them to a coarse classification model to obtain a binary result indicating whether the processing classification is the first processing classification or not; before use, the coarse classification model is trained on sample data containing input parameters and the corresponding coarse classification results, so that it acquires a certain classification capability; preferably: the output of the coarse classification model is zero or one, representing the first processing classification and the non-first processing classification respectively; when the coarse classification result is not the first processing classification, the input parameters are fed to a fine classification model to obtain a second processing classification; the second classification result indicates one of a plurality of processing types, and these processing types, together with the first processing classification, add up to the number of second sub-queues; the first processing classification is the processing classification that accounts for the largest share of the message types; although messages come in many types, the dominant processing classification is separated out by the coarse pass, which is very fast, and chaining the coarse and fine classification models in series greatly improves classification efficiency; the cooperation of the two-stage coarse/fine classification with the dominant processing classification greatly improves the classification effect; preferably: the first processing classification is the atypical classification; typical classifications are sorted into specific second sub-queues, while ordinary messages without typical characteristics are not specially classified and are screened directly by the coarse pass into a general processing unit set;
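The coarse-then-fine cascade above can be sketched as follows. A size threshold and a hash rule stand in for the trained coarse and fine classification models, and all names are illustrative assumptions of this example.

```python
def coarse_classify(msg):
    """Binary coarse model: returns 0 for the dominant ('first')
    processing classification, 1 otherwise. A size threshold stands
    in for the trained model."""
    return 0 if msg["size"] < 512 else 1

def fine_classify(msg, n_types):
    """Fine model: maps a non-dominant message to one of n_types
    processing types (one per second sub-queue). Hash-based stand-in."""
    return hash(msg["type"]) % n_types

def predict_processing_type(msg, n_types):
    # Most messages exit after the cheap coarse pass; only the rest pay
    # for the fine model, which is the efficiency argument made above.
    if coarse_classify(msg) == 0:
        return "general"             # dominant/atypical class -> general set
    return f"type-{fine_classify(msg, n_types)}"
```

A usage sketch: `predict_processing_type({"type": "signaling", "size": 100}, 3)` returns `"general"` without ever invoking the fine model.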
the storing of the preprocessed message into the second queue to be processed or the exit queue specifically comprises: judging the type of the message; storing messages that need no processing directly into the exit queue; for the other message types, prejudging the processing type with the classification model and storing the message into the corresponding second sub-queue of the second queue to be processed to await further processing;
step S2: the processing units process the messages; specifically: when a processing unit is idle, it obtains a message from the second queue to be processed and processes it; the processing units in a processing unit set process messages independently or cooperatively, and when idle they fetch messages to be processed directly from the second sub-queue without being dispatched by the main control unit;
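A minimal sketch of this pull model, with a Python thread standing in for a processing unit: the idle worker takes messages directly from its set's sub-queue, with no central dispatcher involved. The uppercase transform is a stand-in for real message processing.

```python
import queue
import threading

def worker(sub_queue, results, stop):
    # An idle processing unit pulls directly from its set's sub-queue;
    # no dispatch by the main control unit is needed (step S2).
    while not stop.is_set():
        try:
            msg = sub_queue.get(timeout=0.05)
        except queue.Empty:
            continue                     # stay idle, keep polling
        results.append(msg.upper())      # stand-in for real processing
        sub_queue.task_done()

sub_q = queue.Queue()
results, stop = [], threading.Event()
t = threading.Thread(target=worker, args=(sub_q, results, stop))
t.start()
for m in ["a", "b", "c"]:
    sub_q.put(m)
sub_q.join()                             # wait until the sub-queue drains
stop.set()
t.join()
```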
the network node comprises a plurality of processing units that share storage space and are communicatively connected; the processing units are partly identical and partly different, i.e. they appear heterogeneous as a whole but locally consist of groups of identical units; for example: 4 processing units, two of type A and two of type B;
the processing units are divided into a plurality of processing unit sets; the storage-sharing and communication overheads between processing units differ: they are relatively small between units within a set and relatively large between units in different sets;
the division is dynamic, specifically: the number of processing units in each processing unit set is adjusted according to the number of messages waiting in the corresponding second sub-queue, so that the number of units matches the number of waiting messages; correspondingly, the length of each second sub-queue in the second queue to be processed is dynamically adjustable; preferably: the length of a second sub-queue is adjusted based on the temporal characteristic of its messages and the number of messages currently waiting in it; the temporal characteristic of a message type indicates how the arrival of that type at the network node varies over time;
different processing units have different architectures and different processing capabilities for different message types; processing units within the same processing unit set are substantially identical; processing units can be organized according to the message types they handle best; a unit normally serves the message type it is good at, but when few messages of that type are pending and many messages of other types are waiting, the processing units can be dynamically re-divided so that some of them join other processing unit sets; although a re-assigned unit is weaker at those other message types, it still relieves their backlog; the division principle is to keep the communication and storage overheads between the newly added unit and the other units of the set within an acceptable range, for example within a set threshold;
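The dynamic re-division described above might be sketched as follows; the load-ratio threshold and the move-one-unit-at-a-time policy are assumptions of this example, not requirements stated in the patent.

```python
def rebalance(sets, backlogs, threshold=2.0):
    """Move one processing unit from the least-loaded set to the most-
    loaded one when the per-unit backlog ratio exceeds `threshold`.
    `sets` maps set name -> list of unit ids; `backlogs` maps set name
    -> number of messages waiting in that set's second sub-queue."""
    load = {s: backlogs[s] / max(len(u), 1) for s, u in sets.items()}
    busiest = max(load, key=load.get)
    idlest = min(load, key=load.get)
    if (len(sets[idlest]) > 1                       # never empty a set
            and load[busiest] > threshold * max(load[idlest], 1e-9)):
        sets[busiest].append(sets[idlest].pop())    # re-assign one unit
    return sets
```

Calling this periodically (or on real-time parameters, as the text allows) keeps set sizes matched to the number of waiting messages.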
preferably: the dynamic division is performed on top of a basic division; specifically: initially, the processing units are divided into a plurality of processing unit sets according to their processing capabilities and the architecture of the network unit; on this basis, the dynamic division adjusts the processing units contained in each set according to the number of messages waiting in the second queue to be processed, so that the number of units in a set matches the number of waiting messages that correspond to it; the architecture of the network unit comprises its communication mode and its storage sharing mode;
preferably: a general processing unit set is provided whose processing units have roughly equal capability for all message types; during dynamic division, the units of the general set are the first to be moved in or out;
preferably: the messages of the atypical processing classification are delivered to the general processing unit set for processing;
the obtaining and processing of a message from the cache specifically comprises: a processing unit obtains a message from the second sub-queue corresponding to the processing unit set it belongs to, and processes it;
a processing unit set's processing capability for the messages of its own second sub-queue is higher than its capability for the messages of other second sub-queues;
step S3: caching the processed messages for sending to an external network; specifically: after a processing unit finishes processing a message, the message is stored in the exit queue, and the fast routing unit is queried to send the messages in the exit queue through different ports to the external network;
because the network node has multiple ports, messages can be sent in parallel; thanks to the exit-queue cache, a processed message no longer occupies the storage space of the processing unit; based on the sending information obtained while processing the message, the fast routing unit selects the port that delivers the message to its target most efficiently;
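A sketch of the fast routing unit as a destination-to-port map that is filled during processing and consulted when draining the exit queue; the class and method names, and the string destinations, are illustrative assumptions.

```python
class FastRoutingUnit:
    """Destination -> port map filled while messages are processed, so
    the exit queue can be drained with one dictionary lookup per message."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}

    def fill(self, destination, port):
        # Called during processing, as soon as sending info is known.
        if port in self.ports:
            self.table[destination] = port

    def select_port(self, destination, default=None):
        # Constant-time mapping at send time: the "fast" part.
        return self.table.get(destination, default)

fru = FastRoutingUnit(ports=[0, 1, 2])
fru.fill("10.0.0.5", 2)
```

Filling the table during processing means no route computation remains on the sending path, which is the efficiency claim made above.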
preferably: the fast routing unit is filled while the processing units process the messages;
preferably: the messages in the exit queue are sent out of order; when the selected port is unavailable, the current message is skipped so that subsequent messages can continue to be routed and sent; to prevent a message from being skipped indefinitely, the number of skips can be limited, or a skipped message can be placed in a temporary cache whose messages are given the highest processing priority, so that they are eventually sent;
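The out-of-order sending with a skip limit and a temporary cache can be sketched as follows; `max_skips` and the message field names are assumptions of this example.

```python
from collections import deque

def drain_exit_queue(exit_queue, port_available, max_skips=3):
    """Send exit-queue messages out of order: a message whose port is
    unavailable is skipped and retried; after `max_skips` skips it is
    moved to a temporary cache to be retried with top priority later."""
    pending = deque(exit_queue)
    temp_cache, sent, skips = deque(), [], {}
    while pending:
        msg = pending.popleft()
        if port_available(msg["port"]):
            sent.append(msg["id"])
        else:
            skips[msg["id"]] = skips.get(msg["id"], 0) + 1
            if skips[msg["id"]] >= max_skips:
                temp_cache.append(msg)   # drained first on the next pass
            else:
                pending.append(msg)      # retry later in this pass
    return sent, list(temp_cache)

sent, deferred = drain_exit_queue(
    [{"id": 1, "port": 0}, {"id": 2, "port": 1}],
    port_available=lambda p: p == 0,     # port 1 is down in this demo
)
```

Skipping rather than blocking keeps a down port from stalling the whole exit queue, while the skip cap bounds how long any one message can be starved.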
the above description is only a preferred embodiment of the present invention, and all equivalent changes or modifications of the structure, characteristics and principles described in the present invention are included in the scope of the present invention.

Claims (4)

1. A method for processing a message, the method comprising:
step S1: receiving a message and caching the received message;
step S2: the processing unit processes the message;
step S3: caching the processed message to send to an external network;
the first queue to be processed and the second queue to be processed are used for inflow caching, the first queue to be processed is used for temporarily storing messages, and the messages are stored in each second sub-queue of the second queue to be processed after being preprocessed; the second sub-queue is related to the processing type of the message and is used for storing the message of the corresponding processing type;
the step S1 specifically includes: storing the received message into a first queue to be processed; preprocessing the message, and storing the preprocessed message into a second queue to be processed or an exit queue;
the preprocessing of the message specifically comprises the following steps: judging the type of the message; if the message does not need to be locally processed, the message is directly sent to a port of a network node, otherwise, the processing type is pre-judged based on the input parameters of the classification model, and the message is sent to a corresponding second sub-queue in a second queue to be processed; the input parameters based on the classification model are used for processing type prejudgment, and specifically comprise: using one or more of the type, the size and/or the sending time of the message as input parameters, inputting a rough classification model for classification to obtain a dichotomous classification for indicating whether the processing classification is a first processing classification or a non-first processing classification; before the first processing classification is used, sample data containing input parameters and corresponding coarse classification results are used for training a coarse classification model so that the coarse classification model has certain classification capability;
under the condition that the classification result of the rough classification model is not the first processing classification, inputting the input parameters into the fine classification model for classification to obtain a second processing classification; the second processing classification result is used for indicating that the classification result is one of a plurality of processing types;
the step S2 specifically includes: when the processing unit is in an idle state, acquiring a message from the second queue to be processed and processing the message; the processing units in the processing unit set independently or cooperatively process the messages, and when the processing units are idle, the processing units directly go to the second sub-queue to obtain the messages to be processed;
the processing units are used for processing the messages; the plurality of processing units are divided into a plurality of processing unit sets; the storage-sharing and communication overheads between processing units differ: they are relatively small between units within a set and relatively large between units in different sets; each second sub-queue corresponds to a processing unit set, which processes the messages in that sub-queue; a processing unit set's processing capability for the messages of its own second sub-queue is higher than its capability for the messages of other second sub-queues;
monitoring the message congestion degree directly based on the queue available length so as to dynamically adjust the corresponding processing unit set; such dynamic adjustment is based on a time period, and/or dynamic adjustment based on real-time parameters; the dividing is dynamic dividing; the dynamic division is specifically as follows: dividing the number of the processing units in the corresponding processing unit set according to the number of the messages to be processed in the second sub-queue, so that the number of the processing units is matched with the number of the messages to be processed; correspondingly, the length of a second sub-queue in the second queue to be processed can be dynamically adjusted; the length of the second sub-queue is dynamically adjusted based on the time characteristic of the message and the number of the messages to be processed in the current second sub-queue; the time characteristic of the message indicates the time-varying arrival condition of the message corresponding to the second sub-queue at the network node;
the dynamic division is a dynamic division performed on the basis of a basic division; specifically: initially, the plurality of processing units are divided into a plurality of processing unit sets according to their processing capability and the architecture of the network unit; on this basic division, the processing units in each set are then dynamically adjusted according to the number of messages to be processed in the second queue to be processed, so that the number of processing units in a set matches the number of pending messages corresponding to it; the architecture of the network unit includes its communication mode and storage-sharing mode.
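The dynamic division described in claim 1 can be sketched as follows. This is only an illustrative reading, not the patented implementation: the names `ProcessingUnitSet` and `rebalance`, and the proportional-allocation rule (one unit reserved per set, remainder split by backlog), are assumptions.

```python
import queue

class ProcessingUnitSet:
    """One set of processing units bound to one second sub-queue (names illustrative)."""
    def __init__(self, name, num_units=1):
        self.name = name
        self.subqueue = queue.Queue()  # the second sub-queue served by this set
        self.num_units = num_units

    def fetch(self):
        """An idle unit pulls the next pending message directly from its sub-queue."""
        try:
            return self.subqueue.get_nowait()
        except queue.Empty:
            return None

def rebalance(unit_sets, total_units):
    """Dynamic division: keep at least one unit per set, then split the spare
    units in proportion to each sub-queue's backlog, so that the unit count
    of a set matches the number of its pending messages."""
    backlogs = [s.subqueue.qsize() for s in unit_sets]
    total = sum(backlogs)
    spare = total_units - len(unit_sets)  # one unit per set is reserved
    for s, b in zip(unit_sets, backlogs):
        s.num_units = 1 + (spare * b // total if total else spare // len(unit_sets))
```

Rebalancing can be triggered per time period or from real-time parameters (here, the instantaneous `qsize()` of each sub-queue serves as the congestion measure).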
2. The message processing method according to claim 1, characterized in that the messages in the first queue to be processed are preprocessed one by one.
3. A message processing apparatus for performing the message processing method according to claim 1 or 2, the apparatus comprising: a plurality of processing units, a main control unit, a first queue to be processed, a second queue to be processed, a fast routing unit, an egress queue, and ports; the second queue to be processed comprises a plurality of second sub-queues; the main control unit is one or more of the processing units; the main control unit is used for preprocessing messages; the fast routing unit selects the port for sending a message based on the message sending information obtained by processing the message, so that the sending efficiency to the sending target is highest; the fast routing unit is filled in while the processing units process messages;
the egress queue is used for buffering processed messages on their way out, so that processed messages do not occupy the internal or shared storage space of the processing units;
the first queue to be processed is also directly connected with the egress queue, so that a received message can be stored directly into the egress queue.
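A minimal sketch of the fast routing unit as a cache from message sending information to the best egress port. The dictionary-based cache and the method names `fill` and `select_port` are assumptions made for illustration, not the patented design.

```python
class FastRoutingUnit:
    """Sending-information -> best-port cache (illustrative sketch).
    Entries are filled in as a side effect of a processing unit handling a
    message; later messages with the same sending information can select
    their egress port without repeating the full routing decision."""
    def __init__(self):
        self._table = {}

    def fill(self, sending_info, port):
        # Called during message processing: remember the most efficient port.
        self._table[sending_info] = port

    def select_port(self, sending_info):
        # Fast path on egress; None means fall back to the full routing decision.
        return self._table.get(sending_info)
```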
4. The message processing apparatus according to claim 3, wherein the messages in the egress queue are sent out of order; when the selected port is unavailable, processing of the current message is skipped and subsequent messages continue to be routed and sent; to prevent a message from being skipped indefinitely, the number of skips is limited, or the skipped message is placed in a temporary cache whose messages are given the highest processing priority, so that they are sent first.
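The out-of-order sending with a skip limit and a highest-priority temporary cache might look like the following sketch. `MAX_SKIPS`, the `(port, message, skips)` tuple layout, and the `port_available` callback are assumptions, not details given in the claim.

```python
from collections import deque

MAX_SKIPS = 3  # assumed bound on how often one message may be skipped

def drain_egress(egress, port_available, temp_cache):
    """One pass over the egress queue. Messages whose port is busy are skipped
    so later messages can still go out; after MAX_SKIPS skips a message is
    parked in temp_cache, which is tried first (highest priority) on every pass."""
    sent = []
    for entry in list(temp_cache):          # temporary cache goes first
        port, msg = entry
        if port_available(port):
            sent.append(msg)
            temp_cache.remove(entry)
    pending = list(egress)
    egress.clear()
    for port, msg, skips in pending:
        if port_available(port):
            sent.append(msg)                # send out of order
        elif skips + 1 >= MAX_SKIPS:
            temp_cache.append((port, msg))  # stop skipping, park it
        else:
            egress.append((port, msg, skips + 1))
    return sent
```

Each pass first flushes the temporary cache, so a previously parked message cannot be starved by fresh traffic on the same port.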
CN201910918865.9A 2019-09-26 2019-09-26 Message processing method and device Active CN110661731B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910918865.9A CN110661731B (en) 2019-09-26 2019-09-26 Message processing method and device

Publications (2)

Publication Number Publication Date
CN110661731A CN110661731A (en) 2020-01-07
CN110661731B 2020-09-29

Family

ID=69039361

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910918865.9A Active CN110661731B (en) 2019-09-26 2019-09-26 Message processing method and device

Country Status (1)

Country Link
CN (1) CN110661731B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112580098A (en) * 2020-12-23 2021-03-30 光大兴陇信托有限责任公司 Business data comparison method and system

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101706742A (en) * 2009-11-20 2010-05-12 北京航空航天大学 Method for dispatching I/O of asymmetry virtual machine based on multi-core dynamic partitioning
CN108632165A (en) * 2018-04-23 2018-10-09 新华三技术有限公司 A kind of message processing method, device and equipment

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101610435B (en) * 2009-07-17 2012-05-16 清华大学 Queue-type all-optical buffer
CN102082698A (en) * 2009-11-26 2011-06-01 上海大学 Network data processing system of high performance core based on improved zero-copy technology
CN105337896A (en) * 2014-07-25 2016-02-17 华为技术有限公司 Message processing method and device
CN105511954B (en) * 2014-09-23 2020-07-07 华为技术有限公司 Message processing method and device
CN106411778B (en) * 2016-10-27 2019-07-19 东软集团股份有限公司 The method and device of data forwarding
CN108881060A (en) * 2018-06-29 2018-11-23 新华三信息安全技术有限公司 A kind of method and device handling communication message


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant