CN105511954B - Message processing method and device - Google Patents


Info

Publication number: CN105511954B
Authority: CN (China)
Prior art keywords: processed, task, message, processing, queue
Legal status: Active (assumed; not a legal conclusion)
Application number: CN201410490087.5A
Other languages: Chinese (zh)
Other versions: CN105511954A (en)
Inventors: 徐奕, 徐正华, 赵广, 王明辉
Current Assignee: Huawei Technologies Co Ltd
Original Assignee: Huawei Technologies Co Ltd
Application filed by Huawei Technologies Co Ltd
Priority to CN201410490087.5A
Publication of CN105511954A
Application granted
Publication of CN105511954B

Abstract

The invention discloses a message processing method and device. In the method, a pre-processor is arranged outside a multi-core CPU. The pre-processor generates a task to be processed according to message information of a received message to be processed, allocates a sequence code to the task according to the identifier of the corresponding physical port and the order in which the message was received, schedules the task to the multi-core CPU, and performs order-preserving processing on the tasks completed by the multi-core CPU according to their sequence codes. With this technical scheme, the message scheduling and order-preserving functions are implemented by the pre-processor, so the multi-core CPU can execute tasks in parallel, contention and resource interlocking among the cores are relieved, and the processing capability of the multi-core CPU is effectively improved. In addition, by executing tasks in parallel, the multi-core CPU effectively increases the task processing speed, avoids wasting processor resources, and offloads these functions from the CPU.

Description

Message processing method and device
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a message processing method and apparatus.
Background
With the continuous development of information technology and the continuous expansion of network communication bandwidth, ever higher processing performance is required of communication equipment, and multi-core Central Processing Units (CPUs) are widely used in communication equipment because of their high performance, good universality, and good software portability. In an existing multi-core CPU, each processing unit (core) is an independent structure, and contention arises between cores when they acquire and release tasks and resources, which degrades the processing performance of the multi-core CPU. Moreover, because the cores process tasks on mutually independent timelines, a task dispatched to the multi-core CPU earlier may finish later, while a task dispatched later may finish earlier, causing out-of-order processing among tasks of the same or related types.
At present, the classification and processing functions for the tasks to be processed by a multi-core CPU (such as message receiving, sending, order preserving, and flow classification) are generally implemented in software. Specifically, resources inside the multi-core CPU are managed uniformly by software, and message transceiving, classification, and processing are completed by interlocking those resources.
However, the serial execution and memory sharing inherent in a software order-preserving implementation cause interlocking among the cores and limit the parallel processing capability of the multi-core CPU. Moreover, for functions with strict real-time requirements (such as message transceiving and timers), scheduling and order-preserving the messages to be processed by the multi-core CPU in software requires dedicating exclusive processor resources to those functions, which wastes processor resources.
Disclosure of Invention
Embodiments of the invention provide a message processing method and device, so as to solve the problems that, when messages to be processed by a multi-core CPU are scheduled and processed in an order-preserving manner, the multi-core CPU cannot process tasks in parallel, the task processing speed is low, and processor resources are wasted.
The embodiment of the invention provides the following specific technical scheme:
in a first aspect, a method for processing a packet is provided, including: receiving a message to be processed from a physical port, and generating a task to be processed according to message information of the message to be processed, wherein the task to be processed comprises the message information and a sequence code; the sequence code is a unique identifier distributed to the task to be processed according to the identifier of the physical port and the sequence of receiving the message to be processed; distributing the tasks to be processed to corresponding queues to be processed according to the message information and the sequence codes in the tasks to be processed; scheduling the tasks to be processed in the queues to be processed to a processing unit which has a mapping relation with the queues to be processed in a multi-core Central Processing Unit (CPU), and processing the tasks to be processed by the processing unit; receiving a processed task sent by the multi-core CPU, wherein the processed task comprises a processing type, a processed queue identifier and the sequence code; and performing order preservation processing on the processed task according to the sequence code, and finishing the processing action corresponding to the processing type on the processed task according to the processing type in the processed task after the order preservation processing.
With reference to the first aspect, in a first possible implementation manner, the message to be processed is written into a memory of the multi-core CPU in a direct memory access (DMA) manner over a Peripheral Component Interconnect Express (PCIE) link, and a storage address of the message to be processed in the memory is acquired; the task to be processed further comprises the storage address. The task to be processed is sent in a DMA manner to a processing unit in the multi-core CPU that has a mapping relationship with the queue to be processed; the processing unit acquires the message to be processed according to the storage address carried in the task, processes the message according to the message information in the task to obtain a processed message, and stores the processed message in the memory at the storage address of the message to be processed.
With reference to the first possible implementation manner of the first aspect, in a second possible implementation manner, when the processing type of the processed task is sending, the processed task is added to a processed queue corresponding to the processed queue identifier according to the processed queue identifier of the processed task and the sequence code; and if the sequence code of the processed task is the minimum sequence code in the processed queue, acquiring a storage address of the processed message, acquiring the processed message according to the storage address, and sending the processed message to the physical port.
With reference to the first possible implementation manner of the first aspect, in a third possible implementation manner, when the processing type of the processed task is rejoining the queue, the processed task is added to the processed queue corresponding to the processed queue identifier according to the processed queue identifier of the processed task and the sequence code.
With reference to the first possible implementation manner of the first aspect, in a fourth possible implementation manner, when the processing type in the processed task is deletion, the to-be-processed task is deleted from the to-be-processed queue, and the multi-core CPU is instructed to release the storage space of the processed packet.
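The order preservation sketched in the implementations above, in which a processed task is sent out only once its sequence code is the smallest one still outstanding in its processed queue, can be illustrated with a minimal sketch. This is Python used purely for illustration; the class name, the min-heap, and contiguous sequence codes starting at 0 are assumptions, not the patented hardware design:

```python
import heapq

class OrderPreservingQueue:
    """Releases processed tasks strictly in sequence-code order.

    Tasks may complete out of order on the multi-core CPU; a task is
    released only when its sequence code is the smallest one not yet
    released (illustrative sketch, assumes codes 0, 1, 2, ...).
    """

    def __init__(self):
        self._heap = []          # min-heap of (sequence_code, task)
        self._next_expected = 0  # smallest sequence code not yet released

    def add_processed(self, sequence_code, task):
        """Buffer a completed task; return the list of tasks now sendable."""
        heapq.heappush(self._heap, (sequence_code, task))
        sendable = []
        # Drain the heap while its head matches the expected code.
        while self._heap and self._heap[0][0] == self._next_expected:
            _, t = heapq.heappop(self._heap)
            sendable.append(t)
            self._next_expected += 1
        return sendable
```

For example, if tasks complete in the order SN1, SN0, SN2, adding SN1 releases nothing, adding SN0 releases both SN0 and SN1, and adding SN2 releases SN2.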
In a second aspect, a message processing apparatus is provided, including:
a receiving unit, configured to receive a message to be processed from a physical port; the generating unit is used for generating a task to be processed according to the message information of the message to be processed received by the receiving unit, wherein the task to be processed comprises the message information and a sequence code; the sequence code is a unique identifier distributed to the task to be processed according to the identifier of the physical port and the sequence of receiving the message to be processed; the distribution unit is used for distributing the tasks to be processed to corresponding queues to be processed according to the message information and the sequence codes in the tasks to be processed generated by the generation unit; the scheduling unit is used for scheduling the tasks to be processed in the queues to be processed which are distributed and completed by the distribution unit to a processing unit which has a mapping relation with the queues to be processed in a multi-core Central Processing Unit (CPU), and the processing unit processes the tasks to be processed; the receiving unit is further configured to receive a processed task sent by the multi-core CPU, where the processed task includes a processing type, a processed queue identifier, and the sequence code; and the task processing unit is used for carrying out order preservation processing on the processed tasks received by the receiving unit according to the sequence codes and finishing the processing actions corresponding to the processing types on the processed tasks according to the processing types in the processed tasks after the order preservation processing.
With reference to the second aspect, in a first possible implementation manner, the apparatus further includes an obtaining unit, configured to: write the message to be processed into a memory of the multi-core CPU in a direct memory access (DMA) manner over a Peripheral Component Interconnect Express (PCIE) link, and acquire a storage address of the message to be processed in the memory; the task to be processed further comprises the storage address. After the order-preserving processing, when the task processing unit completes, for the processed task, the processing action corresponding to its processing type, this specifically includes: sending the task to be processed in a DMA manner to a processing unit in the multi-core CPU that has a mapping relationship with the queue to be processed; the processing unit acquiring the message to be processed according to the storage address carried in the task, processing the message according to the message information in the task to obtain a processed message, and storing the processed message in the memory at the storage address of the message to be processed.
With reference to the first possible implementation manner of the second aspect, in a second possible implementation manner, the task processing unit is specifically configured to: when the processing type in the processed task is sending, adding the processed task to a processed queue corresponding to the processed queue identifier according to the processed queue identifier in the processed task and the sequence code; and if the sequence code of the processed task is the minimum sequence code in the processed queue, acquiring a storage address of the processed message, acquiring the processed message according to the storage address, and sending the processed message to the physical port.
With reference to the first possible implementation manner of the second aspect, in a third possible implementation manner, the task processing unit is specifically configured to: and when the processing type in the processed task is rejoining the queue, adding the processed task to the processed queue corresponding to the processed queue identifier according to the processed queue identifier in the processed task and the sequence code.
With reference to the first possible implementation manner of the second aspect, in a fourth possible implementation manner, the task processing unit is specifically configured to: and when the processing type in the processed task is deletion, deleting the task to be processed from the queue to be processed, and indicating the multi-core CPU to release the storage space of the processed message.
In the embodiments of the invention, a task to be processed is generated according to the message information of a received message to be processed, a sequence code is allocated to the task according to the identifier of the corresponding physical port and the receiving order, and the task is classified according to its message information; the classified tasks are then scheduled to the multi-core CPU, and order-preserving processing is performed on the tasks completed by the multi-core CPU according to their sequence codes. With this technical scheme, a device other than the multi-core CPU implements the message scheduling and order-preserving functions, so the multi-core CPU can execute tasks out of order and in parallel; contention and resource interlocking among the cores of the multi-core CPU are relieved, the parallel processing capability of the multi-core CPU is effectively improved, and, by executing tasks in parallel, the multi-core CPU increases the task processing speed and avoids wasting processor resources.
Drawings
FIG. 1 is a diagram illustrating an architecture of a message processing system according to an embodiment of the present invention;
FIG. 2 is a flow chart of message processing in an embodiment of the present invention;
FIG. 3 is a diagram illustrating a task structure according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating assignment of sequence codes to pending tasks according to an embodiment of the present invention;
FIG. 5 is a diagram illustrating enqueue operations during an order preserving process according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating dequeue operations during an order preserving process according to an embodiment of the present invention;
FIG. 7 is a flow chart of message processing in a specific application scenario according to an embodiment of the present invention;
FIG. 8 is a diagram illustrating a structure of a message processing apparatus according to an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of a message processing device in the embodiment of the present invention.
Detailed Description
The embodiments aim to solve the prior-art problems that, when tasks to be processed by a multi-core CPU are processed in an order-preserving manner, the multi-core CPU cannot process the tasks in parallel and processor resources are wasted. In the embodiments of the invention, a pre-processor is arranged outside the multi-core CPU. The pre-processor generates tasks to be processed according to the message information of received messages to be processed, allocates a sequence code to each task according to the identifier of the corresponding physical port and the receiving order, and classifies the tasks according to their message information; it then schedules the classified tasks to the multi-core CPU and performs order-preserving processing on the tasks completed by the multi-core CPU according to their sequence codes. With this technical scheme, the message scheduling and order-preserving functions are implemented by the pre-processor, so the multi-core CPU can execute tasks in parallel and out of order rather than sequentially; contention and resource interlocking among the cores are relieved, the processing capability of the multi-core CPU is effectively improved, the task processing speed is increased, processor resources are not wasted, and these functions are offloaded from the CPU.
Referring to fig. 1, which is a schematic diagram of a message processing system architecture according to an embodiment of the present invention, the message processing system includes a pre-processor and a multi-core CPU. The pre-processor generates tasks to be processed according to the messages to be processed received from the physical ports, classifies and schedules the tasks, and performs order-preserving processing on the tasks completed by the multi-core CPU; the multi-core CPU processes the tasks to be processed delivered by the pre-processor.
The pre-processor can be implemented by a programmable processor, such as a Field Programmable Gate Array (FPGA), by a Network Processor (NP), or by another Application-Specific Integrated Circuit (ASIC).
The embodiments of the present invention will be described in further detail with reference to the accompanying drawings.
Referring to fig. 2, an embodiment of the present invention provides a message processing method, including:
Step 200: the pre-processor receives a message to be processed from the physical port and generates a task to be processed according to the message information of the message.
In the embodiment of the present invention, the pre-processor and the multi-core CPU are connected by a Peripheral Component Interconnect Express (PCI Express, PCIE for short) bus. The pre-processor receives a message to be processed from a physical port, writes the received message into the memory of the multi-core CPU over PCIE in a Direct Memory Access (DMA for short) manner, and records the storage address of the message in the memory of the multi-core CPU. The physical port is a high-speed Ethernet port, for example an Ethernet port with a rate of 1 gigabit per second (Gbit/s), 10 Gbit/s, or 40 Gbit/s.
The storage address of the message to be processed in the memory of the multi-core CPU can be obtained using circular addresses managed by the multi-core CPU: when the multi-core CPU receives a message to be processed written by the pre-processor, it allocates storage space in the memory in real time for the message to be written and notifies the pre-processor of the storage address of that space. The circular addresses are a plurality of discontinuous storage addresses connected end to end into a ring.
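The circular-address scheme can be sketched as a ring of discontinuous free buffer addresses. This is an illustrative sketch only; the class and method names are hypothetical, and in the patent the addresses are managed by the multi-core CPU rather than by software like this:

```python
from collections import deque

class CircularAddressPool:
    """Ring of discontinuous buffer addresses, connected end to end.

    Free buffer addresses are handed out from the head of the ring for
    each incoming message, and a released address rejoins the tail, so
    the discontinuous addresses form a closed loop (sketch).
    """

    def __init__(self, addresses):
        self._ring = deque(addresses)  # end-to-end ring of free addresses

    def acquire(self):
        """Hand the next free buffer address to the pre-processor."""
        return self._ring.popleft()

    def release(self, address):
        """Return a buffer address to the ring once its message is consumed."""
        self._ring.append(address)
```

Note that the addresses need not be contiguous; only the hand-out order forms a cycle.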
Alternatively, the multi-core CPU may pre-allocate to the pre-processor a continuous storage space in its memory for storing messages to be processed, in which case the storage addresses of that space are managed by the pre-processor: when the pre-processor needs to store a message to be processed in the memory of the multi-core CPU, it stores the received message in that space and records the storage address.
The pre-processor parses the message to be processed and extracts message information from it, including the message length, the message type, the physical port identifier, the storage address of the message in the memory, and the storage address of the task to be processed in the memory; optionally, the message information further includes an Internet Protocol (IP) address, a Media Access Control (MAC) address, a priority, and similar information. The pre-processor generates the task to be processed according to this message information. Which message information is required to generate a task may be preset according to the specific application scenario; for example, when the multi-core CPU is required to perform layer-2 forwarding on the message, the message information extracted by the pre-processor includes the MAC address. If the message information in a generated task lacks information the multi-core CPU needs when processing the task, the multi-core CPU can fetch the message itself from the storage address carried in the task and obtain the required message information from the message.
In this embodiment, each task to be processed has a fixed length and is described in a predefined format, namely the first preset format; the length of a task may be set according to the specific application scenario. Fig. 3 is a schematic diagram of the first preset format, in which the length of the task to be processed is 4 × 64 bits. As shown in fig. 3, the items in the task have the following meanings:
the task flag bit is used for identifying the state of the task, such as the state of the task is a new enqueue state or the state of the task is a scheduled state.
The task processing type control word indicates a processing type of a task, which includes various processing types such as sending, deleting, receiving, and re-queuing.
And the order-preserving type of the queue to be processed is used for identifying whether the queue to be processed is a queue needing order preservation.
And the queue identification of the queue to be processed is the queue number of the queue to be processed where the task to be processed is located.
The order-preserving type of the processed queue is used for identifying whether the processed queue is a queue needing order preservation.
The queue identification of the processed queue is the queue number of the processed queue in which the task to be processed is located.
The task sequence code is the sequence code distributed by the pre-preprocessor for the task to be processed.
The task order-preserving type is used for identifying whether the task is a task needing order preservation.
The identifier of the physical port represents the identifier of the physical port receiving the message to be processed.
And the storage address of the task in the memory identifies the storage position of the task in the memory.
The storage address of the message corresponding to the task in the memory represents the storage position of the message corresponding to the task in the memory.
The message length represents the number of bytes occupied by the message.
In practical application, the task sequence code, the physical port identifier, the message length, the storage address of the corresponding message in the memory, and the storage address of the task in the memory are mandatory items, and the other items are optional, selected according to the specific application scenario. In addition, the number of bits each item occupies, and which bits specifically, may be set according to the specific application scenario. For example, in the task shown in fig. 3, the task flag bit occupies 1 bit, the task processing type control word occupies 3 bits, the message priority occupies 4 bits, the order-preserving type of the queue to be processed occupies 2 bits, the queue identifier of the queue to be processed occupies 10 bits, the order-preserving type of the processed queue occupies 2 bits, the queue identifier of the processed queue occupies 10 bits, the task sequence code occupies 32 bits, the task order-preserving type occupies 16 bits, the identifier of the physical port occupies 16 bits, the storage address of the task in the memory occupies 16 bits, the message length occupies 16 bits, and the storage address of the corresponding message in the memory occupies 32 bits.
The pre-processor parses the message to be processed, extracts the message information, and generates a task to be processed in the first preset format shown in fig. 3 according to that information.
With this technical scheme, the message information of the message to be processed is extracted according to the specific application scenario, and a fixed-length task to be processed is generated from it.
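The fixed-length first preset format can be modeled as a bit-packed descriptor. The field widths below follow the description of fig. 3, but the field ordering within the 4 × 64-bit word is an assumption made only for illustration:

```python
# Field widths (bits) from the description of fig. 3; the ordering of
# the fields inside the 4 x 64-bit descriptor is an assumption.
FIELDS = [
    ("task_flag", 1),
    ("processing_type", 3),
    ("priority", 4),
    ("pending_order_type", 2),
    ("pending_queue_id", 10),
    ("processed_order_type", 2),
    ("processed_queue_id", 10),
    ("sequence_code", 32),
    ("order_type", 16),
    ("port_id", 16),
    ("task_addr", 16),
    ("msg_len", 16),
    ("msg_addr", 32),
]

def pack_task(**values):
    """Pack named fields into one 256-bit (4 x 64-bit) integer."""
    word, shift = 0, 0
    for name, width in FIELDS:
        v = values.get(name, 0)
        assert v < (1 << width), f"{name} overflows {width} bits"
        word |= v << shift
        shift += width
    assert shift <= 256, "descriptor exceeds 4 x 64 bits"
    return word

def unpack_task(word):
    """Recover the field dictionary from a packed descriptor."""
    out, shift = {}, 0
    for name, width in FIELDS:
        out[name] = (word >> shift) & ((1 << width) - 1)
        shift += width
    return out
```

The listed widths sum to 160 bits, leaving the remaining bits of the 4 × 64-bit descriptor reserved in this sketch.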
Besides the message information, the task to be processed also includes a sequence code. The sequence code is allocated to the task by the pre-processor according to the identifier of the physical port on which the message was received and the receiving order, and it uniquely identifies the task to be processed.
Referring to fig. 4, which is a schematic diagram of allocating sequence codes to tasks to be processed in an embodiment of the present invention: the pre-processor successively receives messages 0 to N through the physical ports, extracts the message information of each message to generate a task to be processed, and then allocates a sequence code to each task in the order in which the corresponding messages were received.
Since no order-preserving operation is needed between messages received on different physical ports, the pre-processor optionally allocates sequence codes per port: for all tasks carrying the same physical port identifier, sequence codes are allocated according to the receiving order of those tasks. For example, suppose the tasks corresponding to physical port 0 are task 1, task 2, and task 3, received in the order task 2, task 1, task 3; then sequence code SN0 is allocated to task 2, SN1 to task 1, and SN2 to task 3.
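The per-port allocation can be sketched as one monotonically increasing counter per physical port identifier. This is a hypothetical sketch; the 32-bit width follows the task format described earlier, and wrap-around policy beyond simple masking is omitted:

```python
from collections import defaultdict

class SequenceCodeAllocator:
    """Per-physical-port monotonically increasing sequence codes.

    Messages from different ports need no mutual ordering, so each
    port identifier gets its own independent counter (sketch).
    """

    def __init__(self):
        self._counters = defaultdict(int)

    def allocate(self, port_id):
        """Return the next sequence code for this port."""
        sn = self._counters[port_id]
        self._counters[port_id] = (sn + 1) & 0xFFFFFFFF  # 32-bit code
        return sn
```

Two tasks from the same port thus always receive distinct, ordered codes, while counters on different ports advance independently.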
Step 210: the pre-processor allocates the tasks to be processed to the corresponding queues to be processed according to the message information and sequence codes in the tasks.
In the embodiment of the invention, a task to be processed is generated from the message information extracted from the message, so the pre-processor allocates each task to the queue to be processed that corresponds to its message information. The pre-processor may allocate queues by the physical port identifier of the corresponding message, i.e., tasks with the same physical port identifier go to the same queue; by priority, i.e., tasks with the same priority go to the same queue; or by message type, i.e., tasks with the same message type go to the same queue. For example, if the message type of task 1 is an IP message, task 1 is allocated to queue 1; if the message type of task 2 is an Ethernet message, task 2 is allocated to queue 2; tasks of other message types are allocated to queue 3. The pre-processor may also allocate queues by the MAC address of the task; because a MAC address is long, it can first be processed with a hash algorithm, and the queue is then allocated according to the hashed result.
The tasks to be processed in a queue can be arranged in either descending or ascending order of sequence code.
Further, when the pre-processor allocates queues to be processed by message information other than the physical port identifier, it should ensure that, within each queue, the sequence codes of the tasks sharing a physical port identifier remain arranged in descending or ascending order.
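The hash-based queue allocation mentioned above, for example on the MAC address, can be sketched as follows. This is illustrative only; CRC32 and the queue count of 8 are assumptions, as the text requires only that some hash algorithm be used:

```python
import zlib

NUM_PENDING_QUEUES = 8  # assumed queue count for illustration

def queue_for_task(mac_address: bytes) -> int:
    """Map a task to a pending queue by hashing its MAC address.

    Tasks with the same MAC address always land in the same queue, so
    the relative order of their sequence codes is preserved within
    that queue (sketch of the hash-based classification described).
    """
    return zlib.crc32(mac_address) % NUM_PENDING_QUEUES
```

Because the mapping is deterministic, every message of a given flow follows the same queue, which is what keeps per-port sequence codes monotonic inside each queue.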
Step 220: the pre-processor schedules the tasks to be processed to the processing unit in the multi-core CPU that has a mapping relationship with the queue to be processed, and the processing unit processes the tasks.
In the embodiment of the present invention, since a task to be processed is shorter than its message, when a queue to be processed contains multiple tasks, all or some of them may be spliced, according to a preset rule, into a spliced packet in a second preset format, and the tasks are then scheduled over PCIE in a DMA manner to the processing unit corresponding to the queue in the multi-core CPU. The spliced packet in the second preset format is a Transaction Layer Packet (TLP for short). When splicing multiple tasks into such a packet, because the storage space occupied by the spliced packet is a fixed value, the number of tasks needed to fill one spliced packet depends on the storage space occupied by a single task.
In this embodiment of the present invention, the scheduling, by the pre-processor, of the task to be processed to the processing unit corresponding to the queue to be processed in the multi-core CPU specifically includes: in the multi-core CPU, each processing unit corresponds to a processing unit storage address; the pre-processor acquires the processing unit storage address corresponding to the queue to be processed and sends the task to be processed to that address; the processing unit in the multi-core CPU then acquires the task scheduled by the pre-processor from the corresponding storage address and processes it.
Optionally, when the pre-processor schedules the to-be-processed task to a processing unit corresponding to the to-be-processed queue in the multi-core CPU, the scheduling flag of the to-be-processed task is modified to be the already-scheduled flag.
By adopting the above technical scheme, bandwidth is used more efficiently: in DMA transmission the TLP has a specified packet format, and the less effective data a TLP carries, the lower the bandwidth utilization rate. Splicing a plurality of tasks to be processed therefore increases the effective data length of the TLP, reducing bandwidth waste and improving bandwidth utilization.
Because different processing units may have the same function, in order to ensure load balance of each processing unit having the same function in the multi-core CPU, optionally, the pre-processor schedules each queue to be processed by using a preset allocation algorithm to allocate tasks to each processing unit having the same function in the multi-core CPU in a balanced manner; wherein the allocation algorithm may be a hash algorithm.
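The balanced allocation across same-function processing units can be sketched with a simple hash (the multiplier below is illustrative; the text leaves the "preset allocation algorithm" open):

```python
from collections import Counter

def assign_unit(queue_id, num_units):
    """Map a queue to one of several same-function processing units
    using a multiplicative hash, standing in for the preset algorithm."""
    return (queue_id * 2654435761) % num_units

# With many queues, the load spreads evenly over the units.
load = Counter(assign_unit(q, 4) for q in range(1000))
```

Any hash that distributes queue identifiers uniformly keeps the per-unit load balanced.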
Processing the tasks by a processing unit in the multi-core CPU means processing each task contained in the spliced message separately: the unit acquires the message to be processed corresponding to each task according to the storage address contained in that task, processes the acquired message according to the message information in the task to obtain a processed message, and stores the processed message in the memory corresponding to the storage address of the message to be processed. Processing the message to be processed includes operations such as reading and modifying.
Step 230: the pre-processor receives the processed tasks sent by the multi-core CPU.
In the embodiment of the invention, after the multi-core CPU finishes processing a task sent by the pre-processor, it sends the processed task back to the pre-processor over the PCIE protocol. The processed task comprises a processing type, a processed queue identifier, and a sequence code; the processing types, which are configured by the multi-core CPU, comprise a sending processing type, a deleting processing type, and a rejoining-queue processing type.
Step 240: the pre-processor performs order-preserving processing on the processed task according to the sequence code and, after the order-preserving processing, completes the processing action corresponding to the processing type on the processed task.
In the embodiment of the invention, the processing types comprise several different types such as sending, deleting, and rejoining a queue. Because different processing types call for different measures, the pre-processor handles a processed task according to its processing type, specifically:
when the processing type of the processed task is sending, the pre-processor adds the processed task to the processed queue corresponding to the processed queue identifier carried in the task; if the sequence code of the processed task is the minimum sequence code in that processed queue, the pre-processor acquires the storage address in the processed task, acquires the processed message according to the storage address, sends the processed message to the corresponding physical port, and deletes the corresponding task to be processed from its queue to be processed;
when the processing type of the processed task is rejoining the queue, the pre-processor adds the processed task to the corresponding position in the processed queue corresponding to the processed queue identifier, according to the processed queue identifier and the sequence code in the task; the processed queue identifier may be the same as or different from the identifier of the queue to be processed in which the task was located. For example, suppose task 1 to be processed was located in queue 1; after processing task 1, the multi-core CPU writes a queue identifier x into it. The identifier x may correspond to queue 1, in which case task 1 is still stored in queue 1, or it may correspond to another queue (e.g., queue 2), in which case task 1 is stored in queue 2;
and when the processing type of the processed task is deletion, the pre-processor deletes the corresponding task to be processed from the queue to be processed according to the sequence code of the processed task, and instructs the multi-core CPU to release the storage space of the processed message corresponding to the processed task.
When the identifier of the queue to be processed in which the task was located differs from the processed queue identifier, the pre-processor adds the processed task to the corresponding position in the processed queue as follows: the pre-processor acquires the sequence code of the processed task (which is the sequence code of the original task to be processed), determines the position of the processed task in the processed queue according to that sequence code, and adds the processed task at that position. When the two identifiers are the same, the pre-processor stores the processed task at the corresponding position in the queue to be processed in which it was located (i.e., an enqueue operation) and sets the task flag bit of the processed task to a rescheduling identifier.
In the above process, all the processed tasks in the processed queue may be stored in the order of the sequence code from small to large, or in the order of the sequence code from large to small. When the processed tasks in the processed queue are stored according to the sequence codes from small to large, the sequence code of the processed task is larger than the sequence code of the previous processed task of the processed task in the processed queue and smaller than the sequence code of the next processed task of the processed task in the processed queue. When all the processed tasks in the processed queue are stored according to the sequence codes from large to small, the sequence code of the processed task is smaller than the sequence code of the previous processed task of the processed task in the processed queue and is larger than the sequence code of the next processed task of the processed task in the processed queue.
In order to facilitate scheduling of the processed tasks, optionally, processed tasks of the same processing type are allocated to the same processed queue; for example, processed queue 1 contains only processed tasks of the sending processing type, and processed queue 2 contains only processed tasks of another type, such as the deleting processing type. Referring to fig. 5, when a processed task is added to a processed queue, its sequence code is compared in turn with the sequence codes of the tasks already stored in the queue to determine its position. For example, in fig. 5 the sequence code (SN) of the processed task is 100, and the tasks stored in the processed queue are arranged by sequence code; comparing sequence code 100 with each stored task in turn from the head of the queue places the processed task between the tasks with sequence codes 96 and 120, and the processed task is added to the processed queue at that position.
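The fig. 5 insertion can be sketched as an ordered insert keyed on the sequence code (ascending order assumed for this sketch):

```python
import bisect

def insert_by_sn(queue, task):
    """Insert a processed task into a processed queue kept sorted by
    ascending sequence code (SN); returns the insertion position."""
    pos = bisect.bisect([t["sn"] for t in queue], task["sn"])
    queue.insert(pos, task)
    return pos
```

With a queue holding SNs 96 and 120, a task with SN 100 lands between them, exactly as in the fig. 5 example.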
By adopting the above technical scheme, the processed queue of a processed task is determined according to its processing type, and its position in that queue is determined according to its sequence code, so that all the processed tasks stored in a processed queue are arranged in sequence-code order. This realizes the order-preserving function for messages, allows the multi-core CPU to execute tasks out of order and in parallel, reduces the competition and resource interlocking among the cores of the multi-core CPU, effectively improves its parallel processing capability, and avoids the waste of processor resources.
Referring to fig. 6, when the pre-processor performs a dequeue operation such as sending on a processed task, if the tasks in the processed queue are stored in ascending order of sequence code, the sequence code of the current processed task is compared with the sequence code of the task at the head of the queue, and the pre-processor processes the current task only when the two sequence codes are equal. For example, in fig. 6 the sequence code (SN) of the current processed task is 2; since the sequence code of the task at the head of the processed queue is also 2, the pre-processor processes the current task. As another example, if the sequence code of the current processed task is 120 while the sequence code of the task at the head of the queue is 2, the pre-processor buffers the current task instead.
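The fig. 6 dequeue gate can be sketched as follows (ascending processed queue assumed): a task is dispatched only when its sequence code matches the head of the queue, otherwise it is buffered:

```python
def try_dequeue(queue, task):
    """Dispatch the current processed task only when its sequence code
    equals that of the head of the (ascending) processed queue;
    otherwise signal that the task must be buffered."""
    if queue and task["sn"] == queue[0]["sn"]:
        return "send", queue.pop(0)
    return "buffer", None
```

Buffered tasks are retried later, once the tasks with smaller sequence codes have drained from the head of the queue.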
By adopting the above technical scheme, an asynchronous communication mode is used between the pre-processor and the multi-core CPU, which effectively reduces the delay of waiting on the link and, in hardware, shields the differences between multi-core CPUs from different manufacturers, providing good flexibility and strong expandability. Moreover, the pre-processor takes over scheduling functions such as message receiving and sending, reducing the data-processing burden on the multi-core CPU.
Based on the above technical solution, referring to fig. 7, the following describes the message processing process in detail in conjunction with a specific application scenario:
Step 700: the pre-processor receives a message to be processed from the physical port.
Step 710: the pre-processor writes the received message to be processed into the memory of the multi-core CPU in PCIE DMA mode and acquires the storage address of the message in that memory.
Step 720: the pre-processor parses the message to be processed, extracts the message information in it, and generates a task to be processed in a first preset format according to the extracted message information.
Step 730: the pre-processor allocates a sequence code to the task to be processed according to the identifier of the physical port of the message to be processed and the receiving sequence.
Step 740: the pre-processor allocates the task to be processed to the corresponding queue to be processed according to the message information and the sequence code in the task.
Step 750: the pre-processor schedules the task to be processed to the processing unit in the multi-core CPU that has a mapping relationship with the queue to be processed, and the processing unit processes the task.
Step 760: the pre-processor receives the processed tasks sent by the multi-core CPU.
Step 770: when the processing type of the processed task is the sending processing type, the pre-processor adds the processed task to the corresponding position in the processed queue corresponding to the processed queue identifier in the task; if the sequence code of the processed task is the minimum sequence code in that queue, the pre-processor acquires the storage address in the processed task, acquires the processed message according to the storage address, and sends the processed message to the corresponding physical port.
Step 780: when the processing type of the processed task is the deletion processing type, the pre-processor deletes the corresponding task to be processed from its queue to be processed and instructs the multi-core CPU to release the storage space of the processed message corresponding to the processed task.
Step 790: when the processing type of the processed task is the rejoining-queue processing type, the pre-processor adds the processed task again to the corresponding position in the processed queue corresponding to the processed queue identifier, according to the sequence code and the processed queue identifier in the task.
In the embodiment of the invention, when the multi-core CPU determines from the message information in a task to be processed that the message does not need to be processed (for example, when the original message to be processed only needs to be forwarded to other equipment), the pre-processor processes the task directly, then adds the task to the queue to be processed, and forwards the message to be processed.
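Step 730 above relies on per-physical-port counters; a minimal sketch of such a sequence-code allocator (the class name is hypothetical):

```python
from collections import defaultdict
from itertools import count

class SequenceAllocator:
    """Allocate sequence codes per physical port: tasks arriving on the
    same port receive strictly increasing codes in arrival order, so the
    (port identifier, sequence code) pair uniquely identifies a task."""
    def __init__(self):
        # Each port lazily gets its own independent counter.
        self._counters = defaultdict(count)

    def next_sn(self, port_id):
        return next(self._counters[port_id])
```

Because each port has its own counter, order-preserving comparisons at steps 770-790 only ever compare sequence codes drawn from the same port.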
Based on the above technical solution, referring to fig. 8, an embodiment of the present invention further provides a message processing apparatus, including a receiving unit 80, a generating unit 81, an allocating unit 82, a scheduling unit 83, and a task processing unit 84, where:
a receiving unit 80, configured to receive a message to be processed from a physical port;
a generating unit 81, configured to generate a to-be-processed task according to the message information of the to-be-processed message received by the receiving unit 80, where the to-be-processed task includes the message information and a sequence code; the sequence code is a unique identifier distributed to the task to be processed according to the identifier of the physical port and the sequence of receiving the message to be processed;
the allocating unit 82 is configured to allocate the to-be-processed task to a corresponding to-be-processed queue according to the message information and the sequence code in the to-be-processed task generated by the generating unit 81;
the scheduling unit 83 is configured to schedule the to-be-processed tasks in the to-be-processed queue allocated by the allocating unit 82 to a processing unit in a multi-core CPU, where a mapping relationship exists between the to-be-processed tasks and the to-be-processed queue, and the processing unit processes the to-be-processed tasks;
the receiving unit 80 is further configured to receive a processed task sent by the multicore CPU, where the processed task includes a processing type, a processed queue identifier, and the sequence code;
the task processing unit 84 is configured to perform order preservation processing on the processed task received by the receiving unit 80 according to the sequence code, and complete a processing action corresponding to the processing type on the processed task according to the processing type in the processed task after the order preservation processing.
Wherein, the above apparatus further comprises an obtaining unit 85, configured to: write the message to be processed into the memory of the multi-core CPU using the Peripheral Component Interconnect Express (PCIE) protocol in a direct memory access (DMA) mode, and acquire the storage address of the message to be processed in that memory; the task to be processed also comprises the storage address.
The scheduling unit 83 is specifically configured to: send the task to be processed, in DMA mode, to the processing unit in the multi-core CPU that has a mapping relationship with the queue to be processed; the processing unit acquires the message to be processed according to the storage address in the task, processes the message according to the message information in the task to obtain a processed message, and stores the processed message in the memory corresponding to the storage address of the message to be processed.
The task processing unit 84 is specifically configured to: when the processing type in the processed task is sending, adding the processed task to a processed queue corresponding to the processed queue identifier according to the processed queue identifier in the processed task and the sequence code; and if the sequence code of the processed task is the minimum sequence code in the processed queue, acquiring a storage address of the processed message, acquiring the processed message according to the storage address, and sending the processed message to the physical port.
The task processing unit 84 is specifically configured to: and when the processing type in the processed task is rejoining the queue, adding the processed task to the processed queue corresponding to the processed queue identifier according to the processed queue identifier in the processed task and the sequence code.
The processed queue obtained by the task processing unit 84 stores the tasks in the order of the sequence codes from small to large, or stores the tasks in the order of the sequence codes from large to small.
The task processing unit 84 is specifically configured to: and when the processing type in the processed task is deletion, deleting the task to be processed from the queue to be processed, and indicating the multi-core CPU to release the storage space of the processed message.
Based on the above technical solution, referring to fig. 9, an embodiment of the present invention further provides a message processing apparatus, including a processor 91, a physical port 92, and a bus 93, where:
the processor 91 and the physical port 92 are connected to each other by a bus 93; the bus 93 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 9, but this does not indicate only one bus or one type of bus.
The physical port 92 is used for transmitting and receiving messages.
The processor 91 may be a network processor NP, and the processor 91 may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, which is used to implement the message processing method shown in fig. 2 of the present invention, and includes: receiving a message to be processed from the physical port 92; generating a task to be processed according to the message information of the message to be processed, wherein the task to be processed comprises the message information and a sequence code; the sequence code is a unique identifier distributed to the task to be processed according to the identifier of the physical port and the sequence of receiving the message to be processed;
distributing the tasks to be processed to corresponding queues to be processed according to the generated message information and sequence codes in the tasks to be processed;
scheduling the tasks to be processed in the distributed queues to be processed to a processing unit which has a mapping relation with the queues to be processed in a multi-core CPU, and processing the tasks to be processed by the processing unit;
receiving a processed task sent by the multi-core CPU, wherein the processed task comprises a processing type, a processed queue identifier and the sequence code;
and performing order preservation processing on the processed tasks according to the sequence codes, and finishing processing actions corresponding to the processing types on the processed tasks according to the processing types in the processed tasks after the order preservation processing.
When the processor 91 is a network processor NP, the message processing apparatus further comprises a memory (not shown in the figure) for storing a program. In particular, the program may include program code comprising computer operating instructions. The memory may include a Random Access Memory (RAM), and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory. The processor 91 executes the application program stored in the memory to implement the message processing method shown in fig. 2 of the present invention as described above.
The processor 91 is specifically configured to write the message to be processed into the memory of the multi-core CPU using the PCIE protocol in a direct memory access (DMA) mode, and to acquire the storage address of the message to be processed in that memory; the task to be processed also comprises the storage address.
The processor 91 specifically uses a DMA mode to send the task to be processed to a processing unit in the multi-core CPU, where the processing unit has a mapping relationship with the queue to be processed. The processing unit acquires the message to be processed according to the storage address of the message to be processed in the task to be processed, and processes the message to be processed according to message information in the task to be processed to obtain a processed message; and storing the processed message in the memory corresponding to the storage address of the message to be processed.
Further, when the processing type in the processed task is sending, the processor 91 adds the processed task to the processed queue corresponding to the processed queue identifier according to the processed queue identifier in the processed task and the sequence code; and if the sequence code of the processed task is the minimum sequence code in the processed queue, acquiring a storage address of the processed message, acquiring the processed message according to the storage address, and sending the processed message to the physical port.
When the processing type of the processed task is rejoining the queue, the processor 91 adds the processed task to the processed queue corresponding to the processed queue identifier according to the processed queue identifier of the processed task and the sequence code.
And the processed queue stores the tasks according to the sequence of the sequence codes from small to large or stores the tasks according to the sequence of the sequence codes from large to small.
When the processing type in the processed task is deletion, the processor 91 deletes the to-be-processed task from the to-be-processed queue and instructs the multi-core CPU to release the storage space of the processed packet.
The illustrated message processing device also includes a communication interface 94; the communication interface 94 is used to communicate with the multicore CPU.
In summary, in the embodiment of the present invention, the pre-processor receives a message to be processed from the physical port and generates a task to be processed according to the message information of that message; allocates the task to the corresponding queue to be processed according to the message information and the sequence code in the task; schedules the task to the processing unit in the multi-core CPU that has a mapping relationship with the queue to be processed, where the processing unit processes it; receives the processed task sent back by the multi-core CPU; and performs order-preserving processing on the processed task according to the sequence code, after which it completes the processing action corresponding to the processing type carried in the processed task. By adopting the technical scheme of the invention, the message scheduling and order-preserving functions are realized by the pre-processor, so that the multi-core CPU can execute tasks out of order and in parallel without processing them in sequence; the competition and resource interlocking among the cores are relieved, the processing capability of the multi-core CPU is effectively improved, the task processing speed is increased, the waste of processor resources is avoided, and function offloading from the CPU is realized.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various modifications and variations can be made in the embodiments of the present invention without departing from the scope of the embodiments of the invention. Thus, if such modifications and variations of the embodiments of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to encompass such modifications and variations.

Claims (10)

1. A message processing method is characterized by comprising the following steps:
a pre-processor receives a message to be processed from a physical port and generates a task to be processed according to message information of the message to be processed, wherein the task to be processed comprises the message information and a sequence code;
the sequence code in the task to be processed is a unique identifier allocated to the task by the pre-processor according to the identifier of the physical port and the sequence in which the message to be processed is received;
the pre-processor allocates the task to be processed to a corresponding queue to be processed according to the message information and the sequence code in the task to be processed;
the pre-processor schedules the task to be processed in the queue to be processed to a processing unit, in a multi-core central processing unit (CPU), that has a mapping relationship with the queue to be processed, and the processing unit processes the task to be processed; wherein the processing of the task to be processed comprises reading and modifying;
the pre-processor receives a processed task sent by the multi-core CPU, wherein the processed task comprises a processing type, a processed queue identifier, and a sequence code;
the pre-processor performs order-preserving processing on the processed task according to the sequence code in the processed task, and completes the processing action corresponding to the processing type on the processed task after the order-preserving processing; wherein the pre-processor is a device other than the multi-core CPU.
2. The method of claim 1, further comprising:
the pre-processor writes the message to be processed into a memory of the multi-core CPU using the Peripheral Component Interconnect Express (PCIE) protocol in a direct memory access (DMA) mode, and acquires a storage address of the message to be processed in the memory; the task to be processed also comprises the storage address;
the pre-processor schedules the to-be-processed task in the to-be-processed queue to a processing unit in a multi-core CPU, which has a mapping relationship with the to-be-processed queue, and the processing unit processes the to-be-processed task, which specifically includes:
the pre-preprocessor sends the task to be processed to a processing unit which has a mapping relation with the queue to be processed in a multi-core CPU in a DMA mode;
the processing unit acquires the message to be processed according to the storage address of the message to be processed in the task to be processed, and processes the message to be processed according to message information in the task to be processed to obtain a processed message; and storing the processed message in the memory corresponding to the storage address of the message to be processed.
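The storage-address mechanism of claim 2 can be sketched as follows. This is a pure simulation: the dict-backed `SharedMemory` stand-in and the `upper()` "modify" step are assumptions of this sketch; in hardware the pre-processor would move the packet into CPU memory over PCIe by DMA, and the task would carry only the resulting address.

```python
class SharedMemory:
    """Stand-in for the multi-core CPU's memory reached over PCIe/DMA."""

    def __init__(self):
        self.cells = {}      # address -> packet bytes
        self.next_addr = 0

    def dma_write(self, payload):
        # Simulated DMA write: store the packet and return its storage
        # address, which the pre-processor places into the task.
        addr = self.next_addr
        self.next_addr += len(payload)
        self.cells[addr] = payload
        return addr

def build_task(mem, seq, message_bytes, info):
    # The to-be-processed task carries the storage address, not the packet.
    addr = mem.dma_write(message_bytes)
    return {"seq": seq, "info": info, "addr": addr}

def processing_unit(mem, task):
    # The core fetches the packet by address, processes it, and stores the
    # result back at the same address (in-place modification, per claim 2).
    pkt = mem.cells[task["addr"]]
    processed = pkt.upper()            # illustrative "modify" step
    mem.cells[task["addr"]] = processed
    # The completion reported back to the pre-processor.
    return {"type": "send", "queue_id": 0, "seq": task["seq"]}
```

Because the processed message overwrites the original at the same address, the pre-processor can later transmit it using only the address already held in the task.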
3. The method according to claim 2, wherein the pre-processor completing, after the order-preserving processing, the processing action corresponding to the processing type on the processed task specifically comprises:
when the processing type in the processed task is sending, the pre-processor adds the processed task to the processed queue corresponding to the processed-queue identifier, according to the processed-queue identifier and the sequence code in the processed task;
if the sequence code of the processed task is the minimum sequence code in the processed queue, the pre-processor acquires the storage address of the processed message, acquires the processed message according to the storage address, and sends the processed message to the physical port.
4. The method according to claim 2, wherein the pre-processor completing, after the order-preserving processing, the processing action corresponding to the processing type on the processed task specifically comprises:
when the processing type in the processed task is rejoining the queue, the pre-processor adds the processed task to the processed queue corresponding to the processed-queue identifier, according to the processed-queue identifier and the sequence code in the processed task.
5. The method according to claim 2, wherein the pre-processor completing, after the order-preserving processing, the processing action corresponding to the processing type on the processed task specifically comprises:
when the processing type in the processed task is deletion, the pre-processor deletes the to-be-processed task from the to-be-processed queue and instructs the multi-core CPU to release the storage space of the processed message.
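The three processing types of claims 3 to 5 can be sketched as a single dispatch routine. The integer sequence codes, the `state` layout, and the `free` callback are assumptions of this sketch, not the patent's structures.

```python
import heapq

def handle_processed(state, done):
    """Illustrative dispatch on the processing type of a processed task."""
    qid, seq, ptype = done["queue_id"], done["seq"], done["type"]
    if ptype == "send":
        heapq.heappush(state["done_queues"][qid], (seq, done))
        sent = []
        # Transmit only while the queue head holds the minimum expected
        # sequence code, so messages leave the port in arrival order.
        while (state["done_queues"][qid]
               and state["done_queues"][qid][0][0] == state["expect"][qid]):
            _, head = heapq.heappop(state["done_queues"][qid])
            state["expect"][qid] += 1
            sent.append(head["addr"])   # fetch and send by storage address
        return sent
    if ptype == "rejoin":
        # Re-enqueue the task in its processed queue for a further pass.
        heapq.heappush(state["done_queues"][qid], (seq, done))
        return []
    if ptype == "delete":
        # Drop the task and tell the CPU to free the message's storage.
        state["free"](done["addr"])
        return []
```

In the "send" branch, a completion with sequence code 1 waits in the processed queue until the task with sequence code 0 has also completed and been transmitted; "delete" releases storage without transmitting anything.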
6. A message processing apparatus, comprising:
a receiving unit, configured to receive a to-be-processed message from a physical port;
a generating unit, configured to generate a to-be-processed task according to message information of the to-be-processed message received by the receiving unit, wherein the to-be-processed task comprises the message information and a sequence code, and the sequence code is a unique identifier assigned to the to-be-processed task according to the identifier of the physical port and the order in which the to-be-processed message was received;
an allocating unit, configured to allocate the to-be-processed task generated by the generating unit to a corresponding to-be-processed queue according to the message information and the sequence code in the to-be-processed task;
a scheduling unit, configured to schedule the to-be-processed task in the to-be-processed queue to a processing unit of a multi-core central processing unit (CPU) that has a mapping relationship with the to-be-processed queue, the processing unit processing the to-be-processed task; wherein the processing of the to-be-processed task comprises reading and modifying;
the receiving unit being further configured to receive a processed task sent by the multi-core CPU, wherein the processed task comprises a processing type, a processed-queue identifier, and a sequence code; and
a task processing unit, configured to perform order-preserving processing on the processed task received by the receiving unit according to the sequence code in the processed task and, after the order-preserving processing, to complete the processing action corresponding to the processing type on the processed task;
wherein the message processing apparatus is a pre-processor, and the pre-processor is a device separate from the multi-core CPU.
7. The apparatus of claim 6, further comprising an acquiring unit configured to: write the to-be-processed message into a memory of the multi-core CPU in a direct memory access (DMA) mode using the Peripheral Component Interconnect Express (PCIe) protocol, and acquire a storage address of the to-be-processed message in the memory; the to-be-processed task further comprises the storage address;
wherein the scheduling unit is specifically configured to:
send the to-be-processed task, in a DMA mode, to the processing unit of the multi-core CPU that has a mapping relationship with the to-be-processed queue, the processing unit acquiring the to-be-processed message according to the storage address in the to-be-processed task, processing the message according to the message information in the task to obtain a processed message, and storing the processed message in the memory at the storage address of the to-be-processed message.
8. The apparatus of claim 7, wherein the task processing unit is specifically configured to:
when the processing type in the processed task is sending, add the processed task to the processed queue corresponding to the processed-queue identifier, according to the processed-queue identifier and the sequence code in the processed task; and, if the sequence code of the processed task is the minimum sequence code in the processed queue, acquire the storage address of the processed message, acquire the processed message according to the storage address, and send the processed message to the physical port.
9. The apparatus of claim 7, wherein the task processing unit is specifically configured to:
when the processing type in the processed task is rejoining the queue, add the processed task to the processed queue corresponding to the processed-queue identifier, according to the processed-queue identifier and the sequence code in the processed task.
10. The apparatus of claim 7, wherein the task processing unit is specifically configured to:
when the processing type in the processed task is deletion, delete the to-be-processed task from the to-be-processed queue and instruct the multi-core CPU to release the storage space of the processed message.
CN201410490087.5A 2014-09-23 2014-09-23 Message processing method and device Active CN105511954B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410490087.5A CN105511954B (en) 2014-09-23 2014-09-23 Message processing method and device


Publications (2)

Publication Number Publication Date
CN105511954A CN105511954A (en) 2016-04-20
CN105511954B true CN105511954B (en) 2020-07-07

Family

ID=55719959

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410490087.5A Active CN105511954B (en) 2014-09-23 2014-09-23 Message processing method and device

Country Status (1)

Country Link
CN (1) CN105511954B (en)

Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106339265A (en) * 2016-08-30 2017-01-18 中国银行股份有限公司 Method and device for processing combined task
CN109327405B (en) * 2017-07-31 2022-08-12 迈普通信技术股份有限公司 Message order-preserving method and network equipment
CN107656896B (en) * 2017-09-12 2020-07-07 新华三信息安全技术有限公司 Multi-core processor and message processing method
CN107704421B (en) * 2017-09-12 2021-04-27 新华三信息安全技术有限公司 Multi-core processor and message processing method
CN107689962B (en) * 2017-09-25 2021-03-19 深圳市盛路物联通讯技术有限公司 Data stream filtering method and system
CN107896199B (en) * 2017-10-20 2021-03-16 深圳市风云实业有限公司 Method and device for transmitting message
CN108833299B (en) * 2017-12-27 2021-12-28 北京时代民芯科技有限公司 Large-scale network data processing method based on reconfigurable switching chip architecture
CN110545242B (en) * 2018-05-29 2021-12-24 杭州海康威视数字技术股份有限公司 Target analysis method and intelligent analysis equipment
CN108984450B (en) * 2018-06-08 2020-10-23 华为技术有限公司 Data transmission method, device and equipment
CN109086128B (en) * 2018-08-28 2021-06-18 迈普通信技术股份有限公司 Task scheduling method and device
CN109729021A (en) * 2018-12-27 2019-05-07 北京天融信网络安全技术有限公司 A kind of message processing method and electronic equipment
CN109688069A (en) * 2018-12-29 2019-04-26 杭州迪普科技股份有限公司 A kind of method, apparatus, equipment and storage medium handling network flow
CN110046053B (en) 2019-04-19 2021-11-12 上海兆芯集成电路有限公司 Processing system for distributing tasks and access method thereof
CN110058931B (en) 2019-04-19 2022-03-22 上海兆芯集成电路有限公司 Processing system for task scheduling and acceleration method thereof
CN110083387B (en) 2019-04-19 2021-11-12 上海兆芯集成电路有限公司 Processing system using polling mechanism and access method thereof
CN110032453B (en) 2019-04-19 2022-05-03 上海兆芯集成电路有限公司 Processing system for task scheduling and distribution and acceleration method thereof
CN110032452B (en) * 2019-04-19 2021-08-24 上海兆芯集成电路有限公司 Processing system and heterogeneous processor acceleration method
CN110661731B (en) * 2019-09-26 2020-09-29 光大兴陇信托有限责任公司 Message processing method and device
CN110908797B (en) * 2019-11-07 2023-09-15 浪潮电子信息产业股份有限公司 Call request data processing method, device, equipment, storage medium and system
CN110908939B (en) * 2019-11-27 2020-10-09 新华三半导体技术有限公司 Message processing method and device and network chip
CN111221759B (en) * 2020-01-17 2021-05-28 深圳市风云实业有限公司 Data processing system and method based on DMA
CN111541749B (en) * 2020-04-14 2023-05-02 杭州涂鸦信息技术有限公司 Data communication method and system of embedded equipment and related equipment
CN112328520B (en) * 2020-09-30 2022-02-11 郑州信大捷安信息技术股份有限公司 PCIE equipment, and data transmission method and system based on PCIE equipment
CN112511460B (en) * 2020-12-29 2022-09-09 安徽皖通邮电股份有限公司 Lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment
CN113630376B (en) * 2021-06-16 2023-04-07 新华三信息安全技术有限公司 Network security device and message processing method thereof
CN113991839B (en) * 2021-10-15 2023-11-14 许继集团有限公司 Device and method for improving remote control opening reliability
CN114338559B (en) * 2021-12-15 2024-03-22 杭州迪普信息技术有限公司 Message order preserving method and device
CN114490467B (en) * 2022-01-26 2024-03-19 中国电子科技集团公司第五十四研究所 Message processing DMA system and method of multi-core network processor
CN114610661A (en) * 2022-03-10 2022-06-10 北京百度网讯科技有限公司 Data processing device and method and electronic equipment
CN114510337B (en) * 2022-04-15 2023-03-21 深圳美云集网络科技有限责任公司 Task execution method, system and computer readable storage medium

Citations (4)

Publication number Priority date Publication date Assignee Title
CN101175033A (en) * 2007-11-27 2008-05-07 中兴通讯股份有限公司 Message order-preserving method and device thereof
CN101854302A (en) * 2010-05-27 2010-10-06 中兴通讯股份有限公司 Message order-preserving method and system
CN102480430A (en) * 2010-11-24 2012-05-30 迈普通信技术股份有限公司 Method and device for realizing message order preservation
CN103986647A (en) * 2014-05-21 2014-08-13 大唐移动通信设备有限公司 Message transmission method and device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20030158883A1 (en) * 2002-02-04 2003-08-21 Drudis Antoni N. Message processing



Similar Documents

Publication Publication Date Title
CN105511954B (en) Message processing method and device
US10572290B2 (en) Method and apparatus for allocating a physical resource to a virtual machine
JP6517934B2 (en) Apparatus and method for buffering data in a switch
CN109697122B (en) Task processing method, device and computer storage medium
CN108647104B (en) Request processing method, server and computer readable storage medium
US11321256B2 (en) Persistent kernel for graphics processing unit direct memory access network packet processing
CN107846443B (en) Distributed processing in a network
US20090086737A1 (en) System-on-chip communication manager
US10341264B2 (en) Technologies for scalable packet reception and transmission
US10664945B2 (en) Direct memory access for graphics processing unit packet processing
US20170344266A1 (en) Methods for dynamic resource reservation based on classified i/o requests and devices thereof
US20170366477A1 (en) Technologies for coordinating access to data packets in a memory
CN113157465A (en) Message sending method and device based on pointer linked list
US11310164B1 (en) Method and apparatus for resource allocation
CN111970213A (en) Queuing system
CN114338559B (en) Message order preserving method and device
CN109862044B (en) Conversion device, network equipment and data transmission method
CN112804166A (en) Message transmitting and receiving method, device and storage medium
CN112114971A (en) Task allocation method, device and equipment
KR101634672B1 (en) Apparatus for virtualizing a network interface, method thereof and computer program for excuting the same
CN113535370A (en) Method and equipment for realizing multiple RDMA network card virtualization of load balancing
CN104506452A (en) Message processing method and message processing device
CN103825842A (en) Data flow processing method and device for multi-CPU system
KR102091152B1 (en) Method and apparatus for processing packet using multi-core in hierarchical networks
CN110445729B (en) Queue scheduling method, device, equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant