CN112511460B - Lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment - Google Patents


Info

Publication number
CN112511460B
Authority
CN
China
Prior art keywords
packet
message
message forwarding
channel
sending
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011585304.0A
Other languages
Chinese (zh)
Other versions
CN112511460A (en)
Inventor
Hu Jun (胡军)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Wantong Post And Telecommunications Co ltd
Original Assignee
Anhui Wantong Post And Telecommunications Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Wantong Post And Telecommunications Co ltd
Priority to CN202011585304.0A
Publication of CN112511460A
Application granted
Publication of CN112511460B
Legal status: Active (current)
Anticipated expiration legal status

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H04L 47/62 Queue scheduling characterised by scheduling criteria
    • H04L 47/622 Queue service order
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/52 Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • G06F 9/522 Barrier synchronisation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/50 Queue scheduling
    • H04L 47/62 Queue scheduling characterised by scheduling criteria
    • H04L 47/625 Queue scheduling characterised by scheduling criteria for service slots or service orders
    • H04L 47/628 Queue scheduling characterised by scheduling criteria for service slots or service orders based on packet size, e.g. shortest packet first

Abstract

The invention discloses a lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment, which comprises the following steps: (1) messages are cached in a buffer area, the buffer area generates receive logical indexes, and a message forwarding task is assigned to each single-core processor; (2) the message forwarding task of one single-core processor queries the buffer area to determine whether messages are waiting to be received; (3) the message forwarding tasks of the single-core processors receive the messages; (4) the message forwarding task of each single-core processor places the messages it has obtained into the buffer area, and the buffer area generates send logical indexes; (5) the message forwarding task of one single-core processor queries and confirms the messages to be sent; (6) the network chip obtains the messages and sends them outward; (7) the message forwarding task of any one single-core processor performs the completion query. The method provides good message forwarding performance.

Description

Lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment
Technical Field
The invention relates to the field of message forwarding methods for multi-core general-purpose processors, and in particular to a lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment.
Background
A multi-core network communication device generally comprises a network chip and a multi-core CPU that is either integrated into the network chip or externally connected to it; the network chip receives messages from the outside, and the messages are forwarded back out through the multi-core CPU and the network chip. In current high-performance multi-core network communication devices, the network chip hardware provides a distribution module, an order-preserving module and an internal communication module. The distribution module distributes messages to the individual single-core CPUs, and the order-preserving module marks the received messages so that, when the network chip later sends them outward, they are guaranteed to leave in the order of the data flow in which they were received. Each single-core CPU is provided with its own independent packet receiving channel and packet sending channel, so that every single-core CPU receives messages from the network chip through its own packet receiving channel and hands messages back to the network chip for outward forwarding through its own packet sending channel, with the whole process coordinated by the internal communication module of the network chip. A high-performance multi-core network communication device can therefore forward messages without using locks and without copying them, but it depends heavily on hardware support and is expensive.
In most low-cost multi-core network communication devices, the network chip does not support a distribution module, an order-preserving module or an internal communication module in hardware, and all single-core CPUs in the multi-core CPU share the same single transceiving channel for exchanging data with the network chip. This single transceiving channel generally consists of a single packet receiving channel, a single packet sending channel and a buffer area, and the performance of each single-core CPU is only sufficient for ordinary requirements. When such a low-cost device forwards messages, all single-core CPUs share one single transceiving channel; if two single-core CPUs operate on the single transceiving channel at the same time, their data are mutually exclusive within the channel, so locking is required, otherwise problems such as duplicate packet reception, packet loss and even task crashes or hangs occur, and the locking inevitably degrades message forwarding performance.
In addition, a low-cost multi-core network communication device also has to copy messages when forwarding them. The actual processing of a message is simple: the single-core CPU parses the message header and then hands over the whole packet, and the header it reads is small, usually only a few dozen bytes. Once a message has to be copied, however, the whole packet must be accessed, and the packet is much larger; a typical packet can reach 1500 bytes. The copy is performed with memory access instructions, whose overhead is large compared with ordinary instructions: an ordinary simple instruction takes only one cycle, while a memory access instruction can take tens to hundreds of cycles when the data is not in the cache. Packet copying therefore needs to be avoided as far as possible, otherwise forwarding performance suffers.
To let a low-cost multi-core network communication device forward messages without locking or copying, the prior art dedicates one single-core CPU to distributing messages to the other single-core CPUs and dedicates another single-core CPU to preserving order when messages are sent. Forwarding messages through dedicated single-core CPUs has the following defects: 1. it occupies two single-core CPUs; 2. the forwarding delay is long; and 3. when the multi-core network communication device contains a large number of single-core CPUs, the convergence of all traffic onto the dedicated distributing and order-preserving CPUs creates a performance bottleneck, and the traffic may fail to be processed.
Disclosure of Invention
The object of the invention is to provide a lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment, so as to solve the prior-art problem of enabling such multi-core network communication equipment to forward messages without locking and without copying.
To achieve this object, the invention adopts the following technical scheme:
A lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment, the equipment comprising a network chip and a multi-core processor, wherein the single-core processors in the multi-core processor share one single transceiving channel and exchange data bidirectionally with the network chip through it; the single transceiving channel consists of a single packet receiving channel, a single packet sending channel and a buffer area shared by the two; each single-core processor obtains data from the network chip through the single packet receiving channel, and each single-core processor sends data to the network chip through the single packet sending channel; the method is characterized by comprising the following steps:
(1) when the network chip receives several messages from the outside, it caches each message in turn into the buffer area through the single packet receiving channel as a message to be received, and the buffer area generates a receive logical index for each message in that order; at the same time, the driver of the network chip assigns one message forwarding task to each single-core processor, so that each single-core processor runs exactly one message forwarding task;
(2) the driver makes the message forwarding task of any one single-core processor query the buffer area through the single packet receiving channel to determine whether messages to be received exist in the buffer area; if they exist, the message forwarding task of that single-core processor passes the result to the other single-core processors, that is, the maximum receivable receive logical index shared by all the message forwarding tasks is increased by the determined amount;
(3) the message forwarding tasks of the single-core processors take turns, in sequence, obtaining the receive logical indexes of the messages from the buffer area through the single packet receiving channel, and, based on the obtained receive logical indexes, each message forwarding task obtains the corresponding messages to be received from the buffer area through the single packet receiving channel;
(4) the driver makes the message forwarding tasks of the single-core processors, rotating in the same order as in step (3), place the messages they obtained into the buffer area through the single packet sending channel as messages to be sent; the buffer area generates a send logical index for each message to be sent in that order and delivers the generated send logical indexes to the network chip in that order;
(5) the driver makes the message forwarding task of any one single-core processor query the buffer area through the single packet sending channel to determine whether messages to be sent exist in the buffer area; if they exist, the message forwarding task of that single-core processor passes the result to the network chip;
(6) based on the send logical indexes generated in order in step (4), the network chip obtains the messages to be sent from the buffer area in sequence through the single packet sending channel and sends the obtained messages outward;
(7) the driver makes the message forwarding task of any one single-core processor query the buffer area through the single packet sending channel to confirm whether all the messages have been sent; if they have all been sent, the message forwarding task of that single-core processor notifies the buffer area through the single packet receiving channel, so that the buffer area reclaims the buffers and allocates them to the single packet receiving channel.
The above lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment is further characterized in that: the driver makes the message forwarding task of the same single-core processor execute steps (2), (5) and (7), or the driver makes the message forwarding tasks of different single-core processors execute steps (2), (5) and (7) respectively.
The above lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment is further characterized in that: in steps (2), (5) and (7), while the message forwarding task of the single-core processor performing the corresponding step is executing it, the message forwarding tasks of the other single-core processors are not allowed to execute it.
The above lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment is further characterized in that: in step (4), each message forwarding task has a task id (tid); for message forwarding task 0, message forwarding task 1, message forwarding task 2, ..., message forwarding task n-1, the tid is 0, 1, 2, ..., n-1 respectively, where n is the number of cores used for forwarding; when a message forwarding task runs for the first time, its current unprocessed receive logical index is initialized to its tid;
when each message forwarding task receives packets, it takes its current unprocessed receive logical index as the starting value and the maximum receivable receive logical index of all the message forwarding tasks as the upper bound, iterates over the logical indexes to process the messages in the single packet receiving channel buffer in sequence, and after processing each message sets its unprocessed receive logical index to the previous value plus n.
The above lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment is further characterized in that: when a message forwarding task runs for the first time, its current unprocessed send logical index is initialized to its tid; when each message forwarding task sends, it uses its current unprocessed send logical index as the send logical index to perform the send-buffer editing operation, and then sets its unprocessed send logical index to the previous value plus n.
The above lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment is further characterized in that: the send-buffer editing operation comprises converting the send logical index into a physical index to locate the send descriptor, setting the message pointer of the send descriptor to the pointer value of the received message, and setting the other fields of the send descriptor as usual.
The above lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment is further characterized in that: in step (5), the minimum of the current unprocessed send logical indexes of all the message forwarding tasks is taken, the previous confirmed send logical index value is subtracted from it, that number of packets is confirmed to the single packet sending channel for sending, and the previous confirmed send logical index value is then set equal to that minimum.
Compared with existing methods for achieving lock-free, copy-free message forwarding on low-cost multi-core network communication equipment, the method of the invention avoids using dedicated cores for distribution and sending, and thereby avoids the prior-art problems of occupying CPU cores, increasing forwarding delay and creating a performance bottleneck by converging traffic onto a dedicated core. At the same time, the method achieves lock-free, order-preserving, zero-copy forwarding of messages at multi-core high performance, so that the combined performance of the multi-core processor is better exploited.
Drawings
FIG. 1 is a block flow diagram of the method of the present invention.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
As shown in fig. 1, in the lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment, the equipment comprises a network chip and a multi-core processor; the single-core processors in the multi-core processor share one single transceiving channel and exchange data bidirectionally with the network chip through it; the single transceiving channel consists of a single packet receiving channel, a single packet sending channel and a buffer area shared by the two; each single-core processor obtains data from the network chip through the single packet receiving channel and sends data to the network chip through the single packet sending channel. The method of the invention comprises the following steps:
(1) when the network chip receives several messages from the outside, it caches each message in turn into the buffer area through the single packet receiving channel as a message to be received, and the buffer area generates a receive logical index for each message in that order; at the same time, the driver of the network chip assigns one message forwarding task to each single-core processor, so that each single-core processor runs exactly one message forwarding task;
(2) the driver makes the message forwarding task of any one single-core processor query the buffer area through the single packet receiving channel to determine whether messages to be received exist in the buffer area; if they exist, the message forwarding task of that single-core processor passes the result to the other single-core processors, that is, the maximum receivable receive logical index shared by all the message forwarding tasks is increased by the determined amount;
(3) the message forwarding tasks of the single-core processors take turns, in sequence, obtaining the receive logical indexes of the messages from the buffer area through the single packet receiving channel, and, based on the obtained receive logical indexes, each message forwarding task obtains the corresponding messages to be received from the buffer area through the single packet receiving channel;
(4) the driver makes the message forwarding tasks of the single-core processors, rotating in the same order as in step (3), place the messages they obtained into the buffer area through the single packet sending channel as messages to be sent; the buffer area generates a send logical index for each message to be sent in that order and delivers the generated send logical indexes to the network chip in that order;
(5) the driver makes the message forwarding task of any one single-core processor query the buffer area through the single packet sending channel to determine whether messages to be sent exist in the buffer area; if they exist, the message forwarding task of that single-core processor passes the result to the network chip;
(6) based on the send logical indexes generated in order in step (4), the network chip obtains the messages to be sent from the buffer area in sequence through the single packet sending channel and sends the obtained messages outward;
(7) the driver makes the message forwarding task of any one single-core processor query the buffer area through the single packet sending channel to confirm whether all the messages have been sent; if they have all been sent, the message forwarding task of that single-core processor notifies the buffer area through the single packet receiving channel, so that the buffer area reclaims the buffers and allocates them to the single packet receiving channel.
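By way of illustration only, the following minimal C sketch shows how the seven steps above might be organized inside one message forwarding task. All function names are illustrative stubs rather than terminology from the patent, and letting the task with tid 0 perform the querying and confirming of steps (2), (5) and (7) is merely one of the arrangements described below.

    /* Illustrative stubs for the individual steps; real implementations would
     * operate on the shared descriptor rings sketched later in this description. */
    static void query_and_publish_rx(void)      { /* step (2) */ }
    static void receive_own_slice(unsigned tid) { (void)tid; /* step (3) */ }
    static void enqueue_own_slice(unsigned tid) { (void)tid; /* step (4) */ }
    static void confirm_tx_to_chip(void)        { /* step (5) */ }
    static void reclaim_sent_buffers(void)      { /* step (7) */ }

    /* One such task runs on every single-core processor. */
    static void message_forwarding_task(unsigned tid)
    {
        for (;;) {
            if (tid == 0)                 /* the designated querying task */
                query_and_publish_rx();   /* publish the maximum receivable index */
            receive_own_slice(tid);       /* handle indexes tid, tid+n, tid+2n, ... */
            enqueue_own_slice(tid);       /* same slice, same order, into the send ring */
            if (tid == 0) {               /* the designated confirming task */
                confirm_tx_to_chip();     /* tell the chip how many packets to send */
                reclaim_sent_buffers();   /* step (6) is performed by the chip itself */
            }
        }
    }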
In the invention, the driver makes the message forwarding task of the same single-core processor execute steps (2), (5) and (7), or the driver makes the message forwarding tasks of different single-core processors execute steps (2), (5) and (7) respectively.
In the invention, in steps (2), (5) and (7), while the message forwarding task of the single-core processor performing the corresponding step is executing it, the message forwarding tasks of the other single-core processors are not allowed to execute it.
In step (4) of the invention, each message forwarding task has a task id (tid); for message forwarding task 0, message forwarding task 1, message forwarding task 2, ..., message forwarding task n-1, the tid is 0, 1, 2, ..., n-1 respectively, where n is the number of cores used for forwarding; when a message forwarding task runs for the first time, its current unprocessed receive logical index is initialized to its tid;
when each message forwarding task receives packets, it takes its current unprocessed receive logical index as the starting value and the maximum receivable receive logical index of all the message forwarding tasks as the upper bound, iterates over the logical indexes to process the messages in the single packet receiving channel buffer in sequence, and after processing each message sets its unprocessed receive logical index to the previous value plus n.
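By way of illustration, a minimal C sketch of this receive rotation is given below. The use of a C11 atomic for the shared maximum receivable index, the exclusive upper bound and all identifier names are assumptions made for the sketch; the patent only requires that the value published in step (2) be visible to every message forwarding task.

    #include <stdatomic.h>
    #include <stdint.h>

    /* Shared high-water mark published by the querying task in step (2). */
    static _Atomic uint64_t max_rx_logical;

    /* Called by the task that performed the query: raise the shared index by
     * the number of newly receivable messages it found in the buffer area. */
    static void publish_receivable(uint64_t newly_receivable)
    {
        if (newly_receivable > 0)
            atomic_fetch_add(&max_rx_logical, newly_receivable);
    }

    /* Per-task receive cursor; next is initialized to the task id. */
    struct rx_cursor {
        uint64_t next;   /* current unprocessed receive logical index */
        unsigned n;      /* number of cores used for forwarding */
    };

    static void rx_cursor_init(struct rx_cursor *c, unsigned tid, unsigned n)
    {
        c->next = tid;
        c->n = n;
    }

    /* Each task walks only the logical indexes tid, tid + n, tid + 2n, ... up
     * to the published maximum, so no two tasks ever touch the same descriptor
     * and no lock is needed. handle_message() stands in for fetching the
     * descriptor and processing the packet. */
    static void rx_poll(struct rx_cursor *c, void (*handle_message)(uint64_t idx))
    {
        uint64_t limit = atomic_load(&max_rx_logical);  /* treated as an exclusive bound */
        while (c->next < limit) {
            handle_message(c->next);
            c->next += c->n;   /* "set to the original value plus n" */
        }
    }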
When a message forwarding task runs for the first time, its current unprocessed send logical index is initialized to its tid; when each message forwarding task sends, it uses its current unprocessed send logical index as the send logical index to perform the send-buffer editing operation, and then sets its unprocessed send logical index to the previous value plus n.
The send-buffer editing operation comprises converting the send logical index into a physical index to locate the send descriptor, setting the message pointer of the send descriptor to the pointer value of the received message, and setting the other fields of the send descriptor as usual.
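By way of illustration, a minimal C sketch of the send-buffer editing operation is given below. The descriptor layout, the 512-slot ring size and the field names are assumptions borrowed from the examples later in this description; the point illustrated is that the send descriptor is simply pointed at the buffer that already holds the received message, so the payload is never copied.

    #include <stdint.h>

    #define RING_SIZE 512
    #define RING_MASK (RING_SIZE - 1)        /* physical index mask, 511 here */

    /* Assumed descriptor layout: buffer pointer, message length, port number. */
    struct pkt_desc {
        void    *buf;        /* pointer to the message buffer */
        uint16_t msg_len;    /* message length in bytes */
        uint16_t port;       /* port number */
    };

    static struct pkt_desc tx_ring[RING_SIZE];   /* send descriptor array */

    /* Send-buffer editing: convert the send logical index to a physical index,
     * locate the send descriptor, and point it at the received message's buffer
     * (zero copy); the remaining fields are filled in as usual. */
    static void tx_edit(uint64_t tx_logical, const struct pkt_desc *rx_desc)
    {
        struct pkt_desc *tx = &tx_ring[tx_logical & RING_MASK];

        tx->buf     = rx_desc->buf;      /* reuse the receive buffer directly */
        tx->msg_len = rx_desc->msg_len;
        tx->port    = rx_desc->port;     /* e.g. the chosen output port */
    }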
In step (5), the invention takes the minimum of the current unprocessed send logical indexes of all the message forwarding tasks, subtracts the previous confirmed send logical index value from it, confirms that number of packets to the single packet sending channel for sending, and then sets the previous confirmed send logical index value equal to that minimum.
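By way of illustration, a minimal C sketch of this confirmation step follows. It assumes the confirming task can read every task's current unprocessed send logical index (here through a plain array); the synchronization details and the notify_send_channel() helper are assumptions, the latter standing in for the hardware-specific driver call that tells the single packet sending channel how many further packets are ready.

    #include <stdint.h>

    #define MAX_CORES 16   /* assumed upper bound on the number of forwarding cores */

    /* Each task's current unprocessed send logical index, advanced as in the
     * preceding paragraphs; readable by the confirming task. */
    static uint64_t tx_unprocessed[MAX_CORES];
    static uint64_t tx_last_confirmed;   /* the "last send logical index value" */

    /* Stand-in for the driver call that tells the single packet sending channel
     * how many further packets the network chip may now transmit. */
    static void notify_send_channel(uint64_t count)
    {
        (void)count;   /* would program the send channel here */
    }

    /* Step (5): every index below the minimum unprocessed send index of all
     * tasks has been fully edited, so the difference from the last confirmed
     * value is the number of packets that can now be handed to the chip. */
    static void confirm_tx(unsigned n_cores)
    {
        uint64_t min = tx_unprocessed[0];
        for (unsigned i = 1; i < n_cores; i++)
            if (tx_unprocessed[i] < min)
                min = tx_unprocessed[i];

        if (min > tx_last_confirmed) {
            notify_send_channel(min - tx_last_confirmed);
            tx_last_confirmed = min;
        }
    }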
In the invention, the single transceiving channel is made up of the single packet receiving channel and the single packet sending channel.
For packet reception, the single packet receiving channel uses an array of receive descriptors. The array is made up of a number of receive descriptors, and each receive descriptor contains a pointer to a message buffer, the message length and the receive port number, 8 bytes in total. Each receive descriptor is initialized with a pointer to a message buffer. When the packet receiving channel is initialized, the head pointer and size of the receive descriptor array are passed to the network chip, which uses the array as a receive ring queue. The receive driver of the single packet receiving channel can query how many messages are available in the array, fetch a specified number of messages in order, and return the resources of a specified number of messages.
For packet transmission, the single packet sending channel uses an array of send descriptors. The array is made up of a number of send descriptors, and each send descriptor contains a pointer to a message buffer, the message length and the port number, 8 bytes in total. Each send descriptor is initialized with a pointer to a message buffer. When the packet sending channel is initialized, the head pointer and size of the send descriptor array are passed to the network chip, which uses the array as a send ring queue. The send driver of the single packet sending channel can tell the network chip how many messages in the array to send in order, query how many messages in the array the network chip has already sent, and tell the network chip to reclaim the resources of the sent messages.
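By way of illustration, a minimal C sketch of the two descriptor arrays described above is given below. The field widths are assumptions chosen so that, on a 32-bit processor, one descriptor occupies the 8 bytes mentioned; register_ring_with_chip() is hypothetical, since handing the array head pointer and size to the network chip is hardware-specific.

    #include <stddef.h>
    #include <stdint.h>

    /* One 8-byte descriptor (on a 32-bit CPU): buffer pointer, length, port. */
    struct pkt_desc {
        uint32_t buf;      /* pointer (or DMA handle) of the message buffer */
        uint16_t msg_len;  /* message length in bytes */
        uint16_t port;     /* receive or send port number */
    };

    #define RING_SIZE 512

    /* One array each for the single packet receiving channel and the single
     * packet sending channel; the network chip uses them as ring queues. */
    static struct pkt_desc rx_ring[RING_SIZE];
    static struct pkt_desc tx_ring[RING_SIZE];

    /* Hypothetical registration done once at channel initialization: the head
     * pointer and the size of each descriptor array are passed to the chip. */
    static void register_ring_with_chip(struct pkt_desc *head, size_t entries)
    {
        (void)head;
        (void)entries;   /* would write these into the chip's channel registers */
    }

    static void channels_init(void)
    {
        register_ring_with_chip(rx_ring, RING_SIZE);
        register_ring_with_chip(tx_ring, RING_SIZE);
    }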
Because of modern memory mapping, the CPU accesses a message buffer through a virtual address while the network chip accesses the same message buffer through a physical address, and both addresses actually point to the same memory.
There are two kinds of index values for the receive descriptors and the send descriptors. One is the physical index, ranging from 0 to 511 in this example, which is used to access arrays such as the descriptor arrays. The other is the logical index, which increases monotonically; at 64 bits it can uniquely identify a message and avoids the disorder that queue wrap-around would otherwise cause. A logical index can be converted into a physical index by masking it with the physical index mask (511 in this example), but the conversion cannot be reversed to recover a logical index. The receive and send descriptors are handled by logical index, and the logical index is converted into a physical index only at the moment the array is accessed; otherwise the array index would go out of bounds (exceed the number of array elements).
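By way of illustration, the conversion from logical to physical index can be sketched in C as a single mask operation; the mask value 511 matches the 512-entry example above. The 64-bit logical index only ever grows, and it is reduced to a physical index at the one point where the descriptor array is actually accessed.

    #include <stdint.h>

    #define RING_MASK 511u   /* physical index mask for a 512-entry ring */

    /* One-way conversion: a monotonically increasing 64-bit logical index is
     * masked down to a physical index 0..511 only at the moment of array
     * access; everywhere else the code keeps working in logical indexes. */
    static inline uint32_t logical_to_physical(uint64_t logical_idx)
    {
        return (uint32_t)(logical_idx & RING_MASK);
    }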
The embodiments described above are only preferred embodiments of the present invention and do not limit its concept or scope. Various modifications and improvements made to the technical solution of the invention by those skilled in the art without departing from its design concept shall fall within the protection scope of the invention; the claimed technical content of the invention is fully set forth in the claims.

Claims (7)

1. A lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment, the equipment comprising a network chip and a multi-core processor, wherein the single-core processors in the multi-core processor share one single transceiving channel and exchange data bidirectionally with the network chip through it, the single transceiving channel consists of a single packet receiving channel, a single packet sending channel and a buffer area shared by the two, each single-core processor obtains data from the network chip through the single packet receiving channel, and each single-core processor sends data to the network chip through the single packet sending channel, characterized in that the method comprises the following steps:
(1) when the network chip receives several messages from the outside, it caches each message in turn into the buffer area through the single packet receiving channel as a message to be received, and the buffer area generates a receive logical index for each message in that order; at the same time, the driver of the network chip assigns one message forwarding task to each single-core processor, so that each single-core processor runs exactly one message forwarding task;
(2) the driver makes the message forwarding task of any one single-core processor query the buffer area through the single packet receiving channel to determine whether messages to be received exist in the buffer area; if they exist, the message forwarding task of that single-core processor passes the result to the other single-core processors, that is, the maximum receivable receive logical index shared by all the message forwarding tasks is increased by the determined amount;
(3) the message forwarding tasks of the single-core processors take turns, in sequence, obtaining the receive logical indexes of the messages from the buffer area through the single packet receiving channel, and, based on the obtained receive logical indexes, each message forwarding task obtains the corresponding messages to be received from the buffer area through the single packet receiving channel;
(4) the driver makes the message forwarding tasks of the single-core processors, rotating in the same order as in step (3), place the messages they obtained into the buffer area through the single packet sending channel as messages to be sent; the buffer area generates a send logical index for each message to be sent in that order and delivers the generated send logical indexes to the network chip in that order;
(5) the driver makes the message forwarding task of any one single-core processor query the buffer area through the single packet sending channel to determine whether messages to be sent exist in the buffer area; if they exist, the message forwarding task of that single-core processor passes the result to the network chip;
(6) based on the send logical indexes generated in order in step (4), the network chip obtains the messages to be sent from the buffer area in sequence through the single packet sending channel and sends the obtained messages outward;
(7) the driver makes the message forwarding task of any one single-core processor query the buffer area through the single packet sending channel to confirm whether all the messages have been sent; if they have all been sent, the message forwarding task of that single-core processor notifies the buffer area through the single packet receiving channel, so that the buffer area reclaims the buffers and allocates them to the single packet receiving channel.
2. The lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment according to claim 1, characterized in that: the driver makes the message forwarding task of the same single-core processor execute steps (2), (5) and (7), or the driver makes the message forwarding tasks of different single-core processors execute steps (2), (5) and (7) respectively.
3. The lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment according to claim 1, characterized in that: in steps (2), (5) and (7), while the message forwarding task of the single-core processor performing the corresponding step is executing it, the message forwarding tasks of the other single-core processors are not allowed to execute it.
4. The lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment according to claim 1, characterized in that: in step (4), each message forwarding task has a task id (tid); for message forwarding task 0, message forwarding task 1, message forwarding task 2, ..., message forwarding task n-1, the tid is 0, 1, 2, ..., n-1 respectively, where n is the number of cores used for forwarding; when a message forwarding task runs for the first time, its current unprocessed receive logical index is initialized to its tid;
when each message forwarding task receives packets, it takes its current unprocessed receive logical index as the starting value and the maximum receivable receive logical index of all the message forwarding tasks as the upper bound, iterates over the logical indexes to process the messages in the single packet receiving channel buffer in sequence, and after processing each message sets its unprocessed receive logical index to the previous value plus n.
5. The lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment according to claim 4, characterized in that: when a message forwarding task runs for the first time, its current unprocessed send logical index is initialized to its tid; when each message forwarding task sends, it uses its current unprocessed send logical index as the send logical index to perform the send-buffer editing operation, and then sets its unprocessed send logical index to the previous value plus n.
6. The lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment according to claim 5, characterized in that: the send-buffer editing operation comprises converting the send logical index into a physical index to locate the send descriptor, setting the message pointer of the send descriptor to the pointer value of the received message, and setting the other fields of the send descriptor as usual.
7. The lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment according to any one of claims 1 to 6, characterized in that: in step (5), the minimum of the current unprocessed send logical indexes of all the message forwarding tasks is taken, the previous confirmed send logical index value is subtracted from it, that number of packets is confirmed to the single packet sending channel for sending, and the previous confirmed send logical index value is then set equal to that minimum.
CN202011585304.0A 2020-12-29 2020-12-29 Lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment Active CN112511460B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011585304.0A CN112511460B (en) 2020-12-29 2020-12-29 Lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011585304.0A CN112511460B (en) 2020-12-29 2020-12-29 Lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment

Publications (2)

Publication Number Publication Date
CN112511460A CN112511460A (en) 2021-03-16
CN112511460B (en) 2022-09-09

Family

ID=74951855

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011585304.0A Active CN112511460B (en) 2020-12-29 2020-12-29 Lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment

Country Status (1)

Country Link
CN (1) CN112511460B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115225565B (en) * 2022-07-25 2023-12-15 科东(广州)软件科技有限公司 Data packet receiving and sending configuration, receiving and sending methods and devices and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104050140A (en) * 2013-03-15 2014-09-17 英特尔公司 Method, apparatus, system for hybrid lane stalling or no-lock bus architectures
CN104394096A (en) * 2014-12-11 2015-03-04 福建星网锐捷网络有限公司 Multi-core processor based message processing method and multi-core processor
CN105159779A (en) * 2015-08-17 2015-12-16 深圳中兴网信科技有限公司 Method and system for improving data processing performance of multi-core CPU
CN107241305A (en) * 2016-12-28 2017-10-10 神州灵云(北京)科技有限公司 Network protocol analysis system based on a multi-core processor and analysis method thereof
CN109542832A (en) * 2018-11-30 2019-03-29 青岛方寸微电子科技有限公司 Lock-free communication system and method between heterogeneous multi-core CPUs
CN110011936A (en) * 2019-03-15 2019-07-12 北京星网锐捷网络技术有限公司 Thread scheduling method and device based on multi-core processor
CN110704211A (en) * 2019-09-29 2020-01-17 烽火通信科技股份有限公司 Method and system for receiving packets across CPUs (central processing units) in multi-core system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8417848B2 (en) * 2007-11-20 2013-04-09 Hangzhou H3C Technologies Co., Ltd. Method and apparatus for implementing multiple service processing functions
US8380965B2 (en) * 2009-06-16 2013-02-19 International Business Machines Corporation Channel-based runtime engine for stream processing
US20120210018A1 (en) * 2011-02-11 2012-08-16 Rikard Mendel System And Method for Lock-Less Multi-Core IP Forwarding
CN105511954B (en) * 2014-09-23 2020-07-07 华为技术有限公司 Message processing method and device
US10635485B2 (en) * 2018-03-23 2020-04-28 Intel Corporation Devices, systems, and methods for lockless distributed object input/output


Also Published As

Publication number Publication date
CN112511460A (en) 2021-03-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information
    Inventor after: Shen Xiaofeng
    Inventor before: Hu Jun