CN108023829B - Message processing method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN108023829B
Authority
CN
China
Prior art keywords
network card, current network, message, CPU, queue
Prior art date
Legal status
Active
Application number
CN201711124549.1A
Other languages
Chinese (zh)
Other versions
CN108023829A (en)
Inventor
刘健男
Current Assignee
Neusoft Corp
Original Assignee
Neusoft Corp
Priority date
Filing date
Publication date
Application filed by Neusoft Corp
Priority to CN201711124549.1A
Publication of CN108023829A
Application granted
Publication of CN108023829B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/12: Avoiding congestion; Recovering from congestion
    • H04L 47/125: Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H04L 47/29: Flow control; Congestion control using a combination of thresholds
    • H04L 47/50: Queue scheduling
    • H04L 47/62: Queue scheduling characterised by scheduling criteria
    • H04L 47/6245: Modifications to standard FIFO or LIFO

Abstract

The disclosure relates to a message processing method and device, a storage medium, and an electronic device. The method comprises the following steps: when the current network card fails to send a packet, the forwarding CPU uses an available CPU and/or an available network card to load-balance the messages that the current network card failed to send; if the load balancing fails, judging whether the number of messages in the cache queue corresponding to the forwarding CPU exceeds a preset threshold; if the number of messages in the cache queue does not exceed the preset threshold, the forwarding CPU adds the messages for which load balancing failed to the cache queue; and when the forwarding CPU performs polling processing, if the load of the current network card is idle, controlling the current network card to send the messages in the cache queue. This scheme reduces the probability of packet-sending failure and improves the stability of the system.

Description

Message processing method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of communications technologies, and in particular, to a message processing method and apparatus, a computer-readable storage medium, and an electronic device.
Background
A user-mode network forwarding system implemented on the DPDK (Data Plane Development Kit) platform mainly receives and sends network messages by polling. If the design keeps the multi-core operations on shared resources lock-free as far as possible, a high-performance multi-core system can reach the rate limit of the network card. However, for such a rate-limited forwarding system, a slight burst of extra traffic at some intermediate moment causes the network card to fail to send packets, and the rate-limited performance of the network card cannot be reached.
At present, the forwarding performance of a firewall can generally reach the maximum rate-limited performance under stable test conditions, but a certain probability of failure remains. Taking a rate-limited throughput test as an example, if a tester measures a throughput of 160 G, the probability of failure is 20% and the probability of success is 80%. With the existing design, the rate-limit test passes while the traffic is stable, but the slightest jitter causes the network card to fail to send packets, which affects the overall stability of the network forwarding system.
How to reduce the probability of packet-sending failure and improve the stability of the system is a problem that urgently needs to be solved.
Disclosure of Invention
An object of the present disclosure is to provide a message processing method and device, a computer-readable storage medium, and an electronic device, which help reduce the probability of packet-sending failure and improve the stability of a system.
In order to achieve the above object, in a first aspect, the present disclosure provides a message processing method, where the method includes:
when the current network card fails to send a packet, the forwarding CPU uses an available CPU and/or an available network card to load-balance the messages that the current network card failed to send;
if the load balancing fails, judging whether the number of messages in the cache queue corresponding to the forwarding CPU exceeds a preset threshold;
if the number of messages in the cache queue does not exceed the preset threshold, the forwarding CPU adds the messages for which load balancing failed to the cache queue;
and when the forwarding CPU performs polling processing, if the load of the current network card is idle, controlling the current network card to send the messages in the cache queue.
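The four steps above can be sketched as a single decision routine. The following C sketch is illustrative only; the structure names, the threshold value, and the stubbed load-balancing check are assumptions, not details taken from the disclosure.

```c
#include <stdbool.h>
#include <stddef.h>

#define CACHE_QUEUE_THRESHOLD 1024  /* the "preset threshold" (assumed value) */

struct pkt { int id; };

struct fwd_cpu {
    struct pkt *cache_queue[CACHE_QUEUE_THRESHOLD];
    size_t cache_len;
};

/* Stubbed decision point; a real system would query peer CPUs and NICs. */
static bool try_load_balance(struct pkt *p) { (void)p; return false; }

/* Returns true if the failed packet was salvaged (balanced or buffered). */
bool handle_tx_failure(struct fwd_cpu *cpu, struct pkt *p)
{
    if (try_load_balance(p))                       /* step 1: available CPU/NIC */
        return true;
    if (cpu->cache_len < CACHE_QUEUE_THRESHOLD) {  /* step 2: threshold check */
        cpu->cache_queue[cpu->cache_len++] = p;    /* step 3: buffer it */
        return true;                               /* step 4 drains it when idle */
    }
    return false;  /* cache queue full: caller falls back to dropping */
}
```

A real forwarding path would replace `try_load_balance()` with the bond-port and available-CPU selection described in the detailed embodiments.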
Optionally, the forwarding CPU is configured with a bond port, and the network cards of the bond port store the configuration information of the bond port; the method then further includes:
determining a network card in the bond port that has not reached its performance limit as the available network card, so that the current network card sends the messages that failed to be sent to the available network card according to the configuration information.
Optionally, the current network card is connected to a plurality of CPUs, and each CPU has a corresponding sending queue on the current network card; the method then further includes:
determining the CPU corresponding to the sending queue with the most valid descriptors among all sending queues of the current network card as the available CPU, so that the forwarding CPU sends the messages that failed to be sent to the available CPU.
Optionally, when the forwarding CPU performs polling processing, if the load of the current network card is idle, controlling the current network card to send the messages in the cache queue includes:
the forwarding CPU judging whether the number of messages received by the network card queue in the current polling cycle is less than a first upper limit;
if the number of messages received by the network card queue in the current polling cycle is less than the first upper limit, the forwarding CPU judging that the load of the current network card is idle;
and if the cache queue is not empty, the forwarding CPU controlling the current network card to forward the messages in the cache queue until the cache queue is empty or the forwarding processing fails to send a packet.
Optionally, the method further comprises:
the forwarding CPU judging whether the number of messages received by the kernel-mode queue in the current polling cycle is less than a second upper limit;
if the number of messages received by the kernel-mode queue in the current polling cycle is less than the second upper limit, the forwarding CPU judging that the load of the current network card is idle;
and if the cache queue is not empty, the forwarding CPU controlling the current network card to perform one forwarding pass on the messages in the cache queue.
Optionally, if the number of messages in the cache queue exceeds the preset threshold, the method further includes:
the forwarding CPU judging whether the per-second performance of the current network card exceeds its performance limit;
and if the per-second performance of the current network card exceeds the performance limit, the forwarding CPU recording the time at which the limit was exceeded and dropping the messages received after that time.
Optionally, the per-second performance of the current network card is obtained as follows:
once per second, the forwarding CPU obtains, for every sending queue of the current network card, the total number of bytes of messages successfully sent in the current second;
and the forwarding CPU accumulates the totals of all the sending queues for the current second to obtain the per-second performance of the current network card.
In a second aspect, the present disclosure provides a message processing apparatus, which belongs to a forwarding CPU and comprises:
a message load balancing module, configured to, when the current network card fails to send a packet, use an available CPU and/or an available network card to load-balance the messages that the current network card failed to send;
a message number judging module, configured to, when the load balancing fails, judge whether the number of messages in the cache queue corresponding to the forwarding CPU exceeds a preset threshold;
a cache queue adding module, configured to add the messages for which load balancing failed to the cache queue when the number of messages in the cache queue does not exceed the preset threshold;
and a polling control module, configured to, during polling processing, control the current network card to send the messages in the cache queue if the load of the current network card is idle.
Optionally, the forwarding CPU is configured with a bond port, the configuration information of the bond port is stored in the network cards of the bond port, and the apparatus further comprises:
an available network card determining module, configured to determine a network card in the bond port that has not reached its performance limit as the available network card, so that the current network card sends the messages that failed to be sent to the available network card according to the configuration information.
Optionally, the current network card is connected to a plurality of CPUs, each CPU has a corresponding sending queue on the current network card, and the apparatus further comprises:
an available CPU determining module, configured to determine the CPU corresponding to the sending queue with the most valid descriptors among all sending queues of the current network card as the available CPU, so that the forwarding CPU sends the messages that failed to be sent to the available CPU.
Optionally, the polling control module is configured to judge whether the number of messages received by the network card queue in the current polling cycle is less than a first upper limit; if so, judge that the load of the current network card is idle; and if the cache queue is not empty, control the current network card to forward the messages in the cache queue until the cache queue is empty or the forwarding processing fails to send a packet.
Optionally, the polling control module is further configured to judge whether the number of messages received by the kernel-mode queue in the current polling cycle is less than a second upper limit; if so, judge that the load of the current network card is idle; and if the cache queue is not empty, control the current network card to perform one forwarding pass on the messages in the cache queue.
Optionally, the apparatus further comprises:
a per-second performance judging module, configured to judge, when the number of messages in the cache queue exceeds the preset threshold, whether the per-second performance of the current network card exceeds its performance limit;
and a packet dropping module, configured to, when the per-second performance of the current network card exceeds the performance limit, record the time at which the limit was exceeded and drop the messages received after that time.
Optionally, the apparatus further comprises:
a per-second performance obtaining module, configured to obtain, once per second, the total number of bytes of messages successfully sent in the current second by every sending queue of the current network card, and to accumulate these totals to obtain the per-second performance of the current network card.
In a third aspect, the present disclosure provides a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the steps of the message processing method described above.
In a fourth aspect, the present disclosure provides an electronic device comprising:
the computer-readable storage medium described above; and
one or more processors to execute the program in the computer-readable storage medium.
In the disclosed scheme, when the current network card fails to send a packet, the forwarding CPU can first forward the failed messages through an available CPU and/or an available network card by load balancing; when the load balancing also fails, it adds the failed messages to the cache queue, and controls the current network card to forward the messages in the cache queue when the load of the current network card is idle. In this way the failed messages are sent out successfully as far as possible, which helps reduce the probability of packet-sending failure and improves the stability of the system.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure without limiting the disclosure. In the drawings:
FIG. 1 is a schematic diagram of message polling in the present disclosure;
fig. 2 is a schematic flow chart of the message processing method of the present disclosure;
FIG. 3 is a schematic diagram of a CPU configuration bond port in the present disclosure;
FIG. 4 is a schematic diagram of a correspondence relationship between a network card and a CPU in the present disclosure;
fig. 5 is a schematic diagram of a corresponding relationship of a CPU sending a message to a network card in the present disclosure;
fig. 6 is a schematic structural diagram of a message processing apparatus according to the present disclosure;
fig. 7 is a block diagram of an electronic device for message processing according to the present disclosure.
Detailed Description
The following detailed description of specific embodiments of the present disclosure is provided in connection with the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present disclosure, are given by way of illustration and explanation only, not limitation.
Before introducing the present disclosure, an application scenario of the present disclosure is explained.
Generally, when a network forwarding system forwards messages, in order to guarantee system performance, a user-mode CPU processes the messages by polling. As shown in fig. 1, within one polling cycle the user-mode CPU1 may receive messages from the network card, which are written into the network card queue, and forward them; receive messages from the kernel-mode CPU, which are written into the kernel-mode queue, and forward them; and receive messages from other user-mode CPUs, which are written into the inter-core queue, and forward them. That is, within one polling cycle the user-mode CPU1 may receive and forward messages from the kernel mode, from user mode, and from the network card, and must also handle timer logic in between.
It can be understood that each user-mode CPU can be regarded as a forwarding CPU: it receives messages by polling, writes the polled messages into a sending queue of the network card, and sends them out through the network card. In practice, if the network card cannot send packets as fast as the CPU forwards them, the sending queue of the network card runs out of descriptors and packet sending fails.
Generally, before messages are sent and received, an identifier that uniquely identifies a message queue and its data structure needs to be created; this unique identifier is called the message queue descriptor (msqid) and is used to identify or refer to the associated message queue and data structure.
At present, when a packet-sending failure occurs in a network forwarding system, the failed packets are mostly just dropped, so performance tests fail with a certain probability.
In view of this, the present disclosure provides a message processing scheme that helps improve the success rate of performance testing and avoids the whole performance test failing, with a certain probability, because of a small amount of packet loss under partial burst traffic. The implementation of the disclosed scheme is explained below.
Referring to fig. 2, a flowchart of a message processing method according to the embodiment of the present disclosure is shown. The scheme disclosed by the invention can be applied to message forwarding equipment, and the method can comprise the following steps:
step 101, when the current network card fails to send a packet, a forwarding CPU utilizes an available CPU and/or an available network card to perform load balancing on the packet which fails to send the current network card.
The disclosed solution provides a new logic for handling a packet-sending failure. For example, the user mode CPU1 is used as a forwarding CPU, and if the current network card fails to send a packet, the forwarding CPU may first forward the packet that fails to send in a load balancing manner. As an example, the current network card may return the number of successfully sent messages to the forwarding CPU, and the forwarding CPU may determine whether the current network card fails to send the packet and which messages failed to send the packet.
In practical applications, to improve the packet transmission performance, the DPDK is usually transmitted in a batch manner. Specifically, an array may be set for each packet sent by the forwarding CPU to a network card, and the array size may be DEFAULT _ TX _ BURST. After the forwarding CPU finishes message processing, the messages can be put into the array, when the number of the messages in the array reaches DEFAULT _ TX _ BURST, the forwarding CPU sends the messages into a sending queue of the network card in batches, and if no effective descriptor exists in the sending queue of the network card, packet sending failure occurs. Of course, in combination with the application requirement, the forwarding CPU may also perform packet timing transmission, that is, when the number of the messages in the array is less than DEFAULT _ TX _ BURST, the messages less than DEFAULT _ TX _ BURST may be transmitted to the network card at a timing, which is not specifically limited in this disclosure.
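The batching just described can be sketched as follows. This is a self-contained simulation with assumed names; the send callback stands in for a DPDK-style burst-send routine that returns how many packets the NIC's TX ring accepted, where anything less than the batch size signals a send failure for the tail.

```c
#include <stddef.h>
#include <stdint.h>

#define DEFAULT_TX_BURST 32   /* batch size; 32 is a common burst value, assumed here */

struct pkt { uint32_t len; };

struct tx_batch {
    struct pkt *bufs[DEFAULT_TX_BURST];
    size_t count;
};

/* Stand-in for a burst-send routine: returns how many packets the NIC
 * accepted into its TX ring. */
typedef size_t (*burst_fn)(struct pkt **bufs, size_t n);

/* Queue one packet; flush the whole batch once it is full.
 * Returns the number of packets a flush failed to enqueue
 * (0 if no flush happened or the flush fully succeeded). */
size_t batch_enqueue(struct tx_batch *b, struct pkt *p, burst_fn send)
{
    b->bufs[b->count++] = p;
    if (b->count < DEFAULT_TX_BURST)
        return 0;
    size_t sent = send(b->bufs, b->count);  /* batch send to the NIC queue */
    size_t failed = b->count - sent;
    b->count = 0;
    return failed;  /* caller runs the failure path on these packets */
}

/* Test stub: pretends the TX ring has `fake_ring_space` free descriptors. */
size_t fake_ring_space = DEFAULT_TX_BURST;
size_t fake_nic_send(struct pkt **bufs, size_t n)
{
    (void)bufs;
    return n < fake_ring_space ? n : fake_ring_space;
}
```

The failure path of step 101 then receives exactly the `failed` tail packets of the batch, which is why, as discussed next, their ingress context is already gone by the time failure is detected.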
From the above, two points follow.
First, to guarantee packet-sending performance, the forwarding CPU does not send packets one at a time; for a single packet, whether the hardware actually sent it successfully is not reported immediately, but only determined at a subsequent batch send. That is, when the current network card is found to have failed, part of the batch has in fact been sent successfully and part has failed. The packets of one batch may come from different devices and ports, i.e. their ingress information differs; yet under existing communication protocols a packet being sent by the current network card carries only destination-port information and no ingress information. In other words, the failed packets have lost the context of their original processing, so they cannot be re-handled through the original logic in a way that would get them sent successfully.
Second, the cause of the packet-sending failure is that the sending queue of the network card is full, i.e. it has no valid descriptors, so messages cannot be placed into the sending queue of the network card.
When a packet-sending failure occurs, the disclosed scheme can perform load balancing based on the two points above, reducing the packet loss caused by a burst of extra traffic to a minimum, as illustrated below.
1. Regarding the first point, the disclosed scheme considers the bond port used to increase network bandwidth. Because a failed packet has lost its earlier context, load balancing can be performed through an available network card of the bond port to ensure the packet is sent successfully.
Referring to fig. 3, a pair of bond ports may be configured on the forwarding CPU1 to double performance. bond0 serves as the traffic inlet and receives messages from network card 1 and network card 2; bond1 serves as the traffic outlet and sends messages into the sending queues of network card 3 and network card 4. When the inlet traffic reaches the limit, if the outlet traffic is not uniform enough, one network card may fail to send packets while the other has not reached its performance limit, i.e. the bond port is sending unevenly; at this time the failed messages can be load-balanced through the other available network cards of the bond port.
Because the DPDK platform sends packets in batches, i.e. a sending failure is only discovered after the messages reach the driver module of the network card, the configuration information of the bond port can be sent down to the network cards. In the above example, the configuration information of bond1 may be sent to network card 3 and network card 4, so that when network card 3 fails to send, it can be determined from the configuration information whether other network cards belong to the bond port; for example, the configuration information shows that network card 4 also exists, so network card 4 can be used as the available network card for load balancing. It can be understood that if there are multiple available network cards, one with a smaller load may be selected for load balancing. The present disclosure does not limit how available network cards are selected, nor how many are used for load balancing; this can be decided according to actual application requirements.
It should be noted that the configuration information of a bond port can be understood as recording which network cards are configured into the bond port; taking the schematic diagram of fig. 3 as an example, the configuration information of bond1 is that network card 3 and network card 4 are configured into the bond port.
As an example, for network card 3 to balance the failed messages onto network card 4, the failed messages may be sent to network card 4 through the current forwarding CPU; alternatively, they may be handed over from the current forwarding CPU to another forwarding CPU, and that forwarding CPU sends the failed messages to network card 4.
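The bond-member selection described here can be sketched as a small lookup over the configuration information. All structure and field names are assumptions made for illustration; the load metric is deliberately abstract.

```c
#include <stdbool.h>
#include <stddef.h>

#define BOND_MAX_MEMBERS 4

struct nic {
    int id;
    bool at_perf_limit;   /* has this member reached its performance limit? */
    size_t load;          /* current load metric; smaller means more headroom */
};

/* The bond port's configuration information: the list of member NICs,
 * stored on each member so the driver layer can fail over after a batch
 * send comes back short. */
struct bond_cfg {
    struct nic *members[BOND_MAX_MEMBERS];
    size_t n_members;
};

/* Pick the least-loaded member that is not the failing NIC and has not
 * reached its performance limit; NULL means no member can take the traffic. */
struct nic *bond_pick_available(const struct bond_cfg *cfg, const struct nic *failed)
{
    struct nic *best = NULL;
    for (size_t i = 0; i < cfg->n_members; i++) {
        struct nic *m = cfg->members[i];
        if (m == failed || m->at_perf_limit)
            continue;
        if (!best || m->load < best->load)
            best = m;
    }
    return best;
}
```

With the fig. 3 topology, a failure on network card 3 would resolve to network card 4 as long as network card 4 has not hit its own performance limit.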
2. Regarding the second point, when the sending queue of the network card is full, the forwarding CPU corresponding to a less loaded sending queue among all sending queues of the network card can serve as the available CPU for load balancing, to ensure the packets are sent successfully.
Referring to fig. 4, which shows the correspondence between the network cards and the CPUs. As can be seen from the figure, the CPUs and the network cards are fully connected, i.e. each CPU can send messages to any network card, and sending needs no lock handling at all, because on each network card every CPU has its own dedicated sending queue.
For the network cards, the correspondence for receiving messages sent by the CPUs is shown in fig. 5. As can be seen from the figure, every CPU can send messages to network card 1 on numa0, and every CPU can also send messages to network card 5 on numa1; that is, each network card can receive messages sent by any CPU, so the network cards and the CPUs are likewise fully connected.
Therefore, when packet sending fails, a CPU with a smaller load can be found and used as the available CPU for load balancing. Specifically, the sending queue with the most valid descriptors among all sending queues of the current network card may be obtained, the CPU corresponding to that sending queue determined as the available CPU, and the failed messages sent by the forwarding CPU to the available CPU for forwarding. Depending on actual application requirements, the single sending queue with the highest number of valid descriptors may be used; alternatively, any sending queue whose number of valid descriptors exceeds a preset value may be treated as a sending queue with the most valid descriptors. The present disclosure does not specifically limit this.
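The available-CPU selection can be sketched as a scan over the NIC's per-CPU sending queues, one queue per CPU as in the lock-free layout of fig. 4. The identifiers below are assumptions for illustration.

```c
#include <stddef.h>

/* One TX queue per CPU on the NIC; the queue with the most valid (free)
 * descriptors belongs to the least-loaded CPU. */
struct tx_queue {
    int owner_cpu;
    size_t free_descriptors;   /* valid descriptors left in the ring */
};

/* Returns the owner CPU of the queue with the most free descriptors,
 * or -1 if every queue is completely full. */
int pick_available_cpu(const struct tx_queue *queues, size_t n)
{
    int best_cpu = -1;
    size_t best_free = 0;
    for (size_t i = 0; i < n; i++) {
        if (queues[i].free_descriptors > best_free) {
            best_free = queues[i].free_descriptors;
            best_cpu = queues[i].owner_cpu;
        }
    }
    return best_cpu;
}
```

The variant mentioned in the text, accepting any queue above a preset descriptor count, would simply replace the maximum scan with a threshold comparison.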
In summary, when a packet-sending failure occurs, the forwarding CPU can load-balance the failed messages through the available network card and/or the available CPU, so that the failed messages are sent out successfully.
Step 102, if the load balancing fails, judging whether the number of messages in the cache queue corresponding to the forwarding CPU exceeds a preset threshold.
Step 103, if the number of messages in the cache queue does not exceed the preset threshold, the forwarding CPU adds the messages for which load balancing failed to the cache queue.
Step 104, when the forwarding CPU performs polling processing, if the load of the current network card is idle, controlling the current network card to send the messages in the cache queue.
In practice, load balancing may fail because, for example, no available CPU or available network card is found, or because the processing capacity of the available CPU and available network card cannot cover all of the messages that the current network card failed to send. For this case, the forwarding CPU can additionally handle the messages through a cache queue, so that the failed messages are still sent out successfully as far as possible.
Specifically, it may be judged whether the number of messages in the cache queue corresponding to the forwarding CPU exceeds a preset threshold. If it does not, the current network card still has the capacity to process the failed messages in the cache queue, so the messages for which load balancing failed may be added to the cache queue; when the current network card is idle, it processes the messages in the cache queue and that part of the messages is sent out successfully.
As an example, the forwarding CPU may judge from the overall polling load whether the current network card is idle. Specifically, suppose the maximum number of messages the forwarding CPU receives per polling cycle is a first upper limit; when the number of messages the network card queue of the forwarding CPU actually receives within a polling cycle is less than the first upper limit, the load of the current network card still has spare capacity, and if the cache queue is not empty, the current network card can try to process the messages in the cache queue so as to send them out successfully.
When the load of the current network card is judged to be idle, the forwarding CPU can have the current network card forward the messages in the cache queue repeatedly, stopping only when the cache queue is empty or a sending failure occurs, and then waiting for the next polling cycle. In this way, the processing resources of the network card while its load is idle are used to the maximum extent; moreover, by stopping immediately on a sending failure, the system avoids idling uselessly, which would interfere with the forwarding of normal messages and hence with the overall forwarding performance of the system.
As can be seen from practical applications, most of the messages received by the forwarding CPU come from the network card, but because part of the messages need to enter the kernel mode for processing, the forwarding CPU also needs to process the messages sent back from the kernel mode again, and other messages sent by the forwarding CPU, so that the forwarding CPU can control the current network card to process the messages in the cache queue when the number of the messages in the network card queue is smaller than the first upper limit value, and can process the messages in the cache queue when the number of the messages in the kernel mode queue and the inter-core queue is smaller. Considering that the number of messages in the inter-core queue is very small in the practical application process, the scheme disclosed by the invention can process the messages in the cache queue when the core-state queue is idle.
As an example, a second upper limit value may be set. When the number of messages actually received by the kernel-mode queue of the forwarding CPU in a polling cycle is less than this second upper limit value, the load of the current network card still has spare capacity; if the buffer queue is not empty, the current network card may attempt a single forwarding pass over the messages in the buffer queue.
Practical experience shows that most packet-sending failures are caused by the network card queue receiving packets too quickly, rather than by a backlog in the kernel-mode queue or the inter-core queue. The scheme of the present disclosure therefore distinguishes the queue types within the polling cycle: on the one hand, the network card queue is given the highest processing priority; on the other hand, the case where the kernel-mode queue and the inter-core queue are idle is also taken into account.
As an example, the present disclosure may also handle packet-sending failure messages with regard to the performance limit of the hardware network card. Specifically, after it is determined that the current network card has reached its performance limit, the packet-sending failure messages may be dropped so as to ensure the robustness of the forwarding system, as described below.
For example, suppose the forwarding CPU has ten-gigabit processing capacity, and a ten-gigabit network card is paired with a gigabit network card, with messages sent from the ten-gigabit card to the gigabit card; the gigabit card is then the current network card in the present disclosure. Under such a configuration, the packet reception of the gigabit network card will soon exceed its performance limit. If packet-sending failure messages were written into the buffer queue without limit, the packet-sending failures could not be resolved even if the buffer queue exhausted all available memory. Therefore, the per-second performance of the current network card may be obtained; when it exceeds the performance limit, the time at which the limit was exceeded is recorded, and messages received after that time are dropped.
Specifically, if the number of messages in the buffer queue exceeds a preset threshold, it may be determined whether the per-second performance of the current network card exceeds the performance limit; that is, the number of messages in the buffer queue exceeding the preset threshold is the trigger condition for evaluating the performance limit of the network card. If the per-second performance of the current network card does not exceed the performance limit, the packet-sending failure messages can still be handled via the buffer queue in the manner described above. If the number of messages in the buffer queue exceeds the preset threshold and the per-second performance of the current network card exceeds the performance limit, the messages may be dropped directly. It can be understood that the per-second performance of the current network card is the total number of bytes of messages successfully sent to the current network card by all CPUs per second.
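The trigger condition above can be modelled as a small decision function. The threshold, the bytes-per-second limit, and all names below are illustrative assumptions (the figure chosen for the limit roughly corresponds to a gigabit card):

```python
# Illustrative sketch of the trigger condition: the per-second performance
# is checked only after the buffer queue grows past a threshold, and
# packets are dropped once the hardware limit is exceeded.
BUFFER_THRESHOLD = 1024          # assumed max buffered messages before checking
NIC_LIMIT_BPS = 125_000_000      # assumed limit: ~1 Gbit/s expressed in bytes/s

def handle_send_failure(msg, buffer_queue, bytes_per_second):
    """Return 'buffered' or 'dropped' for one packet-sending failure."""
    if len(buffer_queue) <= BUFFER_THRESHOLD:
        buffer_queue.append(msg)     # cheap path: no limit check at all
        return "buffered"
    # Threshold exceeded: only now is the per-second performance compared
    # with the hardware performance limit.
    if bytes_per_second > NIC_LIMIT_BPS:
        return "dropped"             # NIC is saturated; buffering cannot help
    buffer_queue.append(msg)         # under the limit: keep using the queue
    return "buffered"
```

Note how the expensive comparison is skipped entirely while the queue is short, matching the point made later that the limit is not evaluated on every received message.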
In a multi-core system, the processing performance of each network card is the aggregate performance of all of its send queues, which raises the problem of contention among multiple cores. So that the cores do not affect one another, the scheme of the present disclosure may obtain the per-second performance of the current network card as follows:
(1) Obtain the per-second performance of each send queue of the current network card, that is, the total number of bytes of messages successfully sent to the current network card per second by the CPU corresponding to that send queue.
For each send queue, a corresponding per-CPU variable may be established to count the total number of bytes of messages successfully sent to the current network card by each CPU; the total number of bytes successfully sent by the send queue in the most recent second may also be recorded.
(2) Accumulate the per-second performance of all send queues of the current network card to obtain the per-second performance of the current network card, which can then be compared with the performance limit of the current network card to judge whether the limit is exceeded.
When the per-second performance of the current network card is calculated on the basis of per-CPU variables, multiple CPUs never write the same variable at the same time; that is, there is no contention among the CPUs. In addition, the scheme does not check whether the hardware performance limit is exceeded every time a message is received; rather, the check is triggered only when packet sending fails and the number of messages in the buffer queue exceeds the preset threshold, which improves system performance and reduces the system resources consumed by calculating the per-second performance.
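A minimal sketch of this lock-free per-queue accounting, assuming one counter per send queue written only by its owning CPU (the class and method names are illustrative, not from the patent):

```python
# Sketch of per-queue byte accounting: each send queue has its own
# counter written by exactly one CPU, so summing them once per second
# needs no locking and no cross-core contention.
class NicPerfCounter:
    def __init__(self, num_queues):
        # one counter per send queue (a stand-in for a per-CPU variable)
        self.bytes_this_second = [0] * num_queues
        self.last_second_total = 0

    def on_send_success(self, queue_id, nbytes):
        # called only by the CPU that owns queue_id: no shared writes
        self.bytes_this_second[queue_id] += nbytes

    def roll_second(self):
        """Once per second: sum all queues into the NIC's per-second
        performance and reset the per-queue counters."""
        self.last_second_total = sum(self.bytes_this_second)
        self.bytes_this_second = [0] * len(self.bytes_this_second)
        return self.last_second_total
```

The single reader that sums the counters corresponds to step (2) above; comparing `last_second_total` with the hardware limit is the over-limit judgment.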
As an example, the present disclosure may further set an over-limit flag bit, which is 0 by default, indicating that the per-second performance of the current network card does not exceed the performance limit. When the forwarding CPU judges that the performance of the current network card exceeds the limit, the over-limit flag bit is set to 1 and the over-limit time tsc is recorded, so that while the over-limit flag bit is 1 and the current time minus tsc is less than one second, there is no need to recompute whether the limit is exceeded, and the packet is dropped directly. It can be understood that the over-limit flag may be cleared and the tsc value updated every second, when the latest total is calculated.
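The over-limit flag and tsc timestamp can be sketched as follows; the structure and names are assumptions for illustration:

```python
# Sketch of the over-limit flag: once the limit is exceeded, the
# timestamp is recorded and, for the next second, packets are dropped
# without recomputing the per-second total.
class OverLimitFlag:
    def __init__(self):
        self.flag = 0      # 0 by default: limit not exceeded
        self.tsc = 0.0     # time at which the limit was exceeded

    def should_drop(self, now, recompute_bps, limit_bps):
        # While the flag is set and less than one second has elapsed,
        # drop directly without any recomputation.
        if self.flag and now - self.tsc < 1.0:
            return True
        # Otherwise clear the flag and re-evaluate the per-second total.
        self.flag = 0
        if recompute_bps() > limit_bps:
            self.flag = 1
            self.tsc = now
            return True
        return False
```

The cached decision is what lets a saturated second be handled with one computation instead of one per packet.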
In addition, it should be noted that every written resource in the present disclosure is an independent per-core resource; other CPUs may only read it. This ensures that performance across multiple cores scales linearly as the number of CPUs increases, without cross-core interference.
As can be seen from the above description, the handling of packet-sending failure messages in the scheme of the present disclosure imposes no additional burden on the normal packet forwarding flow and does not affect the overall performance of the forwarding system.
Referring to fig. 6, a schematic structural diagram of a message processing apparatus according to an embodiment of the present disclosure is shown. The message processing apparatus belongs to a forwarding CPU and may include:
a message load balancing module 201, configured to, when packet sending of the current network card fails, perform load balancing on a packet that the current network card fails to send by using an available CPU and/or an available network card;
a message number determining module 202, configured to determine whether the number of messages in the cache queue corresponding to the forwarding CPU exceeds a preset threshold when load balancing fails;
a buffer queue adding module 203, configured to add a message with a failure in load balancing to the buffer queue when the number of messages in the buffer queue does not exceed a preset threshold;
and the polling control module 204 is configured to, when performing polling processing, control the current network card to send the message in the buffer queue if the load of the current network card is idle.
Optionally, the forwarding CPU is configured with a bond port, and configuration information of the bond port is stored in a network card of the bond port, and the apparatus further includes:
and the available network card determining module is used for determining the network card which does not reach the performance limit in the bond port as the available network card so that the current network card sends a packet sending failure message to the available network card according to the configuration information.
Optionally, the current network card has a connection relationship with a plurality of CPUs, and each CPU has a corresponding sending queue on the current network card, and the apparatus further includes:
and the available CPU determining module is used for determining the CPU corresponding to the sending queue with the most effective descriptors in all the sending queues of the current network card as the available CPU so that the forwarding CPU sends the packet sending failure message to the available CPU.
Optionally, the polling control module is configured to determine whether the number of messages received by the network card queue in the current polling period is smaller than a first upper limit value; if the number of the messages received by the network card queue in the polling period is less than a first upper limit value, judging that the load of the current network card is idle; if the cache queue is not empty, controlling the current network card to forward the message in the cache queue until the cache queue is empty or the forwarding processing fails.
Optionally, the polling control module is further configured to determine whether the number of messages received by the kernel-state queue in the current polling period is smaller than a second upper limit value; if the number of the messages received by the kernel state queue in the polling period is less than a second upper limit value, judging that the load of the current network card is idle; and if the cache queue is not empty, controlling the current network card to perform primary forwarding processing on the message in the cache queue.
Optionally, the apparatus further comprises:
the performance per second judging module is used for judging whether the performance per second of the current network card exceeds a performance limit or not when the number of the messages in the cache queue exceeds a preset threshold;
and the packet loss processing module is used for recording the time exceeding the performance limit when the performance of the current network card per second exceeds the performance limit, and performing packet loss processing on the message received after the time.
Optionally, the apparatus further comprises:
a performance obtaining module per second, configured to obtain the total number of bytes of the message successfully sent in the current second by all sending queues of the current network card once per second; and accumulating the total byte number of the messages successfully sent by all the sending queues in the current second to obtain the performance of the current network card per second.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 7 is a block diagram illustrating an electronic device 300, according to an example embodiment, the electronic device 300 configured to perform message processing. As shown in fig. 7, the electronic device 300 may include: a processor 301, a memory 302, a multimedia component 303, an input/output (I/O) interface 304, and a communication component 305.
The processor 301 is configured to control the overall operation of the electronic device 300, so as to complete all or part of the steps of the message processing method described above. The memory 302 is used to store various types of data to support operation at the electronic device 300, such as instructions for any application or method operating on the electronic device 300 and application-related data, such as contact data, transmitted and received messages, pictures, audio, video, and the like. The memory 302 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk. The multimedia component 303 may include a screen and an audio component, where the screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals; a received audio signal may further be stored in the memory 302 or transmitted through the communication component 305. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 304 provides an interface between the processor 301 and other interface modules, such as a keyboard, a mouse, or buttons, where the buttons may be virtual buttons or physical buttons. The communication component 305 is used for wired or wireless communication between the electronic device 300 and other devices; wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them, so that the corresponding communication component 305 may include a Wi-Fi module, a Bluetooth module, and an NFC module.
In an exemplary embodiment, the electronic Device 300 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the message Processing methods described above.
In another exemplary embodiment, a computer readable storage medium comprising program instructions, such as the memory 302 comprising program instructions, executable by the processor 301 of the electronic device 300 to perform the message processing method described above is also provided.
The preferred embodiments of the present disclosure are described in detail with reference to the accompanying drawings, however, the present disclosure is not limited to the specific details of the above embodiments, and various simple modifications may be made to the technical solution of the present disclosure within the technical idea of the present disclosure, and these simple modifications all belong to the protection scope of the present disclosure.
It should be noted that the various features described in the above embodiments may be combined in any suitable manner; in order to avoid unnecessary repetition, the possible combinations are not described again in the present disclosure.
In addition, any combination of various embodiments of the present disclosure may be made, and the same should be considered as the disclosure of the present disclosure, as long as it does not depart from the spirit of the present disclosure.

Claims (10)

1. A message processing method is characterized by comprising the following steps:
when the current network card fails to send the packet, the forwarding CPU utilizes the available CPU and/or the available network card to carry out load balancing on the packet which fails to send the packet of the current network card;
if the load balancing fails, judging whether the number of the messages in the cache queue corresponding to the forwarding CPU exceeds a preset threshold value or not;
if the number of the messages in the cache queue does not exceed a preset threshold value, the forwarding CPU adds the messages with the load balancing failure to the cache queue;
and when the forwarding CPU performs polling processing, if the load of the current network card is idle, controlling the current network card to send the message in the cache queue.
2. The method of claim 1, wherein the forwarding CPU is configured with a bond port, and the network card of the bond port stores configuration information of the bond port, the method further comprising:
determining the network card in the bond port that has not reached its performance limit as the available network card, so that the current network card sends the packet-sending failure message to the available network card according to the configuration information.
3. The method of claim 1, wherein the current network card has a connection relationship with a plurality of CPUs, and each CPU has a corresponding sending queue on the current network card, the method further comprising:
determining the CPU corresponding to the sending queue with the most effective descriptors among all the sending queues of the current network card as the available CPU, so that the forwarding CPU sends the packet-sending failure message to the available CPU.
4. The method according to claim 1, wherein when the forwarding CPU performs polling processing, if the load of the current network card is idle, controlling the current network card to send the message in the buffer queue includes:
the forwarding CPU judges whether the number of messages received by the network card queue in the polling period is less than a first upper limit value or not;
if the number of the messages received by the network card queue in the polling period is less than a first upper limit value, the forwarding CPU judges that the load of the current network card is idle;
if the cache queue is not empty, the forwarding CPU controls the current network card to forward the message in the cache queue until the cache queue is empty or the forwarding processing fails to send the packet.
5. The method of claim 1, further comprising:
the forwarding CPU judges whether the number of the messages received by the kernel-mode queue in the polling period is less than a second upper limit value or not;
if the number of the messages received by the kernel state queue in the polling period is less than a second upper limit value, the forwarding CPU judges that the load of the current network card is idle;
and if the cache queue is not empty, the forwarding CPU controls the current network card to perform one-time forwarding processing on the message in the cache queue.
6. The method according to any one of claims 1 to 5, wherein if the number of packets in the buffer queue exceeds a preset threshold, the method further comprises:
the forwarding CPU judges whether the performance per second of the current network card exceeds a performance limit, wherein the performance per second is the sum of the total number of bytes of messages successfully sent to the current network card by the CPUs corresponding to all sending queues of the current network card per second;
if the performance of the current network card per second exceeds the performance limit, the forwarding CPU records the time exceeding the performance limit and carries out packet loss processing on the messages received after the time.
7. The method of claim 6, wherein the manner of obtaining the performance per second of the current network card is:
the forwarding CPU acquires the message once per second, and all the sending queues of the current network card successfully send the total byte number of the message in the current second;
and the forwarding CPU accumulates the total byte number of the messages successfully sent by all the sending queues in the current second to obtain the performance of the current network card per second.
8. A message processing apparatus belonging to a forwarding CPU, the message processing apparatus comprising:
the message load balancing module is used for carrying out load balancing on the message of the current network card packet sending failure by utilizing the available CPU and/or the available network card when the current network card packet sending failure occurs;
the message number judging module is used for judging whether the number of the messages in the cache queue corresponding to the forwarding CPU exceeds a preset threshold value or not when the load balancing fails;
the buffer queue adding module is used for adding the message with the failure load balance to the buffer queue when the number of the messages in the buffer queue does not exceed a preset threshold value;
and the polling control module is used for controlling the current network card to send the message in the cache queue if the load of the current network card is idle during polling processing.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
10. An electronic device, comprising:
the computer-readable storage medium recited in claim 9; and
one or more processors to execute the program in the computer-readable storage medium.
CN201711124549.1A 2017-11-14 2017-11-14 Message processing method and device, storage medium and electronic equipment Active CN108023829B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711124549.1A CN108023829B (en) 2017-11-14 2017-11-14 Message processing method and device, storage medium and electronic equipment

Publications (2)

Publication Number Publication Date
CN108023829A CN108023829A (en) 2018-05-11
CN108023829B true CN108023829B (en) 2021-04-23

Family

ID=62080685

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711124549.1A Active CN108023829B (en) 2017-11-14 2017-11-14 Message processing method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN108023829B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111490947B (en) * 2019-01-25 2024-01-23 上海哔哩哔哩科技有限公司 Data packet sending method, data packet receiving method, system, equipment and medium
CN110061924B (en) * 2019-04-18 2022-05-06 东软集团股份有限公司 Message forwarding method and device and related product
CN110347619A (en) * 2019-07-01 2019-10-18 北京天融信网络安全技术有限公司 Data transmission method and device between a kind of network interface card and cpu
CN110855468B (en) * 2019-09-30 2021-02-23 华为技术有限公司 Message sending method and device
CN111698175B (en) * 2020-06-24 2023-09-19 北京经纬恒润科技股份有限公司 Message receiving and transmitting method and system for gateway
CN115665073B (en) * 2022-12-06 2023-04-07 江苏为是科技有限公司 Message processing method and device
CN117112044B (en) * 2023-10-23 2024-02-06 腾讯科技(深圳)有限公司 Instruction processing method, device, equipment and medium based on network card

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6393483B1 (en) * 1997-06-30 2002-05-21 Adaptec, Inc. Method and apparatus for network interface card load balancing and port aggregation
CN101662506A (en) * 2009-10-14 2010-03-03 中兴通讯股份有限公司 Load balancing method based on CPU kernel sharing and device thereof
CN105721241A (en) * 2016-01-25 2016-06-29 汉柏科技有限公司 Statistical debugging method and system for network interface card message reception and transmission
CN106533978A (en) * 2016-11-24 2017-03-22 东软集团股份有限公司 Network load balancing method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
OpenStack 云环境中基于DPDK 的… (DPDK-based … in an OpenStack cloud environment); Xu Qihou (徐启后); Master's thesis, Wuhan Research Institute of Posts and Telecommunications (《武汉邮电科学研究院硕士学位论文》); 20170331; Chapter 3 *

Also Published As

Publication number Publication date
CN108023829A (en) 2018-05-11

Similar Documents

Publication Publication Date Title
CN108023829B (en) Message processing method and device, storage medium and electronic equipment
US9344490B2 (en) Cross-channel network operation offloading for collective operations
US7263103B2 (en) Receive queue descriptor pool
US20190163364A1 (en) System and method for tcp offload for nvme over tcp-ip
CN109688058B (en) Message processing method and device and network equipment
US20160065659A1 (en) Network operation offloading for collective operations
US11750418B2 (en) Cross network bridging
CN113326228B (en) Message forwarding method, device and equipment based on remote direct data storage
US8484396B2 (en) Method and system for conditional interrupts
US20150026325A1 (en) Notification normalization
CN107800663B (en) Method and device for detecting flow offline file
CN112612734A (en) File transmission method and device, computer equipment and storage medium
US11593136B2 (en) Resource fairness enforcement in shared IO interfaces
CN106603409B (en) Data processing system, method and equipment
US20200099670A1 (en) Secure In-line Received Network Packet Processing
CN109525495B (en) Data processing device and method and FPGA board card
US20170160929A1 (en) In-order execution of commands received via a networking fabric
US20090182798A1 (en) Method and apparatus to improve the effectiveness of system logging
WO2023116340A1 (en) Data message forwarding method and apparatus
US9697149B2 (en) Low latency interrupt with existence of interrupt moderation
WO2014149519A1 (en) Flow director-based low latency networking
CN110870286B (en) Fault tolerance processing method and device and server
CN111936982A (en) Efficient and reliable message tunneling between host system and integrated circuit acceleration system
US10185675B1 (en) Device with multiple interrupt reporting modes
CN115334156A (en) Message processing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant