CN111884945A - Network message processing method and network access equipment

Network message processing method and network access equipment

Info

Publication number
CN111884945A
Authority
CN
China
Prior art keywords
target
cpu
network
message
memory
Prior art date
Legal status
Granted
Application number
CN202010525154.8A
Other languages
Chinese (zh)
Other versions
CN111884945B (en)
Inventor
徐毅
潘鸿雷
叶志钢
凌云卿
何大明
谭芳
钟华
张玲
代敏
Current Assignee
China Telecom Corp Ltd Chongqing Branch
Wuhan Greenet Information Service Co Ltd
Original Assignee
China Telecom Corp Ltd Chongqing Branch
Wuhan Greenet Information Service Co Ltd
Priority date
Filing date
Publication date
Application filed by China Telecom Corp Ltd Chongqing Branch and Wuhan Greenet Information Service Co Ltd
Priority to CN202010525154.8A
Publication of CN111884945A
Application granted
Publication of CN111884945B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/12 Avoiding congestion; Recovering from congestion
    • H04L 47/125 Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H04L 47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L 69/00 Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
    • H04L 69/22 Parsing or analysis of headers

Abstract

The invention discloses a network message processing method and a network access device. The processing method comprises: determining the traffic to be accessed by a target network card and the target NUMA domain to which the target network card belongs; acquiring the load on each CPU of the target NUMA domain and the size of each memory; planning a target CPU and a target memory matched to the target network card according to the load on each CPU, the size of each memory, and the traffic to be accessed, and binding the target network card, the target CPU, and the target memory; and, after the target network card receives a network message, transmitting the message to the bound target CPU and target memory for data processing. The method balances load well, needs no cross-NUMA-domain access to CPUs or memory after a message is received, responds well to authentication and data forwarding, reduces delay, and improves efficiency.

Description

Network message processing method and network access equipment
Technical Field
The present invention belongs to the technical field of network message processing, and more particularly relates to a network message processing method and a network access device.
Background
The traditional fixed-network access device (Broadband Remote Access Server, BRAS) uses dedicated chips for user authentication and message forwarding. Such chips, however, have high production costs and long production cycles, and their expandability and flexibility are far worse than those of software forwarding devices.
At present, many software routing devices (e.g., RouterOS) are implemented entirely in software. They are inexpensive, highly flexible, and able to run on a variety of hardware, but their forwarding performance is generally far below that of traditional chip-based forwarding devices and cannot meet the performance requirements of a network access device; moreover, software routers handle user authentication very incompletely.
Some authentication-related tools can be downloaded from the internet, but each implements only a single small function, is not integrated with forwarding, and falls far short of the BRAS performance standard.
In view of this, overcoming these deficiencies of the prior art is an urgent problem in the field.
Disclosure of Invention
The present invention provides a network message processing method and a network access device. The processing method balances load well, needs no cross-NUMA-domain access to CPUs or memory after a network message is received, responds well to authentication and data forwarding, reduces delay, and improves efficiency.
In order to achieve the above object, according to one aspect of the present invention, a network message processing method is provided. The processing method is applied to a network access device, the network access device comprises a plurality of NUMA domains, each NUMA domain comprises CPUs, network cards, and memories, and the processing method comprises:
determining the traffic to be accessed by a target network card and the target NUMA domain to which the target network card belongs;
acquiring the load on each CPU of the target NUMA domain and the size of each memory;
planning a target CPU and a target memory matched to the target network card according to the load on each CPU, the size of each memory, and the traffic to be accessed by the target network card, and binding the target network card, the target CPU, and the target memory; and
after the target network card receives a network message, transmitting the network message to the target CPU and the target memory bound to the target network card for data processing.
Preferably, the processing method further comprises:
acquiring the CPU load requirement of each thread; and
assigning a designated CPU to any thread whose CPU load requirement exceeds a load threshold, so as to bind that thread to the designated CPU.
Preferably, the processing method further comprises:
after the target network card receives a network message, parsing the network message to obtain its request type; and
distributing the network message to the corresponding target CPU according to its request type.
Preferably, the request types include message forwarding, authentication, and routing protocol;
when the request type of the network message is message forwarding, the network message is transmitted to a target CPU serving the forwarding plane; and
when the request type of the network message is authentication or a routing protocol, the network message is transmitted to a target CPU serving the control plane, so that forwarding-plane and control-plane data processing are separated.
Preferably, the processing method further comprises:
after the target network card receives a network message, converting the network message into a data stream of a specified format through a driver thread, wherein the data stream of the specified format comprises a binary data stream; and
storing the data stream of the specified format in a specified target memory so that an upper-layer processing thread can acquire the data from that memory.
Preferably, storing the data stream of the specified format in a specified target memory comprises:
hashing the network message according to its source IP address, destination IP address, source MAC, or destination MAC to obtain a plurality of data queues; and
storing the data queues in the specified target memory so that the load on each upper-layer handler is approximately equal.
Preferably, the network message is a PPPoE message, and the processing method further comprises:
after the target network card receives a network message, dividing the network message into a message header and a message body, wherein the message body is the user's actual traffic and the message header comprises the Ethernet header and multiple VLAN layers; and
when forwarding the message, modifying only the message header to add or remove one VLAN layer, thereby avoiding shifting the message body.
Preferably, the target network card comprises a first target network card and a second target network card that share a target CPU;
the processing method comprises:
after the first target network card receives a first network message, judging the actual traffic of the first network message;
if the actual traffic of the first network message is larger than a first traffic threshold, transmitting the first network message to the target CPUs bound to the target network card for data processing and setting the shared target CPU to the running state; and
if the actual traffic of the first network message is not larger than a second traffic threshold, preferentially selecting target CPUs other than the shared target CPU to process the message and leaving the shared target CPU idle.
Preferably, the processing method comprises:
after the second target network card receives a second network message, judging the actual traffic of the second network message;
if the actual traffic of the second network message is larger than the first traffic threshold, checking the state of the shared target CPU;
if the shared target CPU is idle, transmitting the second network message to the target CPUs bound to the target network card for data processing; and
if the shared target CPU is running, acquiring the states of the CPUs in an adjacent NUMA domain, temporarily binding an idle CPU of the adjacent NUMA domain to the second target network card, and transmitting the second network message to the target CPUs other than the shared target CPU, with the temporarily bound CPU also processing the message.
To achieve the above object, according to another aspect of the present invention, a network access device is provided, comprising at least one processor and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor, the instructions being programmed to perform the processing method of the present invention.
Generally, compared with the prior art, the technical scheme of the present invention has the following beneficial effects. The invention provides a network message processing method and a network access device, the processing method being applied to a network access device that comprises a plurality of NUMA domains, each NUMA domain including CPUs, network cards, and memories. The method comprises: determining the traffic to be accessed by a target network card and the target NUMA domain to which it belongs; acquiring the load on each CPU of the target NUMA domain and the size of each memory; planning a target CPU and a target memory matched to the target network card according to the CPU loads, the memory sizes, and the traffic to be accessed, and binding the target network card, the target CPU, and the target memory; and, after the target network card receives a network message, transmitting the message to the bound target CPU and target memory for data processing.
Because software pre-binds the network card, CPUs, and memory belonging to the same NUMA domain, the CPU and memory resources can be matched to the traffic accessed by the network card, giving good load balancing; after a network message is received there is no need to access CPUs or memory across NUMA domains, so the method responds well to authentication and data forwarding, reduces delay, and improves efficiency. In addition, because the pre-binding is done in software, the method has ample room for expansion, great flexibility, and wide applicability.
Drawings
Fig. 1 is a schematic flowchart of a network message processing method according to an embodiment of the present invention;
Fig. 2 is a schematic structural diagram of a NUMA domain of a network access device according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a CPU resource allocation structure according to an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a network access device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the description of the present invention, the terms "inner", "outer", "longitudinal", "lateral", "upper", "lower", "top", "bottom", and the like indicate orientations or positional relationships based on those shown in the drawings, and are for convenience only to describe the present invention without requiring the present invention to be necessarily constructed and operated in a specific orientation, and thus should not be construed as limiting the present invention.
In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
At present, traditional BRAS chips have high cost, long production cycles, and poor expandability and flexibility, while software routing devices perform poorly and handle user authentication very incompletely. To solve these problems, the present invention provides a network message processing method, described in detail in the following embodiments.
Example 1:
referring to fig. 1 and fig. 2, this embodiment provides a method for processing a network packet, where the method is applied to a network access device, where the network access device includes a plurality of Non Uniform Memory Access (NUMA) domains, each NUMA domain includes a CPU, a network card, and a Memory, and the method includes the following steps:
step 101: and determining the flow size to be accessed by the target network card and the target NUMA domain to which the target network card belongs.
At present, the CPU and memory serving a network card are not planned in advance; the operating system selects them at random, so performance is inconsistent. In this embodiment, the traffic to be accessed by the target network card is determined first, where the traffic to be accessed is the access traffic estimated in advance.
When a network card accesses CPUs and memory across NUMA domains, efficiency drops; therefore, when planning CPUs and memory, the network card and the CPUs and memory to be bound to it must reside in the same NUMA domain.
Step 102: acquire the load on each CPU of the target NUMA domain and the size of each memory.
In this embodiment, each NUMA domain includes several network cards, several memories, and several CPUs. The load on each CPU of the target NUMA domain and the size of each memory are obtained for resource allocation, ensuring that the load on each component is balanced as far as possible.
Step 103: plan a target CPU and a target memory matched to the target network card according to the load on each CPU, the size of each memory, and the traffic to be accessed by the target network card, and bind the target network card, the target CPU, and the target memory.
In this embodiment, one network card may be bound to several CPUs. To decide how many CPUs a network card needs, the software's processing capacity is tested in advance: determine how many CPUs are required for indices such as latency and CPU occupancy to meet the standard when the network card is fully loaded, and then allocate that number of CPUs to the card.
That is, the maximum traffic the network card may carry is estimated in advance, and sufficient CPU resources and memory are allocated to it according to the CPUs' performance and the memories' sizes. For example, if 100G of traffic (say, the total traffic of a residential area) will access network card 1, then network card 1 must be allocated CPUs in the same NUMA domain (for example, the CPUs numbered 5, 6, 7, and 8).
Program design then requires attention: the amount of memory used by the program serving a network card must not exceed the memory planned for that card; otherwise, the CPUs corresponding to the network card will use memory on other NUMA domains and efficiency will drop.
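For illustration only, the following C sketch shows one way such NUMA-local binding could be realized on Linux, assuming libnuma is available; the helper bind_local_buffer, the CPU number, and the buffer size are assumptions of this sketch, not the patent's actual implementation.

    /* Minimal sketch, assuming Linux with libnuma (build: gcc -lnuma). */
    #define _GNU_SOURCE
    #include <numa.h>
    #include <sched.h>
    #include <stdio.h>

    /* Hypothetical helper: pin the calling thread to `cpu` and allocate
     * `bytes` of packet-buffer memory on that CPU's own NUMA node, so the
     * worker never reaches across NUMA domains at run time. */
    static void *bind_local_buffer(int cpu, size_t bytes)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return NULL;
        }
        int node = numa_node_of_cpu(cpu);       /* NUMA domain of this CPU */
        return numa_alloc_onnode(bytes, node);  /* memory in the same domain */
    }

    int main(void)
    {
        if (numa_available() < 0) {
            fprintf(stderr, "NUMA is not supported on this host\n");
            return 1;
        }
        /* Example: CPU 5 is assumed to sit in the target NUMA domain
         * planned for network card 1. */
        void *buf = bind_local_buffer(5, 64UL * 1024 * 1024);
        if (buf)
            numa_free(buf, 64UL * 1024 * 1024);
        return 0;
    }

A real binding layer would repeat this for every worker CPU planned for the card and size the buffers from the card's estimated traffic.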
Step 104: after the target network card receives a network message, transmit the network message to the target CPU and the target memory bound to the target network card for data processing.
In this embodiment, after the target network card receives a network message, the network message is transmitted to the target CPU and the target memory bound to the target network card for data processing.
Unlike the prior art, the processing method of this embodiment uses software to pre-bind the network card, CPUs, and memory belonging to the same NUMA domain, so CPU and memory resources can be matched to the traffic accessed by the network card and the load is well balanced. After a network message is received, no cross-NUMA-domain access to CPUs or memory is needed, so the method responds well to authentication and data forwarding, reduces delay, and improves efficiency. Moreover, because the pre-binding is done in software, the method has ample room for expansion, great flexibility, and wide applicability.
The threads that process network messages perform different functions, and the CPU load requirement of each thread is obtained. Work that requires high-performance processing, such as forwarding user data messages and handling user authentication requests, is assigned its own independent thread, and each such thread is allocated a dedicated CPU core, which greatly reduces CPU thread switching.
Specifically, a designated CPU is allocated to any thread whose CPU load requirement exceeds a load threshold, binding that thread to the designated CPU; the load threshold is determined by the actual situation and is not specifically limited here.
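As a minimal sketch of this designated-CPU binding, assuming POSIX threads on Linux (the worker forward_loop and the CPU number 6 are hypothetical examples):

    /* Minimal sketch: pin a high-load thread to its own core
     * (build: gcc -pthread). */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>

    /* Hypothetical stand-in for a thread whose load exceeds the threshold,
     * e.g. a user-data forwarding loop. */
    static void *forward_loop(void *arg)
    {
        (void)arg;
        /* ... poll the bound queues and forward messages ... */
        return NULL;
    }

    /* Restrict `tid` to a single designated CPU so the scheduler never
     * migrates it, eliminating CPU thread switching for this worker. */
    static int pin_thread(pthread_t tid, int cpu)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        return pthread_setaffinity_np(tid, sizeof(set), &set);
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, forward_loop, NULL);
        if (pin_thread(tid, 6) != 0)   /* CPU 6: an assumed designated core */
            fprintf(stderr, "failed to pin thread\n");
        pthread_join(tid, NULL);
        return 0;
    }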
In this embodiment, the processing performance of each CPU differs, so to ensure data-processing efficiency and load balancing, after the target network card receives a network message the message is parsed to obtain its request type, and the message is then distributed to the corresponding target CPU according to that type.
Specifically, the request types include message forwarding, authentication, and routing protocol. When the request type of a network message is message forwarding, the message is transmitted to a target CPU serving the forwarding plane; when the request type is authentication or a routing protocol, the message is transmitted to a target CPU serving the control plane, separating forwarding-plane from control-plane data processing. Decoupling the functions in this way makes them easy to distribute across several CPUs and avoids CPU task switching.
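This plane separation can be pictured with a small dispatcher; in the C sketch below, the request-type enum, the CPU groups, and enqueue_for_group() are all hypothetical names standing in for the device's real queueing layer.

    #include <stdio.h>

    enum req_type  { REQ_FORWARD, REQ_AUTH, REQ_ROUTING };
    enum cpu_group { FORWARD_PLANE_CPUS, CONTROL_PLANE_CPUS };

    /* Hypothetical stand-in for handing a message to the queue drained
     * by the given group of target CPUs. */
    static void enqueue_for_group(const char *msg, enum cpu_group g)
    {
        printf("%s -> %s\n", msg,
               g == FORWARD_PLANE_CPUS ? "forwarding-plane CPUs"
                                       : "control-plane CPUs");
    }

    /* Forwarding requests go to the data-plane CPUs; authentication and
     * routing-protocol requests go to the control-plane CPUs, so the two
     * planes' data processing never shares a core. */
    static void dispatch(const char *msg, enum req_type type)
    {
        if (type == REQ_FORWARD)
            enqueue_for_group(msg, FORWARD_PLANE_CPUS);
        else
            enqueue_for_group(msg, CONTROL_PLANE_CPUS);
    }

    int main(void)
    {
        dispatch("user data message", REQ_FORWARD);
        dispatch("PPPoE authentication request", REQ_AUTH);
        dispatch("routing protocol update", REQ_ROUTING);
        return 0;
    }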
In a preferred embodiment, CPU load balancing during message forwarding is achieved through message queues. Specifically, after the target network card receives a network message, a driver thread converts the message into a data stream of a specified format (for example, a binary data stream), and the data stream is stored in a specified target memory so that an upper-layer processing thread can acquire the data from that memory.
Storing the data stream of the specified format in the specified target memory comprises: hashing the network message according to its source IP address, destination IP address, source MAC, or destination MAC to obtain a plurality of data queues, and storing those data queues in the specified target memory so that the load on each upper-layer handler is approximately equal.
Specifically, any hash algorithm may be used (for example, MD5, or division by a specific value). For instance, the source IP address, destination IP address, source MAC, destination MAC, and VLAN value of the network message can be extracted and placed in a contiguous section of memory; the contents of that section are treated as one large integer, divided by a specific value, and the remainder is taken as the hash value. Messages with the same hash value are placed in the same queue, so the messages are dispersed evenly into several queues. The specific value may be 0x123 or any other value.
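A runnable C sketch of this queue selection follows; it substitutes a simple byte-fold hash for the big-integer division described above (the text allows any stable hash), and the field layout and NUM_QUEUES value are illustrative assumptions.

    #include <stdint.h>
    #include <stdio.h>

    #define NUM_QUEUES 8   /* assumed number of data queues */

    struct flow_key {
        uint32_t src_ip, dst_ip;
        uint8_t  src_mac[6], dst_mac[6];
        uint16_t vlan;
    };

    /* Fold the addressing fields into one integer and take the remainder.
     * Messages with the same key always land in the same queue, and
     * distinct flows spread roughly evenly, so each upper-layer handler
     * carries a similar load. */
    static unsigned pick_queue(const struct flow_key *k)
    {
        const uint8_t *fields[] = {
            (const uint8_t *)&k->src_ip, (const uint8_t *)&k->dst_ip,
            k->src_mac, k->dst_mac, (const uint8_t *)&k->vlan
        };
        const size_t lens[] = { 4, 4, 6, 6, 2 };
        uint64_t h = 0;
        for (size_t f = 0; f < 5; f++)
            for (size_t i = 0; i < lens[f]; i++)
                h = h * 31 + fields[f][i];   /* any stable hash works */
        return (unsigned)(h % NUM_QUEUES);
    }

    int main(void)
    {
        struct flow_key k = {
            .src_ip = 0x0a000001, .dst_ip = 0x08080808,
            .src_mac = {1, 2, 3, 4, 5, 6}, .dst_mac = {7, 8, 9, 10, 11, 12},
            .vlan = 100
        };
        printf("message dispatched to queue %u\n", pick_queue(&k));
        return 0;
    }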
Each network message passes through two stages, bottom-layer driver processing and upper-layer program processing; in general, the driver stage does less work and runs faster. For example, the driver threads use two target CPUs (a first and a second target CPU) and, after processing, place messages into eight data queues (data queue 1 through data queue 8); the upper-layer program uses four CPU cores (a third through a sixth target CPU), with the third target CPU serving data queues 1 and 2, the fourth serving data queues 3 and 4, and so on; when the upper-layer program finishes and hands a message back to the bottom layer to be sent out, a seventh and an eighth target CPU handle that processing. Each upper-layer program selects suitable data queues according to its own load capacity; that is, there is no fixed correspondence between upper-layer programs and data queues, provided each program can carry its load.
In other words, the messages received on each network card are arranged to be processed by several dedicated CPUs. The driver processes a message first, on one or more CPUs arranged according to the physical properties of the different network cards. After the driver finishes with a message, it places it in a designated memory; an upper-layer handler then reads the message from that memory for subsequent processing, and the upper-layer handlers are likewise divided across several CPUs working in parallel. To balance the CPU load of the upper-layer handlers, when the driver places a message into its designated queue it can hash on the message's source IP address, destination IP address, source MAC, destination MAC, and so on, distributing messages evenly across the queues so that each handler's load is approximately the same.
In a practical application scenario, the messages an edge access device receives from the user side are generally all PPPoE messages, and when they are forwarded to the core network the PPPoE header must be stripped, which amounts to shifting the PPPoE data portion forward by one segment. Likewise, when a BRAS forwards a user message it often adds or removes one VLAN layer, and the usual approach then shifts all the following data forward or backward by the length of one VLAN layer, which is very CPU-intensive.
To solve this problem, in a preferred embodiment, after the target network card receives a network message it divides the message into a message header and a message body, where the message body is the user's actual traffic and the message header comprises the Ethernet header and multiple VLAN layers. When forwarding, only the message header is modified to add or remove one VLAN layer; the message body is not touched, so the body is never shifted and a large amount of data movement (copying) is avoided, saving performance.
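The header-only edit can be illustrated in C. The sketch below pushes one VLAN tag by moving only the 12 bytes of MAC addresses into reserved headroom, leaving the EtherType and the entire message body in place; the headroom convention and buffer layout are assumptions of this sketch, not the patent's stated buffer format.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    #define VLAN_TAG_LEN 4

    /* `frame` points at the Ethernet header; returns the new frame start.
     * Only the destination and source MACs (12 bytes) move; the message
     * body stays where it is, so no large copy is performed. Assumes at
     * least VLAN_TAG_LEN bytes of headroom before `frame`. */
    static uint8_t *push_vlan(uint8_t *frame, uint16_t tci)
    {
        uint8_t *p = frame - VLAN_TAG_LEN;
        memmove(p, frame, 12);                  /* dst MAC + src MAC only */
        p[12] = 0x81; p[13] = 0x00;             /* TPID 0x8100 (802.1Q)   */
        p[14] = (uint8_t)(tci >> 8);            /* priority + VLAN ID     */
        p[15] = (uint8_t)(tci & 0xff);
        return p;                               /* old EtherType now at 16 */
    }

    int main(void)
    {
        uint8_t buf[128] = {0};
        uint8_t *frame = buf + VLAN_TAG_LEN;    /* reserved headroom       */
        frame[12] = 0x08; frame[13] = 0x00;     /* EtherType: IPv4         */
        frame = push_vlan(frame, 100);
        printf("TPID now %02x%02x, EtherType now %02x%02x\n",
               frame[12], frame[13], frame[16], frame[17]);
        return 0;
    }

Removing a VLAN layer is the mirror operation: move the 12 MAC bytes forward by VLAN_TAG_LEN and advance the frame pointer, again without touching the body.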
Example 2:
In a practical application scenario, a NUMA domain contains at least two network cards, and CPU resources are planned in advance according to the intended access traffic of each card; however, the actual traffic sometimes exceeds the intended traffic.
Following embodiment 1, after the target network card receives a network message, the actual traffic of the message is judged. If the actual traffic exceeds the intended access traffic of the target network card, the message is transmitted to the target CPUs bound to the card for data processing, the idle CPUs in the same NUMA domain as the card are identified, and a suitable idle CPU is temporarily bound to the target network card according to the difference between the actual and intended traffic, so that processing of the network message can be completed.
Referring to fig. 3, the first target network card is bound to CPU1, CPU2, CPU3, and CPU4, but during a certain period its actual access traffic is large and exceeds the combined load capacity of CPU1 through CPU4. If CPU5 and CPU6 are idle, their processing capacities are obtained, a suitable CPU is selected according to the degree of overload, and if it is determined that adding CPU5 meets the load requirement, CPU5 is temporarily bound to the first target network card.
In a specific application scenario, for a NUMA domain containing several network cards, the actual and intended access traffic of each card are compared and analyzed over a preset period, and the target CPUs and target memory bound to each card are repartitioned dynamically. This handles scenarios where actual and intended traffic differ widely, preventing some target CPUs from sitting permanently idle while others run permanently at full load, and balancing CPU load within the same NUMA domain.
In this embodiment, when the actual access traffic exceeds the intended access traffic, a suitable CPU is selected from the same NUMA domain and temporarily bound to the network card, relieving the CPU shortage caused by the traffic surge; once the data processing completes, the CPU is unbound.
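The borrow-and-release cycle might look like the following C sketch; the CPU table, the traffic figures, and the helper names are invented for illustration.

    #include <stdbool.h>
    #include <stdio.h>

    struct cpu_slot { int id; int numa_node; bool busy; };

    /* Illustrative CPU table: CPUs 1-4 are pre-bound to the card,
     * CPUs 5-6 are idle CPUs in the same NUMA domain. */
    static struct cpu_slot cpus[] = {
        {1, 0, true}, {2, 0, true}, {3, 0, true}, {4, 0, true},
        {5, 0, false}, {6, 0, false},
    };
    #define NCPUS (sizeof(cpus) / sizeof(cpus[0]))

    /* Temporarily claim an idle CPU in `node`; returns -1 if none is idle. */
    static int borrow_cpu(int node)
    {
        for (size_t i = 0; i < NCPUS; i++)
            if (cpus[i].numa_node == node && !cpus[i].busy) {
                cpus[i].busy = true;
                return cpus[i].id;
            }
        return -1;
    }

    /* Unbind once the surge has been processed. */
    static void release_cpu(int id)
    {
        for (size_t i = 0; i < NCPUS; i++)
            if (cpus[i].id == id)
                cpus[i].busy = false;
    }

    int main(void)
    {
        int actual_gbps = 120, planned_gbps = 100;   /* assumed figures */
        if (actual_gbps > planned_gbps) {
            int extra = borrow_cpu(0);
            if (extra >= 0) {
                printf("temporarily bound CPU %d\n", extra);
                /* ... process the surge ... */
                release_cpu(extra);
            }
        }
        return 0;
    }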
In other application scenarios, since the planned intended access traffic is generally larger than the actual traffic, several network cards may share one target CPU to save CPU resources; the shared target CPU can be regarded as a standby CPU. The following embodiment gives an alternative along these lines.
Specifically, the target network card comprises a first target network card and a second target network card, which share a target CPU.
Referring to fig. 3 in conjunction with embodiment 1, the processing method comprises: after the first target network card receives a first network message, judging the actual traffic of the first network message; if the actual traffic is larger than a first traffic threshold, transmitting the message to the target CPUs bound to the card for data processing and setting the shared target CPU to the running state; if the actual traffic is not larger than a second traffic threshold, preferentially selecting target CPUs other than the shared target CPU to process the message and leaving the shared target CPU idle. The first and second traffic thresholds are determined by the actual situation and are not specifically limited here.
After the second target network card receives a second network message, the actual traffic of the second network message is judged. If it is larger than the first traffic threshold, the state of the shared target CPU is checked: if the shared target CPU is idle, the second network message is transmitted to the target CPUs bound to the card for data processing; if the shared target CPU is running, the states of the CPUs in an adjacent NUMA domain are acquired, an idle CPU of the adjacent domain is temporarily bound to the second target network card, and the second network message is transmitted to the target CPUs other than the shared target CPU, with the temporarily bound CPU also processing the message.
Referring to fig. 3, the first target network card is bound to CPU1, CPU2, CPU3, and CPU4, the second target network card is bound to CPU4, CPU5, and CPU6, and CPU4 is the shared target CPU. During data processing, each target network card uses CPU4 selectively according to its state in the manner described above, and when one target network card occupies CPU4 while the other runs short of CPU resources, a CPU is selected from an adjacent NUMA domain and temporarily bound to that network card.
In this embodiment, CPU resources are generally reserved during resource planning (for example, the standby CPU4 in fig. 3) to relieve the pressure of traffic surges; when the standby CPU is occupied and the CPUs of the local NUMA domain are all running, a suitable CPU can be selected from an adjacent NUMA domain for temporary binding.
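The standby-CPU decision reduces to a small state check, sketched below; the state flag and the returned labels are placeholders for the device's real binding calls, not part of the patent.

    #include <stdbool.h>
    #include <stdio.h>

    static bool shared_cpu_busy = false;   /* state of the standby CPU (CPU4) */

    /* Where should a card's traffic surge run? Claim the shared standby
     * CPU if it is idle; otherwise borrow an idle CPU from an adjacent
     * NUMA domain, keeping the shared CPU out of this card's path. */
    static const char *pick_surge_cpu(void)
    {
        if (!shared_cpu_busy) {
            shared_cpu_busy = true;        /* set to running state */
            return "shared standby CPU";
        }
        return "idle CPU temporarily bound from an adjacent NUMA domain";
    }

    int main(void)
    {
        printf("card 1 surge -> %s\n", pick_surge_cpu());
        printf("card 2 surge -> %s\n", pick_surge_cpu());
        return 0;
    }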
Example 3:
referring to fig. 4, fig. 4 is a schematic structural diagram of a network access device according to an embodiment of the present invention. The network access device of the present embodiment includes one or more processors 41 and memory 42. In fig. 4, one processor 41 is taken as an example.
The processor 41 and the memory 42 may be connected by a bus or other means, such as the bus connection in fig. 4.
The memory 42, as a non-volatile computer-readable storage medium, may be used to store non-volatile software programs, non-volatile computer-executable programs and modules, such as the program instructions corresponding to the methods of the above embodiments. By running the non-volatile software programs, instructions, and modules stored in the memory 42, the processor 41 executes various functional applications and data processing, thereby implementing the methods of the foregoing embodiments.
The memory 42 may include, among other things, high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, memory 42 may optionally include memory located remotely from processor 41, which may be connected to processor 41 via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
It should be noted that the information interaction and execution processes between the modules and units of the above device and system are based on the same concept as the method embodiments of the present invention; for specifics, refer to the description of the method embodiments, which is not repeated here.
Those of ordinary skill in the art will appreciate that all or part of the steps of the various methods of the embodiments may be implemented by associated hardware as instructed by a program, which may be stored on a computer-readable storage medium, which may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and the like.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. A network message processing method, characterized in that the processing method is applied to a network access device, the network access device comprises a plurality of NUMA domains, each NUMA domain comprises CPUs, network cards, and memories, and the processing method comprises:
determining the traffic to be accessed by a target network card and the target NUMA domain to which the target network card belongs;
acquiring the load on each CPU of the target NUMA domain and the size of each memory;
planning a target CPU and a target memory matched to the target network card according to the load on each CPU, the size of each memory, and the traffic to be accessed by the target network card, and binding the target network card, the target CPU, and the target memory; and
after the target network card receives a network message, transmitting the network message to the target CPU and the target memory bound to the target network card for data processing.
2. The processing method according to claim 1, further comprising:
acquiring the CPU load requirement of each thread; and
assigning a designated CPU to any thread whose CPU load requirement exceeds a load threshold, so as to bind that thread to the designated CPU.
3. The processing method according to claim 1, further comprising:
after the target network card receives a network message, parsing the network message to obtain its request type; and
distributing the network message to the corresponding target CPU according to its request type.
4. The processing method according to claim 3, wherein the request types include message forwarding, authentication, and routing protocol;
when the request type of the network message is message forwarding, the network message is transmitted to a target CPU serving the forwarding plane; and
when the request type of the network message is authentication or a routing protocol, the network message is transmitted to a target CPU serving the control plane, so that forwarding-plane and control-plane data processing are separated.
5. The processing method according to claim 1, further comprising:
after the target network card receives a network message, converting the network message into a data stream of a specified format through a driver thread, wherein the data stream of the specified format comprises a binary data stream; and
storing the data stream of the specified format in a specified target memory so that an upper-layer processing thread can acquire the data from that memory.
6. The processing method according to claim 5, wherein storing the data stream of the specified format in a specified target memory comprises:
hashing the network message according to its source IP address, destination IP address, source MAC, or destination MAC to obtain a plurality of data queues; and
storing the data queues in the specified target memory so that the load on each upper-layer handler is approximately equal.
7. The processing method according to claim 1, wherein the network message is a PPPoE message, and the processing method further comprises:
after the target network card receives a network message, dividing the network message into a message header and a message body, wherein the message body is the user's actual traffic and the message header comprises the Ethernet header and multiple VLAN layers; and
when forwarding the message, modifying only the message header to add or remove one VLAN layer, thereby avoiding shifting the message body.
8. The processing method according to claim 1, wherein the target network card comprises a first target network card and a second target network card that share a target CPU;
the processing method comprises:
after the first target network card receives a first network message, judging the actual traffic of the first network message;
if the actual traffic of the first network message is larger than a first traffic threshold, transmitting the first network message to the target CPUs bound to the target network card for data processing and setting the shared target CPU to the running state; and
if the actual traffic of the first network message is not larger than a second traffic threshold, preferentially selecting target CPUs other than the shared target CPU to process the message and leaving the shared target CPU idle.
9. The processing method according to claim 8, comprising:
after the second target network card receives a second network message, judging the actual traffic of the second network message;
if the actual traffic of the second network message is larger than the first traffic threshold, checking the state of the shared target CPU;
if the shared target CPU is idle, transmitting the second network message to the target CPUs bound to the target network card for data processing; and
if the shared target CPU is running, acquiring the states of the CPUs in an adjacent NUMA domain, temporarily binding an idle CPU of the adjacent NUMA domain to the second target network card, and transmitting the second network message to the target CPUs other than the shared target CPU, with the temporarily bound CPU also processing the message.
10. A network access device, characterized in that the network access device comprises at least one processor and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor, the instructions being programmed to perform the processing method of any one of claims 1 to 9.
CN202010525154.8A 2020-06-10 2020-06-10 Network message processing method and network access equipment Active CN111884945B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010525154.8A CN111884945B (en) 2020-06-10 2020-06-10 Network message processing method and network access equipment

Publications (2)

Publication Number Publication Date
CN111884945A 2020-11-03
CN111884945B CN111884945B (en) 2022-09-02

Family

ID=73156509

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010525154.8A Active CN111884945B (en) 2020-06-10 2020-06-10 Network message processing method and network access equipment

Country Status (1)

Country Link
CN (1) CN111884945B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102571580A (en) * 2011-12-31 2012-07-11 曙光信息产业股份有限公司 Data receiving method and computer
CN102929722A (en) * 2012-10-18 2013-02-13 曙光信息产业(北京)有限公司 Packet reception based on large-page 10-gigabit network card and system thereof
WO2016026131A1 (en) * 2014-08-22 2016-02-25 上海交通大学 Virtual processor scheduling method based on numa high-performance network cache resource affinity
CN104615480A (en) * 2015-02-04 2015-05-13 上海交通大学 Virtual processor scheduling method based on NUMA high-performance network processor loads
US20170168715A1 (en) * 2015-12-11 2017-06-15 Vmware, Inc. Workload aware numa scheduling
CN107346267A (en) * 2017-07-13 2017-11-14 郑州云海信息技术有限公司 A kind of cpu performance optimization method and device based on NUMA architecture
CN108259328A (en) * 2017-08-30 2018-07-06 新华三技术有限公司 Message forwarding method and device
CN107566302A (en) * 2017-10-10 2018-01-09 新华三技术有限公司 Message forwarding method and device
CN108363621A (en) * 2018-01-18 2018-08-03 东软集团股份有限公司 Message forwarding method, device, storage medium under numa frameworks and electronic equipment
CN108897622A (en) * 2018-06-29 2018-11-27 郑州云海信息技术有限公司 A kind of dispatching method and relevant apparatus of task run
CN110798412A (en) * 2019-10-18 2020-02-14 北京浪潮数据技术有限公司 Multicast service processing method, device, cloud platform, equipment and readable storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
李慧娟, 栾钟治, 王辉, 杨海龙, 钱德沛: "Memory allocation strategy for balancing memory-access latency among multiple nodes in a NUMA architecture", Chinese Journal of Computers (计算机学报) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023066180A1 (en) * 2021-10-19 2023-04-27 华为技术有限公司 Data processing method and related apparatus
CN113992589A (en) * 2021-10-21 2022-01-28 绿盟科技集团股份有限公司 Message distribution method and device and electronic equipment
CN113992589B (en) * 2021-10-21 2023-05-26 绿盟科技集团股份有限公司 Message distribution method and device and electronic equipment
CN113886131A (en) * 2021-10-28 2022-01-04 建信金融科技有限责任公司 Data checking method, device, equipment and storage medium
CN113886131B (en) * 2021-10-28 2023-05-26 建信金融科技有限责任公司 Data checking method, device, equipment and storage medium
CN115022336A (en) * 2022-05-31 2022-09-06 苏州浪潮智能科技有限公司 Server resource load balancing method, system, terminal and storage medium
CN115996203A (en) * 2023-03-22 2023-04-21 北京华耀科技有限公司 Network traffic domain division method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN111884945B (en) 2022-09-02

Similar Documents

Publication Publication Date Title
CN111884945B (en) Network message processing method and network access equipment
US9069722B2 (en) NUMA-aware scaling for network devices
US8121120B2 (en) Packet relay apparatus
US7660306B1 (en) Virtualizing the operation of intelligent network interface circuitry
EP2386962B1 (en) Programmable queue structures for multiprocessors
US9264369B2 (en) Technique for managing traffic at a router
US10104006B2 (en) Bus interface apparatus, router, and bus system including them
JP5136564B2 (en) Packet processing apparatus and packet processing program
JP2010193366A (en) Apparatus and method for packet processing by multiprocessor
CN104734955A (en) Network function virtualization implementation method, wide-band network gateway and control device
US7751401B2 (en) Method and apparatus to provide virtual toe interface with fail-over
CN112965824A (en) Message forwarding method and device, storage medium and electronic equipment
US9906443B1 (en) Forwarding table updates during live packet stream processing
US9019832B2 (en) Network switching system and method for processing packet switching in network switching system
US9015438B2 (en) System and method for achieving enhanced performance with multiple networking central processing unit (CPU) cores
CN112671941A (en) Message processing method, device, equipment and medium
WO2009093299A1 (en) Packet processing device and packet processing program
US10116588B2 (en) Large receive offload allocation method and network device
CN112511438A (en) Method and device for forwarding message by using flow table and computer equipment
CN116132369A (en) Flow distribution method of multiple network ports in cloud gateway server and related equipment
CN115766729A (en) Data processing method for four-layer load balancing and related device
CN110247863A (en) Data package processing method, device, SDN switch and storage medium
CN106790632B (en) Streaming data concurrent transmission method and device
CN113676544A (en) Cloud storage network and method for realizing service isolation in entity server
CN111262786B (en) Gateway control method, gateway device, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant