CN111314249B - Method and server for avoiding data packet loss of 5G data forwarding plane - Google Patents


Info

Publication number
CN111314249B
CN111314249B (application CN202010380578.XA)
Authority
CN
China
Prior art keywords
queues
CPU cores
receiving
program
CPU
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010380578.XA
Other languages
Chinese (zh)
Other versions
CN111314249A (en)
Inventor
向卫东
孟庆晓
吴闽华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Genew Technologies Co Ltd
Original Assignee
Shenzhen Genew Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Genew Technologies Co Ltd filed Critical Shenzhen Genew Technologies Co Ltd
Priority to CN202010380578.XA priority Critical patent/CN111314249B/en
Publication of CN111314249A publication Critical patent/CN111314249A/en
Application granted
Publication of CN111314249B publication Critical patent/CN111314249B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/25 Routing or path finding in a switch fabric
    • H04L 49/252 Store and forward routing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/30 Peripheral units, e.g. input or output ports
    • H04L 49/3018 Input queuing

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a method and a server for avoiding data packet loss on a 5G data forwarding plane. The method sets a predetermined number of packet receive queues according to the maximum number of CPU cores in the server that can concurrently run the forwarding program, ensuring that each CPU core corresponds to several receive queues. When the number of CPU cores concurrently running the forwarding program increases or decreases, only the correspondence between the predetermined receive queues and the changed set of CPU cores is adjusted; the number of receive queues on the network card is left unchanged. This avoids losing data packets when the number of concurrently running CPU cores changes, while keeping the workload of all CPU cores running the forwarding program relatively balanced.

Description

Method and server for avoiding data packet loss of 5G data forwarding plane
Technical Field
The invention relates to the technical field of computer application, in particular to a method and a server for avoiding data packet loss of a 5G data forwarding plane.
Background
The 5G data forwarding plane is the collective name for the hardware and software resources used to forward users' data packets in a 5G network. (The 5G network is the fifth-generation mobile communication network; its theoretical peak transmission speed can reach 1 GB per 8 seconds, more than 10 times the transmission speed of a 4G network, and it shows clear advantages and more powerful capabilities in practical applications.)
The 5G data forwarding plane forwards users' data packets in a 5G network, and the program implementing it is called the forwarding program. The number of users a 5G network must support is very large and, in the current era of exploding user traffic, the data traffic generated by the same number of users keeps growing. To increase the processing capacity of the forwarding program on a single server, multi-core concurrency is commonly used: each operation core of a CPU can run a program independently, and several cores running the same program at the same time is called concurrency (a multi-core CPU being a CPU with several operation cores). Each CPU core loads and runs the same forwarding program, and the cores share the work, each handling the forwarding of part of the data packets.
User traffic naturally rises and falls with the network's peak and valley periods. As a valley period gradually turns into a peak period, the working pressure on the active CPU cores grows until additional CPU cores must be brought into the concurrent forwarding work to share the load; this is called capacity expansion. As a peak period turns into a valley period, the pressure on the CPU cores shrinks, and the number of cores concurrently forwarding packets can be reduced to lower power consumption and save operating cost; this is called capacity reduction.
When the number of CPU cores concurrently running the forwarding program increases or decreases (i.e., during capacity expansion or reduction), the share of packets each core processes must be redistributed across the resulting set of cores, keeping those shares as balanced as possible. This rebalances the workload and avoids the situation where some cores are overloaded while others sit idle and waste resources.
In the prior art, packets are distributed to CPU cores as follows. The network card supports multiple packet receive queues (a receive queue is a buffer queue in which the network card stores packets after receiving them; a CPU core reads packets from the buffer queue for forwarding). After receiving packets, the network card distributes them evenly across the receive queues, and each receive queue is consumed by exactly one CPU core (one CPU core may consume several receive queues), which keeps the CPU cores' workload balanced.
However, during capacity expansion or reduction the number of CPU cores changes. If the network card then adjusts the number of packet receive queues to match the new number of CPU cores, it discards the packets still waiting in the receive queues, and it also temporarily stops receiving packets while the queue count is being changed. This causes packet loss during expansion and reduction.
Accordingly, the prior art is yet to be improved and developed.
Disclosure of Invention
The invention mainly aims to provide a method and a server for avoiding data packet loss of a 5G data forwarding plane, and aims to solve the problem that data packets are lost during capacity expansion and capacity reduction in the prior art.
In order to achieve the above object, the present invention provides a method for avoiding packet loss for a 5G data forwarding plane, where the method for avoiding packet loss for the 5G data forwarding plane includes the following steps:
acquiring the number N of CPU cores in the server that can concurrently run the forwarding program at maximum;
setting N×(N-1) packet receive queues and dividing them into N groups of receive queues, wherein each group contains N-1 receive queues and each CPU core corresponds to N-1 receive queues;
acquiring the number of CPU cores currently running the forwarding program concurrently and, when that number is increased or decreased, acquiring the number M of CPU cores actually running the forwarding program concurrently after the change, wherein M is less than or equal to N;
assigning all receive queues of the 1st through Mth groups, in order, to the 1st through Mth CPU cores for processing;
and evenly distributing the (N-M)×(N-1) receive queues of the remaining (M+1)th through Nth groups across the M CPU cores for processing; when they cannot be divided evenly, assigning the remaining A receive queues, in order, to the 1st through Ath CPU cores for processing.
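Taken together, these steps amount to a deterministic queue-to-core mapping. The sketch below is a hypothetical Python illustration of that mapping (the function name, the 1-based numbering, and the round-robin handling of the remainder A are assumptions drawn from the description, not the patent's reference implementation):

```python
def assign_queues(n, m):
    """Map each of the N*(N-1) receive queues (numbered from 1) to one of
    the M currently active CPU cores (numbered from 1), where 1 <= M <= N."""
    assert 1 <= m <= n
    mapping = {}
    # Groups 1..M go wholesale to cores 1..M (N-1 queues per group).
    for core in range(1, m + 1):
        for q in range((core - 1) * (n - 1) + 1, core * (n - 1) + 1):
            mapping[q] = core
    # The remaining (N-M)*(N-1) queues from groups M+1..N are spread
    # evenly over the M cores; any remainder A goes to cores 1..A.
    leftover = list(range(m * (n - 1) + 1, n * (n - 1) + 1))
    share, remainder = divmod(len(leftover), m)
    idx = 0
    for core in range(1, m + 1):
        for _ in range(share):
            mapping[leftover[idx]] = core
            idx += 1
    for core in range(1, remainder + 1):
        mapping[leftover[idx]] = core
        idx += 1
    return mapping
```

With N = 8 and M = 6 this reproduces the worked example given later in the description: cores 1 and 2 end up with 10 queues each, cores 3 through 6 with 9 each.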
Optionally, in the method for avoiding packet loss for a 5G data forwarding plane, the maximum number of CPU cores in the server that can concurrently run the forwarding program equals the total number of CPU cores in the server.
Optionally, in the method for avoiding packet loss for a 5G data forwarding plane, the number of CPU cores in the server actually running the forwarding program concurrently is less than or equal to the total number of CPU cores in the server.
Optionally, in the method for avoiding packet loss for a 5G data forwarding plane, setting N×(N-1) packet receive queues, dividing them into N groups of N-1 receive queues each, and making each CPU core correspond to N-1 receive queues specifically includes:
according to the number N of CPU cores in the server that can concurrently run the forwarding program at maximum, fixedly setting N×(N-1) packet receive queues on the network card, dividing them into N groups of N-1 receive queues each, and numbering every receive queue, wherein the division of the N×(N-1) receive queues into N groups is as follows:
the queue numbers in the 1st group of receive queues are: 1 to (N-1);
the queue numbers in the 2nd group of receive queues are: 1×(N-1)+1 to 2×(N-1);
......
the queue numbers in the Nth group of receive queues are: (N-1)×(N-1)+1 to N×(N-1);
each CPU core corresponds to N-1 receiving queues;
wherein N is a positive integer.
Optionally, in the method for avoiding packet loss for a 5G data forwarding plane, the number of CPU cores currently running a forwarding program concurrently is less than or equal to the number of all CPU cores in the server.
Optionally, in the method for avoiding packet loss for a 5G data forwarding plane, increasing or decreasing the number of CPU cores currently running the forwarding program concurrently specifically includes:
when the number of CPU cores concurrently running the forwarding program increases, additional CPU cores join the concurrent forwarding of data packets; this is defined as capacity expansion;
when the number of CPU cores concurrently running the forwarding program decreases, fewer CPU cores take part in the concurrent forwarding of data packets; this is defined as capacity reduction.
Optionally, in the method for avoiding packet loss for a 5G data forwarding plane, acquiring the number M of CPU cores actually running the forwarding program concurrently after the increase or decrease specifically includes:
the number of CPU cores actually running the forwarding program concurrently after capacity expansion or reduction is M, wherein M is less than or equal to N and M is a positive integer.
Optionally, in the method for avoiding packet loss for a 5G data forwarding plane, a is smaller than M, where a is a positive integer.
In addition, to achieve the above object, the present invention also provides a server, wherein the server includes a processor and a memory; a program for avoiding data packet loss of the 5G data forwarding plane is stored in the memory and runnable on the processor, and when executed by the processor, the program implements the steps of the method for avoiding data packet loss of the 5G data forwarding plane described above.
In addition, to achieve the above object, the present invention further provides a storage medium, wherein the storage medium stores a program for avoiding packet loss of a 5G data forwarding plane, and the program for avoiding packet loss of the 5G data forwarding plane implements the steps of the method for avoiding packet loss of the 5G data forwarding plane as described above when executed by a processor.
In the invention, the number of CPU cores in the server that can concurrently run the forwarding program at maximum is N. N×(N-1) packet receive queues are set and divided into N groups of N-1 queues each, so that each CPU core corresponds to N-1 receive queues. The number of CPU cores currently running the forwarding program concurrently is obtained, and when it is increased or decreased, the number M of cores actually running the program after the change is obtained, where M is less than or equal to N. All receive queues of the 1st through Mth groups are assigned, in order, to the 1st through Mth CPU cores, and the (N-M)×(N-1) receive queues of the remaining (M+1)th through Nth groups are distributed evenly across the M cores; when they cannot be divided evenly, the remaining A queues are assigned, in order, to the 1st through Ath cores. When the number of CPU cores concurrently running the forwarding program changes, the number of receive queues on the network card need not be adjusted; only the correspondence between the fixed set of receive queues and the changed set of CPU cores is adjusted. This avoids losing data packets during expansion or reduction and keeps the workload of all concurrently running CPU cores relatively balanced.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of a method of avoiding packet loss for the 5G data forwarding plane of the present invention;
FIG. 2 is a diagram illustrating an operating environment of a server according to a preferred embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here merely illustrate the invention and are not intended to limit it.
In the method for avoiding packet loss of the 5G data forwarding plane according to the preferred embodiment of the present invention, as shown in FIG. 1, the method includes the following steps:
and step S10, acquiring the number N of the CPU cores of the maximum concurrent running forwarding program in the server.
Specifically, the 5G data forwarding plane forwards users' data packets in a 5G network, and the program implementing it is called the forwarding program. To increase the processing capacity of the forwarding program on a single server, multi-core concurrency is commonly adopted: several CPU cores forward packets at the same time, each operation core of a CPU can run a program independently, and several cores running a program simultaneously is called concurrency. The CPU cores (i.e., CPU operation cores) load and run the same forwarding program, and the cores of the multi-core CPU (a CPU with several operation cores) each take responsibility for forwarding part of the data packets; multi-core concurrent operation thus means several CPU cores concurrently running the forwarding program.
The maximum number of CPU cores in the server that can concurrently run the forwarding program equals the total number of CPU cores in the server, and the number actually running it concurrently is less than or equal to that total. For example, if the server has 8 CPU cores in all, then the maximum number N is also 8, and the number M of cores actually running the forwarding program concurrently is at most 8; M may be, say, 4 or 5.
Step S20: set N×(N-1) packet receive queues (that is, the number of receive queues is set to N×(N-1); a receive queue is a buffer queue in which the network card stores packets after receiving them, and a CPU core reads packets from the buffer queue for forwarding), and divide the N×(N-1) receive queues into N groups, wherein each group contains N-1 receive queues and each CPU core corresponds to N-1 receive queues.
Specifically, according to the number N of CPU cores in the server that can concurrently run the forwarding program at maximum (for example, N = 8), N×(N-1) packet receive queues are fixedly set on the network card (when N = 8, N×(N-1) = 8×(8-1) = 56, i.e., there are 56 receive queues). The N×(N-1) receive queues are divided into N groups (for example, 8 groups), and every receive queue is numbered (for example, the 56 receive queues are numbered sequentially 1, 2, 3, ..., 56). The division into N groups is as follows:
the queue numbers in the 1st group of receive queues are: 1 to (N-1);
the queue numbers in the 2nd group of receive queues are: 1×(N-1)+1 to 2×(N-1);
......
the queue numbers in the Nth group of receive queues are: (N-1)×(N-1)+1 to N×(N-1);
each CPU core corresponds to N-1 receiving queues;
wherein N is a positive integer.
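The grouping rule above can be written out as a small sketch (hypothetical Python; the helper name is an illustration, not part of the patent):

```python
def group_queue_numbers(n, g):
    """Queue numbers (1-based) in the g-th of the N groups, when the NIC
    exposes N*(N-1) receive queues split into N groups of N-1 each."""
    return list(range((g - 1) * (n - 1) + 1, g * (n - 1) + 1))

# With N = 8: group 1 holds queues 1..7 and group 8 holds queues 50..56,
# matching the numbering scheme in the description.
```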
Step S30: acquire the number of CPU cores currently running the forwarding program concurrently and, when that number is increased or decreased, acquire the number M of CPU cores actually running the forwarding program concurrently after the change, wherein M is less than or equal to N.
Specifically, the number of CPU cores currently concurrently running the forwarding program is less than or equal to the number of all CPU cores in the server, for example, when the number of all CPU cores in the server is 8, the number of CPU cores currently concurrently running the forwarding program is less than or equal to 8.
When the number of CPU cores concurrently running the forwarding program increases, additional CPU cores join the concurrent forwarding of data packets; this is defined as capacity expansion. When that number decreases, fewer CPU cores take part in the concurrent forwarding of data packets; this is defined as capacity reduction.
For example, N is the total number of CPU cores in the server, and the number of currently concurrent cores is necessarily at most N. With N = 8 and 4 currently concurrent cores, adjusting the count to 3 is a capacity reduction and adjusting it to 5 is a capacity expansion; however the count is adjusted, it can never exceed N (here, 8).
The number of CPU cores actually running the forwarding program concurrently after expansion or reduction is M, wherein M is less than or equal to N and M is a positive integer (for example, when N = 8, M is at most 8). Receive queues are then allocated according to this number M.
And step S40, sequentially and correspondingly distributing all the receiving queues from the 1 st group of receiving queues to the Mth group of receiving queues to the 1 st CPU core to the Mth CPU core for processing.
Specifically, all receive queues of the 1st through Mth groups are assigned, in order, to the 1st through Mth CPU cores: all queues in the 1st group go to the 1st CPU core, and so on up to all queues in the Mth group going to the Mth CPU core. For example, when N = 8 and M = 6, the 1st group of receive queues is assigned to the 1st CPU core, the 2nd group to the 2nd core, the 3rd group to the 3rd core, the 4th group to the 4th core, the 5th group to the 5th core, and the 6th group to the 6th core.
Step S50: evenly distribute the (N-M)×(N-1) receive queues of the remaining (M+1)th through Nth groups across the M CPU cores for processing; when they cannot be divided evenly, assign the remaining A receive queues, in order, to the 1st through Ath CPU cores.
Specifically, the remaining (M+1)th through Nth groups of receive queues, (N-M)×(N-1) queues in total, are distributed as evenly as possible to the M CPU cores; when a remainder of A receive queues cannot be divided evenly (A is smaller than M), those A queues are assigned, in order, to the 1st through Ath CPU cores for processing.
It should be noted why N×(N-1) queues are used. The maximum number of CPU cores concurrently running the forwarding program in the server is N. If only N receive queues were set and only N-1 CPU cores were actually needed, the one extra receive queue could not be split and would have to be handled by a single additional core; that core would then carry 2 receive queues while every other core carried only 1, doubling its load and making the load very unbalanced. The invention therefore sets N×(N-1) receive queues: when 1 CPU core is removed, the several receive queues that corresponded to it can be redistributed across the remaining cores, keeping the load balanced. The more receive queues there are, the more balanced the load, but also the more expensive the network card; N×(N-1) is taken as a suitable compromise.
For example, the total number of CPU cores in the server is N = 8, numbered 1 through 8. The network card fixedly sets N×(N-1) = 8×7 = 56 receive queues, numbered 1 through 56. All receive queues are divided into N (8) groups, each containing N-1 (7) receive queues:
the queue numbers in group 1 receive queue are: 1, 2, 3, 4, 5, 6, 7;
the queue numbers in the group 2 receiving queue are: 8,9, 10, 11, 12, 13, 14
The queue numbers in group 3 receive queues are: 15, 16, 17, 18, 19, 20, 21;
the queue numbers in the group 4 receiving queues are: 22, 23, 24, 25, 26, 27, 28;
the queue numbers in group 5 receive queues are: 29, 30, 31, 32, 33, 34, 35;
the queue numbers in the group 6 receiving queue are: 36, 37, 38, 39, 40, 41, 42;
the queue numbers in group 7 receive queues are: 43, 44, 45, 46, 47, 48, 49;
the queue numbers in the group 8 of receiving queues are: 50, 51, 52, 53, 54, 55, 56;
assuming that the number of CPU cores of the currently actual concurrently running forwarding program is M =6, and the specific CPU core numbers are 1, 2, 3, 4, 5, and 6, the group 1 receive queue to the group 6 receive queue respectively correspond to the 1 st CPU core to the 6 th CPU core, and then 14 more queues (43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56) are continuously and uniformly distributed to M (6) CPU cores as follows:
43, 44 to CPU core 1;
45, 46 to the CPU core 2;
47, 48 to the CPU core 3;
49, 50 to the CPU core 4;
51, 52 to the CPU core 5;
53, 54 to the CPU core 6;
there are two remaining receive queues 55, 56, 55 assigned to the 1 st CPU core and 56 assigned to the 2 nd CPU core.
In the end, the 1st and 2nd CPU cores each correspond to 10 receive queues and the 3rd through 6th CPU cores each correspond to 9. The counts differ by only 1 queue, a traffic difference of just 1/10 = 10% (the expected traffic on every queue being equal), so the load is balanced.
By contrast, suppose the server again has N = 8 CPU cores, numbered 1 through 8, but only N (8) receive queues are set. With M = 6 currently concurrent cores, the 1st and 2nd CPU cores each correspond to 2 receive queues while the 3rd through 6th each correspond to 1. The counts again differ by only 1 queue, but the traffic difference is now 1/2 = 50% (the expected traffic on every queue being equal), which is large. This is why the invention sets the number of receive queues to N×(N-1).
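The two imbalance ratios contrasted here (10% with 56 queues versus 50% with 8 queues) follow from spreading the queues as evenly as possible over 6 cores. The sketch below reproduces them (hypothetical Python; `per_core_counts` is an illustrative helper, and equal expected traffic per queue is assumed as in the text):

```python
def per_core_counts(total_queues, m):
    """Queues per core when total_queues are spread as evenly as possible
    over m cores, with any remainder going to the lowest-numbered cores."""
    share, rem = divmod(total_queues, m)
    return [share + 1 if i < rem else share for i in range(m)]

many = per_core_counts(8 * 7, 6)   # 56 queues over 6 cores
few = per_core_counts(8, 6)        # 8 queues over 6 cores

# Relative traffic difference between the busiest and the idlest core.
imbalance_many = (max(many) - min(many)) / max(many)  # 1/10 = 10%
imbalance_few = (max(few) - min(few)) / max(few)      # 1/2 = 50%
```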
Note also the relationship between the number of receive queues and per-queue traffic. If the network card receives 56000 Mbit/s in total, then with 8 receive queues each queue carries 7000 Mbit/s, and with 56 receive queues each queue carries 1000 Mbit/s.
With the network card set to 8 receive queues (7000 Mbit/s each), if only 6 CPU cores (the 1st through 6th) actually process packets concurrently, the 1st and 2nd CPU cores must each handle 2 receive queues (14000 Mbit/s) while the 3rd through 6th each handle only 7000 Mbit/s, so the traffic is unbalanced.
With the network card set to 56 receive queues (1000 Mbit/s each), if only 6 CPU cores (the 1st through 6th) actually process packets concurrently, the 1st and 2nd CPU cores each handle 10 receive queues (10000 Mbit/s) while the 3rd through 6th each handle 9000 Mbit/s; the traffic is then fairly balanced, with little difference. That is, the more receive queues the network card provides, the more balanced the load, but also the more expensive the network card.
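These Mbit/s figures come from dividing the NIC's total receive rate first across the queues and then across the cores. A hypothetical sketch (the 56000 Mbit/s total is the figure used in the description; the helper name is an illustration):

```python
TOTAL_MBITS = 56_000  # total NIC receive traffic from the example above

def core_loads(total_queues, m):
    """Per-core traffic in Mbit/s when total_queues carry equal traffic
    and are spread as evenly as possible over m cores."""
    per_queue = TOTAL_MBITS / total_queues
    share, rem = divmod(total_queues, m)
    return [per_queue * (share + 1 if i < rem else share) for i in range(m)]
```

`core_loads(8, 6)` yields 14000 Mbit/s for two cores and 7000 Mbit/s for the rest, while `core_loads(56, 6)` yields 10000 Mbit/s for two cores and 9000 Mbit/s for the rest, matching the comparison above.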
Aiming at the problem of data packet loss during capacity expansion and reduction, the invention provides a method that completes expansion and reduction smoothly, i.e., without losing packets. The key is that the number of receive queues on the network card is never adjusted; only the correspondence between the packet receive queues and the CPU cores is adjusted. During expansion, part of the receive queues corresponding to the existing CPU cores is handed over to the newly added cores; during reduction, the receive queues corresponding to the removed cores are handed over to the remaining cores. With the maximum number of CPU cores concurrently running the forwarding program being N, the network card is given N×(N-1) packet receive queues, ensuring that each CPU core corresponds to several receive queues rather than just 1. When the number of concurrently running cores is adjusted to M, only the correspondence between the N×(N-1) receive queues and the M CPU cores needs to be adjusted.
Further, as shown in FIG. 2, based on the above method for avoiding packet loss of the 5G data forwarding plane, the present invention also provides a server including a processor and a memory connected to the processor. The memory stores a program for avoiding packet loss of the 5G data forwarding plane, and when executed by the processor, the program implements the steps of the method according to the first embodiment.
In one embodiment, the processor, when executing the program in the memory for avoiding packet loss on the 5G data forwarding plane, implements the following steps:
acquiring N, the maximum number of CPU cores in the server that can concurrently run the forwarding program;
setting up N×(N-1) packet receive queues and dividing them into N groups of N-1 queues each, so that each CPU core corresponds to N-1 receive queues;
acquiring the number of CPU cores currently running the forwarding program concurrently and, when that number increases or decreases, acquiring the new number of concurrently running cores as M, where M is less than or equal to N;
assigning all receive queues in groups 1 through M to CPU cores 1 through M, respectively;
and distributing the (N-M)×(N-1) receive queues in the remaining groups M+1 through N evenly among the M CPU cores, assigning the remaining A receive queues one each to CPU cores 1 through A when they cannot be distributed evenly.
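The steps above can be sketched in Python as follows. This is a minimal model under stated assumptions: the function name `allocate_queues`, the 1-based queue and core IDs, and the contiguous split of leftover queues are illustrative choices, not prescribed by the patent.

```python
def allocate_queues(n, m):
    """Map the n*(n-1) fixed receive queues to the m active CPU cores.

    n: maximum number of cores that can concurrently run the forwarding
       program (the NIC is configured once with n*(n-1) queues).
    m: number of cores actually running the forwarding program, m <= n.
    Returns {core_id: [queue_ids]} with cores and queues numbered from 1.
    """
    assert 1 <= m <= n
    q = n - 1                          # queues per group
    cores = {c: [] for c in range(1, m + 1)}
    # Step 1: groups 1..M go to cores 1..M respectively.
    for c in range(1, m + 1):
        cores[c].extend(range((c - 1) * q + 1, c * q + 1))
    # Step 2: the (N-M)*(N-1) queues of groups M+1..N are spread evenly;
    # the remainder A goes one queue each to cores 1..A.
    leftover = list(range(m * q + 1, n * q + 1))
    share, a = divmod(len(leftover), m)
    i = 0
    for c in range(1, m + 1):
        take = share + (1 if c <= a else 0)
        cores[c].extend(leftover[i:i + take])
        i += take
    return cores
```

For example, with N=4 and a reduction to M=3 cores, each core keeps its own group of 3 queues and picks up one queue from the released group 4, so all 12 queues remain served.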
The maximum number of CPU cores in the server that can concurrently run the forwarding program equals the total number of CPU cores in the server. The number of CPU cores actually running the forwarding program concurrently is less than or equal to that total.
Setting up the N×(N-1) packet receive queues and dividing them into N groups, each containing N-1 queues with each CPU core corresponding to N-1 queues, specifically comprises:
according to N, the maximum number of CPU cores in the server that can concurrently run the forwarding program, fixedly configuring N×(N-1) packet receive queues on the network card, dividing them into N groups, and numbering each queue, where each group contains N-1 queues and the grouping is as follows:
queue numbers in group 1: 1 to (N-1);
queue numbers in group 2: 1×(N-1)+1 to 2×(N-1);
......
queue numbers in group N: (N-1)×(N-1)+1 to N×(N-1);
each CPU core corresponds to N-1 receive queues;
where N is a positive integer.
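The numbering rule above — group g holding queues (g-1)×(N-1)+1 through g×(N-1) — can be generated directly. A small illustrative Python sketch (the value N=5 and the dictionary layout are assumptions for the example):

```python
# Group g holds queue IDs (g-1)*(N-1)+1 .. g*(N-1), per the grouping rule.
N = 5  # example maximum core count
groups = {g: list(range((g - 1) * (N - 1) + 1, g * (N - 1) + 1))
          for g in range(1, N + 1)}
# group 1 -> 1..4, group 2 -> 5..8, ..., group 5 -> 17..20
```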
The number of CPU cores concurrently running the forwarding program is less than or equal to the total number of CPU cores in the server.
Increasing or decreasing the number of CPU cores currently running the forwarding program concurrently specifically comprises:
when the number increases, additional CPU cores join in concurrently processing and forwarding packets, which is defined as capacity expansion;
when the number decreases, fewer CPU cores participate in concurrently processing and forwarding packets, which is defined as capacity reduction.
Acquiring M, the number of CPU cores actually running the forwarding program concurrently after the increase or decrease, specifically comprises:
after capacity expansion or reduction, the number of CPU cores actually running the forwarding program concurrently is M, where M is less than or equal to N and M is a positive integer.
Here A is less than M, and A is a positive integer.
The present invention also provides a storage medium storing a program for avoiding packet loss on the 5G data forwarding plane; when executed by a processor, the program implements the steps of the method for avoiding packet loss on the 5G data forwarding plane described above.
In summary, the present invention provides a method and a server for avoiding packet loss on the 5G data forwarding plane. The method comprises: acquiring N, the maximum number of CPU cores in the server that can concurrently run the forwarding program; setting up N×(N-1) packet receive queues and dividing them into N groups of N-1 queues each, so that each CPU core corresponds to N-1 receive queues; acquiring the number of CPU cores currently running the forwarding program concurrently and, when that number increases or decreases, acquiring the new number of concurrently running cores as M, where M is less than or equal to N; assigning all receive queues in groups 1 through M to CPU cores 1 through M, respectively; and distributing the (N-M)×(N-1) receive queues in the remaining groups M+1 through N evenly among the M CPU cores, assigning the remaining A receive queues one each to CPU cores 1 through A when they cannot be distributed evenly.
When the number of CPU cores concurrently running the forwarding program increases or decreases, the number of network card receive queues need not be adjusted; only the correspondence between the fixed set of packet receive queues and the changed set of CPU cores is adjusted. This solves the problem of packets being lost while the number of concurrently running cores changes, and keeps the workload of all CPU cores running the forwarding program relatively balanced.
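The balance claim can be checked analytically: each active core keeps its own group of N-1 queues plus an even share of the leftover (N-M)×(N-1) queues, so per-core queue counts differ by at most one. A small Python check (the helper name `per_core_counts` is illustrative):

```python
def per_core_counts(n, m):
    """Queue count per active core under the grouped allocation:
    its own group of n-1 queues plus an even share of the
    (n-m)*(n-1) leftover queues, remainder going to cores 1..A."""
    leftover = (n - m) * (n - 1)
    share, a = divmod(leftover, m)
    return [(n - 1) + share + (1 if c <= a else 0) for c in range(1, m + 1)]

# Workload stays balanced for every valid (N, M) pair:
for n in range(2, 9):
    for m in range(1, n + 1):
        counts = per_core_counts(n, m)
        assert sum(counts) == n * (n - 1)      # every queue is served
        assert max(counts) - min(counts) <= 1  # counts differ by at most 1
```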
Of course, those skilled in the art will understand that all or part of the processes of the above method embodiments may be implemented by a computer program instructing the relevant hardware (such as a processor or controller); the program may be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The storage medium may be a memory, a magnetic disk, an optical disk, or the like.
It is to be understood that the invention is not limited to the examples described above, but that modifications and variations may be effected thereto by those of ordinary skill in the art in light of the foregoing description, and that all such modifications and variations are intended to be within the scope of the invention as defined by the appended claims.

Claims (10)

1. A method for avoiding packet loss on a 5G data forwarding plane, characterized in that the method comprises the following steps:
acquiring N, the maximum number of CPU cores in the server that can concurrently run the forwarding program;
setting up N×(N-1) packet receive queues and dividing them into N groups of N-1 queues each, so that each CPU core corresponds to N-1 receive queues;
acquiring the number of CPU cores currently running the forwarding program concurrently and, when that number increases or decreases, acquiring the new number of concurrently running cores as M, where M is less than or equal to N;
assigning all receive queues in groups 1 through M to CPU cores 1 through M, respectively;
and distributing the (N-M)×(N-1) receive queues in the remaining groups M+1 through N evenly among the M CPU cores, assigning the remaining A receive queues one each to CPU cores 1 through A when they cannot be distributed evenly.
2. The method for avoiding packet loss on the 5G data forwarding plane according to claim 1, wherein the maximum number of CPU cores in the server that can concurrently run the forwarding program equals the total number of CPU cores in the server.
3. The method for avoiding packet loss on the 5G data forwarding plane according to claim 2, wherein the number of CPU cores in the server actually running the forwarding program concurrently is less than or equal to the total number of CPU cores in the server.
4. The method for avoiding packet loss on the 5G data forwarding plane according to claim 1, wherein setting up the N×(N-1) packet receive queues and dividing them into N groups, each containing N-1 queues with each CPU core corresponding to N-1 queues, specifically comprises:
according to N, the maximum number of CPU cores in the server that can concurrently run the forwarding program, fixedly configuring N×(N-1) packet receive queues on the network card, dividing them into N groups, and numbering each queue, where each group contains N-1 queues and the grouping is as follows:
queue numbers in group 1: 1 to (N-1);
queue numbers in group 2: 1×(N-1)+1 to 2×(N-1);
......
queue numbers in group N: (N-1)×(N-1)+1 to N×(N-1);
each CPU core corresponds to N-1 receive queues;
where N is a positive integer.
5. The method for avoiding packet loss on the 5G data forwarding plane according to claim 1, wherein the number of CPU cores currently running the forwarding program concurrently is less than or equal to the total number of CPU cores in the server.
6. The method for avoiding packet loss on the 5G data forwarding plane according to claim 5, wherein increasing or decreasing the number of CPU cores currently running the forwarding program concurrently specifically comprises:
when the number increases, additional CPU cores join in concurrently processing and forwarding packets, which is defined as capacity expansion;
when the number decreases, fewer CPU cores participate in concurrently processing and forwarding packets, which is defined as capacity reduction.
7. The method for avoiding packet loss on the 5G data forwarding plane according to claim 6, wherein acquiring M, the number of CPU cores actually running the forwarding program concurrently after the increase or decrease, specifically comprises:
after capacity expansion or reduction, the number of CPU cores actually running the forwarding program concurrently is M, where M is less than or equal to N and M is a positive integer.
8. The method for avoiding packet loss on the 5G data forwarding plane according to claim 1, wherein A is smaller than M and A is a positive integer.
9. A server, characterized in that the server comprises: a memory, a processor, and a program for avoiding packet loss on a 5G data forwarding plane that is stored in the memory and executable on the processor, the program implementing the steps of the method for avoiding packet loss on the 5G data forwarding plane according to any one of claims 1-8 when executed by the processor.
10. A storage medium, characterized in that the storage medium stores a program for avoiding packet loss on a 5G data forwarding plane, and the program implements the steps of the method for avoiding packet loss on the 5G data forwarding plane according to any one of claims 1-8 when executed by a processor.
CN202010380578.XA 2020-05-08 2020-05-08 Method and server for avoiding data packet loss of 5G data forwarding plane Active CN111314249B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010380578.XA CN111314249B (en) 2020-05-08 2020-05-08 Method and server for avoiding data packet loss of 5G data forwarding plane


Publications (2)

Publication Number Publication Date
CN111314249A CN111314249A (en) 2020-06-19
CN111314249B true CN111314249B (en) 2021-04-20

Family

ID=71161095

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010380578.XA Active CN111314249B (en) 2020-05-08 2020-05-08 Method and server for avoiding data packet loss of 5G data forwarding plane

Country Status (1)

Country Link
CN (1) CN111314249B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113141633A (en) * 2021-03-16 2021-07-20 深圳震有科技股份有限公司 5G communication data packet forwarding method and terminal
CN113079504A (en) * 2021-03-23 2021-07-06 广州讯鸿网络技术有限公司 Method, device and system for realizing access of 5G message DM multi-load balancer

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102970244A (en) * 2012-11-23 2013-03-13 上海寰创通信科技股份有限公司 Network message processing method of multi-CPU (Central Processing Unit) inter-core load balance
CN106713185A (en) * 2016-12-06 2017-05-24 瑞斯康达科技发展股份有限公司 Load balancing method and apparatus of multi-core CPU
CN109284192A (en) * 2018-09-29 2019-01-29 网宿科技股份有限公司 Method for parameter configuration and electronic equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180285151A1 (en) * 2017-03-31 2018-10-04 Intel Corporation Dynamic load balancing in network interface cards for optimal system level performance
RU2703188C1 (en) * 2017-10-05 2019-10-15 НФВаре, Инц Load distribution method for a multi-core system


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A network traffic balancing method based on real-time system load; Zhou Ji et al.; Computer Security (《计算机安全》); 2014-03-31; full text *

Also Published As

Publication number Publication date
CN111314249A (en) 2020-06-19

Similar Documents

Publication Publication Date Title
CN107391317B (en) Data recovery method, device, equipment and computer readable storage medium
US8621074B2 (en) Intelligent work load manager
CN110096362B (en) Multitask unloading method based on edge server cooperation
US20230244537A1 (en) Efficient gpu resource allocation optimization method and system
CN111314249B (en) Method and server for avoiding data packet loss of 5G data forwarding plane
CN111367630A (en) Multi-user multi-priority distributed cooperative processing method based on cloud computing
CN108900626B (en) Data storage method, device and system in cloud environment
WO2020019743A1 (en) Traffic control method and device
CN112817728B (en) Task scheduling method, network device and storage medium
US20110161965A1 (en) Job allocation method and apparatus for a multi-core processor
CN112888005B (en) MEC-oriented distributed service scheduling method
US10614542B2 (en) High granularity level GPU resource allocation method and system
CN115858184B (en) RDMA memory management method, device, equipment and medium
CN111798113A (en) Resource allocation method, device, storage medium and electronic equipment
CN112882818A (en) Task dynamic adjustment method, device and equipment
CN114721818A (en) Kubernetes cluster-based GPU time-sharing method and system
CN115640113A (en) Multi-plane flexible scheduling method
CN112214299A (en) Multi-core processor and task scheduling method and device thereof
CN116782249A (en) Edge computing unloading and resource allocation method and system with user dependency relationship
US20230205418A1 (en) Data processing system and operating method thereof
US20230325082A1 (en) Method for setting up and expanding storage capacity of cloud without disruption of cloud services and electronic device employing method
CN112395063B (en) Dynamic multithreading scheduling method and system
CN117632457A (en) Method and related device for scheduling accelerator
CN114661415A (en) Scheduling method and computer system
CN113822485A (en) Power distribution network scheduling task optimization method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant