CN109450816B - Queue scheduling method, device, network equipment and storage medium
- Publication number
- CN109450816B (application CN201811388030.9A)
- Authority
- CN
- China
- Prior art keywords
- forwarding
- core
- cores
- forwarding core
- queue
- Prior art date
- Legal status
- Active
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L47/00—Traffic control in data switching networks
- H04L47/50—Queue scheduling
Landscapes
- Engineering & Computer Science (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The invention relates to a queue scheduling method, a queue scheduling device, a network device and a storage medium. The method is applied to a network device that comprises a control core and a plurality of forwarding cores and is configured with a plurality of cache queues, wherein each of the forwarding cores corresponds to at least one cache queue and each cache queue corresponds to only one forwarding core. The method comprises the following steps: when a first cache queue in which messages are accumulated exists among the plurality of cache queues, the control core determines a second forwarding core with processing capability from among the forwarding cores other than the first forwarding core corresponding to the first cache queue; and the control core deletes the configuration relationship between the first cache queue and the first forwarding core and establishes a configuration relationship between the first cache queue and the second forwarding core, so that the second forwarding core processes the messages in the first cache queue. The method reduces the risk of overloading the processing capacity of a forwarding core, realizes data flow scheduling among the forwarding cores, and avoids unnecessary data flow packet loss.
Description
Technical Field
The invention belongs to the technical field of communication, and particularly relates to a queue scheduling method, a queue scheduling device, network equipment and a storage medium.
Background
In a multi-core network processor architecture with packet processing requirements, the current mainstream technical scheme is as follows. Taking Ethernet data as an example, after a network interface of the processor receives Ethernet packets, it distributes different packets to different queues according to configuration (the queue a packet enters may be determined from the five-tuple HASH value of the packet received on the interface, where the five-tuple HASH value is calculated by a HASH algorithm over the five-tuple of a standard Ethernet packet: source IP, destination IP, IP protocol value, source port number and destination port number), and a forwarding core acquires packets from its queues for processing (the mapping relationship between forwarding cores and queues is configurable, for example one or more queues are assigned to a certain forwarding core for processing).
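For a concrete feel of this receive-side dispatch, the following sketch (not part of the original description) shows how a five-tuple hash might map each incoming packet to one of the receive queues so that all packets of one flow land in the same queue; the struct layout, the FNV-1a hash and the modulo mapping are illustrative assumptions, not the hardware mechanism actually used.

```c
/* Illustrative sketch only (not from the patent): dispatch a packet to a
 * receive queue by hashing its five-tuple, so that all packets of one flow
 * land in the same queue. The struct layout, FNV-1a hash and modulo mapping
 * are assumptions made for illustration. */
#include <stddef.h>
#include <stdint.h>

#define NUM_QUEUES 64            /* assumed: queues greatly outnumber forwarding cores */

struct five_tuple {
    uint32_t src_ip;
    uint32_t dst_ip;
    uint8_t  ip_proto;
    uint16_t src_port;
    uint16_t dst_port;
};

static uint32_t fnv1a(uint32_t h, const void *data, size_t len)
{
    const uint8_t *p = data;
    while (len--) {
        h ^= *p++;
        h *= 16777619u;
    }
    return h;
}

static uint32_t hash_five_tuple(const struct five_tuple *ft)
{
    uint32_t h = 2166136261u;    /* hash each field separately to avoid struct padding */
    h = fnv1a(h, &ft->src_ip,   sizeof ft->src_ip);
    h = fnv1a(h, &ft->dst_ip,   sizeof ft->dst_ip);
    h = fnv1a(h, &ft->ip_proto, sizeof ft->ip_proto);
    h = fnv1a(h, &ft->src_port, sizeof ft->src_port);
    h = fnv1a(h, &ft->dst_port, sizeof ft->dst_port);
    return h;
}

/* Same five-tuple -> same hash -> same queue, which is what keeps one data
 * stream inside a single queue (and hence a single forwarding core). */
static unsigned select_rx_queue(const struct five_tuple *ft)
{
    return hash_five_tuple(ft) % NUM_QUEUES;
}
```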
As can be seen from this flow of processing Ethernet packets, the processor provides a flexible data processing mechanism: not only can the data receiving queue be selected according to configuration, but queues can also be allocated to different forwarding cores for processing. However, the configuration cannot be made arbitrarily and must differ according to the application scenario. Taking a router as an example, since a router is a network intermediate device that performs routing and forwarding of data, the sequence of a data stream (packets with the same five-tuple HASH value) must normally not be changed; if the data of the same data stream were processed by multiple forwarding cores, the order of the data stream could not be guaranteed. Therefore, based on the order-preserving principle, the same data stream can only be allocated to the same queue and processed by the same forwarding core; that is, the same data stream can only enter the same queue, and the same queue can only be processed by the same forwarding core.
Generally, the number of queues is much larger than the number of forwarding cores, so multiple data flows in multiple queues may be allocated to the same forwarding core for processing. When the forwarding-core resources consumed in processing these data flows exceed the processing capability of a single forwarding core, packet loss of the data flows results.
Disclosure of Invention
In view of the above, the present invention provides a queue scheduling method, a queue scheduling apparatus, a network device, and a storage medium, so as to effectively solve the above problem.
The embodiment of the invention is realized by the following steps:
In a first aspect, an embodiment of the present application provides a queue scheduling method applied to a network device, where the network device includes a multi-core processor, the multi-core processor includes a control core and a plurality of forwarding cores, the network device is configured with a plurality of cache queues, each of the plurality of forwarding cores corresponds to at least one cache queue, and one cache queue corresponds to only one forwarding core. The method comprises the following steps: when determining that a first cache queue in which messages are accumulated exists among the plurality of cache queues, the control core determines a second forwarding core with processing capability from among the forwarding cores other than the first forwarding core corresponding to the first cache queue; and the control core deletes the configuration relationship between the first cache queue and the first forwarding core and establishes a configuration relationship between the first cache queue and the second forwarding core, so that the second forwarding core processes the messages in the first cache queue. In the embodiment of the present application, the queues are checked for message accumulation; when a queue has message accumulation, that is, when its corresponding forwarding core (the first forwarding core) is overloaded, a forwarding core with processing capability (the second forwarding core) is determined from the remaining forwarding cores, and the queue with message accumulation is rescheduled to the second forwarding core for processing. This reduces the risk of overloading the processing capacity of a forwarding core, increases the response speed of the system to burst traffic, realizes data flow scheduling between forwarding cores, and avoids unnecessary data flow packet loss.
With reference to an optional implementation manner of the first aspect, the determining, by the control core, a second forwarding core with processing capability from among the forwarding cores other than the first forwarding core corresponding to the first cache queue includes: the control core subtracts, from the consumption value of the first forwarding core within a preset time length, the consumption value within the preset time length of each forwarding core other than the first forwarding core, respectively, to obtain a plurality of subtraction results; the control core selects the largest subtraction result from the plurality of subtraction results as a target subtraction result; the control core determines that the target subtraction result is larger than the processing consumption value of the first forwarding core occupied by the first cache queue; and the control core takes the forwarding core corresponding to the target subtraction result as the second forwarding core. In the embodiment of the present application, the consumption value of each forwarding core other than the first forwarding core within the preset time length is subtracted from the consumption value of the first forwarding core within the preset time length, the largest of the resulting subtraction results is selected as the target result, and when the target result is determined to be larger than the processing consumption value of the first forwarding core occupied by the first cache queue, the forwarding core corresponding to the target result is taken as the second forwarding core. In this way, when the queue with message accumulation is dispatched to the second forwarding core for processing, the second forwarding core has enough processing capacity to process the messages in the queue, avoiding data stream packet loss caused by insufficient processing capacity of the second forwarding core.
With reference to an optional implementation manner of the first aspect, before the step of subtracting, by the control core, the consumption value within the preset time length of each forwarding core other than the first forwarding core from the consumption value of the first forwarding core within the preset time length to obtain a plurality of subtraction results, the method further includes: the control core acquires the consumption value of each of the plurality of forwarding cores within the preset time length; and when the control core has acquired the consumption value of each forwarding core, it determines that a first cache queue in which messages are accumulated exists among the plurality of cache queues. In the embodiment of the present application, the consumption value of each of the plurality of forwarding cores within the preset time length is obtained first, and only then is it checked whether a queue with message accumulation exists. Compared with a scheme that first checks whether a queue with message accumulation exists and only then obtains the consumption value of each forwarding core, the influence of timeliness on the determination of the second forwarding core can be reduced, avoiding the situation where the finally determined second forwarding core differs because the data is stale.
With reference to an optional implementation manner of the first aspect, the subtracting, by the control core, the consumption value within a preset time length of each forwarding core other than the first forwarding core from the consumption value of the first forwarding core within the preset time length to obtain a plurality of subtraction results includes: the control core acquires the consumption value of each of the plurality of forwarding cores within the preset time length; and the control core subtracts the consumption value of each forwarding core other than the first forwarding core from the consumption value of the first forwarding core, respectively, to obtain the plurality of subtraction results. In the embodiment of the present application, the consumption value of each of the plurality of forwarding cores within the preset time length is obtained only when it is determined that a queue has message accumulation. Compared with a scheme that first obtains the consumption value of each forwarding core and then checks whether a queue with message accumulation exists, the workload can be reduced, avoiding the situation where the consumption values are obtained although no queue accumulation exists.
With reference to an optional implementation manner of the first aspect, the acquiring, by the control core, a consumption value of each forwarding core in the multiple forwarding cores within the preset time duration includes: the control core acquires message data which are respectively counted by each forwarding core in the plurality of forwarding cores and acquired from the corresponding cache queue within the preset time length; the control core determines a consumption value for each of the forwarding cores based on the packet data. In the embodiment of the application, the message data which are respectively counted by each forwarding core and are obtained from the corresponding cache queue within the preset time length are obtained, and the consumption value of each forwarding core is calculated according to the message data, so that the reliability and the accuracy of the calculation result are ensured, and the reliability of the second forwarding core determined based on the consumption values is further ensured, thereby reducing the risk of overload of the processing capacity of a single forwarding core, and avoiding the problem of unnecessary data stream packet loss.
With reference to an optional implementation manner of the first aspect, the acquiring, by the control core, the packet data, which is obtained from the corresponding cache queue within the preset time and is counted by each forwarding core of the multiple forwarding cores, includes: the control core periodically changes the state of the global variable switch to enable the global variable switch to be in a first state or a second state, wherein in the first state, each forwarding core in the multiple forwarding cores respectively counts message data acquired from a corresponding cache queue within the preset time length; and in the second state, the control core acquires the message data fed back by each forwarding core in the plurality of forwarding cores. In the implementation of the application, when the control core acquires the message data counted by each forwarding core, the state of the global variable switch is periodically changed, so that each forwarding core determines whether to count the message data according to the state of the global variable switch, the time consistency of the counted message data of each forwarding core is ensured, the accuracy and the reliability of the calculation result are further ensured, meanwhile, the acquisition of the message data counted by each forwarding core can be realized only by changing the state of the global variable switch, and the control flow is simplified.
In a second aspect, an embodiment of the present application further provides a network device, including a multi-core processor, where the multi-core processor includes a control core and a plurality of forwarding cores, the network device is configured with a plurality of cache queues, each of the plurality of forwarding cores corresponds to at least one cache queue, and one cache queue corresponds to only one forwarding core. The control core is configured to determine, when it is determined that a first cache queue in which packets are accumulated exists among the plurality of cache queues, a second forwarding core having processing capability from among the forwarding cores other than the first forwarding core corresponding to the first cache queue; the control core is further configured to delete the configuration relationship between the first cache queue and the first forwarding core and establish a configuration relationship between the first cache queue and the second forwarding core, so that the second forwarding core processes the packets in the first cache queue; and each of the plurality of forwarding cores is configured to process the packets in its corresponding at least one cache queue.
With reference to an optional implementation manner of the second aspect, the control core is further configured to subtract the consumption value within a preset time length of each forwarding core other than the first forwarding core from the consumption value of the first forwarding core within the preset time length, respectively, to obtain a plurality of subtraction results; the control core is further configured to select the largest subtraction result from the plurality of subtraction results as a target subtraction result; the control core is further configured to determine that the target subtraction result is greater than the processing consumption value of the first forwarding core occupied by the first cache queue; and the control core is further configured to take the forwarding core corresponding to the target subtraction result as the second forwarding core.
With reference to an optional implementation manner of the second aspect, the control core is further configured to obtain a consumption value of each forwarding core in the multiple forwarding cores within the preset time length; the control core is further configured to determine that a first cache queue in which packets are accumulated exists in the plurality of cache queues when the consumption value of each forwarding core is obtained.
With reference to an optional implementation manner of the second aspect, the control core is further configured to obtain the consumption value of each of the plurality of forwarding cores within the preset time length; and the control core is further configured to subtract the consumption value of each forwarding core other than the first forwarding core from the consumption value of the first forwarding core, respectively, to obtain the plurality of subtraction results.
With reference to an optional implementation manner of the second aspect, the control core is further configured to obtain packet data, which is obtained from a corresponding cache queue within the preset time and is counted by each forwarding core of the multiple forwarding cores; the control core is further configured to determine a consumption value of each forwarding core based on the packet data.
With reference to an optional implementation manner of the second aspect, the control core is further configured to periodically change a state of a global variable switch to be in a first state or a second state, where in the first state, each of the forwarding cores respectively counts packet data acquired from a corresponding cache queue within the preset time duration; and in the second state, the control core is further configured to obtain the packet data fed back by each of the forwarding cores.
In a third aspect, an embodiment of the present application further provides a queue scheduling apparatus applied to a network device including a multi-core processor, where the multi-core processor includes a plurality of forwarding cores, the network device is configured with a plurality of cache queues, each of the plurality of forwarding cores corresponds to at least one cache queue, and one cache queue corresponds to only one forwarding core. The apparatus comprises a determining module and a configuration module. The determining module is configured to determine, when it is determined that a first cache queue in which messages are accumulated exists among the plurality of cache queues, a second forwarding core with processing capability from among the forwarding cores other than the first forwarding core corresponding to the first cache queue. The configuration module is configured to delete the configuration relationship between the first cache queue and the first forwarding core and establish a configuration relationship between the first cache queue and the second forwarding core, so that the second forwarding core processes the messages in the first cache queue.
With reference to an optional implementation manner of the third aspect, the determining module is further configured to subtract, by the control core, the consumption value within a preset time length of each forwarding core other than the first forwarding core from the consumption value of the first forwarding core within the preset time length, respectively, to obtain a plurality of subtraction results; the control core selects the largest subtraction result from the plurality of subtraction results as a target subtraction result; the control core determines that the target subtraction result is larger than the processing consumption value of the first forwarding core occupied by the first cache queue; and the control core takes the forwarding core corresponding to the target subtraction result as the second forwarding core.
In combination with an optional implementation manner of the third aspect, the apparatus further includes: the device comprises an acquisition module and a second determination module; an obtaining module, configured to obtain, by the control core, a consumption value of each forwarding core in the multiple forwarding cores within the preset time duration; a second determining module, configured to determine that a first cache queue in which packets are accumulated exists in the multiple cache queues when the control core obtains the consumption value of each forwarding core.
With reference to an optional implementation manner of the third aspect, the determining module is further configured to obtain, by the control core, the consumption value of each of the plurality of forwarding cores within the preset time length; and the control core subtracts the consumption value of each forwarding core other than the first forwarding core from the consumption value of the first forwarding core, respectively, to obtain a plurality of subtraction results.
With reference to an optional implementation manner of the third aspect, the obtaining module or the determining module is further configured to obtain, by the control core, packet data that is obtained from a corresponding cache queue within the preset time period and is counted by each forwarding core of the multiple forwarding cores; the control core determines a consumption value for each of the forwarding cores based on the packet data.
With reference to an optional implementation manner of the third aspect, the obtaining module or the determining module is further configured to periodically change, by the control core, the state of a global variable switch so that the global variable switch is in a first state or a second state, where, in the first state, each forwarding core in the plurality of forwarding cores respectively counts the message data obtained from its corresponding cache queue within the preset time period; and in the second state, the control core acquires the message data fed back by each forwarding core in the plurality of forwarding cores.
In a fourth aspect, this application further provides a storage medium on which a computer program is stored, where the computer program, when executed by a processor, performs the method of the first aspect and/or the method provided in connection with an optional implementation manner of the first aspect.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings without creative efforts. The above and other objects, features and advantages of the present invention will become more apparent from the accompanying drawings. Like reference numerals refer to like parts throughout the drawings. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention.
Fig. 1 shows a schematic structural diagram of a multi-core processor architecture in a network device according to an embodiment of the present invention.
Fig. 2 is a flowchart illustrating a queue scheduling method according to an embodiment of the present invention.
Fig. 3 shows a schematic flowchart of step S101 in fig. 2 according to an embodiment of the present invention.
Fig. 4 is a schematic block diagram illustrating a queue scheduling apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures.
In the description of the present invention, it should be noted that the terms "first", "second", "third", and the like are used only for distinguishing the description, and are not intended to indicate or imply relative importance. Further, the term "and/or" in the present application is only one kind of association relationship describing the associated object, and means that three kinds of relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone.
Mainstream network devices currently in use are based on multi-core processors, whose architecture is shown in fig. 1. Taking Ethernet data as an example, after receiving Ethernet packets, a network interface of the processor sends different packets to different queues according to configuration (the queue a packet is to enter may be determined according to the five-tuple HASH value of the packet received on the interface), and a forwarding core acquires packets from its queues for processing (the mapping relationship between forwarding cores and queues may be dynamically configured, for example, one or more queues are assigned to a certain forwarding core for processing). The five-tuple HASH value is calculated by a HASH algorithm from the five-tuple (source IP, destination IP, IP protocol value, source port number, and destination port number) of a standard Ethernet packet.
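As an aside that is not part of the patent text, the configurable queue-to-forwarding-core mapping mentioned above can be pictured as a small table owned by the control core; the array representation and the sizes below are assumptions made for illustration.

```c
/* Illustrative sketch: a configurable queue-to-forwarding-core mapping.
 * One queue maps to exactly one forwarding core; one forwarding core may own
 * several queues. The array representation and the sizes are assumptions. */
#include <stdint.h>

#define NUM_QUEUES 64
#define NUM_CORES  8               /* e.g. one control core plus seven forwarding cores */

/* queue_owner[q] holds the index of the forwarding core that drains queue q. */
static uint8_t queue_owner[NUM_QUEUES];

/* A forwarding core only drains queues currently assigned to it; together
 * with the per-flow queue selection this preserves per-flow packet order. */
static int core_owns_queue(unsigned core, unsigned q)
{
    return queue_owner[q] == core;
}
```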
According to the processing flow of the network processor to the ethernet packet, the processor provides a flexible data processing mechanism, and not only can select a data receiving queue according to the configuration, but also can allocate the queue to different forwarding cores for processing, but the configuration cannot be performed at will and needs to be performed differently according to different application scenarios. Taking a router as an example, because the router belongs to an intermediate network device and implements routing forwarding of data, the sequence of data streams (packets having the same five-tuple HASH value) cannot be changed under normal conditions, but if data of the same data stream is processed by multiple forwarding cores, the sequence of the data stream cannot be guaranteed, and therefore, for the same data stream, based on the principle of order preservation, the data stream can only be allocated to the same queue and processed by the same forwarding core.
In order to implement order-preserving processing of data streams in a multi-core network processor architecture, queue allocation must first be configured according to the packet five-tuple HASH value calculated by hardware, with packets having the same five-tuple HASH value entering the same queue, so that a data stream enters only one queue.
Generally, the number of queues is much larger than that of forwarding cores, and there are many packets with the same five-tuple HASH value, so that there is a case that multiple data flows of multiple queues are allocated to the same forwarding core for processing, and when the forwarding core resources consumed for processing the data flows exceed the processing capability of a single forwarding core, packet loss of the data flows will be caused.
The inventors of the present application, when studying the above problems, found that: the main stream scheme has the problem that the mapping relationship between the queues and the forwarding cores is fixed, and when the sum of the data traffic in all the queues allocated to the forwarding cores exceeds the processing capacity of the forwarding cores, the traffic cannot be scheduled to other idle forwarding cores in a dynamic scheduling mode on the premise of order preservation of the data stream. It should be noted that the defects existing in the above solutions and the causes of the defects are the results obtained after the inventors have conducted practical and careful study, and therefore, the discovery process of the above problems and the solutions proposed by the following embodiments of the present invention to the above problems should be the contribution of the inventors to the present invention in the process of the present invention.
In view of this, as shown in fig. 2, an embodiment of the present application provides a queue scheduling method, which is applied to a network device with a multi-core processor and a message processing requirement, such as a router and a firewall. The multi-core processor comprises a control core and a plurality of forwarding cores. The network device is configured with a plurality of buffer queues, each of the plurality of forwarding cores should have at least one buffer queue, and one buffer queue only corresponds to one forwarding core. The following will be explained with reference to the steps shown in fig. 2.
Step S101: when determining that a first cache queue in which messages are accumulated exists among the plurality of cache queues, the control core determines a second forwarding core with processing capability from among the forwarding cores other than the first forwarding core corresponding to the first cache queue.
The control core periodically polls all the cache queues to check whether messages are accumulated in any of them. When it is determined that a first cache queue with message accumulation exists, that is, the first forwarding core corresponding to the first cache queue is currently overloaded, the control core determines a second forwarding core with processing capability from among the forwarding cores other than the first forwarding core, so that the queue with message accumulation can be rescheduled to the second forwarding core for processing. This reduces the risk of overloading the processing capacity of a forwarding core, improves the response speed of the system to burst traffic, realizes data flow scheduling between forwarding cores, and avoids unnecessary data flow packet loss.
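A minimal sketch of this polling step, assuming the control core can read a per-queue depth counter; the names and the backlog threshold are illustrative only.

```c
/* Illustrative sketch: the control core walks all queue depth counters and
 * reports the first queue whose backlog exceeds a threshold. The counter
 * array, its source and BACKLOG_THRESHOLD are assumptions. */
#include <stdint.h>

#define NUM_QUEUES        64
#define BACKLOG_THRESHOLD 1024   /* assumed backlog limit, in packets */

/* In a real system these counters would be maintained by the NIC driver or
 * read from hardware queue registers. */
static volatile uint32_t queue_depth[NUM_QUEUES];

static int find_backlogged_queue(void)
{
    for (int q = 0; q < NUM_QUEUES; q++) {
        if (queue_depth[q] > BACKLOG_THRESHOLD)
            return q;   /* the "first cache queue" with message accumulation */
    }
    return -1;          /* nothing accumulated; wait for the next polling cycle */
}
```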
When a second forwarding core with processing capability is determined from each forwarding core except the first forwarding core, the second forwarding core may be determined according to a consumption value of each forwarding core within a preset time, or may be determined according to a memory occupation ratio and/or a memory size of each forwarding core. As an alternative embodiment, a specific process of the control core determining the second forwarding core is described below with reference to the steps shown in fig. 3.
Step S201: the control core subtracts the consumption value within a preset time length of each forwarding core other than the first forwarding core from the consumption value of the first forwarding core within the preset time length, respectively, to obtain a plurality of subtraction results.
For example, the control core subtracts the consumption value within the preset time length of each of the N forwarding cores other than the first forwarding core from the consumption value of the first forwarding core within the preset time length, respectively, to obtain N-1 subtraction results. Assuming that the consumption value of the first forwarding core within the preset time length is E1 and the consumption values of the other forwarding cores within the preset time length are E2, E3, …, En, the N-1 subtraction results are: (E1 - E2), (E1 - E3), (E1 - E4), …, (E1 - En). These differences reflect the idle processing capacity of the corresponding forwarding cores. N is an integer greater than or equal to 2.
Step S202: the control core selects a largest subtraction result from the plurality of subtraction results as a target subtraction result.
For example, after obtaining the N-1 subtraction results, the control core selects the largest of them as the target subtraction result. That is, from (E1 - E2), (E1 - E3), …, (E1 - En), the control core selects the largest difference as the target subtraction result; assume here that it is (E1 - E2).
Step S203: and the control core determines that the target subtraction result is larger than the processing consumption value of the first forwarding core occupied by the first cache queue.
After obtaining the target subtraction result (assumed to be (E1 - E2)), the control core determines whether the target subtraction result is greater than the processing consumption value of the first forwarding core occupied by the first cache queue. If so, step S204 is executed; otherwise, no scheduling is performed, and the control core waits for the next polling cycle, recalculates the consumption value of each forwarding core, and judges again. Since the differences (E1 - E2), (E1 - E3), …, (E1 - En) reflect the idle processing capacity of the corresponding forwarding cores, when the maximum difference is greater than the processing consumption value of the first forwarding core occupied by the first cache queue, the corresponding forwarding core (for example, the forwarding core corresponding to E2) has sufficient processing power to process the packets of the first cache queue.
Step S204: and the control core takes the forwarding core corresponding to the target subtraction result as the second forwarding core.
After the target subtraction result is obtained, the forwarding core corresponding to the target subtraction result is taken as the second forwarding core; assuming the target subtraction result is (E1 - E2), the second forwarding core is the forwarding core corresponding to E2.
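Putting steps S201 to S204 together, a minimal sketch might look as follows, assuming the per-core consumption values and the consumption attributable to the backlogged queue have already been computed as described below; all names are illustrative.

```c
/* Illustrative sketch of steps S201-S204: among all forwarding cores other
 * than the overloaded first core, pick the one with the largest difference
 * (E_first - E_k), and accept it only if that difference exceeds the
 * processing consumption the backlogged queue imposes on the first core. */
#include <stddef.h>

/* Returns the index of the second forwarding core, or -1 if no core has
 * enough spare capacity (in that case the next polling cycle retries). */
static int select_second_core(const double *consumption, size_t num_cores,
                              size_t first_core, double queue_consumption)
{
    int best = -1;
    double best_diff = 0.0;

    for (size_t k = 0; k < num_cores; k++) {
        if (k == first_core)
            continue;
        double diff = consumption[first_core] - consumption[k];  /* E1 - Ek */
        if (best < 0 || diff > best_diff) {
            best = (int)k;
            best_diff = diff;
        }
    }
    /* Step S203: the target difference must exceed the consumption of the
     * first forwarding core occupied by the backlogged queue. */
    if (best >= 0 && best_diff > queue_consumption)
        return best;
    return -1;
}
```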
The control core acquires the consumption value of each forwarding core within the preset time length. The consumption value of a forwarding core is related to the capability required to process its data streams, and that capability is related both to the traffic volume of the data streams and to their type (different types of data streams have different processing flows and consume different processor resources). Therefore, the forwarding-core resources consumed by different types of packets need to be analysed quantitatively. How are the resources consumed by a forwarding core in processing a given type of data stream obtained? A single overload flow can be generated separately for each data type, that is, a flow whose traffic exceeds the processing capability of a single core; since one data stream enters only one queue and is processed by only one forwarding core, this yields the maximum capacity of a single forwarding core for processing data of the corresponding type. Taking the three packet types IPv4, IPv6 and wide-area-network POS as examples, an overload flow is generated for each of the three types (overload is judged from the accumulation state of the cache queue), that is, a single data flow of the corresponding packet type must exceed the processing capability of a single forwarding core. This gives the single-forwarding-core processor resources consumed by a single packet of each of the three types, denoted here Δt_ipv4, Δt_ipv6 and Δt_pos. For example, for IPv4, if a single forwarding core has a processing capacity of 800 Kpps, then the resource consumed by each packet on the forwarding core is 1/800000 s, and this is Δt_ipv4; similarly, for IPv6 and wide-area-network POS packets, the resource consumed by a single packet on a single forwarding core is obtained as Δt_ipv6 and Δt_pos respectively.
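A toy sketch of this calibration step is shown below; the 800 Kpps figure for IPv4 comes from the example above, while the IPv6 and POS rates are placeholders, not values from the patent.

```c
/* Illustrative calibration: drive a single flow of one packet type at
 * overload against one forwarding core, measure the maximum sustained rate,
 * and take the per-packet cost as its reciprocal. */
#include <stdio.h>

static double per_packet_cost_seconds(double max_packets_per_second)
{
    return 1.0 / max_packets_per_second;   /* delta-t for this packet type */
}

int main(void)
{
    /* 800 Kpps for IPv4 is the figure from the description; the IPv6 and POS
     * rates below are assumed placeholders. */
    double dt_ipv4 = per_packet_cost_seconds(800000.0);
    double dt_ipv6 = per_packet_cost_seconds(600000.0);  /* assumed */
    double dt_pos  = per_packet_cost_seconds(500000.0);  /* assumed */
    printf("dt_ipv4=%g s  dt_ipv6=%g s  dt_pos=%g s\n", dt_ipv4, dt_ipv6, dt_pos);
    return 0;
}
```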
It should be noted that the per-packet resource consumption of each packet type needs to be calculated in advance. In this way, when calculating the consumption value of each forwarding core within the preset time length, only the packet data that the forwarding core obtained from its corresponding cache queues within the preset time length needs to be counted. For example, when calculating the consumption value E1 of forwarding core 1 within the preset time length, the packet data acquired by forwarding core 1 from its corresponding cache queues within the preset time length is obtained. Assume the cache queues corresponding to forwarding core 1 are queue 1 and queue 2; if, within the preset time length, the IPv4, IPv6 and POS packets acquired by forwarding core 1 from queue 1 are counted as Q1_ipv4, Q1_ipv6 and Q1_pos respectively, and those acquired from queue 2 as Q2_ipv4, Q2_ipv6 and Q2_pos respectively, then the consumption value of forwarding core 1 within the preset time length is (Q1_ipv4 + Q2_ipv4) * Δt_ipv4 + (Q1_ipv6 + Q2_ipv6) * Δt_ipv6 + (Q1_pos + Q2_pos) * Δt_pos. Similarly, when calculating the consumption value E2 of forwarding core 2 within the preset time length, the packet data acquired by forwarding core 2 from its corresponding cache queues within the preset time length is obtained. Assume the cache queues corresponding to forwarding core 2 are queue 3 and queue 4; if, within the preset time length, the IPv4, IPv6 and POS packets acquired by forwarding core 2 from queue 3 are counted as Q3_ipv4, Q3_ipv6 and Q3_pos respectively, and those acquired from queue 4 as Q4_ipv4, Q4_ipv6 and Q4_pos respectively, then the consumption value of forwarding core 2 within the preset time length is (Q3_ipv4 + Q4_ipv4) * Δt_ipv4 + (Q3_ipv6 + Q4_ipv6) * Δt_ipv6 + (Q3_pos + Q4_pos) * Δt_pos. By analogy, the processor consumption values E1, E2, …, En of all forwarding cores within the preset time length can be obtained. If packet accumulation occurs in queue 1, that is, queue 1 is the first cache queue, then, since queue 1 is allocated to forwarding core 1 for processing, forwarding core 1 is the first forwarding core, and the processing consumption value of the first forwarding core occupied by the first cache queue is Q1_ipv4 * Δt_ipv4 + Q1_ipv6 * Δt_ipv6 + Q1_pos * Δt_pos.
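The per-core consumption formula above can be sketched as follows; the struct and function names are assumptions, and only the three packet types from the example are modelled.

```c
/* Illustrative sketch of the consumption-value formula:
 * E(core) = sum over this core's queues of
 *           count_ipv4 * dt_ipv4 + count_ipv6 * dt_ipv6 + count_pos * dt_pos. */
#include <stddef.h>

struct queue_counters {        /* packets drained from one queue within the window */
    unsigned long ipv4;
    unsigned long ipv6;
    unsigned long pos;
};

struct packet_costs {          /* pre-calibrated per-packet core time, in seconds */
    double dt_ipv4;
    double dt_ipv6;
    double dt_pos;
};

/* Consumption imposed by one queue; applied to the backlogged queue alone,
 * this also yields the "processing consumption value of the first forwarding
 * core occupied by the first cache queue". */
static double queue_consumption(const struct queue_counters *c,
                                const struct packet_costs *k)
{
    return c->ipv4 * k->dt_ipv4 + c->ipv6 * k->dt_ipv6 + c->pos * k->dt_pos;
}

/* Consumption of one forwarding core: sum over the queues it currently owns. */
static double core_consumption(const struct queue_counters *queues,
                               const unsigned *owned, size_t num_owned,
                               const struct packet_costs *k)
{
    double e = 0.0;
    for (size_t i = 0; i < num_owned; i++)
        e += queue_consumption(&queues[owned[i]], k);
    return e;
}
```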
How does the control core obtain the packet data that each of the plurality of forwarding cores acquires from its corresponding cache queue within the preset time length? As an optional implementation, the state of a global variable switch may be changed: the control core periodically changes the state of the global variable switch so that it is in a first state or a second state; in the first state, each of the plurality of forwarding cores respectively counts the packet data acquired from its corresponding cache queue within the preset time length; in the second state, the control core acquires the packet data fed back by each of the plurality of forwarding cores. For example, the control core periodically changes the state of the global variable switch to place it in an on state or an off state, and each forwarding core determines according to the state of the switch whether to count: when the global variable switch is on, each forwarding core counts the packet data acquired from its corresponding cache queue within the preset time length; when the global variable switch is off, each forwarding core stops counting, and the packet data counted by each forwarding core is fed back to the control core. The control core may turn the global variable switch on before the previous polling task finishes and turn it off after the current round of polling starts. Alternatively, the statistics may be performed while the global variable switch is off and stopped while it is on.
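A possible reading of this global-variable-switch mechanism is sketched below using C11 atomics; the patent does not specify the synchronisation primitive, and the per-core counter here is simplified to a single total, whereas the description keeps counts per queue and per packet type.

```c
/* Illustrative sketch of the global-variable switch. When the flag is set,
 * each forwarding core counts the packets it drains; when the control core
 * clears it, counting stops and the accumulated counters can be read back.
 * The atomics and the single per-core counter are assumptions. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define NUM_CORES 8

static atomic_bool stats_enabled;              /* the "global variable switch" */
static _Atomic uint64_t pkt_count[NUM_CORES];  /* per-forwarding-core packet counters */

/* Forwarding-core side: called once per packet drained from its queues. */
static void count_packet(unsigned core)
{
    if (atomic_load_explicit(&stats_enabled, memory_order_relaxed))
        atomic_fetch_add_explicit(&pkt_count[core], 1, memory_order_relaxed);
}

/* Control-core side: open the measurement window before one polling round... */
static void start_measurement(void)
{
    for (unsigned c = 0; c < NUM_CORES; c++)
        atomic_store_explicit(&pkt_count[c], 0, memory_order_relaxed);
    atomic_store_explicit(&stats_enabled, true, memory_order_release);
}

/* ...and close it and harvest one core's counter before the next round. */
static uint64_t stop_and_read(unsigned core)
{
    atomic_store_explicit(&stats_enabled, false, memory_order_release);
    return atomic_load_explicit(&pkt_count[core], memory_order_acquire);
}
```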
The control core may obtain the consumption value of each of the plurality of forwarding cores within the preset time length before the step in which it periodically polls all the cache queues to check whether messages are accumulated. That is, the control core periodically obtains the consumption value of each of the plurality of forwarding cores within the preset time length; after obtaining the consumption values, it checks whether there is a queue with message accumulation. If not, it waits for the next cycle, then again obtains the consumption value of each forwarding core within the preset time length and again checks whether there is a queue with message accumulation; if so, it determines, according to the processor resources occupied by the accumulated queue and the overall processor consumption of each forwarding core, whether the accumulated queue needs to be rescheduled to a different core. Compared with a scheme that first checks whether a queue has message accumulation and only obtains the consumption value of each forwarding core once such a queue is found, this reduces the influence of timeliness on the determination of the second forwarding core and avoids the situation where the finally determined second forwarding core differs because the data is stale.
Alternatively, the control core may obtain the consumption value of each of the plurality of forwarding cores within the preset time length after the step in which it periodically polls all the cache queues to check whether messages are accumulated. That is, only when it is determined that a first cache queue with message accumulation exists among the plurality of cache queues does the control core obtain the consumption value of each of the plurality of forwarding cores within the preset time length; if no queue with message accumulation appears, the consumption values are not obtained. Compared with a scheme that first obtains the consumption value of each forwarding core and then checks whether a queue with accumulated messages exists, this reduces the workload and avoids the situation where the consumption values are obtained although no queue accumulation exists.
It should be noted that the preset time is preset, and different times can be set according to different forwarding tasks, and the preset time can be flexibly set.
Step S102: and the control core deletes the configuration relationship between the first cache queue and the first forwarding core, establishes the configuration relationship between the first cache queue and the second forwarding core, and enables the second forwarding core to process the message in the first cache queue.
After determining a second forwarding core with processing capability from each forwarding core except the first forwarding core in the plurality of forwarding cores, the control core deletes the configuration relationship between the first cache queue and the first forwarding core, establishes the configuration relationship between the first cache queue and the second forwarding core, and enables the second forwarding core to process the message in the first cache queue. The dynamic queue core-division scheduling mechanism not only can quickly identify the network burst flow and carry out corresponding processing, but also can reduce the risk of overload of the processing capacity of a single forwarding core and avoid the problem of unnecessary data stream packet loss.
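Step S102 itself can be sketched as a single update of the assumed queue-to-core mapping table from the earlier sketch; in a real device the change would also have to be synchronised with the forwarding core that is currently draining the queue.

```c
/* Illustrative sketch of step S102: re-home the backlogged queue by deleting
 * its mapping to the first forwarding core and establishing a mapping to the
 * second forwarding core. queue_owner[] is the assumed mapping table from
 * the earlier sketch. */
#include <stdint.h>

static void reschedule_queue(uint8_t queue_owner[], unsigned queue,
                             unsigned first_core, unsigned second_core)
{
    if (queue_owner[queue] != first_core)
        return;                        /* configuration changed in the meantime */
    queue_owner[queue] = second_core;  /* old relation removed, new one established */
}
```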
It should be noted that, in the present application, "queue" and "buffer queue" may be interchanged with each other.
Second embodiment
An embodiment of the present application provides a network device including a multi-core processor, where the multi-core processor includes a control core and a plurality of forwarding cores, the network device is configured with a plurality of cache queues, each of the plurality of forwarding cores corresponds to at least one cache queue, one cache queue corresponds to only one forwarding core, and each of the plurality of forwarding cores is configured to process the messages in its corresponding at least one cache queue. The architecture of the multi-core processor can be seen in fig. 1. It should be noted that the control core may be designated from among the plurality of cores arbitrarily; for example, a certain core is designated as the control core and all the remaining cores serve as forwarding cores.
The control core is configured to determine, when it is determined that a first cache queue in which messages are accumulated exists among the plurality of cache queues, a second forwarding core with processing capability from among the forwarding cores other than the first forwarding core corresponding to the first cache queue; the control core is further configured to delete the configuration relationship between the first cache queue and the first forwarding core and establish a configuration relationship between the first cache queue and the second forwarding core, so that the second forwarding core processes the packets in the first cache queue.
Optionally, the control core is further configured to subtract the consumption value within a preset time period of each forwarding core other than the first forwarding core from the consumption value of the first forwarding core within the preset time period, respectively, to obtain a plurality of subtraction results; the control core is further configured to select the largest subtraction result from the plurality of subtraction results as a target subtraction result; the control core is further configured to determine that the target subtraction result is greater than the processing consumption value of the first forwarding core occupied by the first cache queue; and the control core is further configured to take the forwarding core corresponding to the target subtraction result as the second forwarding core.
Optionally, the control core is further configured to obtain a consumption value of each forwarding core in the multiple forwarding cores within the preset time length; the control core is further configured to determine that a first cache queue in which packets are accumulated exists in the plurality of cache queues when the consumption value of each forwarding core is obtained.
Optionally, the control core is further configured to obtain the consumption value of each of the plurality of forwarding cores within the preset time length; and the control core is further configured to subtract the consumption value of each forwarding core other than the first forwarding core from the consumption value of the first forwarding core, respectively, to obtain the plurality of subtraction results.
Optionally, the control core is further configured to obtain packet data, which is obtained from a corresponding cache queue within the preset time and counted by each forwarding core of the multiple forwarding cores; the control core is further configured to determine a consumption value of each forwarding core based on the packet data.
Optionally, the control core is further configured to periodically change a state of the global variable switch to be in a first state or a second state, where in the first state, each forwarding core of the multiple forwarding cores separately counts packet data acquired from a corresponding cache queue within the preset time duration; and in the second state, the control core is further configured to obtain the packet data fed back by each of the forwarding cores.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other.
Third embodiment
The embodiment of the present application further provides a queue scheduling apparatus 100 applied to a network device including a multi-core processor, as shown in fig. 4. The multi-core processor includes a plurality of forwarding cores, the network device is configured with a plurality of cache queues, each of the plurality of forwarding cores corresponds to at least one cache queue, and one cache queue corresponds to only one forwarding core. The queue scheduling apparatus 100 includes: a determining module 110 and a configuration module 120.
A determining module 110, configured to determine, when it is determined that a first cache queue in which messages are accumulated exists among the plurality of cache queues, a second forwarding core with processing capability from among the forwarding cores other than the first forwarding core corresponding to the first cache queue. Optionally, the determining module 110 is further configured to subtract, by the control core, the consumption value within a preset time period of each forwarding core other than the first forwarding core from the consumption value of the first forwarding core within the preset time period, respectively, to obtain a plurality of subtraction results; the control core selects the largest subtraction result from the plurality of subtraction results as a target subtraction result; the control core determines that the target subtraction result is larger than the processing consumption value of the first forwarding core occupied by the first cache queue; and the control core takes the forwarding core corresponding to the target subtraction result as the second forwarding core. Optionally, the determining module 110 is further configured to obtain, by the control core, the consumption value of each of the plurality of forwarding cores within the preset time length; and the control core subtracts the consumption value of each forwarding core other than the first forwarding core from the consumption value of the first forwarding core corresponding to the first cache queue, respectively, to obtain the plurality of subtraction results. Optionally, the determining module 110 is further configured to obtain, by the control core, the packet data that each of the plurality of forwarding cores counted as acquired from its corresponding cache queue within the preset time length; and the control core determines the consumption value of each forwarding core based on the packet data. Optionally, the determining module 110 is further configured to periodically change, by the control core, the state of a global variable switch so that the global variable switch is in a first state or a second state, where, in the first state, each of the plurality of forwarding cores respectively counts the packet data acquired from its corresponding cache queue within the preset time length; and in the second state, the control core acquires the packet data fed back by each of the plurality of forwarding cores.
A configuration module 120, configured to delete the configuration relationship between the first cache queue and the first forwarding core, establish the configuration relationship between the first cache queue and the second forwarding core, and enable the second forwarding core to process the packet in the first cache queue.
Optionally, the queue scheduling apparatus 100 further includes: the device comprises an acquisition module and a second determination module.
An obtaining module, configured to obtain, by the control core, the consumption value of each of the plurality of forwarding cores within the preset time length. Optionally, the obtaining module is further configured to periodically change, by the control core, the state of a global variable switch so that the global variable switch is in a first state or a second state, where, in the first state, each of the plurality of forwarding cores respectively counts the packet data acquired from its corresponding cache queue within the preset time length; and in the second state, the control core acquires the packet data fed back by each of the plurality of forwarding cores.
A second determining module, configured to determine that a first cache queue in which packets are accumulated exists in the multiple cache queues when the control core obtains the consumption value of each forwarding core.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other.
The implementation principle and the technical effects of the queue scheduling apparatus 100 provided by the embodiment of the present invention are the same as those of the foregoing method embodiments. For the sake of brevity, for any part of the apparatus embodiment not mentioned here, reference may be made to the corresponding contents in the foregoing method embodiments.
Fourth embodiment
The present application further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program performs the steps of the method described in the second embodiment. For specific implementation, reference may be made to the method embodiment, which is not described herein again.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a notebook computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (8)
1. A queue scheduling method, characterized in that the method is applied to a network device, wherein the network device comprises a multi-core processor, the multi-core processor comprises a control core and a plurality of forwarding cores, the network device is configured with a plurality of cache queues, each forwarding core in the plurality of forwarding cores corresponds to at least one cache queue, and one cache queue corresponds to only one forwarding core; the method comprises the following steps:
when it is determined that a first cache queue in which messages are accumulated exists in the plurality of cache queues, the control core determines a second forwarding core with processing capability from the forwarding cores, except the first forwarding core corresponding to the first cache queue, in the plurality of forwarding cores;
the control core deletes the configuration relationship between the first cache queue and the first forwarding core, establishes the configuration relationship between the first cache queue and the second forwarding core, and enables the second forwarding core to process the message in the first cache queue;
wherein the determining, by the control core, a second forwarding core with processing capability from among the forwarding cores except the first forwarding core corresponding to the first cache queue includes:
the control core subtracts the consumption value of the first forwarding core within a preset time length from the consumption value of each forwarding core, except the first forwarding core, in the plurality of forwarding cores within the preset time length respectively to obtain a plurality of subtraction results;
the control core selects the largest subtraction result from the plurality of subtraction results as a target subtraction result;
the control core determines that the target subtraction result is greater than the processing consumption value that the first cache queue occupies on the first forwarding core;
the control core takes the forwarding core corresponding to the target subtraction result as the second forwarding core;
the consumption value of each forwarding core is determined according to the message data acquired by the forwarding core from its corresponding cache queue within a preset time length, the message data comprises the message type of each message and the message quantity corresponding to each message type, the consumption value of the forwarding core is equal to the sum of the consumption values of the message types, and the consumption value of each message type is equal to the product of the message quantity of the message type and the relative resource consumption, on a single forwarding core, of a single message of the message type.
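Purely as an illustrative sketch of the selection rule in claim 1, not an authoritative implementation: the per-type cost weights and all identifiers below are assumptions, and the subtraction is read here as the first forwarding core's consumption value minus each candidate's, the direction in which the largest result identifies the most lightly loaded candidate and the headroom check against the first queue's processing consumption is meaningful.

```c
/* Illustration only: compute per-core consumption values and select the second
 * forwarding core for the backlogged queue. Weights and names are assumed. */
#include <stdint.h>

#define MAX_MSG_TYPES 8

/* Relative resource consumption of one packet of each type on a single
 * forwarding core (hypothetical weights). */
static const uint32_t per_pkt_cost[MAX_MSG_TYPES] = { 1, 2, 4, 1, 3, 2, 1, 5 };

/* Consumption value of one forwarding core over the preset window:
 * sum over message types of (packet count of that type) x (per-packet cost). */
uint64_t consumption_value(const uint64_t pkt_count[MAX_MSG_TYPES])
{
    uint64_t v = 0;
    for (int t = 0; t < MAX_MSG_TYPES; t++)
        v += pkt_count[t] * per_pkt_cost[t];
    return v;
}

/* Pick the second forwarding core: the candidate with the largest difference
 * (first core's consumption minus its own), provided that headroom exceeds the
 * processing consumption the first queue imposes on the first core.
 * Returns -1 if no candidate qualifies. */
int pick_second_core(const uint64_t consumption[], int n_cores,
                     int first_core, uint64_t first_queue_cost)
{
    int best = -1;
    uint64_t best_diff = 0;
    for (int core = 0; core < n_cores; core++) {
        if (core == first_core || consumption[first_core] <= consumption[core])
            continue;
        uint64_t diff = consumption[first_core] - consumption[core];
        if (diff > best_diff) {
            best_diff = diff;
            best = core;
        }
    }
    return (best >= 0 && best_diff > first_queue_cost) ? best : -1;
}
```

With this reading, after the first cache queue is remapped, the second forwarding core carries at most the load the first forwarding core carried before the move, which is why the comparison against the first queue's processing consumption acts as the "has processing capability" test.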
2. The method according to claim 1, wherein before the step of subtracting, by the control core, the consumption value of the first forwarding core within a preset time period from the consumption value of each forwarding core, except the first forwarding core, in the plurality of forwarding cores within the preset time period, respectively, to obtain a plurality of subtraction results, the method further comprises:
the control core acquires a consumption value of each forwarding core in the plurality of forwarding cores within the preset time length;
and when the control core has acquired the consumption value of each forwarding core, determining that a first cache queue in which messages are accumulated exists in the plurality of cache queues.
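Tying claims 1 and 2 together, a hypothetical control-core cycle could look like the following sketch; collect_consumption_values, queue_depth, core_of_queue, queue_cost_on and remap_queue are assumed helpers declared only so the fragment compiles, and BACKLOG_THRESHOLD is an arbitrary illustrative constant.

```c
/* Hypothetical control-core cycle (illustration only): gather per-core
 * consumption values for the preset window, detect a backlogged queue,
 * then move it to the selected second forwarding core. */
#include <stdint.h>

#define MAX_FWD_CORES     16
#define BACKLOG_THRESHOLD 1024   /* arbitrary illustrative queue depth */

/* Assumed helpers, not defined in the application text. */
void collect_consumption_values(uint64_t out[], int n_cores);
uint64_t queue_depth(int queue_id);
int core_of_queue(int queue_id);
uint64_t queue_cost_on(int core_id, int queue_id);
void remap_queue(int queue_id, int from_core, int to_core);
int pick_second_core(const uint64_t consumption[], int n_cores,
                     int first_core, uint64_t first_queue_cost);

void control_core_tick(int n_cores, int n_queues)
{
    uint64_t consumption[MAX_FWD_CORES];
    collect_consumption_values(consumption, n_cores);

    for (int q = 0; q < n_queues; q++) {
        if (queue_depth(q) < BACKLOG_THRESHOLD)
            continue;                              /* no message accumulation */
        int first_core = core_of_queue(q);
        int second = pick_second_core(consumption, n_cores, first_core,
                                      queue_cost_on(first_core, q));
        if (second >= 0)
            remap_queue(q, first_core, second);    /* delete old mapping, add new one */
    }
}
```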
3. The method according to claim 1, wherein the subtracting, by the control core, the consumption value of the first forwarding core within a preset time period from the consumption value of each forwarding core, except the first forwarding core, in the plurality of forwarding cores within the preset time period, respectively, to obtain a plurality of subtraction results comprises:
the control core acquires a consumption value of each forwarding core in the plurality of forwarding cores within the preset time length;
and the control core subtracts the consumption value of the first forwarding core from the consumption values of the forwarding cores except the first forwarding core to obtain a plurality of subtraction results.
4. The method according to claim 2 or 3, wherein the obtaining, by the control core, the consumption value of each forwarding core in the plurality of forwarding cores within the preset time period comprises:
the control core acquires message data which are respectively counted by each forwarding core in the plurality of forwarding cores and acquired from the corresponding cache queue within the preset time length;
the control core determines a consumption value for each of the forwarding cores based on the packet data.
5. The method according to claim 4, wherein the acquiring, by the control core, the packet data that each forwarding core of the plurality of forwarding cores separately counts and obtains from its corresponding cache queue within the preset time period comprises:
the control core periodically changes the state of the global variable switch to enable the global variable switch to be in a first state or a second state, wherein in the first state, each forwarding core in the multiple forwarding cores respectively counts message data acquired from a corresponding cache queue within the preset time length; and in the second state, the control core acquires the message data fed back by each forwarding core in the plurality of forwarding cores.
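As a complementary sketch of the forwarding-core side implied by this claim (again with hypothetical names: classify, dequeue_from_own_queues, process_packet), each forwarding core bumps its own per-type packet counters only while the global variable switch is in the first state:

```c
/* Forwarding-core side (illustration only): count dequeued packets per message
 * type while the global variable switch is in the first (counting) state. */
#include <stdatomic.h>
#include <stdint.h>
#include <stddef.h>

#define MAX_MSG_TYPES 8

extern _Atomic int g_stats_state;                 /* 1 = first (counting) state */
extern uint64_t fwd_core_stats[][MAX_MSG_TYPES];  /* [core][message type]       */

/* Assumed helpers from the rest of the forwarding path. */
int classify(const void *pkt);                    /* message type index          */
void *dequeue_from_own_queues(int core_id);       /* next packet or NULL         */
void process_packet(void *pkt);                   /* normal forwarding work      */

void forwarding_core_loop(int core_id)
{
    for (;;) {
        void *pkt = dequeue_from_own_queues(core_id);
        if (pkt == NULL)
            continue;
        if (atomic_load(&g_stats_state) == 1)     /* first state: keep counting  */
            fwd_core_stats[core_id][classify(pkt)]++;
        process_packet(pkt);                      /* normal forwarding path      */
    }
}
```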
6. A network device, characterized in that the network device comprises a multi-core processor, wherein the multi-core processor comprises a control core and a plurality of forwarding cores, the network device is configured with a plurality of cache queues, each forwarding core in the plurality of forwarding cores corresponds to at least one cache queue, and one cache queue corresponds to only one forwarding core;
the control core is configured to determine, when it is determined that a first cache queue in which messages are accumulated exists in the multiple cache queues, a second forwarding core having processing capability from each of forwarding cores, except for a first forwarding core corresponding to the first cache queue, in the multiple forwarding cores;
the control core is further configured to delete the configuration relationship between the first cache queue and the first forwarding core, establish the configuration relationship between the first cache queue and the second forwarding core, and enable the second forwarding core to process the packet in the first cache queue;
each forwarding core in the multiple forwarding cores is used for processing the message in the corresponding at least one cache queue;
the control core is further configured to:
subtracting the consumption value of the first forwarding core within a preset time length from the consumption value of each forwarding core, except the first forwarding core, in the plurality of forwarding cores within the preset time length to obtain a plurality of subtraction results;
selecting a largest subtraction result from the plurality of subtraction results as a target subtraction result;
determining that the target subtraction result is greater than the processing consumption value that the first cache queue occupies on the first forwarding core;
taking the forwarding core corresponding to the target subtraction result as the second forwarding core;
the consumption value of each forwarding core is determined according to the message data acquired by the forwarding core from its corresponding cache queue within a preset time length, the message data comprises the message type of each message and the message quantity corresponding to each message type, the consumption value of the forwarding core is equal to the sum of the consumption values of the message types, and the consumption value of each message type is equal to the product of the message quantity of the message type and the relative resource consumption, on a single forwarding core, of a single message of the message type.
7. A queue scheduling device, applied to a network device comprising a multi-core processor, wherein the multi-core processor comprises a control core and a plurality of forwarding cores, the network device is configured with a plurality of cache queues, each forwarding core in the plurality of forwarding cores corresponds to at least one cache queue, and one cache queue corresponds to only one forwarding core; the device comprises:
a determining module, configured to determine, when it is determined that a first cache queue in which messages are accumulated exists in the multiple cache queues, a second forwarding core with processing capability from each forwarding core, except for a first forwarding core corresponding to the first cache queue, in the multiple forwarding cores;
a configuration module, configured to delete the configuration relationship between the first cache queue and the first forwarding core, establish the configuration relationship between the first cache queue and the second forwarding core, and enable the second forwarding core to process the packet in the first cache queue;
the determining module is further configured to subtract, by the control core, the consumption value of the first forwarding core within a preset time period from the consumption value of each forwarding core, except the first forwarding core, in the plurality of forwarding cores within the preset time period, so as to obtain a plurality of subtraction results; the control core selects the largest subtraction result from the plurality of subtraction results as a target subtraction result; the control core determines that the target subtraction result is greater than the processing consumption value that the first cache queue occupies on the first forwarding core; and the control core takes the forwarding core corresponding to the target subtraction result as the second forwarding core;
the consumption value of each forwarding core is determined according to the message data acquired by the forwarding core from its corresponding cache queue within a preset time length, the message data comprises the message type of each message and the message quantity corresponding to each message type, the consumption value of the forwarding core is equal to the sum of the consumption values of the message types, and the consumption value of each message type is equal to the product of the message quantity of the message type and the relative resource consumption, on a single forwarding core, of a single message of the message type.
8. A storage medium having stored thereon a computer program which, when executed by a processor, performs the method according to any one of claims 1-5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811388030.9A CN109450816B (en) | 2018-11-19 | 2018-11-19 | Queue scheduling method, device, network equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811388030.9A CN109450816B (en) | 2018-11-19 | 2018-11-19 | Queue scheduling method, device, network equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109450816A CN109450816A (en) | 2019-03-08 |
CN109450816B true CN109450816B (en) | 2022-08-12 |
Family
ID=65552804
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811388030.9A Active CN109450816B (en) | 2018-11-19 | 2018-11-19 | Queue scheduling method, device, network equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109450816B (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113328960B (en) * | 2020-02-28 | 2023-11-17 | 华为技术有限公司 | Queue cache management method, device, storage medium and equipment |
CN112073332A (en) * | 2020-08-10 | 2020-12-11 | 烽火通信科技股份有限公司 | Message distribution method, multi-core processor and readable storage medium |
CN112068965A (en) * | 2020-09-23 | 2020-12-11 | Oppo广东移动通信有限公司 | Data processing method and device, electronic equipment and readable storage medium |
CN113176940A (en) * | 2021-03-29 | 2021-07-27 | 新华三信息安全技术有限公司 | Data flow splitting method and device and network equipment |
CN113992589B (en) * | 2021-10-21 | 2023-05-26 | 绿盟科技集团股份有限公司 | Message distribution method and device and electronic equipment |
CN114024915B (en) * | 2021-10-28 | 2023-06-16 | 北京锐安科技有限公司 | Traffic migration method, device and system, electronic equipment and storage medium |
CN116185649A (en) * | 2021-11-26 | 2023-05-30 | 中兴通讯股份有限公司 | Storage control method, storage controller, storage chip, network card, and readable medium |
CN118301074A (en) * | 2022-12-26 | 2024-07-05 | 锐捷网络股份有限公司 | Message processing method and device and electronic equipment |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101616083B (en) * | 2009-08-06 | 2012-01-04 | 杭州华三通信技术有限公司 | Message forwarding method and device |
CN103338157B (en) * | 2013-07-01 | 2016-04-06 | 杭州华三通信技术有限公司 | A kind of internuclear data message caching method of multiple nucleus system and equipment |
CN105159779B (en) * | 2015-08-17 | 2020-03-13 | 深圳中兴网信科技有限公司 | Method and system for improving data processing performance of multi-core CPU |
CN105634958B (en) * | 2015-12-24 | 2019-05-31 | 东软集团股份有限公司 | Message forwarding method and device based on multiple nucleus system |
US10681131B2 (en) * | 2016-08-29 | 2020-06-09 | Vmware, Inc. | Source network address translation detection and dynamic tunnel creation |
CN106713185B (en) * | 2016-12-06 | 2019-09-13 | 瑞斯康达科技发展股份有限公司 | A kind of load-balancing method and device of multi-core CPU |
US20180285151A1 (en) * | 2017-03-31 | 2018-10-04 | Intel Corporation | Dynamic load balancing in network interface cards for optimal system level performance |
CN108259369B (en) * | 2018-01-26 | 2022-04-05 | 迈普通信技术股份有限公司 | Method and device for forwarding data message |
CN108777662B (en) * | 2018-06-20 | 2021-05-18 | 迈普通信技术股份有限公司 | Table item management method and device |
- 2018-11-19 CN CN201811388030.9A patent/CN109450816B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN109450816A (en) | 2019-03-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109450816B (en) | Queue scheduling method, device, network equipment and storage medium | |
US10581960B2 (en) | Performing context-rich attribute-based load balancing on a host | |
CA2940976C (en) | Dynamic allocation of network bandwidth | |
US9392050B2 (en) | Automatic configuration of external services based upon network activity | |
CN105025080B (en) | A kind of overload protection method and server of distributed system | |
US8457142B1 (en) | Applying backpressure to a subset of nodes in a deficit weighted round robin scheduler | |
EP2670085B1 (en) | System for performing Data Cut-Through | |
JP2022532731A (en) | Avoiding congestion in slice-based networks | |
US8208406B1 (en) | Packet forwarding using feedback controlled weighted queues dynamically adjusted based on processor utilization | |
CN111857992B (en) | Method and device for allocating linear resources in Radosgw module | |
US20210014163A1 (en) | Per path and per link traffic accounting | |
CN109922003B (en) | Data sending method, system and related components | |
CN114079638A (en) | Data transmission method, device and storage medium of multi-protocol hybrid network | |
CN111404839B (en) | Message processing method and device | |
US20230327967A1 (en) | Generating network flow profiles for computing entities | |
JP2017011423A (en) | System and method for data processing | |
Breitgand et al. | On cost-aware monitoring for self-adaptive load sharing | |
CN110958184B (en) | Bandwidth adjusting method and device | |
CN109086128B (en) | Task scheduling method and device | |
CN109379163A (en) | A kind of message forwarding rate control method and device | |
CN108536535A (en) | A kind of dns server and its thread control method and device | |
US11003506B2 (en) | Technique for determining a load of an application | |
US10033616B1 (en) | State synchronization for global control in a distributed security system | |
JPWO2009098819A1 (en) | Communications system | |
JP6829156B2 (en) | Network load balancer and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||