Disclosure of Invention
In view of the above deficiencies of the prior art, the present application is directed to a technique for cluster-wide sharing of a traffic speed limit threshold, so as to achieve efficient traffic management across the nodes, queues and CPU cores of a cluster. The technical scheme of the application dynamically allocates rate-limiting resources in a distributed environment, ensuring that every node and every queue operates within a reasonable rate-limiting range. Meanwhile, the technique achieves fair distribution of traffic within the cluster and prevents any single node or queue from occupying excessive traffic resources.
The first aspect of the present application provides a speed limiting method for a cluster shared traffic threshold, which includes:
S1, a cluster manager delivers a cluster speed limit threshold for a specified data stream to each node in the network cluster, and joins the forwarding process of each node to a multicast group so that traffic information can be shared among the nodes, whereby the forwarding process on each node obtains the cluster speed limit threshold;
S2, a node controller counts and marks, every second, the traffic of the specified data stream flowing into the network card, sends the unique node identifier together with the per-second network card traffic to the other nodes in the cluster in multicast mode, and calculates the node speed limit quota threshold of the present node from the multicast messages received from the cluster;
S3, a bandwidth allocation scheduler allocates a core speed limit quota threshold for each node according to the ratio of the bandwidth of the queue traffic processed by each service core to the total traffic of the node within the node forwarding process;
S4, a token bucket manager, using a token bucket algorithm, periodically and dynamically adjusts according to real-time data on the ratio of the traffic processed by each queue in the node to the total node bandwidth, so as to obtain each queue's share of the cluster speed-limit bandwidth, and rate-limits the traffic passing through the queue.
Further, when a node receives a multicast message sent by another node, it compares the unique node identifier in the multicast message with its own unique node identifier; if they are equal, the multicast message is ignored, and if they are unequal, the multicast message is recorded and the node speed limit quota threshold of the present node is calculated.
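As a minimal sketch of the message exchange and comparison described above (the on-wire format, a 16-byte node identifier followed by a 64-bit traffic counter, is an assumption; the application does not specify one):

```python
import struct

# Assumed wire format: 16-byte unique node identifier + unsigned 64-bit traffic (bit/s)
MSG_FMT = "!16sQ"

def pack_traffic_msg(node_id: bytes, traffic_bps: int) -> bytes:
    """Build the per-second multicast payload for this node."""
    return struct.pack(MSG_FMT, node_id.ljust(16, b"\x00"), traffic_bps)

def handle_traffic_msg(msg: bytes, own_id: bytes, peers: dict) -> None:
    """Ignore our own echoed message; record traffic reported by other nodes."""
    node_id, traffic = struct.unpack(MSG_FMT, msg)
    if node_id == own_id.ljust(16, b"\x00"):
        return                    # identifiers equal: our own message, ignore it
    peers[node_id] = traffic      # identifiers unequal: record for quota calculation
```

With the peer traffic recorded in `peers`, the node can then compute its quota from the traffic proportions.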
Further, the calculation formula of the node speed limit quota threshold is as follows:
speed_node(i) = speed_cluster * traffic_node(i) / Σ_{j=1..N} traffic_node(j)
wherein speed_cluster is the cluster speed limit threshold of the specified data stream, traffic_node(i) is the traffic of the specified data stream flowing into the network card of node i in that second, i is the node number with values 1 to N, and N is the number of cluster nodes.
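The node-level quota formula can be sketched in Python (a hedged illustration; the dictionary-based bookkeeping and the even-split fallback for an idle second are assumptions):

```python
def node_speed_limit(speed_cluster: float, traffic_by_node: dict, node_id) -> float:
    """speed_node(i) = speed_cluster * traffic_node(i) / sum over all nodes j of traffic_node(j)."""
    total = sum(traffic_by_node.values())
    if total == 0:
        # No measured traffic this second: fall back to an even split (assumption)
        return speed_cluster / len(traffic_by_node)
    return speed_cluster * traffic_by_node[node_id] / total
```

For example, with a cluster threshold of 1000 bit/s and two nodes carrying 300 and 100 bit/s, the busier node receives a quota of 750 bit/s.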
Further, in step S3, the calculation formula of the core speed limit quota threshold is as follows:
speed_node(i)_lcore(n) = speed_node(i) * traffic_node(i)_lcore(n) / Σ_{m=1..M} traffic_node(i)_lcore(m)
wherein traffic_node(i)_lcore(n) is the traffic of the specified data stream flowing into service core n of node i in that second, i is the node number with values 1 to N, N is the number of cluster nodes, n is the service core number with values 1 to M, and M is the number of service cores.
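The second allocation level, splitting a node's quota among its service cores, follows the same proportional pattern (again a sketch; the even-split fallback is an assumption):

```python
def core_speed_limit(speed_node: float, traffic_by_core: dict, core_id) -> float:
    """speed_node(i)_lcore(n) = speed_node(i) * traffic_node(i)_lcore(n)
    divided by the sum over all service cores m of traffic_node(i)_lcore(m)."""
    total = sum(traffic_by_core.values())
    if total == 0:
        # Idle node this second: split the node quota evenly (assumption)
        return speed_node / len(traffic_by_core)
    return speed_node * traffic_by_core[core_id] / total
```

Chaining the two levels, a node quota of 750 bit/s split over cores carrying 150 and 50 bit/s gives the busier core 562.5 bit/s.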
The second aspect of the present application proposes a speed limiting device for a cluster shared traffic threshold, the device being configured to implement the above method, wherein the device includes a cluster manager and a plurality of nodes located in a same network cluster;
the cluster manager is respectively connected with the plurality of nodes and is used for coordinating and managing the whole multi-node cluster, and setting, adjusting and issuing a cluster speed limit threshold according to the speed limit requirement;
a node controller is arranged on each of the plurality of nodes, serves as the main control thread of the node forwarding process, and is used for monitoring local bandwidth utilization and traffic;
a bandwidth allocation scheduler is arranged in each service core of the service process of each node, and is used for ensuring fair allocation and efficient utilization of bandwidth resources;
a token bucket manager is arranged on each queue-bound thread of each node, and is used for rate-limiting the traffic passing through the queues using a token bucket algorithm, so as to achieve the speed limit of the whole cluster.
Furthermore, the node controller counts the local traffic flowing into the network card every second and shares the local traffic data in multicast mode.
Further, sharing the local traffic data includes: the node controller sends the local traffic data to each node in the cluster in multicast mode; the node receives traffic data from the other nodes in the cluster and, after excluding its own traffic data, calculates the node speed limit quota threshold of the present node according to the traffic proportion.
Further, ensuring fair allocation and efficient utilization of bandwidth resources specifically includes: the bandwidth allocation scheduler determines the node speed limit quota threshold according to the proportion of the traffic bandwidth of each node in the cluster, and allocates the core speed limit quota threshold of each node according to the ratio of the bandwidth of the queue traffic processed by each service core to the total traffic of the node within the node forwarding process.
Further, the token bucket manager periodically and dynamically adjusts according to real-time data on the ratio of the traffic processed by each queue in the node to the total node bandwidth, so as to obtain each queue's share of the cluster speed-limit bandwidth.
In a third aspect the application provides an electronic device comprising a memory unit and a processor unit, the memory unit having stored thereon a computer program, the processor unit implementing the above method when executing the program.
In the prior art, cluster traffic rate limiting is applied per node only, and the flow rate is not dynamically regulated, so accurate speed limiting of the cluster as a whole is difficult to achieve. By monitoring the traffic bandwidth ratio in real time, the present application quickly and effectively adjusts the speed limit quotas dynamically, both among the different cores (threads) of the forwarding process on a node and among the different nodes of the same cluster, and thereby limits the token generation rate of the token bucket of each queue. The whole cluster is regarded as one logical token bucket, and the cluster speed limit is realized by means of an independent token bucket on each core of each node, without affecting traffic processing performance.
Detailed Description
In order to make the technical solutions of the embodiments of the present application better understood by those skilled in the art, the technical solutions of the present application will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. It should be understood that the description is only illustrative and is not intended to limit the scope of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, shall fall within the scope of the application.
In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
In the description of the present application, it should be noted that unless explicitly stated and limited otherwise, the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, are merely for convenience in describing the present application and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present application. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The terms "mounted," "connected," "coupled," and "connected" are to be construed broadly, and may, for example, be fixedly connected, detachably connected, or integrally connected, mechanically connected, electrically connected, directly connected, indirectly connected via an intermediate medium, or communicate between the two elements. The specific meaning of the above terms in the present application will be understood in specific cases by those of ordinary skill in the art.
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the application. Rather, they are merely examples of methods and systems that are consistent with aspects of the application as detailed in the accompanying claims.
The meaning of the technical terms cited in the technical scheme is first described below:
A cluster (Cluster) refers to an architecture in which multiple computers (also called nodes or servers) are connected together to collectively perform computing tasks.
A node (Node) is a server node in the cluster, i.e. each independent computer or server in the cluster. Each node is a separate entity.
QoS (Quality of Service) refers to a technology whereby a network, using various underlying techniques, provides better service capability for specified network communication; it is a network assurance mechanism used to address network delay and congestion.
The forwarding process is the main service process running on a cluster node, responsible for forwarding data packets from one network node to another; it is the host process in which the speed limiting method of the invention is implemented.
In a first aspect, the present application proposes a cluster shared traffic threshold speed limiting device based on a token bucket algorithm, as shown in fig. 1, where the device includes a cluster manager and a plurality of nodes located in the same network cluster;
the cluster manager is respectively connected with the plurality of nodes and is used for coordinating and managing the whole multi-node cluster, and setting, adjusting and issuing a cluster speed limit threshold according to the speed limit requirement;
a node controller is arranged on each of the plurality of nodes, serves as the main control thread of the node forwarding process, and is used for monitoring local bandwidth utilization and traffic;
a bandwidth allocation scheduler is arranged in each service core of the service process of each node, and is used for ensuring fair allocation and efficient utilization of bandwidth resources;
a token bucket manager is arranged on each queue-bound thread of each node, and is used for rate-limiting the traffic passing through the queues using a token bucket algorithm, so as to achieve the speed limit of the whole cluster;
the node controller counts the local traffic flowing into the network card every second and shares the local traffic data in multicast mode;
the node controller sends the local traffic data to each node in the cluster in multicast mode; the node receives traffic data from the other nodes in the cluster and, after excluding its own traffic data, calculates the node speed limit quota threshold of the present node according to the traffic proportion;
The bandwidth allocation scheduler determines a node speed limit quota threshold according to the traffic bandwidth condition ratio of nodes in a cluster, and allocates a core speed limit quota threshold of each node according to the bandwidth ratio of each service core processing queue traffic and the total traffic of the nodes in a node forwarding process;
the token bucket manager periodically and dynamically adjusts according to real-time data on the ratio of the traffic processed by each queue in the node to the total node bandwidth, so as to obtain each queue's share of the cluster speed-limit bandwidth;
In a second aspect, the present application proposes a cluster shared traffic threshold speed limiting method based on a token bucket algorithm, the method comprising the steps of:
S1, a cluster manager delivers a cluster speed limit threshold for a specified data stream to each node in the network cluster, and joins the forwarding process of each node to a multicast group so that traffic information can be shared among the nodes, whereby the forwarding process on each node obtains the cluster speed limit threshold;
In the above step, the cluster manager issues a speed limit threshold for a certain kind of traffic, such as UDP, TCP-SYN, TCP-ACK or ICMP traffic. Taking cluster-wide TCP-SYN speed limiting as an example, the traffic threshold for TCP-SYN identified traffic is speed_cluster (unit: bit/s), and the forwarding processes running on the N nodes of the cluster obtain the cluster TCP-SYN speed limit threshold speed_cluster, as shown in fig. 1.
In the above step, this is achieved by joining the forwarding process of each node to a multicast address, such as multicast address 224.0.1.100;
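Joining the forwarding process to such a multicast group can be sketched with a standard socket membership request (a hedged illustration; binding the membership to INADDR_ANY rather than a specific interface is an assumption):

```python
import socket

MCAST_GROUP = "224.0.1.100"  # cluster multicast address from the example above

def build_membership_request(group: str) -> bytes:
    """ip_mreq structure: multicast group address + local interface (INADDR_ANY)."""
    return socket.inet_aton(group) + socket.inet_aton("0.0.0.0")

def join_cluster_multicast(sock: socket.socket, group: str = MCAST_GROUP) -> None:
    """Subscribe a UDP socket to the cluster's multicast group."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    build_membership_request(group))
```

After joining, the socket receives the per-second traffic messages multicast by the other nodes in the cluster.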
S2, the node controller counts and marks, every second, the traffic of the specified data stream flowing into the network card, sends the unique node identifier together with the per-second network card traffic to the other nodes in the cluster in multicast mode, and calculates the node speed limit quota threshold of the present node from the multicast messages received from the cluster;
In the above step, a node speed limit quota threshold of the present node is calculated according to a multicast message received from the cluster, which specifically includes:
When a node receives a multicast message sent by another node, it compares the unique node identifier in the multicast message with its own unique node identifier; if they are equal, the multicast message is ignored, and if they are unequal, the multicast message is recorded and the node speed limit quota threshold of the present node is calculated by the following formula:
speed_node(i) = speed_cluster * traffic_node(i) / Σ_{j=1..N} traffic_node(j)
wherein speed_cluster is the cluster speed limit threshold of the specified data stream, traffic_node(i) is the traffic of the specified data stream flowing into the network card of node i in that second, i is the node number with values 1 to N, and N is the number of cluster nodes;
Continuing with the example above: the node forwarding process counts the TCP-SYN traffic flowing into the network card every second (by parsing the packets of the network card queues, a packet is TCP traffic if the transport layer protocol number in the IP header is 6, and is counted as TCP-SYN if the SYN bit of the TCP header Flag field is 1 and all other Flag bits are 0). The per-node counts are marked traffic_node1, traffic_node2, ..., traffic_nodeN. Every second, each node sends its unique node identifier ID_node(i) together with its per-second network card traffic traffic_node(i) by multicast to the other nodes of the cluster. When a node receives a multicast message, it compares the unique node identifier in the message with its own: if they are equal, the message was sent by the node itself and is ignored; otherwise it is recorded. The node speed limit quota threshold is then calculated from the proportion of the TCP-SYN traffic traffic_node(i) flowing into the node's network card in that second to the sum of the TCP-SYN traffic flowing into the network cards of all cluster nodes in that second (i is the node number, with values 1 to N, and N is the number of cluster nodes), that is, speed_node(i) = speed_cluster * traffic_node(i) / Σ_{j=1..N} traffic_node(j).
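The per-packet classification described above (transport layer protocol number 6, SYN bit set, all other TCP flag bits zero) can be sketched as follows; the raw-bytes layout assumes an IPv4 packet with no link-layer header, which is an assumption about how the queue hands packets to the thread:

```python
TCP_PROTOCOL = 6
SYN_ONLY = 0x02  # TCP flags byte with only the SYN bit set

def is_tcp_syn(packet: bytes) -> bool:
    """Return True for a pure TCP-SYN IPv4 packet, per the rules in the text."""
    ihl = (packet[0] & 0x0F) * 4       # IPv4 header length in bytes
    if packet[9] != TCP_PROTOCOL:      # transport layer protocol number field
        return False
    flags = packet[ihl + 13]           # TCP flags byte: offset 13 in the TCP header
    return flags == SYN_ONLY           # SYN = 1 and all other flag bits 0
```

A counting thread would add the packet length to its per-second traffic counter whenever this predicate holds.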
S3, the bandwidth allocation scheduler allocates a core speed limit quota threshold of each node according to the bandwidth ratio of each service core processing the queue flow and the total flow of the node in the node forwarding process;
the calculation formula of the core speed limit quota threshold is as follows:
speed_node(i)_lcore(n) = speed_node(i) * traffic_node(i)_lcore(n) / Σ_{m=1..M} traffic_node(i)_lcore(m)
wherein traffic_node(i)_lcore(n) is the traffic of the specified data stream flowing into service core n of node i in that second, i is the node number with values 1 to N, N is the number of cluster nodes, n is the service core number with values 1 to M, and M is the number of service cores;
Continuing with the example above, the speed limit quota of the TCP-SYN traffic flowing into each queue of each node's network card is obtained as follows. Each network card queue n (n is the queue number; taking the Intel 82599 network card as an example, the card has 16 queues and n ranges from 0 to 15) is assigned a dedicated service core thread lcore(n), bound to CPU(n) for exclusive processing. Every second, the lcore(n) thread counts the TCP-SYN traffic flowing into its bound queue n (by parsing the queue's packets: a transport layer protocol number of 6 in the IP header indicates TCP traffic, and a packet whose TCP header Flag field has the SYN bit set to 1 and all other Flag bits 0 is counted as TCP-SYN). The per-core counts are marked traffic_node1_lcore1, traffic_node1_lcore2, ..., traffic_node1_lcoreM, from which the traffic speed limit quota of a given service core of each node can be calculated as speed_node(i)_lcore(n) = speed_node(i) * traffic_node(i)_lcore(n) / Σ_{m=1..M} traffic_node(i)_lcore(m).
S4, the token bucket manager, using a token bucket algorithm, periodically and dynamically adjusts according to real-time data on the ratio of the traffic processed by each queue in the node to the total node bandwidth, so as to obtain each queue's share of the cluster speed-limit bandwidth, and rate-limits the traffic passing through the queues;
The token bucket is initialized with bucket depth size_lcore(n) = speed_node(i)_lcore(n), token generation rate rate_lcore(n) = speed_node(i)_lcore(n), and token count token_lcore(n) = speed_node(i)_lcore(n).
Continuing with the example above: every second, the lcore(n) thread creates or adjusts a token bucket for the TCP-SYN identified traffic, with bucket depth size_lcore(n) = speed_node(i)_lcore(n), token generation rate rate_lcore(n) = speed_node(i)_lcore(n), and initial token count token_lcore(n) = speed_node(i)_lcore(n). Following the token bucket algorithm, tokens are generated at rate_lcore(n) per second, and the token count never exceeds the bucket depth size_lcore(n). For each passing TCP-SYN packet of length length_pkt, if the token count in the bucket is not less than length_pkt, the packet passes and token = token - length_pkt; if length_pkt is greater than the token count, the packet is discarded. The lcore(n) thread thus rate-limits the traffic of its queue according to the quota dynamically adjusted every second, so that the TCP-SYN traffic configured for the cluster is limited to the cluster speed limit, as shown in fig. 3. Subtracting the packet length from the token count on each pass leaves the total length of packets that may still pass; token denotes the current, dynamically changing token count of the bucket, while token_lcore(n) denotes its initial value at token bucket initialization.
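The per-queue token bucket of step S4 can be sketched as a small class (a hedged illustration; the fractional-second refill helper and the explicit adjust method go beyond the strictly per-second description in the text):

```python
class TokenBucket:
    """Per-queue token bucket; depth, generation rate, and initial token count
    are all set to the core speed limit quota speed_node(i)_lcore(n)."""

    def __init__(self, quota: float):
        self.size = quota      # bucket depth size_lcore(n)
        self.rate = quota      # token generation rate rate_lcore(n), tokens per second
        self.tokens = quota    # current token count (the dynamic variable `token`)

    def refill(self, elapsed_s: float) -> None:
        # Tokens accumulate at `rate`; the count never exceeds the bucket depth.
        self.tokens = min(self.size, self.tokens + self.rate * elapsed_s)

    def adjust(self, new_quota: float) -> None:
        # Periodic dynamic adjustment when the per-core quota changes.
        self.size = self.rate = new_quota
        self.tokens = min(self.tokens, self.size)

    def try_consume(self, length_pkt: int) -> bool:
        # Pass the packet if enough tokens remain; otherwise drop it.
        if self.tokens >= length_pkt:
            self.tokens -= length_pkt
            return True
        return False
```

For example, a bucket with quota 1000 passes a 600-byte packet, drops the next one (only 400 tokens remain), and passes it again once half a second of refill has restored enough tokens.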
When a node's service is abnormal, or a node withdraws from or newly joins the cluster, the node controllers in the forwarding processes can still adjust dynamically through the multicast messages received within the period.
In summary, the present application adopts the flow control method of the token bucket algorithm, combines the communication and coordination mechanism inside the cluster, and through the data interaction between the cluster manager and the node controller, the present application can monitor the flow condition of each node and the queue in real time, and perform dynamic token generation and allocation according to the shared flow speed limit threshold inside the cluster. Meanwhile, the application can adaptively adjust the flow rate limiting strategy according to the states of the nodes and the queues in the cluster so as to adapt to different load conditions and demand changes.
In a third aspect, the present application is directed to an electronic device comprising a memory and one or more processors.
The memory has stored therein one or more application programs adapted to be executed by the one or more processors to implement the method described above.
An electronic device includes a processor and a memory. Wherein the processor is coupled to the memory, such as via a bus.
The structure of the electronic device is not limited to the embodiment of the present application.
The processor may be a CPU, general-purpose processor, DSP, ASIC, FPGA or other programmable logic device, transistor logic device, hardware component, or any combination thereof. Which may implement or perform the various exemplary logic blocks, modules and circuits described in connection with this disclosure. A processor may also be a combination that performs computing functions, e.g., including one or more microprocessors, a combination of a DSP and a microprocessor, and the like.
A bus may include a path that communicates information between the components. The bus may be a PCI bus or an EISA bus, etc. The buses may be divided into address buses, data buses, control buses, etc.
The memory may be, but is not limited to, ROM or other type of static storage device capable of storing static information and instructions, RAM or other type of dynamic storage device, EEPROM, CD-ROM or other optical disc storage (including compact disc, laser disc, digital versatile disc, Blu-ray disc, etc.), magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
In a fourth aspect, the present application proposes a computer readable storage medium having stored thereon a computer program which is loadable and executable by a processor to carry out the method described above.
While the applicant has described and illustrated the embodiments of the present application in detail with reference to the drawings, it should be understood by those skilled in the art that the above embodiments are only preferred embodiments of the present application, and the detailed description is only for the purpose of helping the reader to better understand the spirit of the present application, and not to limit the scope of the present application, but any improvements or modifications based on the spirit of the present application should fall within the scope of the present application.