CN112153138A - Traffic scheduling method and device, electronic equipment and storage medium

Traffic scheduling method and device, electronic equipment and storage medium

Info

Publication number
CN112153138A
CN112153138A
Authority
CN
China
Prior art keywords
server
traffic
servers
data
running state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011007857.8A
Other languages
Chinese (zh)
Inventor
Li Liang (李亮)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou DPTech Technologies Co Ltd
Original Assignee
Hangzhou DPTech Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou DPTech Technologies Co Ltd filed Critical Hangzhou DPTech Technologies Co Ltd
Priority to CN202011007857.8A priority Critical patent/CN112153138A/en
Publication of CN112153138A publication Critical patent/CN112153138A/en
Pending legal-status Critical Current

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00: Network arrangements or protocols for supporting network services or applications
    • H04L67/01: Protocols
    • H04L67/10: Protocols in which an application is distributed across nodes in the network
    • H04L67/1001: Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L67/1004: Server selection for load balancing
    • H04L67/1008: Server selection for load balancing based on parameters of servers, e.g. available memory or workload

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer And Data Communications (AREA)

Abstract

The application provides a traffic scheduling method and device, an electronic device, and a storage medium. The method includes: acquiring running state data of each server in the physical server cluster corresponding to a virtual service; determining the priority of each server according to the running state data; and, upon receiving traffic for the virtual service, preferentially scheduling the traffic to the relatively higher-priority servers in the physical server cluster. With this technical solution, traffic can be scheduled to the best-performing server, scheduling traffic to servers in a poor running state is avoided, and normal traffic transmission is protected from the excessive load that some servers would otherwise accumulate during scheduling.

Description

Traffic scheduling method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of network communication technologies, and in particular, to a method and an apparatus for traffic scheduling, an electronic device, and a storage medium.
Background
To provide better network service, traffic may be distributed to different servers through load balancing devices.
In the related art, the load balancing device generally selects servers using scheduling algorithms such as round-robin, least connections, or least traffic. However, these algorithms may schedule traffic to servers whose own performance is low, or drive the utilization of some servers too high, which in turn degrades the service those servers can provide and hurts the user experience.
Disclosure of Invention
In view of this, the present application provides a method, an apparatus, an electronic device, and a storage medium for traffic scheduling, so as to allocate traffic to a server with optimal performance and improve network service quality.
Specifically, the method is realized through the following technical scheme:
according to a first aspect of the present application, a method for traffic scheduling is provided, which is applied to a load balancing device, and includes:
acquiring running state data of each server in a physical server cluster corresponding to the virtual service;
determining the priority corresponding to each server according to the running state data;
upon receiving traffic for the virtual service, preferentially scheduling the traffic to a relatively higher priority server in the cluster of physical servers.
According to a second aspect of the present application, an apparatus for traffic scheduling is provided, which is applied to a load balancing device, and includes:
an acquisition unit, configured to acquire the running state data of each server in the physical server cluster corresponding to the virtual service;
a first determination unit, configured to determine the priority of each server according to the running state data;
a scheduling unit, configured to, upon receiving traffic for the virtual service, preferentially schedule the traffic to a relatively higher-priority server in the physical server cluster.
According to a third aspect of the present application, there is provided an electronic device comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method as described in the embodiments of the first aspect above by executing the executable instructions.
According to a fourth aspect of embodiments of the present application, there is provided a computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method as described in the embodiments of the first aspect above.
According to the technical solution provided by the application, the priority of each server is determined from the acquired server running state data, so that traffic can be scheduled to the best-performing server, scheduling traffic to servers in a poor running state is avoided, and normal traffic transmission is protected from excessive load on some servers during scheduling.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present application and together with the description, serve to explain the principles of the application.
Fig. 1 is a flow chart illustrating a traffic scheduling method according to an exemplary embodiment of the present application;
fig. 2 is a schematic diagram of a network architecture of a traffic scheduling system to which an embodiment of the present application is applied;
FIG. 3 is a multi-party interactive flow diagram illustrating a traffic scheduling method according to an exemplary embodiment of the present application;
FIG. 4 is a schematic diagram illustrating a traffic scheduling electronic device according to an exemplary embodiment of the present application;
fig. 5 is a block diagram illustrating a traffic scheduling apparatus according to an exemplary embodiment of the present application.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present application. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present application, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, such information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope of the present application. Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", or "in response to determining".
The following provides a detailed description of examples of the present specification.
Fig. 1 is a flowchart illustrating a traffic scheduling method according to an exemplary embodiment of the present application. As shown in fig. 1, the method applied to the load balancing device may include the following steps:
step 102: and acquiring the running state data of each server in the physical server cluster corresponding to the virtual service.
The running state data may include the performance data of each server, the data transmission status information obtained by monitoring the network link between the load balancing device and each server, or both.
In an embodiment, the performance data of a server may include the server's CPU utilization, memory utilization, and so on; in practice, any parameter value that can characterize server performance is applicable here, and this application does not limit it. The data transmission status information of the network link between the load balancing device and a server may be the network delay duration between the two.
The CPU and memory utilization of each server in the physical server cluster can be acquired by the load balancing device in real time via SNMP (Simple Network Management Protocol); the network delay duration between the load balancing device and each server can be obtained via ICMP (Internet Control Message Protocol).
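As a minimal sketch of this collection step, the probes below are injected stubs standing in for real SNMP GET operations and ICMP echo round-trips; the field names and addresses are illustrative assumptions, not part of the original disclosure:

```python
def collect_running_state(servers, snmp_probe, icmp_probe):
    """Gather per-server running state data for the load balancer.

    snmp_probe(addr) -> (cpu_usage, mem_usage)  # normally an SNMP GET
    icmp_probe(addr) -> delay_ms                # normally a ping RTT
    """
    state = {}
    for addr in servers:
        cpu, mem = snmp_probe(addr)
        state[addr] = {"cpu": cpu, "mem": mem, "delay_ms": icmp_probe(addr)}
    return state

# Stub data standing in for live SNMP/ICMP measurements.
fake_usage = {"10.0.0.1": (0.60, 0.25), "10.0.0.2": (0.40, 0.50)}
fake_rtt = {"10.0.0.1": 8, "10.0.0.2": 8}
state = collect_running_state(fake_usage, fake_usage.get, fake_rtt.get)
```

Injecting the probes keeps the scheduling logic testable without any live SNMP agent or ICMP socket.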
Step 104: and determining the priority corresponding to each server according to the running state data.
And calculating the priority value of each server according to the running state data of each server and the weight preset for each running state data through a preset running state priority algorithm.
The weight configured for each operation state data in advance may be an individualized weight configured for each server respectively, or a unified weight set for all servers of the physical server cluster.
Because the influence of each running state data in the servers with different hardware configurations on the performance of the servers is different, if the same weight is configured for each server, the calculated priority may be unreasonable. In order to better calculate the priority of each server, the personalized weight configured by each server can be preferentially selected during calculation.
In this embodiment, whether each server is configured with an individualized weight is judged, and if yes, the corresponding priority of each server is calculated according to the running state data and the individualized weight configured by each server; and if not, calculating the priority corresponding to each server according to the running state data and the unified weight set by all the servers of the physical server cluster.
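The weight-selection logic described above can be sketched as follows; the dictionary shapes and weight values are assumptions for illustration:

```python
def weights_for(server_id, personalized, unified):
    """Return the server's personalized weights if configured,
    otherwise fall back to the cluster-wide unified weights."""
    return personalized.get(server_id, unified)

# Unified weights for the cluster, plus one server with its own weights.
unified = {"cpu": 0.4, "mem": 0.4, "delay": 0.2}
personalized = {"server3": {"cpu": 0.5, "mem": 0.3, "delay": 0.2}}
```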
For ease of understanding, the running state priority algorithm is briefly exemplified below:
the operating state priority algorithm may be
A = C × X + M × Y + (T / S) × Z
wherein A is the running state priority value, C is the server CPU utilization, X is the server CPU weight, M is the server memory utilization, Y is the server memory weight, T is the delay duration, S is the average delay duration across the servers, and Z is the delay weight.
The running state priority value calculated by this formula is positively correlated with CPU utilization, memory utilization, and delay duration. As is common knowledge to those skilled in the art, the larger the CPU utilization, memory utilization, and delay duration, the worse the server's performance.
Therefore, the higher the operating state priority value obtained according to the operating state priority algorithm described above, the lower the priority, and the smaller the operating state priority value, the higher the priority.
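Putting the formula and the lower-is-better convention together, the running state priority computation can be sketched as follows; the default weight values are taken from the worked example later in the description:

```python
def state_priority(cpu, mem, delay, avg_delay,
                   w_cpu=0.4, w_mem=0.4, w_delay=0.2):
    """A = C*X + M*Y + (T/S)*Z; a smaller value means a higher priority."""
    return cpu * w_cpu + mem * w_mem + (delay / avg_delay) * w_delay
```

For example, a server at 60% CPU, 25% memory, and an 8 ms delay against a 10 ms cluster average scores 0.5.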
And if the priorities of the at least two servers are calculated to be the same according to the running state data, acquiring the load data of the at least two servers, and determining the priorities of the at least two servers according to the load data.
The load data may include the server's incoming traffic, outgoing traffic, session concurrency count, new session count, and so on; in practice, any parameter value that can characterize server load is applicable here, and this application does not limit it.
In this embodiment, through a pre-designed load priority algorithm, the priority values of the at least two servers are calculated according to the load data of the at least two servers and the weights pre-configured for each load data.
Specifically, whether the at least two servers are configured with personalized weights is judged, and if yes, the priorities corresponding to the at least two servers are calculated according to the load data and the personalized weights configured by the at least two servers; and if not, calculating the priorities corresponding to the at least two servers according to the load data and the unified weight set by all the servers of the physical server cluster.
Similar to the running state priority algorithm, the preset load priority algorithm may be B = D × V + E × W, where B is the load priority value, D is the server's incoming traffic, E is the session concurrency count, V is the incoming traffic weight, and W is the session concurrency weight.
The load priority value calculated by this formula is positively correlated with the server's incoming traffic, outgoing traffic, session concurrency count, and new session count. As is common knowledge to those skilled in the art, the larger these values, the higher the server's load and the worse its performance.
Therefore, the larger the load priority value obtained according to the above load priority algorithm, the lower the priority, and the smaller the load priority value, the higher the priority.
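The load priority computation can be sketched the same way (again lower is better); as in the example formula, only incoming traffic and session concurrency are used, and the default weights are illustrative:

```python
def load_priority(inflow, sessions, w_inflow=0.4, w_sessions=0.6):
    """B = D*V + E*W; a smaller value means a lighter load, hence higher priority."""
    return inflow * w_inflow + sessions * w_sessions
```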
And if the priorities of the at least two servers are calculated to be the same according to the load data, acquiring a predefined arrangement sequence of the at least two servers in the physical server cluster, and determining that the server arranged in the front corresponds to a relatively higher priority.
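The full three-level decision (running state value, then load value, then predefined cluster order) amounts to a lexicographic sort. A sketch follows; for simplicity it supplies priority values for every server, even though the description only fetches load data for the tied ones:

```python
def schedule_order(names, state_value, load_value):
    """Rank servers best-first: smaller running state value wins, ties broken
    by smaller load value, remaining ties by position in the predefined order."""
    return sorted(names, key=lambda n: (state_value[n],
                                        load_value.get(n, 0.0),
                                        names.index(n)))

# Values matching the worked example in the description.
names = ["server1", "server2", "server3", "server4", "server5"]
state_value = {"server1": 0.5, "server2": 0.52, "server3": 0.5,
               "server4": 0.5, "server5": 0.56}
load_value = {"server1": 9.6, "server3": 9.6, "server4": 10.4}
```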
In another embodiment of the present application, step 104 may determine the priority of each server by directly summing the obtained running state parameters: the smaller the sum of the parameter values, the higher the priority; the larger the sum, the lower the priority.
Step 106: upon receiving traffic for the virtual service, preferentially scheduling the traffic to a relatively higher priority server in the cluster of physical servers.
According to the technical scheme provided by the application, when the load balancing equipment receives the flow sent by the client, the priority of each server is determined according to the obtained running state data of each server in the physical server cluster, and the flow is preferentially scheduled to the optimal performance server in the physical server cluster. By considering the performance of the server, the running data of the server is brought into the scheduling algorithm, so that the load balancing equipment avoids scheduling the traffic to the server with poor performance in the process of scheduling the traffic, and the network service with higher quality can be provided.
Fig. 2 is a schematic diagram of a network architecture of a traffic scheduling system to which the embodiment of the present invention is applied. As shown in fig. 2, the traffic scheduling system may include a client 21, a load balancing device 22 and a server cluster 23, and the load balancing device 22 implements traffic scheduling between the client 21 and the server cluster 23. The server cluster 23 includes a plurality of servers, such as the server 23a, the server 23b, and the server 23c shown in fig. 2, but the number of servers included in the server cluster 23 is not limited in the present application. The servers 23a to 23c can provide the same service, so that the load balancing device 22 can arbitrarily distribute the traffic from the client 21 to a certain server, and all the servers can meet the service requirement of the client 21. The load balancing device 22 needs to select one server from the server cluster 23 to distribute the traffic from the client 21 to the selected server, and similarly, the load balancing device 22 also selects one server from the server cluster 23 to distribute the received traffic to the selected server when receiving the traffic sent by other clients, thereby realizing load balancing among the servers.
In the technical solution of the present application, the optimization of the load balancing may be achieved by improving the selection process of the load balancing device 22 for the server, which is described in detail below with reference to fig. 3. Fig. 3 is a multi-party interaction flowchart illustrating a traffic scheduling method according to an exemplary embodiment of the present application. As shown in fig. 3, the interaction process between the client 21, the load balancing device 22, and the server cluster 23 includes the following steps:
in step 301, the client 21 sends traffic to the load balancing device 22.
Step 302, the load balancing device 22 obtains the operation status data of each server in the server cluster 23.
The operation status data of the server may include performance data of each server or data transmission status information of a network link between the load balancing device and each server or performance data of each server and data transmission status information of a network link between the load balancing device and each server. Specifically, the performance data of the server may include a server CPU utilization rate, a server memory utilization rate, and the like, and practically, all parameter values capable of representing the performance of the server may be applied thereto, which is not limited in the present application; the data transmission status information of the network link between the load balancing device and each server may be a network delay duration between the load balancing device and each server.
For example, the operation status data of each server in the server cluster is shown in table 1:
Server     CPU utilization   Memory utilization   Delay duration (ms)
Server 1   60%               25%                  8
Server 2   40%               50%                  8
Server 3   20%               30%                  15
Server 4   10%               50%                  13
Server 5   50%               60%                  6
TABLE 1
In step 303, the load balancing device 22 determines the priority of each server according to the acquired running state data.
For example, the set weights are shown in table 2:
Weight item          Value
CPU weight (X)       0.4
Memory weight (Y)    0.4
Delay weight (Z)     0.2
TABLE 2
The running state priority algorithm is set as A = C × X + M × Y + (T / S) × Z, where A is the running state priority value, C is the server CPU utilization, X is the CPU weight, M is the server memory utilization, Y is the memory weight, T is the delay duration, S is the average delay duration across the servers, and Z is the delay weight.
S=(8+8+15+13+6)/5=10
A1=0.6*0.4+0.25*0.4+(8/10)*0.2=0.5
A2=0.4*0.4+0.5*0.4+(8/10)*0.2=0.52
A3=0.2*0.4+0.3*0.4+(15/10)*0.2=0.5
A4=0.1*0.4+0.5*0.4+(13/10)*0.2=0.5
A5=0.5*0.4+0.6*0.4+(6/10)*0.2=0.56
The higher the running state priority value obtained according to the running state priority algorithm is, the lower the priority is, the smaller the running state priority value is, and the higher the priority is.
Therefore, the priority ordering of the servers in the cluster at this point is: servers 1, 3, and 4 (tied) > server 2 > server 5.
If the highest priority server can be distinguished in step 303 on the basis of the operating state data, a jump can be made directly from step 303 to step 308 without proceeding to step 304.
Step 304: the load balancing device 22 obtains load data of each server in the physical server cluster.
And when the priorities of the at least two servers calculated according to the running state data are the same, acquiring the load data of the at least two servers, and determining the priorities of the at least two servers according to the load data.
In this embodiment, it is calculated that the priorities of the servers 1, 3, and 4 are the same according to the operation state data, and the load balancing device obtains the load data of the servers 1, 3, and 4.
The load data may include an incoming flow, an outgoing flow, a session concurrency number, a session new number, and the like of the server, and practically all parameter values capable of being used for characterizing the server load may be applied to this, which is not limited in this application.
Step 305: the load balancing device 22 determines the priorities of the at least two servers according to the load data.
In this embodiment, through a pre-designed load priority algorithm, the priority values of the tied servers are calculated from the load data of servers 1, 3, and 4 and the personalized weights pre-configured on those servers for each item of load data.
For example, the preset load priority algorithm is B = D × V + E × W, where B is the load priority value, D is the server's incoming traffic, E is the session concurrency count, V is the incoming traffic weight, and W is the session concurrency weight.
The load data of the servers 1, 3, 4 in the server cluster and the personalized weights of the configured load data are shown in table 3:
Server     Incoming traffic   Session concurrency   Incoming traffic weight (V)   Session concurrency weight (W)
Server 1   12                 8                     0.4                           0.6
Server 3   7.2                12                    0.5                           0.5
Server 4   20                 4                     0.4                           0.6
TABLE 3
B1=12*0.4+8*0.6=9.6
B3=7.2*0.5+12*0.5=9.6
B4=20*0.4+4*0.6=10.4
The higher the load priority value obtained by the load priority algorithm, the lower the priority; the smaller the load priority value, the higher the priority.
At this point, the priority ordering is: server 1 = server 3 (both 9.6) > server 4 > server 2 > server 5, with servers 1 and 3 still tied.
If the server with the highest priority can be distinguished according to the load data in step 305, the step 305 can directly jump to step 308 without going to step 306.
Step 306: the load balancing device 22 obtains the arrangement order of the servers in the physical server cluster.
Step 307: the load balancing device 22 determines the server priority according to the rank order of the servers.
When the priorities of at least two servers calculated from the load data are also the same, the predefined arrangement order of those servers in the physical server cluster is obtained, and the server ranked first in that order is assigned the relatively higher priority.
In this embodiment, server 1 and server 3 have the same priority, and according to the predefined order of arrangement of the two servers in the physical server cluster, server 1 is determined to have a higher priority than server 3.
In summary, the priority of each server in the server cluster is: server 1 > server 3 > server 4 > server 2 > server 5.
In step 308, the load balancing device 22 schedules the traffic sent by the client 21 to the server with the highest priority in the server cluster 23.
According to the priority ranking of the servers in the server cluster exemplified above, after the load balancing device receives the traffic sent from the client, the traffic is preferentially scheduled to the server 1.
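The whole worked example can be reproduced end to end. The sketch below follows Tables 1-3; the rounding step mirrors the two-decimal values used in the description, and the final alphabetical tie-break coincides with the predefined server order in this example:

```python
W_CPU, W_MEM, W_DELAY = 0.4, 0.4, 0.2   # unified weights (Table 2)

stats = {  # name: (cpu, mem, delay_ms) from Table 1
    "server1": (0.60, 0.25, 8), "server2": (0.40, 0.50, 8),
    "server3": (0.20, 0.30, 15), "server4": (0.10, 0.50, 13),
    "server5": (0.50, 0.60, 6),
}
avg_delay = sum(d for _, _, d in stats.values()) / len(stats)  # S = 10
state = {n: round(c * W_CPU + m * W_MEM + (d / avg_delay) * W_DELAY, 2)
         for n, (c, m, d) in stats.items()}

# Load priority for the tied servers 1, 3, and 4, using the
# personalized weights from Table 3 (B = D*V + E*W).
load = {"server1": round(12 * 0.4 + 8 * 0.6, 2),
        "server3": round(7.2 * 0.5 + 12 * 0.5, 2),
        "server4": round(20 * 0.4 + 4 * 0.6, 2)}

# Lexicographic ranking: state value, then load value, then server order.
order = sorted(stats, key=lambda n: (state[n], load.get(n, 0.0), n))
```

The resulting order places server 1 first, matching the scheduling decision in step 308.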
Corresponding to the method embodiments, the present specification also provides an embodiment of an apparatus.
Fig. 4 is a schematic diagram illustrating a traffic scheduling electronic device according to an exemplary embodiment of the present application. Referring to fig. 4, at the hardware level the electronic device includes a processor 402, an internal bus 404, a network interface 406, a memory 408, and a non-volatile memory 410, and may also include hardware required by other services. The processor 402 reads the corresponding computer program from the non-volatile memory 410 into the memory 408 and runs it, thereby forming the traffic scheduling apparatus at the logic level. Of course, besides the software implementation, this application does not exclude other implementations, such as logic devices or a combination of software and hardware; that is, the execution subject of the following processing flow is not limited to logic units and may also be hardware or logic devices.
Fig. 5 is a block diagram illustrating a traffic scheduling apparatus according to an exemplary embodiment of the present application. Referring to fig. 5, the apparatus includes an obtaining unit 502, a first determining unit 504, and a scheduling unit 506, where:
the obtaining unit 502 is configured to obtain the operation state data of each server in the physical server cluster corresponding to the virtual service.
The first determining unit 504 is configured to determine priorities corresponding to the servers according to the operating status data.
The scheduling unit 506 is configured to, upon receipt of traffic for the virtual service, preferentially schedule the traffic to a relatively higher priority server in the cluster of physical servers.
Optionally, the running state data specifically includes the performance data of each server, and/or the data transmission state information obtained by monitoring the network link between the load balancing device and each server.
Optionally, the performance data includes: the utilization rate of a server CPU and/or the utilization rate of a server memory; the data transmission state information includes: the data transmission delay time of the network link between the load balancing device and the server.
Optionally, the first determining unit 504 is specifically configured to: and calculating the priority value of each server according to the running state data of each server and the weight configured for each running state data in advance.
Optionally, the weight configured for each item of operation state data in advance includes an individualized weight configured for each server, or a unified weight set for all servers of the physical server cluster.
Optionally, the apparatus further comprises:
a second determining unit 508, configured to, when the priorities of the at least two servers calculated according to the operation state data are the same, obtain load data of the at least two servers, and determine the priorities of the at least two servers according to the load data.
The load data may include the server's incoming traffic, outgoing traffic, session concurrency count, new session count, and so on; in practice, any parameter value that can characterize server load is applicable here, and this application does not limit it.
Optionally, the apparatus further comprises:
a third determining unit 510, configured to, in a case that priorities of at least two servers calculated according to the load data are the same, obtain a predefined order of arrangement of the at least two servers in the physical server cluster, and determine that a server arranged in the first corresponds to a relatively higher priority.
The implementation process of the functions and actions of each unit in the above device is specifically described in the implementation process of the corresponding step in the above method, and is not described herein again.
For the device embodiments, since they substantially correspond to the method embodiments, reference may be made to the partial description of the method embodiments for relevant points. The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules can be selected according to actual needs to achieve the purpose of the scheme of the application. One of ordinary skill in the art can understand and implement it without inventive effort.
In an exemplary embodiment, there is also provided a non-transitory computer readable storage medium, e.g. a memory, comprising instructions executable by a processor of a traffic scheduling apparatus to perform a method as in any one of the above embodiments, such as the method may comprise:
acquiring running state data of each server in a physical server cluster corresponding to the virtual service; determining the priority corresponding to each server according to the running state data; upon receiving traffic for the virtual service, preferentially scheduling the traffic to a relatively higher priority server in the cluster of physical servers.
The non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, etc., which is not limited in this application.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the scope of protection of the present application.

Claims (11)

1. A traffic scheduling method, applied to a load balancing device, the method comprising:
acquiring running state data of each server in a physical server cluster corresponding to the virtual service;
determining the priority corresponding to each server according to the running state data;
upon receiving traffic for the virtual service, preferentially scheduling the traffic to a server with a relatively higher priority in the physical server cluster.
2. The method of claim 1, wherein the acquiring the running state data of each server in the physical server cluster corresponding to the virtual service comprises:
acquiring performance data of each server as the running state data; and/or
monitoring data transmission state information of network links between the load balancing device and each server as the running state data.
3. The method of claim 2, wherein:
the performance data includes: the CPU utilization of a server and/or the memory utilization of a server;
the data transmission state information includes: the data transmission delay of the network link between the load balancing device and the server.
4. The method of claim 1, wherein the determining the priority corresponding to each server comprises:
calculating a priority value for each server according to the running state data of each server and the weight pre-configured for each item of running state data.
5. The method of claim 4, wherein the pre-configuring the weights for the running state data comprises:
configuring a personalized weight for each server respectively;
or, configuring a uniform weight for all servers of the physical server cluster.
6. The method of claim 1, further comprising:
if at least two servers have the same priority as calculated from the running state data, acquiring load data of the at least two servers, and determining the priorities of the at least two servers according to the load data.
7. The method of claim 6, wherein the load data of a server comprises at least one of:
the incoming traffic, outgoing traffic, number of concurrent sessions, and number of new sessions of the server.
8. The method of claim 6, further comprising:
if the at least two servers have the same priority as calculated from the load data, acquiring a predefined arrangement order of the at least two servers in the physical server cluster, and determining that the server ranked first corresponds to a relatively higher priority.
9. A traffic scheduling apparatus, applied to a load balancing device, the apparatus comprising:
an acquisition unit, configured to acquire running state data of each server in a physical server cluster corresponding to a virtual service;
a first determining unit, configured to determine the priority corresponding to each server according to the running state data;
and a scheduling unit, configured to, upon receiving traffic for the virtual service, preferentially schedule the traffic to a server with a relatively higher priority in the physical server cluster.
10. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor implements the method of any one of claims 1-8 by executing the executable instructions.
11. A computer-readable storage medium having stored thereon computer instructions which, when executed by a processor, implement the steps of the method of any one of claims 1-8.
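Claims 1, 6, and 8 together describe a tie-breaking chain: state-based priority decides first; if two servers tie, their load data decides; if they still tie, the server that comes first in the cluster's predefined arrangement order wins. A minimal sketch of that chain follows, assuming hypothetical `priority` and `load` fields already computed per server; the list index stands in for the predefined arrangement order.

```python
def pick_server(servers):
    """Resolve ties as in claims 6 and 8.

    Sort key: higher priority first (negated), then lower load,
    then earlier position in the predefined cluster order.
    """
    return min(
        enumerate(servers),
        key=lambda pair: (-pair[1]["priority"], pair[1]["load"], pair[0]),
    )[1]


# Hypothetical cluster: srv-a and srv-b tie on priority, so load decides.
servers = [
    {"name": "srv-a", "priority": 5, "load": 120},
    {"name": "srv-b", "priority": 5, "load": 80},
    {"name": "srv-c", "priority": 3, "load": 10},
]
print(pick_server(servers)["name"])  # srv-b
```

Because Python compares tuples element by element, each later criterion is consulted only when all earlier ones tie, which matches the claimed cascade.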
CN202011007857.8A 2020-09-23 2020-09-23 Traffic scheduling method and device, electronic equipment and storage medium Pending CN112153138A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011007857.8A CN112153138A (en) 2020-09-23 2020-09-23 Traffic scheduling method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN112153138A true CN112153138A (en) 2020-12-29

Family

ID=73897874

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011007857.8A Pending CN112153138A (en) 2020-09-23 2020-09-23 Traffic scheduling method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112153138A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113467894A (en) * 2021-07-16 2021-10-01 广东电网有限责任公司 Communication load balancing method, device, equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050228884A1 (en) * 2002-06-10 2005-10-13 Caplin Systems Limited Resource management
US20090157879A1 (en) * 2007-09-27 2009-06-18 Philip Stoll System and method for providing web services with load balancing
CN107124472A (en) * 2017-06-26 2017-09-01 杭州迪普科技股份有限公司 Load-balancing method and device, computer-readable recording medium
CN107800756A (en) * 2017-03-13 2018-03-13 平安科技(深圳)有限公司 A kind of load-balancing method and load equalizer
CN109032800A (en) * 2018-07-26 2018-12-18 郑州云海信息技术有限公司 A kind of load equilibration scheduling method, load balancer, server and system
CN109327540A (en) * 2018-11-16 2019-02-12 平安科技(深圳)有限公司 Electronic device, server load balancing method and storage medium
CN109408227A (en) * 2018-09-19 2019-03-01 平安科技(深圳)有限公司 Load-balancing method, device and storage medium
CN110333937A (en) * 2019-05-30 2019-10-15 平安科技(深圳)有限公司 Task distribution method, device, computer equipment and storage medium
CN111651246A (en) * 2020-04-24 2020-09-11 平安科技(深圳)有限公司 Task scheduling method, device and scheduler between terminal and server


Similar Documents

Publication Publication Date Title
CN109618002B (en) Micro-service gateway optimization method, device and storage medium
US10027760B2 (en) Methods, systems, and computer readable media for short and long term policy and charging rules function (PCRF) load balancing
JP2006259812A (en) Dynamic queue load distribution method, system, and program
CN110166524B (en) Data center switching method, device, equipment and storage medium
CN108933829A (en) A kind of load-balancing method and device
CN109933431B (en) Intelligent client load balancing method and system
CN105791254B (en) Network request processing method and device and terminal
CN114205316B (en) Network slice resource allocation method and device based on power service
CN111314236A (en) Message forwarding method and device
Shifrin et al. Optimal control of VNF deployment and scheduling
KR101448413B1 (en) Method and apparatus for scheduling communication traffic in atca-based equipment
CN103401799A (en) Method and device for realizing load balance
CN115633039A (en) Communication establishing method, load balancing device, equipment and storage medium
CN112153138A (en) Traffic scheduling method and device, electronic equipment and storage medium
CN108023936B (en) Distributed interface access control method and system
CN109815204A (en) A kind of metadata request distribution method and equipment based on congestion aware
CN109413117B (en) Distributed data calculation method, device, server and computer storage medium
CN108200185B (en) Method and device for realizing load balance
CN111249747B (en) Information processing method and device in game
CN108833588A (en) Conversation processing method and device
CN110995802A (en) Task processing method and device, storage medium and electronic device
CN115168017B (en) Task scheduling cloud platform and task scheduling method thereof
CN115941604A (en) Flow distribution method, device, equipment, storage medium and program product
CN109670691A (en) Method, equipment and the customer service system distributed for customer service queue management and customer service
CN108540336A (en) A kind of elastic telescopic dispatching method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20201229