CN107562534A - Load balancing method for weighted minimum data volume - Google Patents
- Publication number
- CN107562534A (application CN201710637339.6A)
- Authority
- CN
- China
- Prior art keywords
- cpu core
- load
- data
- task process
- balancing method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Multi Processors (AREA)
- Computer And Data Communications (AREA)
Abstract
A weighted minimum data volume load-balancing method comprises the following steps: (1) bind each task process to a designated CPU core, preventing task processes from migrating between CPU cores; (2) parse a batch of data packets and obtain its data volume; (3) obtain the weighted load of each CPU core in the server's processor; (4) determine the task process bound to the CPU core with the smallest weighted load; (5) deliver the batch of packets to the task process determined in step (4); (6) according to the data volume obtained in step (2), update the total data volume of the task process that received the packets, then return to step (2) to process the next batch of packets.
Description
Technical field
The present invention relates to a weighted minimum data volume load-balancing method, and belongs to the field of load balancing for data-processing systems.
Background technology
To balance the computing pressure that stratified sampling places on the system, the system currently uses a static load-balancing strategy that distributes data according to a data table. Based on long-term monitoring and statistics of test data traffic, packets are divided by data volume into four task queues, from which a static data distribution table is constructed. Packets are then distributed to different processing nodes according to the static table for parallel processing, which improves the system's data-processing efficiency.
However, static distribution of data according to a static distribution table brings two problems:
1) The mapping between packets and task queues in the static distribution table is fixed before the system is used. When the number of data parameter types to be processed increases, a new round of manual testing and statistics over all the test types is needed to compile a new static distribution table. This runs counter to the technical requirement that the data types the system can process be flexibly configurable.
2) When the actual frequency of the test types differs from the statistics, the data volume distributed to each task queue diverges widely from the estimated distribution. The resulting imbalance of data volume across task queues reduces the system's parallel computing capability.
In practical applications of the Linux Virtual Server, the most widely adopted dynamic load-balancing algorithm is Weighted Least-Connection scheduling. The algorithm overcomes the deficiency of the least-connection algorithm by representing each server's processing capacity with a weight and assigning each new connection request to the server with the smallest ratio of current connection count to weight. However, weighted least-connection scheduling has three shortcomings:
1) The algorithm defines server-node weights from the actual capacity of each heterogeneous machine node in a cluster; it is not aimed at the single multi-core server of a domestically produced mass data-processing system, so its node selection does not suit the existing data-processing system.
2) Identical connection or task counts do not represent identical load, so using connection/task counts to represent a node's load is inaccurate.
3) As load increases, the processing capacity of each server node also changes. The server weights are set manually by the system administrator and cannot reflect the nodes' processing capacity well.
It is therefore necessary to design a new load-balancing scheduling algorithm that computes weights from data volume and real-time node state, achieving dynamic load balancing across the CPU core nodes.
Summary of the invention
The technical problem solved by the present invention is to overcome the deficiencies of the prior art and provide a weight-based dynamic load-balancing algorithm that balances the load across CPU cores, improves the system's parallel computing capability, and raises its data-processing throughput, mainly for use in multi-core servers.
The technical solution of the present invention is as follows:
A weighted minimum data volume load-balancing method, with the following steps:
(1) bind each task process to a designated CPU core, preventing task processes from migrating between CPU cores;
(2) parse a batch of data packets and obtain its data volume;
(3) obtain the weighted load of each CPU core in the server's processor;
(4) determine the task process bound to the CPU core with the smallest weighted load;
(5) deliver the batch of packets to the task process determined in step (4);
(6) according to the data volume obtained in step (2), update the total data volume of the task process that received the packets, then return to step (2) for the next batch of packets.
The server is a single server.
The processor comprises multiple CPU cores.
In step (1) the task processes are bound to the designated CPU cores by setting CPU affinity.
In step (3) the weighted load of each CPU core in the server's processor is obtained as W(Si) = Q(Si)/F(Si), where W(Si) is the weighted load of the CPU core, Q(Si) is the total data volume of the task queue bound to the core, and F(Si) is the processing-capacity weight of the core.
The processing-capacity weight of the CPU core is F(Si) = 1 - O(Si), where O(Si) is the occupancy of that core.
The number of task processes corresponds to the number of CPU cores.
Compared with the prior art, the invention has the following advantages:
(1) The weighted minimum data volume load-balancing method delivers each batch of packets to the node with the smallest real-time weighted load. After repeated distribution, the total data volume is balanced across the nodes.
(2) When the original system processed data, some cores ran near full load while others sat nearly idle; this inter-core load imbalance reduced parallel computing capability and slowed data processing. After the weighted minimum data volume load-balancing method is introduced, the per-core CPU occupancies are similar in every test: the server's inter-core load is balanced, the system's parallel computing performance improves, and data processing speeds up.
Brief description of the drawings
Fig. 1 is a flow chart of the method of the invention;
Fig. 2 is a flow chart of the CPU usage calculation;
Fig. 3 is a bar chart of per-core occupancy in the prior art;
Fig. 4 is a bar chart of per-core occupancy with the present invention.
Embodiment
As shown in Fig. 1, the present invention proposes a weighted minimum data volume load-balancing method with the following steps:
(1) Bind each task process to a designated CPU core, preventing task processes from migrating between CPU cores. The binding is done by setting CPU affinity.
Specifically, setting CPU affinity simply "locks" a process so that it can run only on one or several CPU cores; a process with a given affinity will not run on any other core. The present invention uses the sched_setaffinity(pid_t pid, size_t cpusetsize, const cpu_set_t *mask) function so that the process with the given pid runs only on the CPU cores set in mask.
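As a hedged illustration only, the same binding can be sketched in Python, whose os.sched_setaffinity wraps the identical Linux system call (the function name pin_to_core and the core number are our placeholders, not values from the patent; the hasattr guard is needed because the call exists only on Linux):

```python
import os

def pin_to_core(pid: int, core: int) -> None:
    """Bind process `pid` (0 = calling process) to a single CPU core,
    so the scheduler can no longer migrate it between cores."""
    os.sched_setaffinity(pid, {core})  # same syscall as sched_setaffinity(2)

# Pin the current process to core 0 and read back the affinity mask.
if hasattr(os, "sched_setaffinity"):  # Linux only
    pin_to_core(0, 0)
    print(os.sched_getaffinity(0))  # -> {0}
```

In the patent's setting one task process would be pinned per core this way, once, before any packets are dispatched.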
(2) Parse a batch of data packets and obtain the data volume of the batch.
(3) Obtain the weighted load of each CPU core in the server's processor. The server is a single server, and its processor comprises multiple CPU cores.
Specifically: W(Si) = Q(Si)/F(Si), where W(Si) is the weighted load of the CPU core, Q(Si) is the total data volume of the task queue bound to the core, and F(Si) is the processing-capacity weight of the core.
In a multi-core processor the cores share memory, and /proc/stat exposes only the per-core CPU usage and the memory usage of the whole processor, so the present invention uses the CPU usage alone to compute the processing-capacity weight F(Si) of each core:
F(Si) = 1 - O(Si)
where O(Si) is the occupancy of that core. The number of task processes corresponds to the number of CPU cores.
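The two formulas above can be captured in a short Python sketch (the function names are ours, not the patent's; occupancies and byte counts are illustrative):

```python
def capacity_weight(occupancy: float) -> float:
    """F(Si) = 1 - O(Si): the idler the core, the larger its capacity weight."""
    return 1.0 - occupancy

def weighted_load(queued_bytes: float, occupancy: float) -> float:
    """W(Si) = Q(Si) / F(Si): queued data scaled by remaining capacity."""
    return queued_bytes / capacity_weight(occupancy)

# Two cores holding the same 100 bytes: the busier core carries the larger
# weighted load, so new batches are steered away from it.
print(weighted_load(100, 0.5))   # -> 200.0
print(weighted_load(100, 0.75))  # -> 400.0
```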
As shown in Fig. 2, CPU core occupancy is obtained on a Linux operating system as follows:
a) Sample the CPU usage information in /proc/stat at two instants separated by a sufficiently short interval; each sample is the 9-tuple (user, nice, system, idle, iowait, irq, softirq, steal, guest).
b) Compute the total CPU time slice.
Sum (user + nice + system + idle) of the first sample to obtain s1:
s1 = user1 + nice1 + system1 + idle1
Sum (user + nice + system + idle) of the second sample to obtain s2:
s2 = user2 + nice2 + system2 + idle2
The difference s2 - s1 gives all the time slices in the interval:
total = s2 - s1
c) Compute the idle time.
idle corresponds to the 4th column; subtract the first sample's 4th column from the second sample's:
idle = idle2 - idle1
d) Compute the CPU utilization:
O(Si) = 100% × (total - idle) / total
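Steps a) to d) reduce to a few lines of Python; the two sample tuples below are made-up jiffy counts for illustration, and on a real system they would come from the per-core lines of /proc/stat:

```python
def cpu_occupancy(sample1, sample2):
    """O(Si) from two (user, nice, system, idle) samples taken a short
    interval apart: total = s2 - s1, idle = idle2 - idle1 (4th column)."""
    s1, s2 = sum(sample1), sum(sample2)
    total = s2 - s1                      # all time slices in the interval
    idle = sample2[3] - sample1[3]       # idle is the 4th column
    return (total - idle) / total        # O(Si), as a fraction of 1

# 100 jiffies elapsed between the samples, 40 of them idle -> 60% busy.
print(cpu_occupancy((10, 0, 10, 20), (60, 0, 20, 60)))  # -> 0.6
```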
(4) Determine the task process bound to the CPU core with the smallest weighted load.
(5) Deliver the batch of packets to the task process determined in step (4).
(6) According to the data volume obtained in step (2), update the total data volume of the task process that received the packets, then return to step (2) for the next batch of packets.
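Putting steps (2) to (6) together, the dispatch loop can be sketched as a simplified model with fixed per-core occupancies (the batch sizes and core count below are illustrative, not the patent's test data):

```python
def dispatch(batch_sizes, occupancy):
    """Weighted minimum data volume dispatch: each batch goes to the task
    process on the core whose W(Si) = Q(Si) / (1 - O(Si)) is smallest."""
    queued = [0.0] * len(occupancy)          # Q(Si) per core
    assignments = []
    for size in batch_sizes:                 # step (2): batch data volume
        loads = [q / (1.0 - o) for q, o in zip(queued, occupancy)]  # step (3)
        target = loads.index(min(loads))     # step (4): smallest weighted load
        assignments.append(target)           # step (5): deliver the batch
        queued[target] += size               # step (6): update Q(Si)
    return assignments, queued

# Four equally idle cores: per-core totals converge to the same value.
_, totals = dispatch([5, 4, 3, 2, 5, 4, 3, 2], [0.0] * 4)
print(totals)  # -> [7.0, 7.0, 7.0, 7.0]
```

With unequal occupancies the same loop steers proportionally more data to the idler cores, which is the balancing effect the embodiments below measure.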
Embodiment one:
The simulation environment is a Loongson 3A2000 quad-core single-socket server running the NeoKylin Linux operating system.
The weighted minimum data volume load balancer first distributes packets many times, with the input values 1 to 9 simulating packet sizes in bytes. After the balancer had distributed the packets repeatedly, the data volumes in the task processes were 113, 110, 114 and 118, and their weighted loads were 122.222, 120.213, 125.275 and 124.211 respectively. This shows that, after repeated distribution, the weighted minimum data volume load balancer achieves balance of both data volume and weighted load across the task processes.
Embodiment two:
The experimental environment is a Loongson 3A1000 dual-socket eight-core server running the NeoKylin Linux operating system and the Dameng database.
The data-processing system is tested with data of various control-system test categories. The weighted minimum data volume load-balancing scheduling algorithm is compared against the system's original data distribution policy, the main evaluation index being whether the per-core CPU occupancies are balanced; the results are analyzed on this index.
The CPU usage of each core of the server when the original system runs is shown in Table 1.
Table 1
CPU | Experiment 1 | Experiment 2 | Experiment 3 | Experiment 4 | Experiment 5 |
0 | 25.2% | 62.5% | 37.5% | 54.9% | 34.2% |
1 | 82.0% | 4.9% | 52.1% | 21.4% | 1.6% |
2 | 27.0% | 3.6% | 36.8% | 30.1% | 58.8% |
3 | 3.3% | 46.7% | 61.4% | 56.7% | 61.7% |
4 | 43.3% | 71.7% | 7.5% | 8.8% | 22.1% |
5 | 9.1% | 2.0% | 55.7% | 4.9% | 2.6% |
6 | 52.0% | 55.9% | 9.4% | 42.8% | 1.9% |
7 | 1.6% | 9.1% | 82.7% | 7.5% | 46.2% |
For an intuitive comparison, the experimental results are plotted as the bar chart shown in Fig. 3. CPU0 to CPU7 denote the 8 CPU cores; the bars of the five successive experiments are ordered consistently, from left to right CPU0 through CPU7, with the labels 0 to 7 under the bars.
The per-core CPU usage when the system runs after introducing the weighted minimum data volume load-balancing algorithm is shown in Table 2:
Table 2
CPU | Experiment 1 | Experiment 2 | Experiment 3 | Experiment 4 | Experiment 5 |
0 | 50.6% | 45.9% | 43.0% | 55.4% | 61.2% |
1 | 50.5% | 55.9% | 45.0% | 42.6% | 49.4% |
2 | 60.2% | 55.4% | 42.4% | 58.8% | 46.0% |
3 | 50.6% | 52.4% | 58.8% | 58.5% | 57.7% |
4 | 61.1% | 49.7% | 49.3% | 56.1% | 56.0% |
5 | 54.5% | 44.4% | 48.4% | 54.6% | 51.1% |
6 | 43.8% | 55.9% | 53.3% | 55.9% | 53.2% |
7 | 44.4% | 48.2% | 48.5% | 57.0% | 61.4% |
For an intuitive comparison, the experimental results are plotted as the bar chart shown in Fig. 4. CPU0 to CPU7 denote the 8 CPU cores; the bars of the five successive experiments are ordered consistently, from left to right CPU0 through CPU7, with the labels 0 to 7 under the bars.
The tables and bar charts show visually that when the original system processed data, some cores ran near full load while others sat nearly idle; this inter-core load imbalance reduced parallel computing capability and slowed data processing. After the weighted minimum data volume load-balancing scheduling algorithm is introduced, the per-core CPU occupancies are similar in every experiment: the server's inter-core load is balanced, the system's parallel computing performance improves, and data processing speeds up.
Claims (7)
1. A weighted minimum data volume load-balancing method, characterized in that the steps are as follows:
(1) bind each task process to a designated CPU core, preventing task processes from migrating between CPU cores;
(2) parse a batch of data packets and obtain its data volume;
(3) obtain the weighted load of each CPU core in the server's processor;
(4) determine the task process bound to the CPU core with the smallest weighted load;
(5) deliver the batch of packets to the task process determined in step (4);
(6) according to the data volume obtained in step (2), update the total data volume of the task process that received the packets, then return to step (2) for the next batch of packets.
2. The weighted minimum data volume load-balancing method according to claim 1, characterized in that the server is a single server.
3. The weighted minimum data volume load-balancing method according to claim 1, characterized in that the processor comprises multiple CPU cores.
4. The weighted minimum data volume load-balancing method according to claim 1, characterized in that in step (1) the task processes are bound to the designated CPU cores by setting CPU affinity.
5. The weighted minimum data volume load-balancing method according to claim 1, characterized in that in step (3) the weighted load of each CPU core in the server's processor is obtained as W(Si) = Q(Si)/F(Si), where W(Si) is the weighted load of the CPU core, Q(Si) is the total data volume of the task queue bound to the core, and F(Si) is the processing-capacity weight of the core.
6. The weighted minimum data volume load-balancing method according to claim 5, characterized in that the processing-capacity weight of the CPU core is F(Si) = 1 - O(Si), where O(Si) is the occupancy of that core.
7. The weighted minimum data volume load-balancing method according to claim 5, characterized in that the number of task processes corresponds to the number of CPU cores.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710637339.6A CN107562534B (en) | 2017-07-31 | 2017-07-31 | Load balancing method for weighted minimum data volume |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107562534A true CN107562534A (en) | 2018-01-09 |
CN107562534B CN107562534B (en) | 2020-05-08 |
Family
ID=60974110
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710637339.6A Active CN107562534B (en) | 2017-07-31 | 2017-07-31 | Load balancing method for weighted minimum data volume |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107562534B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111209102A (en) * | 2020-01-08 | 2020-05-29 | 湖南映客互娱网络信息有限公司 | Distributed task distribution method and system |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103713956A (en) * | 2014-01-06 | 2014-04-09 | 山东大学 | Method for intelligent weighing load balance in cloud computing virtualized management environment |
CN105516360A (en) * | 2016-01-19 | 2016-04-20 | 苏州帕科泰克物联技术有限公司 | Method and device for load balance of computer |
CN106775927A (en) * | 2016-11-25 | 2017-05-31 | 郑州云海信息技术有限公司 | A kind of processor partition method and device based on KVM virtualization technology |
Also Published As
Publication number | Publication date |
---|---|
CN107562534B (en) | 2020-05-08 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||