CN108733475A - A kind of dynamical feedback dispatching method - Google Patents
- Publication number
- CN108733475A (application CN201810493641.3A)
- Authority
- CN
- China
- Prior art keywords
- node
- load
- formula
- task
- component
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/48—Program initiating; Program switching, e.g. by interrupt
- G06F9/4806—Task transfer initiation or dispatching
- G06F9/4843—Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
- G06F9/4881—Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/16—Error detection or correction of the data by redundancy in hardware
- G06F11/20—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements
- G06F11/202—Error detection or correction of the data by redundancy in hardware using active fault-masking, e.g. by switching out faulty elements or by switching in spare elements where processing functionality is redundant
- G06F11/2023—Failover techniques
- G06F11/203—Failover techniques using migration
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1001—Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
- H04L67/1004—Server selection for load balancing
- H04L67/1008—Server selection for load balancing based on parameters of servers, e.g. available memory or workload
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Software Systems (AREA)
- Quality & Reliability (AREA)
- Computer Hardware Design (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The present invention relates to a dynamic-feedback scheduling method that runs on a scheduling model. The method includes: a monitoring component collects data in real time; the remaining load capacity of each node is calculated and stored, together with other factors that influence scheduling, in a decision knowledge base; the scheduling engine (SCHEDULER) computes weights, establishes a hash mapping, and determines the assignment of tasks. By using the dynamic-feedback scheduling method to coordinate the distributed-engine scheduling model, the present invention makes the addition and removal of service components more convenient and flexible, solving the scalability problem. Because the load of each node in the system is known in time, tasks can be distributed reasonably, achieving load balancing and solving the problem of reasonable task assignment.
Description
Technical field
The present invention relates to a dynamic-feedback scheduling method and belongs to the technical field of distributed computing.
Background technology
Cloud computing offers three service models: SaaS, PaaS and IaaS. SaaS is a software delivery model in which software and its associated data are centrally hosted in the cloud; it reduces the operation and maintenance cost of software systems and has therefore gained wide acceptance and use. SaaS is also the core layer built on top of PaaS and IaaS: it offers characteristics such as customizability, scalability and multi-tenancy (Multi-Tenancy, MT) that can satisfy tenants' diverse requirements for presentation, functionality and performance. Realizing these characteristics requires the support of the PaaS and IaaS layers lower in the cloud computing stack. However, when facing rich application domains, PaaS and IaaS still have problems with task scheduling and resource allocation.
How to assign tasks to resource nodes reasonably while meeting the system's requirements for scalability, robustness and high availability has become a pressing technical challenge in the field of cloud computing.
Invention content
In view of the deficiencies of the prior art, the present invention provides a dynamic-feedback scheduling method.
The technical scheme of the present invention is as follows:
A dynamic-feedback scheduling method runs on a scheduling model. The scheduling model comprises a system software component group and a business software component group; all components are collectively called service components (Service Component, SC).
The system software component group includes the scheduling engine (SCHEDULER), the analysis engine (f_E), the system data object mapping component (MGDM-SC_F), the global data object mapping component (SC_F-GOR), the function management component (SCF-Function), the analysis component (SCF-Analysis), the monitoring component (SCF-Monitor), the system in-memory database (MMDB_sys), the system database (Systemic DB), the system resource management component and the component management component. These are collectively denoted SC_i^F and are scheduled uniformly by the scheduling engine.
The business software component group includes business components (SC_i^B), the business in-memory data object mapping component (MGDM-SC_B) and the business in-memory database (MGDB_b). A virtual node in the cloud environment may contain only one of these two kinds of core components, or both. Each tenant can select several SC_i^B components to form its own application system, which the present invention calls an Internet Service Center (internet Service Center, iSC). When a task request arrives, the execution process is as follows:
1. After a tenant logs in, the platform retrieves the corresponding component names from MMDB_sys and composes the SCF-Function that meets the tenant's demand.
2. A user of the tenant chooses a service menu item in SCF-Function and clicks it.
3. The platform passes the component name corresponding to the clicked menu item to the scheduling engine (SCHEDULER). SCHEDULER queries the decision knowledge base and retrieves the qualifying components (SC_i^F or SC_i^B) from the system database; each carries specific node information.
4. The relevant information about these components is passed to the analysis engine (f_E), which performs dependency analysis and invokes the atomic components that execute the task.
5. After an atomic component has executed the task, it returns the result to SCHEDULER, which finally returns the result to the main frame page for display to the user.
6. If a component cannot execute the task on its node because resources are constrained (including the case of a response timeout), the scheduling engine hands the task, together with the request address, to the scheduling engine on another node to be rescheduled.
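Step 6 above can be sketched as a simple forwarding rule. The interface below is purely illustrative (the patent does not define class or method names): a node whose capacity is exhausted hands the task and its request address to a peer engine.

```python
class SchedulingEngine:
    """Minimal sketch of step 6: a per-node scheduling engine that forwards
    a task (plus its request address) to a peer node when this node is
    resource-constrained. All names here are illustrative assumptions."""

    def __init__(self, name, capacity):
        self.name = name          # node identifier
        self.capacity = capacity  # free slots; 0 means resource-constrained
        self.peers = []           # scheduling engines on other nodes
        self.done = []            # (task, request_addr) pairs executed here

    def execute(self, task, request_addr):
        if self.capacity > 0:     # enough resources: run locally
            self.capacity -= 1
            self.done.append((task, request_addr))
            return self.name
        for peer in self.peers:   # otherwise reschedule on a peer node
            if peer.capacity > 0:
                return peer.execute(task, request_addr)
        raise RuntimeError("no node can execute the task")

# Usage: node-1 is saturated, so the task is rescheduled on node-2.
a, b = SchedulingEngine("node-1", 0), SchedulingEngine("node-2", 1)
a.peers = [b]
executed_on = a.execute("report-gen", "10.0.0.7")  # -> "node-2"
```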
The monitoring component (SCF-Monitor) is responsible for monitoring the running state of all components; it reports problems promptly and temporarily removes components that cannot be used. If a component is unusable because resources are tight, a strategy of degrading it while keeping it in use may also be adopted.
The analysis component (SCF-Analysis) builds, from the usage and failure rates of components, a knowledge base that serves as the basis for component scheduling.
The global data object mapping component (SC_F-GOR) is responsible for synchronizing all MMDB_sys instances, preventing the delay that repeated lookups of the system database would cause during scheduling.
Tenant data are stored by SC_F-GOR in each tenant's relational database.
Because of load pressure and performance requirements, copies of an SC (one deployment of the same service component on one virtual node is called a copy) may be deployed on multiple nodes that meet its resource requirements. Consequently, a function/service item in an iSC may have a one-to-many mapping to the related SC, and when a service request arrives the cloud platform must promptly find a suitable SC to respond to it. For this reason, the analysis engine (f_E) works together with the scheduling engine (SCHEDULER) to guide the SCs; the dynamic-feedback scheduling method is the basis on which the scheduling engine (SCHEDULER) schedules.
The method includes: the monitoring component (SCF-Monitor) collects, in real time, the usage of node resources and data related to response times; the remaining load capacity of each node is calculated and stored, together with the other factors that influence scheduling, in the decision knowledge base; the scheduling engine (SCHEDULER) computes weights, establishes a hash mapping, and determines the assignment of tasks.
Preferably according to the present invention, calculating the remaining load capacity of each node includes:
(1) Calculating the load. The unit of measurement of computing capacity in cloud computing is the virtual node; mainly the five parameters CPU, memory, I/O, process/thread count and bandwidth are considered, and the number of parameters can be increased or decreased according to actual conditions. Complementary SC_i^B/SC_i^F components are deployed on the same node so that routing costs between tasks can be ignored. Suppose the cloud environment has n virtual machines, each being a virtual node, expressed as V = {V_1, V_2, …, V_i, …, V_n}, i = 1, 2, …, n. The load Load(V_i) of any virtual node i is calculated by formula (I):
In formula (I), load(cpu_i), load(mem_i), load(io_i), load(bw_i), load(t_i) and load(x_i) respectively denote virtual node i's current CPU, memory, disk I/O, bandwidth, thread/process occupancy and other influence factors; k denotes the number of influence factors; v_j denotes the impact factor of each load term, measured by experiment. A container supporting the running of SC_i^B/SC_i^F has a maximum concurrent thread count T_max, which may differ between containers; it is therefore reasonable to measure the load of the current container by its thread/process occupancy, i.e. Load(t) = T_cur/T_max, where T_cur is the current number of concurrent threads or processes.
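The image of formula (I) does not survive in the extracted text. From the surrounding definitions (each load term multiplied by an experimentally measured impact factor v_j, with the total weight not exceeding 1), a weighted-sum form consistent with the description, though not necessarily the published layout, would be:

```latex
\mathrm{Load}(V_i)=\sum_{j=1}^{k} v_j\,\mathrm{load}_j(V_i)
 = v_1\,\mathrm{load}(cpu_i)+v_2\,\mathrm{load}(mem_i)+v_3\,\mathrm{load}(io_i)
 + v_4\,\mathrm{load}(bw_i)+v_5\,\mathrm{load}(t_i)+\cdots+v_k\,\mathrm{load}(x_i),
 \qquad \sum_{j=1}^{k} v_j = 1 \tag{I}
```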
(2) Calculating the remaining load capacity. In a cloud environment it occasionally happens that a container is saturated while its resource occupancy is not high; therefore the thread/process occupancy is included when computing the load. Whether tasks can be assigned to a node depends on that node's remaining load capacity. The remaining load capacity Weight(V_i) of any virtual node i is calculated by formula (II):
In formula (II), Weight(V_i) ∈ (0,1); the larger Weight(V_i), the stronger the node's remaining load capacity, and the smaller Weight(V_i), the weaker it is.
(3) Judging whether virtual node i is overloaded: set a load threshold λ, λ ∈ [0,1]. If Weight(V_i) ≥ λ, virtual node i is not overloaded; otherwise virtual node i is overloaded and is no longer assigned tasks.
It is further preferred that λ = 0.1.
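Formula (II) is likewise missing from the extracted text. Given that Weight(V_i) ∈ (0,1) and that it grows as the node's load falls, the natural reading (an assumption, not the published image) is:

```latex
\mathrm{Weight}(V_i) = 1 - \mathrm{Load}(V_i) \tag{II}
```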
Preferably according to the present invention, the scheduling engine (SCHEDULER) calculates the node weights Weight(V_i) and establishes the hash mapping. Establishing the hash mapping constructs a hash space τ over 0 ~ K_max, where K_max is the maximum value of τ. This includes:
τ is divided according to the node weights; the position point(i) of virtual node i in the hash space τ is then calculated by formula (III):
The node interval is denoted P(V_i). If P(V_i) = (point(i-1), point(i)], the value of the hash function is set by formula (IV):
H(j) = V_i, j ∈ P(V_i)   (IV)
In formula (IV), H(j) denotes the hash function.
It follows that the greater a node's remaining load capacity, i.e. the larger the value of Weight(V_i), the larger its range in τ.
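Formula (III) is also absent from the extraction. A cut-point form consistent with "τ is divided according to the node weights" and with the interval P(V_i) = (point(i-1), point(i)] is sketched below; the normalization by the total weight is an assumption:

```latex
\mathrm{point}(i)=K_{\max}\cdot\frac{\sum_{m=1}^{i}\mathrm{Weight}(V_m)}{\sum_{m=1}^{n}\mathrm{Weight}(V_m)},
\qquad \mathrm{point}(0)=0 \tag{III}
```

Together with formula (IV), H(j) = V_i for j ∈ P(V_i), every point of τ then maps to exactly one node.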
Preferably according to the present invention, determining the assignment of a task includes:
(4) assigning each request task a randomly generated sequence number seq with seq ∈ τ;
(5) substituting seq into formula (IV) to obtain V_i; the task is assigned to virtual node V_i for execution.
Because seq is random, it may fall at any position in τ, and a node with a wider range has a higher probability of obtaining the task. When nodes are added or removed, the total weight never exceeds 1, so the size of the hash space is unaffected. Adding nodes shortens the distance between adjacent nodes in τ; removing nodes lengthens it, and by the randomness of seq the task distribution still remains uniform.
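Steps (4) and (5), together with the weighted division of τ, can be sketched as follows. The cut-point computation and the normalization by total weight are assumptions consistent with the description (the published formula (III) is not reproduced in the text), and the function names are illustrative:

```python
import bisect
import random

def build_hash_space(weights, k_max=1.0):
    """Cut points point(i) over the hash space tau = [0, k_max]: each node's
    interval length is proportional to its remaining load capacity Weight(V_i)."""
    total = sum(weights)
    points, acc = [], 0.0
    for w in weights:
        acc += w
        points.append(k_max * acc / total)  # point(i)
    return points

def assign_task(points, seq):
    """Hash function H(seq): the index i of the node whose interval
    (point(i-1), point(i)] contains seq (formula (IV))."""
    return bisect.bisect_left(points, seq)

# Usage: three nodes with remaining capacities 0.5, 0.3 and 0.2; the first
# node owns half of tau and therefore receives roughly half of the tasks.
points = build_hash_space([0.5, 0.3, 0.2])
seq = random.uniform(0.0, 1.0)   # step (4): random sequence number in tau
node = assign_task(points, seq)  # step (5): node that executes the task
```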
Preferably according to the present invention, the current load situation is assessed with the mean-variance method and adjusted when appropriate. This includes:
(6) The average load l_avg is obtained from the integrated load of every server by formula (V), and the load balance degree C in the cloud computing environment is obtained by formula (VI):
In formula (V) and formula (VI), n is the number of nodes; C denotes the load balance degree: the larger C is, the less balanced the load.
(7) A threshold C0 is set, C0 ∈ (0,1). C0 is set according to the tenant's requirements on system response time and what it is willing to pay: the smaller C0 is, the faster the system responds and the fewer tasks each node carries. When C > C0, the request task occupying the most resources on the most heavily loaded node is migrated to the least loaded node.
Under normal circumstances C does not become very large, but extreme cases cannot be avoided entirely. When C really exceeds C0, it means a large task was suddenly assigned to some node, pushing its load above that of all other nodes and far above the average; the load is unbalanced, so the task must be migrated. Otherwise the situation is normal and the load is balanced.
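Steps (6) and (7) can be sketched as below. Formulas (V) and (VI) are not reproduced in the text; the sketch assumes the usual mean-variance form, with l_avg the mean node load and C their standard deviation, so that a larger C means a less balanced load:

```python
import math

def load_balance_degree(loads):
    """Mean load l_avg (assumed formula (V)) and balance degree C
    (assumed formula (VI)) over the per-node integrated loads."""
    n = len(loads)
    l_avg = sum(loads) / n
    c = math.sqrt(sum((l - l_avg) ** 2 for l in loads) / n)
    return l_avg, c

def needs_migration(loads, c0=0.3):
    """Step (7): if C > C0, move work from the most loaded node to the
    least loaded one; returns the (source, destination) node indices."""
    _, c = load_balance_degree(loads)
    if c <= c0:
        return None  # normal condition, load is balanced
    return loads.index(max(loads)), loads.index(min(loads))

# Usage: a big task suddenly lands on node 2, pushing C above C0.
decision = needs_migration([0.2, 0.25, 0.9])  # -> (2, 0)
```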
The beneficial effects of the present invention are:
1. The present invention uses the dynamic-feedback scheduling method to coordinate the distributed-engine scheduling model, making the addition and removal of service components more convenient and flexible and solving the scalability problem.
2. The present invention uses the dynamic-feedback scheduling method to learn the load of each node in the system in time and to distribute tasks reasonably, achieving load balancing and solving the problem of reasonable task assignment.
3. The dynamic-feedback scheduling method used by the present invention monitors the usage of components and node resources. When a node fails, the scheduling engine shields the failed node and transfers its existing tasks to other nodes; no new tasks are assigned to the failed node, improving the availability and robustness of the system.
Description of the drawings
Fig. 1 is a structure diagram of the scheduling model of the present invention;
Fig. 2 is a schematic view of the positions, in the hash space τ, of the virtual nodes obtained in the embodiment;
Fig. 3 is a schematic diagram of the impact factors v_i of the load terms obtained by training;
Fig. 4 is a schematic diagram of the influence of the number of concurrent accesses on load balancing;
Fig. 5 is a schematic diagram of the influence of the number of concurrent accesses on the average response time.
Specific implementation mode
The present invention is further described below with reference to the accompanying drawings and an embodiment, but is not limited thereto.
Embodiment
A dynamic-feedback scheduling method runs on a scheduling model. As shown in Fig. 1, the scheduling model comprises a system software component group and a business software component group; all components are collectively called service components (Service Component, SC).
The system software component group includes the scheduling engine (SCHEDULER), the analysis engine (f_E), the system data object mapping component (MGDM-SC_F), the global data object mapping component (SC_F-GOR), the function management component (SCF-Function), the analysis component (SCF-Analysis), the monitoring component (SCF-Monitor), the system in-memory database (MMDB_sys), the system database (Systemic DB), the system resource management component and the component management component. These are collectively denoted SC_i^F and are scheduled uniformly by the scheduling engine.
The business software component group includes business components (SC_i^B), the business in-memory data object mapping component (MGDM-SC_B) and the business in-memory database (MGDB_b). A virtual node in the cloud environment may contain only one of these two kinds of core components, or both. Each tenant can select several SC_i^B components to form its own application system, which the present invention calls an Internet Service Center (internet Service Center, iSC). When a task request arrives, the execution process is as follows:
1. After a tenant logs in, the platform retrieves the corresponding component names from MMDB_sys and composes the SCF-Function that meets the tenant's demand.
2. A user of the tenant chooses a service menu item in SCF-Function and clicks it.
3. The platform passes the component name corresponding to the clicked menu item to the scheduling engine (SCHEDULER). SCHEDULER queries the decision knowledge base and retrieves the qualifying components (SC_i^F or SC_i^B) from the system database; each carries specific node information.
4. The relevant information about these components is passed to the analysis engine (f_E), which performs dependency analysis and invokes the atomic components that execute the task.
5. After an atomic component has executed the task, it returns the result to SCHEDULER, which finally returns the result to the main frame page for display to the user.
6. If a component cannot execute the task on its node because resources are constrained (including the case of a response timeout), the scheduling engine hands the task, together with the request address, to the scheduling engine on another node to be rescheduled.
The monitoring component (SCF-Monitor) is responsible for monitoring the running state of all components; it reports problems promptly and temporarily removes components that cannot be used. If a component is unusable because resources are tight, a strategy of degrading it while keeping it in use may also be adopted.
The analysis component (SCF-Analysis) builds, from the usage and failure rates of components, a knowledge base that serves as the basis for component scheduling.
The global data object mapping component (SC_F-GOR) is responsible for synchronizing all MMDB_sys instances, preventing the delay that repeated lookups of the system database would cause during scheduling.
Tenant data are stored by SC_F-GOR in each tenant's relational database.
Because of load pressure and performance requirements, copies of an SC (one deployment of the same service component on one virtual node is called a copy) may be deployed on multiple nodes that meet its resource requirements. Consequently, a function/service item in an iSC may have a one-to-many mapping to the related SC, and when a service request arrives the cloud platform must promptly find a suitable SC to respond to it. For this reason, the analysis engine (f_E) works together with the scheduling engine (SCHEDULER) to guide the SCs; the dynamic-feedback scheduling method is the basis on which the scheduling engine (SCHEDULER) schedules.
The method includes: the monitoring component (SCF-Monitor) collects, in real time, the usage of node resources and data related to response times, and the remaining load capacity of each node is calculated (a node in the model, shown as Node1x … in Fig. 1, is a virtual or physical machine in the cloud environment). The remaining load capacity of each node and the other factors that influence scheduling (for example, impact factors added temporarily when interference occurs; with the expression of formula (I), the engine can be extended and augmented on demand) are stored in the decision knowledge base. This database stores the monitoring data collected in real time by the monitoring component; through it an administrator can observe the behaviour of running components and add impact factors when appropriate, and the system can adjust the assignment of tasks according to the given method (see formulas (III) and (IV)), see Fig. 1. In addition, the proportions v_i of the impact factors obtained after system training are also stored in the knowledge base. The scheduling engine (SCHEDULER) then computes the weights, establishes the hash mapping and determines the assignment of tasks.
Calculating the remaining load capacity of each node includes:
(1) Calculating the load. The unit of measurement of computing capacity in cloud computing is the virtual node; mainly the five parameters CPU, memory, I/O, process/thread count and bandwidth are considered, and the number of parameters can be increased or decreased according to actual conditions. Complementary SC_i^B/SC_i^F components are deployed on the same node so that routing costs between tasks can be ignored. Suppose the cloud environment has n virtual machines, each being a virtual node, expressed as V = {V_1, V_2, …, V_i, …, V_n}, i = 1, 2, …, n. The load Load(V_i) of any virtual node i is calculated by formula (I):
In formula (I), load(cpu_i), load(mem_i), load(io_i), load(bw_i), load(t_i) and load(x_i) respectively denote virtual node i's current CPU, memory, disk I/O, bandwidth, thread/process occupancy and other influence factors (the ellipsis represents unpredictable influence factors; the model is extensible, and newly discovered influence factors can be added to it; there can be up to k of them, k being the maximum number of influence factors); k denotes the number of influence factors; v_j denotes the impact factor of each load term, measured by experiment. A container supporting the running of SC_i^B/SC_i^F has a maximum concurrent thread count T_max, which may differ between containers; it is therefore reasonable to measure the load of the current container by its thread/process occupancy, i.e. Load(t) = T_cur/T_max, where T_cur is the current number of concurrent threads or processes.
(2) Calculating the remaining load capacity. In a cloud environment it occasionally happens that a container is saturated while its resource occupancy is not high; therefore the thread/process occupancy is included when computing the load. Whether tasks can be assigned to a node depends on that node's remaining load capacity. The remaining load capacity Weight(V_i) of any virtual node i is calculated by formula (II):
In formula (II), Weight(V_i) ∈ (0,1); the larger Weight(V_i), the stronger the node's remaining load capacity, and the smaller Weight(V_i), the weaker it is.
(3) Judging whether virtual node i is overloaded: set a load threshold λ, λ ∈ [0,1]. If Weight(V_i) ≥ λ, virtual node i is not overloaded; otherwise virtual node i is overloaded and is no longer assigned tasks.
The scheduling engine (SCHEDULER) calculates the node weights Weight(V_i) and establishes the hash mapping. Establishing the hash mapping constructs a hash space τ over 0 ~ K_max, where K_max is the maximum value of τ. This includes:
τ is divided according to the node weights; the position point(i) of virtual node i in the hash space τ is then calculated by formula (III):
The node interval is denoted P(V_i). If P(V_i) = (point(i-1), point(i)], the value of the hash function is set by formula (IV):
H(j) = V_i, j ∈ P(V_i)   (IV)
In formula (IV), H(j) denotes the hash function.
The distribution of the nodes over τ follows from the above formulas, as shown in Fig. 2. It follows that the greater a node's remaining load capacity, i.e. the larger the value of Weight(V_i), the larger its range in τ.
Determining the assignment of a task includes:
(4) assigning each request task a randomly generated sequence number seq with seq ∈ τ;
(5) substituting seq into formula (IV) to obtain V_i; the task is assigned to virtual node V_i for execution.
Because seq is random, it may fall at any position in τ, and a node with a wider range has a higher probability of obtaining the task. When nodes are added or removed, the total weight never exceeds 1, so the size of the hash space is unaffected. Adding nodes shortens the distance between adjacent nodes in τ; removing nodes lengthens it, and by the randomness of seq the task distribution still remains uniform.
The current load situation is assessed with the mean-variance method and adjusted when appropriate. This includes:
(6) The average load l_avg is obtained from the integrated load of every server by formula (V), and the load balance degree C in the cloud computing environment is obtained by formula (VI):
In formula (V) and formula (VI), n is the number of nodes; C denotes the load balance degree: the larger C is, the less balanced the load.
(7) A threshold C0 is set, C0 ∈ (0,1). C0 is set according to the tenant's requirements on system response time and what it is willing to pay: the smaller C0 is, the faster the system responds and the fewer tasks each node carries. When C > C0, the request task occupying the most resources on the most heavily loaded node is migrated to the least loaded node.
Under normal circumstances C does not become very large, but extreme cases cannot be avoided entirely. When C really exceeds C0, it means a large task was suddenly assigned to some node, pushing its load above that of all other nodes and far above the average; the load is unbalanced, so the task must be migrated. Otherwise the situation is normal and the load is balanced.
The core of the invention is examined through a case comparison to demonstrate its practicality. For ease of comparison, two sets of essentially equivalent hardware are used as the basic infrastructure. On one group, a classical multi-tier non-cloud system platform, CERP, is installed; the other group's development tools and running environment are essentially identical to the former's, with Docker introduced for capabilities such as independent component operation and plug-and-play. Table 1 compares the software and hardware schemes of the classical multi-tier architecture and the cloud architecture.
Table 1
Since enterprise application software is essentially centered on documents, reports and their processing, the document is used here as the common unit of measure for modules and components, and two embodiments are compared. Scheme 1 uses the single-tenant mode and serves a certain large logistics company. Its distribution-center business has about 71 modules and 14 process models; forwarding has about 28 modules and 8 process models. To mitigate the degradation of service quality caused by overload, a hardware structure of reverse proxy + two parallel application servers + database is used: one server acts as the reverse proxy to ensure load balance; two application servers act as web servers, with identical software installed, and share the service request tasks; one database server provides data persistence. Scheme 2 uses the scheduling model described in the present invention, incorporated as part of the component groups into the cloud platform BIRISCloud. The business modules of the logistics company are all built in SaaS mode and cover the main transport-logistics information management systems: distribution center (32 document modules, 16 flows), transport (20) and common components (8). Through the SaaS transformation, global components are extracted and the total number of components is reduced: compared with the traditional development mode, the component count drops from 98 to 60, a reduction of 39%, effectively lowering development cost. Taking logistics as the example, an application system for multiple logistics companies is formed after component deployment; here two each of logistics companies, distribution centers and carriers are configured. The virtual-node creation inventory is shown in Table 2.
Table 2
Obtaining the load-term impact factors requires targeted experiments. For enterprise information management software, the data transferred by each task is small but the computation is heavy; therefore 500 computation-heavy tasks were selected and tested repeatedly. At initialization, the impact factors {v_1, v_2, v_3, v_4, v_5} were set to {0.5, 0.2, 0.1, 0.1, 0.1}; starting from v_1, 0.05 is subtracted each time and the later impact factors are increased in turn, giving {0.45, 0.25, 0.1, 0.1, 0.1}, {0.45, 0.20, 0.15, 0.1, 0.1}, …, as shown in Fig. 3. It is easy to observe from the figure that changes in the disk I/O and bandwidth weights have little influence on the response time; for example, the 4th and 5th points are {v_4, v_5} = {0.15, 0.1} and {0.1, 0.15}, and the 8th and 9th are {0.15, 0.15} and {0.1, 0.2}. After bottoming out, the curve rises again from the lowest point (number 19, impact factors {0.3, 0.2, 0.2, 0.1, 0.2}, response time 2111 ms); that is, the next group of impact factors is {0.3, 0.2, 0.25, 0.1, 0.15}, and so on. Over hundreds of experiments the pattern is essentially the same, so the initial impact factors for system operation can be set to {0.3, 0.2, 0.2, 0.1, 0.2}. The values of this group of impact factors are not constant: the monitoring component samples periodically, adjusts them according to the specific situation and then stores them in the knowledge base.
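The sweep described above can be reproduced in outline. The code below enumerates only the groups that follow one 0.05 decrement of v_1, since the full enumeration order is not fully specified in the text; values are kept in integer hundredths so the arithmetic is exact:

```python
def first_sweep_groups(start=(50, 20, 10, 10, 10), step=5):
    """Groups produced after one decrement of v1 by `step` hundredths,
    with the freed weight given to each later factor in turn.
    Divide by 100 to recover the published impact factors."""
    v1 = start[0] - step
    groups = []
    for j in range(1, len(start)):
        g = [v1] + list(start[1:])
        g[j] += step  # move the freed weight to the j-th factor
        groups.append(tuple(x / 100 for x in g))
    return groups

# First two groups match the published sequence:
# (0.45, 0.25, 0.1, 0.1, 0.1) and (0.45, 0.2, 0.15, 0.1, 0.1)
groups = first_sweep_groups()
```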
Figures 4 and 5 illustrate the influence of the concurrency level on load balance (measured by the C value: the smaller the value, the more balanced the load) and on response time. The basic law is that the response time grows as the load increases, but a system with better load balance sees its average response time grow more slowly and can bear greater load pressure. On both load balance and average response time the present invention performs best, because it diffuses hot spots through the dynamic feedback scheduling algorithm and the dynamic expansion strategy. BIRISAmazon is slightly inferior, mainly for network reasons: both deployments run the same BIRISCloud and should otherwise be indistinguishable. CERP performs poorly on the C value because its round-robin distribution, fair on the surface, conceals an unfair distribution of the internal computation load. The peaks marked by dotted lines in the figure arise precisely because round-robin distribution keeps dispatching computation-heavy tasks to the same node, which also drives the sharp rise in average response time. CERP's average response time likewise grows quickly; at 425 concurrent requests it even refuses service, yet tests show that its comprehensive hardware resource utilization at that point is no more than 40%. This exposes two problems in CERP: on the one hand, it does not consider the configuration of the application server software against the hardware resources, so the application server's connection/thread pool fills up while resources remain underused and no dynamic expansion can be carried out; on the other hand, it does not consider the hot-spot problem, so hot spots may all concentrate on one host while another idles most of the time, its scheduling algorithm being too simple. As can be seen from the figures, BIRISCloud, through its scheduling model, hash-consistency feedback algorithm, and three-stage dynamic expansion and deployment strategies, successfully solves these problems. Under the current configuration, BIRISCloud opens 6 business virtual machines, service quality shows no obvious degradation above 400 concurrent requests, and resource utilization reaches about 70% at peak.
The above is only an embodiment of the present invention and is not intended to limit its scope; any equivalent structure or equivalent process transformation made using the description of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of protection of the present invention.
Claims (7)
1. A dynamic feedback scheduling method, characterized in that it runs on a scheduling model, the scheduling model comprising a system software component group and a business software component group; the system software component group comprises a scheduling engine, an analysis engine, a system data object mapping component, a global data object mapping component, a function management component, an analysis component, a monitoring component, a system in-memory database, a system database, a system resource management component, and a component management component; the business software component group comprises a service component, a business in-memory data object mapping component, and a business in-memory database; the method comprising: the monitoring component collecting, in real time, node resource usage and response-time-related data; calculating the remaining load capacity of each node; storing the remaining load capacity of each node, together with other factors that influence scheduling decisions, in a knowledge base; calculating weights through the scheduling engine; establishing a hash mapping relationship; and determining the assignment of tasks.
2. The dynamic feedback scheduling method according to claim 1, characterized in that calculating the remaining load capacity of each node comprises:
(1) supposing there are n virtual machines in the cloud environment, each virtual machine being a virtual node, denoted V = {V1, V2, …, Vi, …, Vn}, i = 1, 2, …, n; the load Load(Vi) of any virtual node i is calculated as shown in formula (I):
In formula (I), load(cpui), load(memi), load(ioi), load(bwi), load(ti), load(xi) denote, respectively, the current CPU occupancy, memory occupancy, disk I/O, bandwidth, thread/process occupancy, and other influence factors of virtual node i; k denotes the number of other influence factors;
(2) the remaining load capacity Weight(Vi) of any virtual node i is calculated as shown in formula (II):
In formula (II), Weight(Vi) ∈ (0,1); the larger Weight(Vi) is, the stronger the remaining load capacity of the node; the smaller Weight(Vi) is, the weaker the remaining load capacity of the node;
(3) determining whether virtual node i is overloaded: a load threshold λ is set, λ ∈ [0,1]; if Weight(Vi) ≥ λ, virtual node i is not overloaded; otherwise, virtual node i is overloaded and is no longer assigned tasks.
3. The dynamic feedback scheduling method according to claim 2, characterized in that λ = 0.1.
4. The dynamic feedback scheduling method according to claim 2, characterized in that the scheduling engine calculates the node weight Weight(Vi) and establishes the hash mapping relationship; establishing the hash mapping relationship constructs a hash space τ from 0 to Kmax, where Kmax is the maximum value of τ; comprising:
dividing τ by node weights, the position point(i) of virtual node i in hash space τ being calculated as shown in formula (III):
the node interval being denoted P(Vi); if P(Vi) = (point(i−1), point(i)], the hash function is set according to formula (IV):
H(j) = Vi, j ∈ P(Vi)   (IV)
In formula (IV), H(j) denotes the hash function.
5. The dynamic feedback scheduling method according to claim 4, characterized in that determining the assignment of tasks comprises:
(4) assigning each request task a randomly generated sequence number seq, with seq ∈ τ;
(5) substituting seq into formula (IV) to obtain Vi; the task is then assigned to virtual node Vi for execution.
6. The dynamic feedback scheduling method according to claim 2, characterized in that the current load condition is assessed using a mean-variance method and adjusted in due course.
7. The dynamic feedback scheduling method according to claim 6, characterized in that assessing the current load condition using the mean-variance method and adjusting it in due course comprises:
(6) calculating the average load lavg by formula (V), and calculating the load balance degree C under the cloud computing environment by formula (VI):
In formula (V) and formula (VI), n is the number of nodes;
C denotes the load balance degree; the larger C is, the lower the degree of load balance;
(7) setting a threshold C0, C0 ∈ (0,1); when C > C0, migrating the request task that occupies the most resources on the most heavily loaded node to the most lightly loaded node.
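The load and remaining-capacity computation of claims 2–3 can be sketched as follows. The images of formulas (I) and (II) are not reproduced in the text, so the weighted sum over the occupancy metrics and the complement Weight = 1 − Load are assumptions, chosen to be consistent with the surrounding description (impact factors v1…v5, Weight(Vi) ∈ (0,1), larger meaning more spare capacity).

```python
# Sketch of the node-load and remaining-capacity computation of claim 2.
# Formulas (I) and (II) are not reproduced in the patent text; a weighted
# sum and Weight = 1 - Load are assumed here.

def node_load(metrics, impact_factors):
    """metrics and impact_factors are equal-length sequences in (0, 1)."""
    return sum(v * m for v, m in zip(impact_factors, metrics))

def remaining_capacity(metrics, impact_factors):
    return 1.0 - node_load(metrics, impact_factors)

def is_overloaded(metrics, impact_factors, lam=0.1):
    # Claim 3 fixes the threshold at lambda = 0.1.
    return remaining_capacity(metrics, impact_factors) < lam

# Initial impact factors from the experiments: {0.3, 0.2, 0.2, 0.1, 0.2}
factors = [0.3, 0.2, 0.2, 0.1, 0.2]   # cpu, mem, io, bw, thread weights
metrics = [0.8, 0.5, 0.3, 0.2, 0.4]   # hypothetical current occupancies
print(round(remaining_capacity(metrics, factors), 2))  # → 0.5
print(is_overloaded(metrics, factors))                 # → False
```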
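The weighted hash space of claims 4–5 can be sketched as follows. Formula (III) is not reproduced in the text; here point(i) is assumed to partition [0, Kmax] in proportion to the node weights, so that nodes with more spare capacity own larger intervals and therefore receive more randomly drawn sequence numbers.

```python
import bisect
import random

K_MAX = 10_000  # assumed size of the hash space tau

def build_hash_space(weights, k_max=K_MAX):
    """Return boundary points point(1)..point(n): node i owns the
    interval (point(i-1), point(i)], sized in proportion to its weight."""
    total = sum(weights)
    points, acc = [], 0.0
    for w in weights:
        acc += w / total * k_max
        points.append(acc)
    return points

def assign_task(points):
    # Claim 5: draw a random sequence number seq in tau and map it to the
    # node whose interval contains it (formula (IV): H(seq) = Vi).
    seq = random.uniform(0, points[-1])
    return bisect.bisect_left(points, seq)  # index of the chosen node

points = build_hash_space([0.5, 0.25, 0.25])
print(points)  # boundaries at 5000.0, 7500.0, 10000.0
```

Because heavier-weighted nodes own wider intervals, a uniform draw over τ yields weight-proportional task assignment without any per-task load query.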
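The mean-variance balance check of claims 6–7 can be sketched as follows. Formulas (V) and (VI) are not reproduced in the text; lavg is taken as the mean node load, and C as the population standard deviation normalized by lavg. That normalization is an assumption — any variance-based measure where larger C means less balanced fits the claim.

```python
from statistics import mean, pstdev

def balance_degree(loads):
    """C value: dispersion of node loads relative to the mean (assumed form)."""
    lavg = mean(loads)
    return pstdev(loads) / lavg if lavg else 0.0

def rebalance_needed(loads, c0=0.5):
    # Claim 7: when C > C0, migrate the heaviest request task from the
    # most loaded node to the least loaded node. C0 = 0.5 is hypothetical.
    return balance_degree(loads) > c0

loads = [0.9, 0.2, 0.3, 0.2]
if rebalance_needed(loads):
    src = loads.index(max(loads))  # most loaded node
    dst = loads.index(min(loads))  # least loaded node
    print(f"migrate a task from node {src} to node {dst}")
```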
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810493641.3A CN108733475A (en) | 2018-05-22 | 2018-05-22 | A kind of dynamical feedback dispatching method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108733475A true CN108733475A (en) | 2018-11-02 |
Family
ID=63937759
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810493641.3A Pending CN108733475A (en) | 2018-05-22 | 2018-05-22 | A kind of dynamical feedback dispatching method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108733475A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104010028A (en) * | 2014-05-04 | 2014-08-27 | 华南理工大学 | Dynamic virtual resource management strategy method for performance weighting under cloud platform |
CN106970831A (en) * | 2017-05-15 | 2017-07-21 | 金航数码科技有限责任公司 | The resources of virtual machine dynamic scheduling system and method for a kind of facing cloud platform |
Non-Patent Citations (1)
Title |
---|
ZHANG Xiaodong et al.: "Research on SaaS Support Framework and Key Technologies" (SaaS 支撑框架及关键技术研究), Telecommunications Science (电信科学) * |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109522129A (en) * | 2018-11-23 | 2019-03-26 | 快云信息科技有限公司 | A kind of resource method for dynamically balancing, device and relevant device |
CN109815014A (en) * | 2019-01-17 | 2019-05-28 | 北京三快在线科技有限公司 | Data processing method, device, electronic equipment and computer readable storage medium |
CN110489238A (en) * | 2019-08-21 | 2019-11-22 | 北京百度网讯科技有限公司 | Nodal test method, apparatus, electronic equipment and storage medium |
CN110691140A (en) * | 2019-10-18 | 2020-01-14 | 国家计算机网络与信息安全管理中心 | Elastic data issuing method in communication network |
CN110691140B (en) * | 2019-10-18 | 2022-02-15 | 国家计算机网络与信息安全管理中心 | Elastic data issuing method in communication network |
CN111083232A (en) * | 2019-12-27 | 2020-04-28 | 南京邮电大学 | Server-side load balancing method based on improved consistent hash |
CN111083232B (en) * | 2019-12-27 | 2022-06-28 | 南京邮电大学 | Server-side load balancing method based on improved consistent hash |
CN111824216A (en) * | 2020-06-19 | 2020-10-27 | 北京交通大学 | Train running scheme evaluation method |
CN111824216B (en) * | 2020-06-19 | 2021-11-05 | 北京交通大学 | Train running scheme evaluation method |
CN113689103A (en) * | 2021-08-18 | 2021-11-23 | 国电南瑞南京控制系统有限公司 | Adaptive load balancing employing flow distribution intelligent scheduling management method, device and system |
CN113689103B (en) * | 2021-08-18 | 2023-11-24 | 国电南瑞南京控制系统有限公司 | Mining and shunting intelligent scheduling management method, device and system for self-adaptive load balancing |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108733475A (en) | A kind of dynamical feedback dispatching method | |
WO2021208546A1 (en) | Multi-dimensional resource scheduling method in kubernetes cluster architecture system | |
CN106161120B (en) | The distributed meta-data management method of dynamic equalization load | |
CN107832153B (en) | Hadoop cluster resource self-adaptive allocation method | |
CN104050042B (en) | The resource allocation methods and device of ETL operations | |
Abad et al. | Package-aware scheduling of faas functions | |
CN104023088B (en) | Storage server selection method applied to distributed file system | |
CN103595780B (en) | Cloud computing resource scheduling method based on the weight that disappears | |
CN105933408B (en) | A kind of implementation method and device of Redis universal middleware | |
CN103176849B (en) | A kind of dispositions method of the cluster virtual machine based on resource classification | |
CN102882973A (en) | Distributed load balancing system and distributed load balancing method based on peer to peer (P2P) technology | |
CN106095531B (en) | A kind of dispatching method of virtual machine loaded based on grade and physical machine in cloud platform | |
US10102230B1 (en) | Rate-limiting secondary index creation for an online table | |
CN102857560A (en) | Multi-service application orientated cloud storage data distribution method | |
CN109783235A (en) | A kind of load equilibration scheduling method based on principle of maximum entropy | |
Mansouri | QDR: a QoS-aware data replication algorithm for Data Grids considering security factors | |
Zhao et al. | Dynamic replica creation strategy based on file heat and node load in hybrid cloud | |
Khatami et al. | High availability storage server with kubernetes | |
US9898614B1 (en) | Implicit prioritization to rate-limit secondary index creation for an online table | |
Garg et al. | Optimal virtual machine scheduling in virtualized cloud environment using VIKOR method | |
Zhiyong et al. | An improved container cloud resource scheduling strategy | |
Tsujita et al. | Alleviating i/o interference through workload-aware striping and load-balancing on parallel file systems | |
Guo et al. | Handling data skew at reduce stage in Spark by ReducePartition | |
Irandoost et al. | Learning automata-based algorithms for MapReduce data skewness handling | |
Douhara et al. | Kubernetes-based workload allocation optimizer for minimizing power consumption of computing system with neural network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20181102 |