CN109815009A - Resource scheduling and optimization method under a CSP - Google Patents

Resource scheduling and optimization method under a CSP

Info

Publication number
CN109815009A
CN109815009A (application CN201811625775.2A, granted as CN109815009B)
Authority
CN
China
Prior art keywords
scheduling
gmap
server
resource
csp
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811625775.2A
Other languages
Chinese (zh)
Other versions
CN109815009B (en)
Inventor
张栋梁
刘会会
张中军
叶海琴
谭永杰
李纲
陈立勇
王倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhoukou Normal University
Original Assignee
Zhoukou Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhoukou Normal University filed Critical Zhoukou Normal University
Priority to CN201811625775.2A priority Critical patent/CN109815009B/en
Publication of CN109815009A publication Critical patent/CN109815009A/en
Application granted granted Critical
Publication of CN109815009B publication Critical patent/CN109815009B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Power Sources (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The invention discloses a resource scheduling and optimization method under a CSP, belonging to the field of cloud computing technology. First, the sub-schedule of each application is generated in parallel, and the sub-schedules are assembled into a pre-schedule set. Each pre-schedule in the set is treated as a root, GMaP is executed in parallel, multiple scheduling results are generated, and the best result is selected as the solution. GMaP mainly addresses the energy problem while having to meet the deadlines; if all deadlines are met, GMaP focuses only on energy consumption, unless a new deadline appears. Under the GMaP framework a scheduling strategy may become too power-hungry or too time-consuming, but the resource requirements and running time of GMaP can be adjusted in two respects according to the computing power of the target cloud environment: the number of roots can be adjusted to any natural number, and the size of each search tree can be adjusted individually. The GMaP of the invention is very flexible in controlling the search-space size and the algorithm running time.

Description

Resource scheduling and optimization method under a CSP
Technical field
The invention belongs to the field of cloud computing technology and, specifically, relates to a resource scheduling and optimization method under a CSP.
Background art
Cloud computing, with its advantages of on-demand self-service, ubiquitous network access, location-independent resource pooling and risk transfer, is regarded as the next-generation computing paradigm. Cloud computing shifts the storage of computing data and resources from the network to the "cloud", so that users can access the resources they need from the cloud anytime and anywhere. At the same time, the cloud service providers (CSPs) that operate large data centers and server clusters for cloud users have emerged; a CSP minimizes the operating cost of providing cloud services and maximizes operating efficiency, attracting ever more users to its cloud service center.
Virtualization technology is one of the important driving forces behind the development of cloud computing. It repackages server, data-center CPU and memory resources into virtual machines (VMs), the basic units of cloud deployment and management; users invoke VMs through the CSP to build services and application platforms, and the CSP decides how physical resources are allocated and shared among users. It is well known that most of a CSP's operating cost comes from dynamic and static energy consumption, and that the energy consumed by a large cloud server while idle can reach 50% of its peak power. Therefore, selectively shutting down idle cloud servers or balancing resource utilization across all active servers can minimize energy consumption. To reassure users about potential adverse consequences, both parties standardize the quality of service, including deadlines, privacy and security criteria, in a Service Level Agreement (SLA). While unconditionally meeting individual user requests, the CSP must manage its large-scale, heterogeneous and elastic cloud resources to maximize efficiency. To achieve this goal, the following three principles, the 3A criteria, must be followed:
1) Accurate cloud platform modeling. The cloud platform model considers only physical factors, which is essential for model optimization: it not only reduces computational complexity but also maintains sufficient model accuracy.
2) Appropriate user workload modeling. Currently there are two workload model frameworks in cloud computing, "batch-mode scheduling" and "dependency-mode scheduling". The key difference between them is the dependency of the processed tasks. In a coarse-grained setting, each user workload is an atomic task independent of all other tasks; according to the Poisson arrival rate and the response time it can be decided whether a set of tasks constitutes a batch job, and batch jobs greatly reduce the complexity of CSP task scheduling. In a fine-grained setting, the workload of each user is a more precise task graph in which tasks depend on and support each other and together complete the CSP task schedule.
3) Acceptable complexity. For dependency-mode scheduling, the CSP must ensure that the optimization process itself does not incur an excessively long running time or high energy consumption.
Summary of the invention
The object of the invention is to propose a resource scheduling and optimization method under a CSP. The method provides a general scheduling and optimization framework for the CSP, intended to meet the deadlines of all users while improving efficiency to the greatest extent; the framework is able to carry and process massive multi-user workloads on a large-scale cloud computing platform.
Its technical solution is as follows:
The resource scheduling and optimization method under a CSP (cloud service provider) opens up a deadline-aware and energy-efficient scheduling framework that optimizes the applications violating their deadlines and the overall energy cost. First, the sub-schedule of each application is generated in parallel, and the sub-schedules are assembled into a pre-schedule set. Each pre-schedule in the set is treated as a root and GMaP is executed in parallel, finally generating multiple scheduling results, from which the best one is selected as the solution. GMaP (Guided Migrate and Pack) mainly addresses the energy problem while having to meet the deadlines. If all deadlines are met, GMaP focuses only on energy consumption, unless a new deadline appears. Under the GMaP framework a scheduling strategy may become too power-hungry or too time-consuming, but the resource requirements and running time of GMaP can be adjusted in two respects according to the computing power of the target cloud environment:
1) the number of roots can be adjusted to any natural number;
2) the size of each search tree can be adjusted individually.
Stage 1: generating sub-schedules
The sub-schedule of user a is a schedule of G_a based on U_a, assuming that the VM (virtual machine) type of each request is instantiated as exactly one VM. All virtual machines are mapped onto a single virtual server. Regardless of the scheduling algorithm, the sub-schedules of all applications can be generated in parallel; an improved greedy algorithm is used.
Stage 2: application characterization
In this stage, each application has different characteristic parameters. The first characteristic parameter is the sub-schedule length of G_a, denoted L_seed^a. The second parameter is the deadline slack parameter SLACK_a; applications with more slack allow the CSP to handle its energy cost better.
Finally, assuming infinite VM resources, the parameter PAR_a is derived from the resulting schedule length. According to these parameters, three application lists sorted in ascending order are generated: Lseed[↑], PAR[↑] and SLACK[↑].
Stage 3: pre-schedule generation
The pre-schedule set is generated by overlaying the sub-schedules onto servers.
Stage 4: optimization
The core of GMaP is a beam-search optimization procedure. In this stage a task can be migrated from one virtual machine to another virtual machine so as to meet the deadlines and maximize energy efficiency. GMaP uses an evolutionary algorithm to seek the global optimum. Each optimization iteration goes through two steps, migration and packing.
The function of Migrate() (the migration function) is to migrate a virtual machine of type g currently located on the source server D_x to a virtual machine of the same type on another destination server D_y. The source and destination servers may differ, but in most cases, to avoid high communication delay, they should be located in the same server group.
Further, each migration attempt makes three important decisions, which jointly determine the quality of the solution, namely:
1) which user's migration should be selected;
2) which task should be migrated;
3) which server the task should be moved to.
The beneficial effects of the invention are as follows:
The GMaP of the invention is very flexible in controlling the search-space size and the algorithm running time. Experimental results show that when GMaP is deployed for a CSP, the global energy consumption improves by more than 23% when serving 30-50 users and by more than 16% when serving 60-100 users.
Detailed description of the invention
Fig. 1 is the environment cycle diagram;
Fig. 2 is the application model;
Fig. 3 is the server-group distribution diagram;
Fig. 4 is the curve of dynamic energy consumption versus CPU (central processing unit) utilization;
Fig. 5 is the virtual machine scheduling diagram;
Fig. 6 is the variation diagram of the COSP (total energy consumption) value;
Fig. 7 is the diagram of the virtual machines allocated to each user in experiment 6.
Specific embodiments
The technical solution of the invention is described in more detail below with reference to the accompanying drawings and specific embodiments.
The invention adopts dependency-mode scheduling of the workload and proposes a new cloud resource configuration mode, a task scheduling model and an energy optimization framework with the following characteristics:
(1) The workload is modeled as a set of multi-task graphs with output dependences. Cloud system optimization frameworks based on this workload model include Nephele, Pegasus and VGrADS.
(2) The cloud platform is modeled as a weighted graph representing heterogeneous servers with different resource capacities, energy efficiencies and communication limits.
(3) Users request virtual machines under a pay-per-use charging protocol (such as Amazon EC2) but do not need to consider resource configuration or task scheduling problems.
(4) The CSP handles deadlines, resource provisioning, virtual machine placement, task scheduling and energy-cost optimization in an integrated manner.
(5) Under the optimized cloud resource conditions, the scheduling algorithm is fully parallelized.
As shown in Fig. 1, task scheduling is performed by the CSP. The CSP selectively accepts workload requests through an admission control mechanism, then allocates an appropriate number of virtual machines to each workload request, assigns these virtual machines to corresponding physical servers, and consolidates virtual machines when necessary. Under the premise of meeting the SLA deadlines, all requests are processed while the energy cost is reduced to the greatest extent. The above workload model not only has the advantage of task-graph parallelization but also has a global optimization characteristic. At present it applies only to batch workload frameworks: the framework is offline, but it is triggered recursively whenever a new user arrives, so it can easily be converted into an adaptive online algorithm.
I. Related work
For a CSP, the first step of cloud service optimization is resource provisioning, which is essentially the process of distributing computing resources in the form of physical servers and virtual machines. The simplest approach envisions the cloud platform as mutually disjoint homogeneous servers and the workload as independent requests with a given arrival rate. In this case, resource provisioning is solved by modified bin-packing algorithms or by adaptive workload prediction algorithms inspired by queueing theory. More accurate hardware modeling methods take communication capacity into account. In the prior art, a graph model based on multi-service cloud environments has been proposed, and the server configuration and message routing problems are solved by means of MILP (mixed integer linear programming); other variants have also been introduced, such as bundled virtual machine requests, virtual machine performance variability, server sleep states or price auctions.
After resource provisioning comes the mapping of applications or virtual machines to physical servers. The goal of this process is to keep each server close to its optimal utilization level so as to achieve high energy efficiency. The problem is similar to the classical load-balancing problem in Internet services, and for independent workloads it can be solved jointly with resource allocation. For example, bin-packing algorithms in the prior art not only reduce the number of deployed servers to the greatest extent but also prevent servers from being under-loaded or overloaded, and the virtual machine configuration and placement problem has been formulated as an optimization function in the prior art. Other classical solutions are also applied, such as MCMF (minimum-cost maximum-flow). When the workload cannot be predicted in advance, dynamic workload migration and virtual machine re-allocation become extremely important; the variation of electricity prices over time and space also drives load balancing. Related techniques include MILP, primal-dual methods, bargaining games and probabilistic prediction.
In cloud computing systems, scheduling the task graphs of dependency-based workloads differs from batch task scheduling. At a high level, the problem is similar to chip multiprocessor (CMP) scheduling in the parallel/cluster computing community. However, the techniques developed there do not apply directly to cloud users, mainly because the public cloud is opaque and multiple competing users coexist. More importantly, the elastic nature of the underlying cloud hardware is not exploited for the CSP: improved scheduling frameworks such as Nephele and Pegasus optimize from the perspective of an individual user and fully consider the dynamic characteristics of computing resources in the cloud. Because they lack knowledge of the other competing users and of the whole cloud resource graph, they cannot capture global CSP management mechanisms such as admission control, virtual machine placement and consolidation; as a result they are evaluated only by the resource utilization and completion time of a single user rather than by the overall energy consumption.
II. User workload model
The invention models the user workload with directed acyclic graphs (DAGs). The entire workload is expressed as a set of N disjoint DAGs: {G_1(V_1, E_1), G_2(V_2, E_2), ..., G_N(V_N, E_N)}. Each directed acyclic graph G_a (1 ≤ a ≤ N) represents one workload request, and each vertex V_i^a (1 ≤ i ≤ |V_a|) of G_a represents a task. In general, each workload request is assumed to be an application belonging to a single user; therefore each application is equivalent to one user workload request. In G_a, an edge from T_i^a to T_j^a indicates an output dependence of T_j^a, and the weight w_{i,j}^a on the edge denotes the amount of data that must be successfully delivered from the preceding task T_i^a to T_j^a. An example is shown in Fig. 2.
Task feature
Tasks are all processed by virtual machines. Virtual machines are divided into K types: {VM_1, VM_2, ..., VM_K}, and each type VM_g (1 ≤ g ≤ K) is associated with a pair of integers representing the quantity of CPU and memory resources this virtual machine needs, written C_g^VM and M_g^VM below.
Each task T_i^a is associated with a pair of integers (g_i^a, t_i^a). The first parameter g_i^a indicates the only virtual machine type on which the task can run. The second parameter t_i^a is the longest execution time of task T_i^a on a virtual machine of type g_i^a. These integer pairs are the data inputs of the optimization algorithm. We regard the maximum execution time as a common factor affecting all scheduling algorithms; the Nephele project team envisioned a learning mechanism that enables the cloud operator to infer this execution time from past experience.
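For concreteness, the DAG workload and VM-type model described above can be captured with simple data structures. The following Python sketch is illustrative only; the field and variable names (cpu, mem, vm_type, max_time, vm_types, g1) are chosen here for the example and do not come from the patent.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class VMType:                      # VM_g: CPU and memory demand of one VM of type g
    cpu: int
    mem: int

@dataclass
class Task:                        # T_i^a: (required VM type g_i^a, worst-case execution time t_i^a)
    vm_type: int
    max_time: int

@dataclass
class Workload:                    # G_a: one user's DAG of tasks with output dependences
    tasks: Dict[int, Task] = field(default_factory=dict)
    # edges[(i, j)] = w_{i,j}^a, the data volume delivered from T_i^a to T_j^a
    edges: Dict[Tuple[int, int], int] = field(default_factory=dict)

# A platform offering K = 2 VM types and one two-task workload:
vm_types = {1: VMType(cpu=2, mem=4), 2: VMType(cpu=4, mem=8)}
g1 = Workload(tasks={1: Task(vm_type=1, max_time=5), 2: Task(vm_type=2, max_time=3)},
              edges={(1, 2): 10})
```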
Resource request
In addition to workload requests, users must also obtain computing resources from the CSP. Of course, the CSP charges the user's resource request according to a predetermined charging contract. In current cloud systems, resource requests are bundled with virtual machine types: each user can only specify the types of virtual machines required, not the number of virtual machines of each type, so users need not be concerned with the details of resource allocation.
The virtual machine request is expressed as follows: each application G_a is associated with a binary array U_a consisting of K elements (K is the number of virtual machine types): U_a = (u_1^a, u_2^a, ..., u_K^a) with u_g^a ∈ {0, 1}. If u_g^a = 1, VM_g is requested by user a; otherwise u_g^a = 0. U_a must guarantee that all tasks can be mapped onto virtual machines; that is, for every task whose required type is g, VM_g must belong to {VM_1, VM_2, ..., VM_K} and u_g^a = 1. The application of each user is defined by the workload request G_a and the virtual machine request U_a, but the scheduling is not performed by the user, for three reasons: I) the cloud platform is opaque to the user; II) the user does not have the corresponding computing capability; III) the CSP holds global scheduling authority in order to achieve higher efficiency.
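A minimal sketch of the consistency requirement on U_a described above: every VM type needed by some task of G_a must be marked as requested. Representing U_a as a plain Python list of 0/1 flags indexed by type is an assumption made only for this illustration.

```python
from typing import Dict, List

def request_is_valid(u_a: List[int], required_types: Dict[int, int]) -> bool:
    """u_a[g-1] in {0,1} for VM types 1..K; required_types maps task id -> required type g_i^a.
    U_a is valid only if every type used by a task of G_a is requested (u_g^a = 1)."""
    k = len(u_a)
    for _task, g in required_types.items():
        if not (1 <= g <= k) or u_a[g - 1] != 1:
            return False
    return True

# Example: two tasks needing types 1 and 2; the user requests both types out of K = 3.
assert request_is_valid([1, 1, 0], {1: 1, 2: 2})
```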
Deadline
Although a user cannot request more virtual machines of the same type and cannot arrange his own workload, he can control the workload performance through the deadline specified in the SLA. The workload deadline of user a (G_a) is written d_a below. In general, when user a gives a smaller deadline, the CSP will allocate more VM resources to G_a so that the tasks in G_a can execute in parallel and finish as early as possible. However, the user's workload is constrained by admission control; therefore applications that exceed their deadline will be refused scheduling or dropped during scheduling.
III. Cloud platform model
The cloud consists of M servers {D_1, D_2, ..., D_M} and is modeled as an undirected graph with M vertices, each vertex representing one server. The weight B_{x,y} on each edge represents the communication capacity between (D_x, D_y). Several adjacent servers form a locally connected server group; the servers in a group communicate with each other through high-speed channels, and the distance and channel bandwidth between servers are expressed by the value B_{x,y}. By default B_{x,x} = ∞, i.e. tasks executed on the same server incur no communication overhead. In addition, it is assumed that a channel exists between any two servers, either directly or over multiple hops; a multi-hop path is abstracted as an edge with a low value of B_{x,y}. In Fig. 3, the cloud platform consists of 9 servers forming two server groups, one with 6 servers and one with 3 servers; the local connections may also be heterogeneous, and for clarity not all server connections are shown.
Virtual machine configuration and resource utilization
While running, each server D_x is associated with an integer array of K elements Q_x = (Q_x^1, Q_x^2, ..., Q_x^K), where Q_x^g denotes the number of virtual machines of type g (VM_g) running on server D_x. Q_x is dynamic, because it changes as VMs are stopped and started; its value at time t is Q_x(t). Each server D_x contains a limited quantity of resources, namely a CPU capacity C_x^D and a memory capacity M_x^D. Obviously, the VM configuration of D_x must respect the total resources, i.e. Σ_{g=1..K} Q_x^g(t)·C_g^VM ≤ C_x^D and Σ_{g=1..K} Q_x^g(t)·M_g^VM ≤ M_x^D. The power consumption of server D_x at time t consists of a static part P_x^static and a dynamic part P_x^dyn(t); both factors are related to the resource utilization Util_x(t) of D_x at time t. Util_x(t) accounts only for the CPU occupied by the Q_x(t) virtual machines, regardless of whether a virtual machine is running or idle, because the CPU must also operate while the VM is idle. Util_x(t) is expressed as:
Util_x(t) = (Σ_{g=1..K} Q_x^g(t) · C_g^VM) / C_x^D
When Util_x(t) > 0, P_x^static is a constant; otherwise it is 0. According to the performance-per-watt results in the prior art, a server has an optimal utilization, and the invention defines the optimal utilization of server D_x as Opt_x. For current servers Opt_x ≈ 0.7, and when Util_x(t) < Opt_x the power consumption rises faster. The parameters α_x and β_x represent the rates at which the dynamic power of server D_x increases when Util_x(t) < Opt_x and when Util_x(t) ≥ Opt_x, respectively; even at the same utilization, different servers differ in efficiency. The dynamic power is thus a piecewise function of Util_x(t) whose slopes on either side of Opt_x are determined by α_x and β_x.
Fig. 4 depicts the resulting curve over different CPU utilizations for β_x = 10 and Opt_x = 0.7.
Assuming the upper bound on the schedule length of all applications is L_max, the total energy consumption (COSP) is the sum of the power consumption of all servers over the whole running time:
COSP = Σ_{t=0..L_max} Σ_{x=1..M} (P_x^static + P_x^dyn(t))
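As an illustration of the platform model above, the minimal Python sketch below computes Util_x(t), the server power and COSP. The piecewise-linear dynamic-power curve (slope alpha below Opt_x, slope beta above it) is an assumption made for this sketch; the patent specifies the curve only qualitatively and through Fig. 4.

```python
from typing import Dict, List

def utilization(q_x: Dict[int, int], cpu_vm: Dict[int, int], cpu_cap: int) -> float:
    """Fraction of D_x's CPU capacity occupied by its running VMs (busy or idle)."""
    return sum(n * cpu_vm[g] for g, n in q_x.items()) / cpu_cap

def server_power(util: float, p_static: float, alpha: float, beta: float, opt: float = 0.7) -> float:
    """Static + dynamic power; the piecewise-linear dynamic part is an assumed form."""
    if util <= 0.0:
        return 0.0                                   # no VMs hosted: static part is 0 in this model
    if util < opt:
        dyn = alpha * util                           # below the optimal utilization Opt_x
    else:
        dyn = alpha * opt + beta * (util - opt)      # at or above Opt_x
    return p_static + dyn

def cosp(power_per_time: List[List[float]]) -> float:
    """COSP: sum of all server powers over the whole schedule length L_max."""
    return sum(sum(powers_at_t) for powers_at_t in power_per_time)

# Example: two type-1 VMs (2 CPUs each) on a server with 8 CPUs -> Util = 0.5.
print(server_power(utilization({1: 2}, {1: 2}, 8), p_static=50, alpha=40, beta=10))
```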
Admission control
The purpose of admission control is to identify and handle user workloads that consume resources excessively. In the cloud, some users reserve a large number of VMs, occupying resources and making the resource scheduling of other users difficult. In the invention, based on the users' deadlines, a two-level admission control mechanism is used to screen and filter users.
Under the two-level admission control mechanism, each application is checked to judge whether it can complete before the given deadline. Unlike a kernel scheduler, this "schedulability" check can be performed in linear time; if the deadline is exceeded, the workload request will not be executed. During scheduling, users compete for VM resources under the supervision of the CSP, and because resources are limited, some user requests may not be satisfied. If a deadline is still not met after extensive optimization, the workload request is dropped.
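One possible realization of the linear-time "schedulability" check mentioned above is to compare an application's shortest achievable schedule length (its seed length L_seed^a, obtained when every requested VM is granted) with its deadline d_a. The sketch below follows that reading and is an assumption, not the patent's exact procedure.

```python
from typing import Dict, List

def admit(seed_len: Dict[str, float], deadline: Dict[str, float]) -> List[str]:
    """First-level check: reject any workload whose best-case schedule length
    already exceeds its deadline; admitted requests still compete for VMs later."""
    return [a for a in seed_len if seed_len[a] <= deadline[a]]

print(admit({"G1": 12, "G2": 19}, {"G1": 15, "G2": 18}))   # -> ['G1']
```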
IV. Cloud operation
The CSP is responsible for configuring VMs and distributing tasks. Each virtual machine can be occupied by only one user until the CSP stops that virtual machine service. Suppose there is a ready task T_i^a. If T_i^a is scheduled onto server D_x and served by a virtual machine of type g, two conditions must be satisfied:
1) the target VM is available and serves only user a;
2) all necessary output data have been delivered.
Suppose a preceding task T_j^a of task T_i^a executed on server D_y; then the data transfer time is w_{j,i}^a / B_{y,x}.
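The readiness condition and the data-transfer time above translate directly into an earliest-start-time computation. The sketch below is a minimal illustration under the stated model; the dictionary-based description of the platform is an assumption.

```python
from typing import Dict, Tuple

def earliest_start(finish_prev: float, data_volume: float,
                   bandwidth: Dict[Tuple[int, int], float],
                   src: int, dst: int) -> float:
    """Predecessor finished on server src at finish_prev; its output of
    data_volume units must reach server dst before the task may start."""
    if src == dst:
        return finish_prev                 # B_{x,x} = infinity: no transfer cost
    return finish_prev + data_volume / bandwidth[(src, dst)]

# Example: 10 data units over a link of capacity 5 add 2 time units.
print(earliest_start(7.0, 10.0, {(1, 2): 5.0}, src=1, dst=2))   # -> 9.0
```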
Scheduling quality is determined by two factors: i) the overall energy consumption; ii) the number of requests dropped because of deadline violations. Sometimes the scheduling quality is high in energy terms but infeasible because deadlines are violated; in that case the scheduling strategy should be adjusted to meet the deadline requirements while dropping as few requests as possible. For example, suppose the CSP serves two users whose workload information is given in Table 1 below.
Table 1. Task graphs and task deadlines
If each workload is regarded as a single atom, the obvious scheduling method is to place both applications in the figure on the most energy-efficient server 5 (D_5); when two type-1 virtual machines are allocated for them, the utilization Util_5(t) is less than Opt_5. The result is shown in Fig. 5(a), and the schedule length is 19 units. If the deadlines lie as indicated by the dotted red line, this schedule violates the deadlines of both users. Before the user requests are dropped, the cost of this schedule is computed as cosp_a.
To meet the deadlines of both users, the CSP exploits the data parallelism in G_1 and G_2. A greedy method can generate the schedule shown in Fig. 5(b), in which all virtual machines are concentrated on D_5; the completion times of both users are reduced, but D_5 becomes overloaded. Meanwhile Util_x(t) exceeds Opt_x, and the energy cost of this schedule is cosp_b.
In more detail, at t = 9 the type-1 virtual machine reserved for user 1 is stopped, and a type-1 virtual machine configuration is retained for user 2; the energy cost of the corresponding schedule is cosp_c.
Another solution uses a further server, such as server 6 (D_6). The CSP adds one user to server D_6 so that neither D_5 nor D_6 is overloaded; Fig. 5 shows user 2 being transferred to server D_6. D_6 and D_5 need not be in the same server group. The energy cost of this schedule is cosp_d.
Assume P_5^static = P_6^static, β_5 = β_6 = 10, Opt_5 = Opt_6 = 0.7, and that the remaining parameters are variable. For simplicity, if only energy consumption is considered, cosp_a is the best scheduling scheme. The values of cosp_b and cosp_d are both larger than cosp_a, and the comparison between cosp_b and cosp_d depends on P_static and other parameters such as β_5 and β_6. If the multi-user problem with large task graphs is considered, which can be realized as in Fig. 5(c), then cosp_c is the best: it meets all the deadlines while its energy consumption remains lower than cosp_a. Fig. 6 shows how server efficiency influences the scheduling.
The main points of this case study are:
1) The CSP's energy overhead usually increases when applications are accelerated through additional virtual machine allocation and parallel execution.
2) When performing VM allocation and task migration there are multiple scheduling schemes; schemes that do not violate deadlines and have low energy cost are preferred. The optimal allocation and migration scheme depends on the characteristics of the cloud platform and on the workload situation.
V. The GMaP framework
In this section, the "Guided Migrate and Pack" (GMaP) scheduling optimization framework of the CSP is introduced. GMaP is based on beam search and is fully parallelized; the CSP runs GMaP on the cloud resources it governs.
The basic idea of the GMaP algorithm is to open up a deadline-aware and energy-efficient scheduling framework that optimizes the applications violating their deadlines and the overall energy cost. First, the sub-schedule of each application is generated in parallel and assembled into a pre-schedule set. Each pre-schedule in the set is treated as a root and GMaP is executed in parallel, finally generating multiple scheduling results, from which the best one is selected as the solution. GMaP mainly addresses the energy problem while having to meet the deadlines. If all deadlines are met, GMaP focuses only on energy consumption, unless a new deadline appears. Under the GMaP framework a scheduling strategy may become too power-hungry or too time-consuming, but the resource requirements and running time of GMaP can be adjusted in two respects according to the computing power of the target cloud environment:
1) the number of roots can be adjusted to any natural number;
2) the size of each search tree can be adjusted individually.
Stage 1: generating sub-schedules
The sub-schedule of user a is a schedule of G_a based on U_a, assuming that the VM type of each request is instantiated as exactly one VM. All virtual machines are mapped onto a single virtual server. Regardless of the scheduling algorithm, the sub-schedules of all applications can be generated in parallel.
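One simple way to realize the sub-schedule generation described above (the summary mentions an improved greedy algorithm) is list scheduling over the DAG on the idealized single virtual server. The sketch below is an assumption for illustration, not the patent's algorithm.

```python
from typing import Dict, List, Tuple

def greedy_sub_schedule(exec_time: Dict[int, int],
                        edges: List[Tuple[int, int]]) -> Dict[int, Tuple[int, int]]:
    """Place each ready task as early as its predecessors allow; all tasks run on
    one virtual server, so no communication cost is charged between them."""
    preds: Dict[int, List[int]] = {t: [] for t in exec_time}
    for i, j in edges:
        preds[j].append(i)
    finish: Dict[int, int] = {}
    schedule: Dict[int, Tuple[int, int]] = {}
    remaining = set(exec_time)
    while remaining:
        ready = [t for t in remaining if all(p in finish for p in preds[t])]
        t = min(ready, key=lambda x: exec_time[x])          # greedy: shortest ready task first
        start = max((finish[p] for p in preds[t]), default=0)
        finish[t] = start + exec_time[t]
        schedule[t] = (start, finish[t])
        remaining.remove(t)
    return schedule

print(greedy_sub_schedule({1: 5, 2: 3, 3: 4}, [(1, 2), (1, 3)]))
```

The length of the resulting schedule (the latest finish time) is the seed length L_seed^a used in stage 2.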
Stage 2: application characterization
In this stage, each application has different characteristic parameters. The first characteristic parameter is the sub-schedule length of G_a, denoted L_seed^a. The second parameter is the deadline slack parameter SLACK_a; applications with more slack allow the CSP to handle its energy cost better.
Finally, assuming infinite VM resources, the parameter PAR_a is derived from the resulting schedule length. According to these parameters, three application lists sorted in ascending order are generated: Lseed[↑], PAR[↑] and SLACK[↑].
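As a rough illustration of stage 2, the sketch below derives the per-application parameters and the three ascending lists Lseed[↑], PAR[↑] and SLACK[↑]. The exact definitions of PAR_a and SLACK_a are stated only loosely in the text, so the formulas used here (PAR as the ratio of sequential work to seed length, SLACK as deadline minus seed length) are assumptions.

```python
from typing import Dict, List, Tuple

def characterize(seed_len: Dict[str, float],
                 seq_len: Dict[str, float],
                 deadline: Dict[str, float]) -> Tuple[List[str], List[str], List[str]]:
    """Return the application ids sorted ascending by L_seed, PAR and SLACK.
    PAR_a and SLACK_a below are assumed definitions, not the patent's exact formulas."""
    par = {a: seq_len[a] / seed_len[a] for a in seed_len}       # achievable parallelism
    slack = {a: deadline[a] - seed_len[a] for a in seed_len}    # negative => deadline violated
    l_seed_up = sorted(seed_len, key=seed_len.get)
    par_up = sorted(par, key=par.get)
    slack_up = sorted(slack, key=slack.get)
    return l_seed_up, par_up, slack_up

lists = characterize({"G1": 12, "G2": 19}, {"G1": 20, "G2": 30}, {"G1": 15, "G2": 18})
```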
Stage 3: pre-schedule generation
The pre-schedule set is generated by overlaying the sub-schedules onto servers, as shown in Fig. 5(a).
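As an illustration only, the following sketch shows one plausible way of overlaying per-user sub-schedules onto physical servers to form a pre-schedule, packing each user onto the server with the most remaining CPU capacity. It is an assumption and not the patent's pseudocode.

```python
from typing import Dict, List

def build_pre_schedule(vm_demand: Dict[str, int],
                       server_capacity: List[int]) -> Dict[str, int]:
    """Assign each user's requested VMs (total CPU demand) to the server with the
    most remaining CPU; returns user -> server index. Simplified: CPU only."""
    remaining = list(server_capacity)
    placement: Dict[str, int] = {}
    for user, demand in sorted(vm_demand.items(), key=lambda kv: -kv[1]):
        x = max(range(len(remaining)), key=lambda i: remaining[i])
        if remaining[x] < demand:
            raise ValueError(f"no server can host {user}")
        remaining[x] -= demand
        placement[user] = x
    return placement

print(build_pre_schedule({"G1": 4, "G2": 6}, [8, 8, 8]))
```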
Stage 4: optimization
The core of GMaP is a beam-search optimization procedure. In this stage a task can be migrated from one virtual machine to another virtual machine so as to meet the deadlines and maximize energy efficiency. GMaP uses an evolutionary algorithm to seek the global optimum. Each optimization iteration goes through two steps, migration and packing.
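The migrate-then-pack loop described above can be read as a beam-style search over candidate schedules. The sketch below encodes that reading; `energy`, `violations` and `neighbours` are hypothetical callbacks standing in for GMaP's internals, and the loop itself is an assumption, not the patent's pseudocode.

```python
from typing import Callable, List

def gmap_optimize(roots: List[object],
                  neighbours: Callable[[object], List[object]],
                  energy: Callable[[object], float],
                  violations: Callable[[object], int],
                  beam_width: int = 10, iters: int = 100) -> object:
    """Beam-style search: first minimize deadline violations, then energy (COSP)."""
    key = lambda s: (violations(s), energy(s))
    beam = sorted(roots, key=key)[:beam_width]
    for _ in range(iters):
        candidates = list(beam)
        for s in beam:
            candidates.extend(neighbours(s))       # each neighbour = one Migrate() + Pack() step
        beam = sorted(candidates, key=key)[:beam_width]
    return beam[0]
```

The number of roots and the per-tree beam width and iteration budget correspond to the two adjustment knobs listed above.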
The function of Migrate () is will to be currently located at source server DxOn type g virtual machine (vm) migration to another purpose Server DyOn same type virtual machine.Source and target server can be different, but in most cases, are High communication delay is avoided, they should be located in the same server zone.
Each migration attempt makes three important decisions, which jointly determine the quality of the solution:
1) which user's migration should be selected?
2) which task should be migrated?
3) which server should the task be moved to?
The CSP selects the application whose task is lagging behind and moves it to a server that will not incur a high energy-efficiency overhead, while considering whether other applications are affected. The decision process usually cross-checks Lseed[↑], PAR[↑] and SLACK[↑] in order to select an application with a negative SLACK value and high L_seed and PAR values for migration: a negative slack value means the application violates its deadline requirement, while a high PAR value means its length can be reduced by parallel execution. The target server is selected according to its utilization rank and the task dependencies. Since the current priority is to meet the deadline, the server on which the predecessor tasks reside is preferred, minimizing cross-server communication. When most applications meet their deadlines, GMaP manipulates the applications with high slack values more flexibly to minimize energy consumption, for example by moving tasks to less crowded servers. After a migration, the plans on the source server and on the destination server must be adjusted wherever the current task is a direct or indirect successor of the migrated task. Since migration entails virtual machine consolidation and reorganization, it may result in over-provisioned VMs.
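The decision rules in this paragraph can be read as a simple scoring of applications over the three sorted lists. The sketch below encodes that reading (prefer negative SLACK, then high L_seed and PAR; once all deadlines are met, prefer high SLACK to save energy) and is an interpretation, not the patent's exact rule.

```python
from typing import Dict

def pick_application_to_migrate(l_seed: Dict[str, float],
                                par: Dict[str, float],
                                slack: Dict[str, float]) -> str:
    """Choose which user's task to migrate next, following the text's priorities."""
    violating = [a for a in slack if slack[a] < 0]
    if violating:
        # deadline violators first: large seed length and parallelism benefit most from migration
        return max(violating, key=lambda a: (l_seed[a], par[a]))
    # all deadlines met: move the application with the most slack to reduce energy
    return max(slack, key=slack.get)

print(pick_application_to_migrate({"G1": 12, "G2": 19}, {"G1": 1.6, "G2": 1.5},
                                  {"G1": 3, "G2": -1}))   # -> 'G2'
```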
VI. Experimental results
In this section, the effectiveness of GMaP is demonstrated by experiments with massive workloads on a large-scale cloud platform. The input data differ for each experiment, and the scale of the cloud platform also varies. Table 2 gives the upper and lower bounds of some key parameters.
Table 2. Model parameters
First, the final scheduling results are compared with "BDOPS", the best pre-schedule obtained when deadlines are ignored, i.e. the best scheduling in which each workload request is treated as an atomic entity and all deadlines are disregarded. BDOPS achieves the best energy efficiency but contains a large number of deadline violations. The invention formulates the SLA so that 30%-80% of the users violate their deadlines under BDOPS; BDOPS serves as the reference for the energy-overhead computation.
Second, the solution is compared with a baseline obtained by removing the energy-efficiency optimization from GMaP, i.e. the baseline schedule is energy-oblivious. The results are shown in Table 3.
Table 3. Results for large-scale and ultra-large-scale user workload inputs
With a large-scale input of 30-50 users, the energy consumption changes by -14.12% on average compared with BDOPS; in other words, the energy overhead averages 14.12%. This overhead is unavoidable because it comes from the additional virtual machine allocations used to accelerate the applications that violate their deadlines. The CSP allocates appropriate VMs to each user according to user demand, and Fig. 7 illustrates the virtual machine allocation of experiment 6. Compared with the baseline, the improvement in energy cost averages 23.61%, which is a very promising result.
With an ultra-large-scale input of 60-100 users, achieving the same solution quality would require expanding the search space of the algorithm to match the increased number of users. However, for consistency, the invention modifies the size of the search space as shown in Table 2. As a result, the average energy-cost overhead rises to 49.72%, and the average energy improvement relative to the baseline drops to 9.35%. From these data it can be inferred that 50 roots and a search space of 5000 nodes per search tree are sufficient to accommodate 30-50 user workloads but inadequate for 60-100 user workloads.
By observing Util(t) of all 10 servers in experiment 6, with the servers ranked by efficiency grade and server 0 confirmed as the most energy-efficient, the fundamental difference between BDOPS and the scheduling scheme can be seen. Under BDOPS, all applications are placed on the most energy-efficient servers 0-4, and servers 5-9 are not used. Although its energy efficiency is the best, 12 of the 29 accepted workloads violate their deadlines.
Under the scheduling scheme, GMaP allocates additional virtual machines, which raises the utilization of servers 0-4 and brings the less efficient servers 5-9 online to host new virtual machines; their Util(t) values are relatively high at t = 0, but these high values drop after only a short time.
Search space analysis
Like other evolutionary algorithms, GMaP can produce better solutions when it runs longer with an expanded search space, but supporting a larger search space requires more computation time, which increases the energy consumed and the time spent running GMaP itself. To assess the cost and benefit of expanding the search space in terms of solution quality, and to offer some guidance on how to balance the gains of deploying GMaP against the cost of GMaP itself, experiments 15-18 of Table 3 are rerun with an extended search space, namely 50 roots in the PSS and 10^4 nodes per search tree. The results are shown in Table 4.
Table 4. Results with the expanded search space
When the search space is doubled, GMaP performs better than in Table 3, almost doubling the average energy-consumption improvement from 9.35% in Table 3 to 16.85% in Table 4. Next, a detailed case study with 20 accepted users on a fixed cloud platform is examined, with 50 roots in the PSS and the search-tree size increased from 10^0 to 10^5 nodes in exponential steps. The results are shown in Table 5.
Table 5. Influence of the algorithm on the scheduling optimization scheme
As the search tree grows, the GMaP solution can approach the optimum. Importantly, GMaP exhibits diminishing returns, and the sizes of the search tree and of the PSS depend strongly on the actual operating conditions. At present GMaP achieves flexibility only in terms of the search space.
The aggressiveness of the deadlines in the SLA affects the quality of the solution. The invention runs an experiment on a fixed cloud platform comprising two server groups with 15 servers in total and a fixed set of 40 workload requests. The deadline of each application is a fraction of its seed schedule length: d_a = μ · L_seed^a. If μ ≥ 1, BDOPS meets all deadlines and immediately becomes the baseline scheme. When μ < 1, GMaP has to resolve deadline violations. Since μ controls the deadlines of all applications, even a slight decrease of μ greatly increases the aggressiveness of the overall deadlines; if μ is too small, many requests are dropped. To remain consistent with the assumptions, Table 6 studies only the cases in which no request is dropped.
Table 6. Influence of the deadline on the scheduling optimization scheme
When μ = 1, it can be seen that GMaP keeps the energy consumption of the whole optimization process minimal, improving on BDOPS by 20.32%. When μ decreases below 1, GMaP is constrained by the hard deadline requirement, so its scheduling scheme is no longer superior to BDOPS. As μ decreases, the baseline scheme performs many virtual machine allocations and task migrations that are unfavorable to energy consumption; GMaP successfully recovers nearly 40% of the energy loss.
VII. Conclusions
The invention studies the global runtime optimization problem of cloud computing from the viewpoint of the cloud service provider (CSP). The goal is to provide a general scheduling and optimization framework for the CSP that meets the deadlines of all users while improving efficiency to the greatest extent; the framework is able to carry and process massive multi-user workloads on a large-scale cloud computing platform.
Cloud computing systems use two types of workload model: independent batch requests and task graphs with dependencies. The invention models the workloads from multiple users as a set of disjoint task graphs. The cloud platform model fully reflects the heterogeneity of server resource capacity and energy efficiency and also takes server communication bottlenecks into account. Minimizing the global energy consumption through parallel execution and fine-grained handling of hardware resources and user workloads provides an opportunity for deadline-oriented application acceleration, but it also requires more effort in admission control, resource provisioning, virtual machine placement and task scheduling. The invention proposes GMaP as a unified scheduling and optimization framework for the CSP that solves these problems in a general manner. GMaP is also very flexible in controlling the search-space size and the algorithm running time. Experimental results show that when GMaP is deployed for a CSP, the global energy consumption improves by more than 23% when serving 30-50 users and by more than 16% when serving 60-100 users.
The foregoing is only a preferred embodiment of the invention, and the protection scope of the invention is not limited thereto. Any simple modification or equivalent replacement of the technical solution that a person skilled in the art can readily conceive within the technical scope disclosed by the invention falls within the protection scope of the invention.

Claims (2)

1. A resource scheduling and optimization method under a CSP, characterized in that: first, the sub-schedule of each application is generated in parallel, and the sub-schedules are assembled into a pre-schedule set; each pre-schedule in the set is treated as a root and GMaP is executed in parallel, finally generating multiple scheduling results, from which the best one is selected as the solution; GMaP mainly addresses the energy problem while having to meet the deadlines; if all deadlines are met, GMaP focuses only on energy consumption, unless a new deadline appears; under the GMaP framework a scheduling strategy may become too power-hungry or too time-consuming, but the resource requirements and running time of GMaP can be adjusted in two respects according to the computing power of the target cloud environment:
1) the number of roots can be adjusted to any natural number;
2) the size of each search tree can be adjusted individually;
Stage 1: generating sub-schedules
The sub-schedule of user a is a schedule of G_a based on U_a, assuming that the VM type of each request is instantiated as exactly one VM; all virtual machines are mapped onto a single virtual server; regardless of the scheduling algorithm, the sub-schedules of all applications can be generated in parallel;
Stage 2: application characterization
In this stage, each application has different characteristic parameters; the first characteristic parameter is the sub-schedule length of G_a, denoted L_seed^a; the second parameter is the deadline slack parameter SLACK_a; applications with more slack allow the CSP to handle its energy cost better;
Finally, assuming infinite VM resources, the parameter PAR_a is derived from the resulting schedule length; according to these parameters, three application lists sorted in ascending order are generated: Lseed[↑], PAR[↑] and SLACK[↑];
Stage 3: pre-schedule generation
The pre-schedule set is generated by overlaying the sub-schedules onto servers;
Stage 4: optimization
The core of GMaP is a beam-search optimization procedure; in this stage a task can be migrated from one virtual machine to another virtual machine so as to meet the deadlines and maximize energy efficiency; GMaP uses an evolutionary algorithm to seek the global optimum; each optimization iteration goes through two steps, migration and packing;
The function of Migrate() is to migrate a virtual machine of type g currently located on the source server D_x to a virtual machine of the same type on another destination server D_y; the source and destination servers are located in the same server group.
2. The resource scheduling and optimization method under a CSP according to claim 1, characterized in that each migration attempt makes three important decisions, which jointly determine the quality of the solution, namely:
1) which user's migration should be selected;
2) which task should be migrated;
3) which server the task should be moved to.
CN201811625775.2A 2018-12-28 2018-12-28 Resource scheduling and optimizing method under CSP Expired - Fee Related CN109815009B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811625775.2A CN109815009B (en) 2018-12-28 2018-12-28 Resource scheduling and optimizing method under CSP

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811625775.2A CN109815009B (en) 2018-12-28 2018-12-28 Resource scheduling and optimizing method under CSP

Publications (2)

Publication Number Publication Date
CN109815009A true CN109815009A (en) 2019-05-28
CN109815009B CN109815009B (en) 2022-01-25

Family

ID=66602724

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811625775.2A Expired - Fee Related CN109815009B (en) 2018-12-28 2018-12-28 Resource scheduling and optimizing method under CSP

Country Status (1)

Country Link
CN (1) CN109815009B (en)



Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1737752A (en) * 2004-08-18 2006-02-22 华为技术有限公司 Managing method for shared data
US20110093500A1 (en) * 2009-01-21 2011-04-21 Google Inc. Query Optimization
US20170116107A1 (en) * 2011-05-31 2017-04-27 International Business Machines Corporation Testing a browser-based application
CN102546771A (en) * 2011-12-27 2012-07-04 西安博构电子信息科技有限公司 Cloud mining network public opinion monitoring system based on characteristic model
CN105447565A (en) * 2015-11-19 2016-03-30 广东顺德中山大学卡内基梅隆大学国际联合研究院 On-chip network mapping method based on discrete bat algorithm
CN105740051A (en) * 2016-01-27 2016-07-06 北京工业大学 Cloud computing resource scheduling realization method based on improved genetic algorithm
CN107301137A (en) * 2017-07-04 2017-10-27 福建中金在线信息科技有限公司 RSET interface realizing methods and device and electronic equipment and computer-readable recording medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
G. DINESH et al.: "QoS Ranking Prediction for Cloud Brokerage Services", INTERNATIONAL JOURNAL OF COMPUTER SCIENCES AND ENGINEERING *
ZHANG Dongliang et al.: "Research on load balancing optimization models and algorithms in cloud computing", SOFTWARE *
LIN Weiwei et al.: "Energy-efficient cloud computing resource scheduling model and algorithm based on CSP", Journal on Communications (通信学报) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111124665A (en) * 2019-11-22 2020-05-08 奇瑞汽车股份有限公司 Method and device for distributing computing resources
CN111124665B (en) * 2019-11-22 2023-07-28 奇瑞汽车股份有限公司 Method and device for distributing computing resources
CN112363811A (en) * 2020-11-16 2021-02-12 中国电子科技集团公司电子科学研究院 Artificial intelligence computing resource scheduling method and computer readable storage medium
CN114296868A (en) * 2021-12-17 2022-04-08 中电信数智科技有限公司 Virtual machine automatic migration decision method based on user experience in multi-cloud environment
CN114296868B (en) * 2021-12-17 2022-10-04 中电信数智科技有限公司 Virtual machine automatic migration decision method based on user experience in multi-cloud environment
CN114827142A (en) * 2022-04-11 2022-07-29 浙江大学 Scheduling method for ensuring real-time performance of containerized edge service request
CN114827142B (en) * 2022-04-11 2023-02-28 浙江大学 Scheduling method for ensuring real-time performance of containerized edge service request

Also Published As

Publication number Publication date
CN109815009B (en) 2022-01-25

Similar Documents

Publication Publication Date Title
Gao et al. An energy and deadline aware resource provisioning, scheduling and optimization framework for cloud systems
Masdari et al. A survey of PSO-based scheduling algorithms in cloud computing
Liu et al. Job scheduling model for cloud computing based on multi-objective genetic algorithm
Zuo et al. A multi-objective optimization scheduling method based on the ant colony algorithm in cloud computing
CN104657221B (en) The more queue flood peak staggered regulation models and method of task based access control classification in a kind of cloud computing
Chaurasia et al. Comprehensive survey on energy-aware server consolidation techniques in cloud computing
CN109815009A (en) Scheduling of resource and optimization method under a kind of CSP
Asghari et al. Online scheduling of dependent tasks of cloud’s workflows to enhance resource utilization and reduce the makespan using multiple reinforcement learning-based agents
Patel et al. Priority based job scheduling techniques in cloud computing: a systematic review
Thaman et al. Green cloud environment by using robust planning algorithm
Sonkar et al. A review on resource allocation and VM scheduling techniques and a model for efficient resource management in cloud computing environment
Maiti et al. Internet of Things applications placement to minimize latency in multi-tier fog computing framework
Xiang et al. Computing power allocation and traffic scheduling for edge service provisioning
CN111309472A (en) Online virtual resource allocation method based on virtual machine pre-deployment
Narwal et al. A novel approach for Credit-Based Resource Aware Load Balancing algorithm (CB-RALB-SA) for scheduling jobs in cloud computing
CN106802822A (en) A kind of cloud data center cognitive resources dispatching method based on moth algorithm
Ahmed et al. An Enhanced Workflow Scheduling Algorithm for Cloud Computing Environment.
Ramezani et al. Task scheduling in cloud environments: A survey of population‐based evolutionary algorithms
Rajeshwari et al. Efficient task scheduling and fair load distribution among federated clouds
Aoun et al. Towards a fairer benefit distribution in grid environments
Rahman et al. Group based resource management and pricing model in cloud computing
Sharma et al. Adaptive Particle Swarm Optimization for Energy Minimization in Cloud: A Success History Based Approach
Praveenchandar et al. An enhanced load balancing approach for dynamic resource allocation in cloud environments
Panwar et al. Analysis of various task scheduling algorithms in cloud environment
Xiao et al. A novel QoS-based co-allocation model in computational grid

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220125