CN106535242A - Wireless cloud computing system performance prediction method - Google Patents


Info

Publication number
CN106535242A
CN106535242A (application CN201610878631.2A)
Authority
CN
China
Prior art keywords
task
congestion
link
graph model
cloud computing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610878631.2A
Other languages
Chinese (zh)
Other versions
CN106535242B (en)
Inventor
张源
张佳乐
郑军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
White Box Shanghai Microelectronics Technology Co ltd
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201610878631.2A priority Critical patent/CN106535242B/en
Publication of CN106535242A publication Critical patent/CN106535242A/en
Application granted granted Critical
Publication of CN106535242B publication Critical patent/CN106535242B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W24/00: Supervisory, monitoring or testing arrangements
    • H04W24/06: Testing, supervising or monitoring using simulated traffic
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00: Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/14: Network analysis or design
    • H04L41/147: Network analysis or design for predicting network behaviour

Abstract

The present invention discloses a wireless cloud computing system performance prediction method. The method comprises the following steps: determining the mapping from a workflow to the APs and calculating the associated data; for the case where users in the system are congested mainly during communication, layering the task links of the workflow and selecting the graph model with the largest longest-path value as the prediction model of system performance when the cloud computing system reaches its maximum processing capability; for the case where users are congested mainly during computation, likewise selecting the graph model with the largest longest-path value as the prediction model; and finally solving a pair of functions to obtain the average user processing time (makespan) of the system under different arrival rates, thereby predicting system performance. The performance of the computing system is predicted by this modular method; when the cloud computing system is large and difficult to realize within a short time, the method provides a framework for predicting system performance and offers a reference for questions such as system feasibility and parameter configuration.

Description

A wireless cloud computing system performance prediction method
Technical field
The present invention relates to the field of wireless communication technology, and in particular to a wireless cloud computing system performance prediction method.
Background technology
With the development of the Internet of Things and fifth-generation (5G) wireless cellular networks, together with progress in hardware technology, new development opportunities have emerged for computing paradigms. On the basis of conventional communication modes, 5G networks additionally consider technologies that allow direct device-to-device communication, such as D2D and M2M, which makes resource sharing between devices more convenient. On the hardware side, the computing resources of terminals have increased substantially; on the software side, virtualization guarantees more flexible load transfer, so a resource-constrained mobile terminal can choose to use the computing resources of a remote cloud computing center.
Cloud computing is the product of large-scale distributed computing technology and the evolution of its supporting business models. Its development stems mainly from the joint progress of technologies and products such as virtualization, distributed data storage, data management, programming models, and information security. In recent years, the evolution of business models such as hosting, pay-after-use, and on-demand delivery has also accelerated the cloud computing market. Cloud computing not only changes the way information is provided, it has also overturned the delivery mode of traditional ICT systems; it is less a technological innovation than a transformation of thinking and business models.
However, a wireless cloud computing system is sometimes very large; realizing or simulating it requires a great deal of manpower, material, and financial resources, and usually takes a long time. Moreover, for the setting of some parameters during realization or simulation, it is often difficult to quickly estimate whether a given method is desirable.
Summary of the invention
The technical problem to be solved by the present invention is to provide a wireless cloud computing system performance prediction method that can predict the performance of a cloud computing system through modular units and provide a reference for questions such as system feasibility and parameter settings.
To solve the above technical problem, the present invention provides a wireless cloud computing system performance prediction method comprising the following steps:
(1) determine the mapping from the workflow to the APs, calculate the average task amount of the tasks inside each AP, and calculate the average data amount transmitted between APs;
(2) calculate the average completion time of the tasks inside each AP and the average transmission time of data transmitted between APs;
(3) layer the task links of the workflow: the layer of a task link is defined as the shortest hop count from the head task to the successor task of that link;
(4) for the case where users in the system are congested mainly during communication, the congestion of an AP link is related to the total data-transmission time on the link; assume there are k congested AP links, order them by total data-transmission time from high to low, calculate the number of queued data packets on each congested AP link, and obtain the graph model for this case;
(5) from the congested packet counts calculated in step (4), calculate the communication queuing delay of each AP link and the longest path from the head task to the tail task of each graph model; finally select the graph model with the largest longest-path value as the prediction model for the case where users are congested mainly during communication and the system reaches its maximum processing capability, obtaining the makespan;
(6) for the case where users in the system are congested mainly during computation, assume the wireless cloud computing system can accommodate at most R users, and introduce a factor α (unit: per user) so that R·α represents the number of queued tasks or packets; assume only one AP is congested, with R·α congested tasks, while the congested task count of every other AP is 1;
(7) from the congested task counts of step (6), calculate the computing-resource queuing delay of each AP and supplement the computing-resource waiting time of the task nodes in the graph model; finally select the graph model with the largest longest-path value as the prediction model for the case where users are congested mainly during computation and the system reaches its maximum processing capability, obtaining the makespan;
(8) with the makespan obtained in steps (5) and (7), take the minimum of makespan·λ and R, where λ is the arrival rate; feed this minimum back into steps (5) and (7) as the maximum accommodated user count to obtain a new makespan, and iterate until stable, thereby obtaining the makespan under different arrival rates.
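Step (8) can be sketched as a fixed-point iteration. Below is a minimal Python sketch; `f` is a hypothetical placeholder for the whole graph-model evaluation of steps (5) and (7), and all names are illustrative rather than the patent's notation.

```python
# Step (8) as a fixed-point iteration: the congested-user count is capped
# at R, and makespan is re-evaluated until it stabilises.

def predict_makespan(f, lam, R, alpha=1.0, tol=1e-6, max_iter=1000):
    r_user = R                            # start from the saturation state
    makespan = f(r_user * alpha)
    for _ in range(max_iter):
        r_user = min(lam * makespan, R)   # the min(lambda * makespan, R) rule
        new_makespan = f(r_user * alpha)  # re-run the graph-model evaluation
        if abs(new_makespan - makespan) < tol:
            break
        makespan = new_makespan
    return makespan
```

Sweeping `lam` over a range of arrival rates and recording the returned makespan gives a performance curve of the kind the description later attributes to the worked example.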
Preferably, in step (1), the mapping from the workflow to the APs is determined; the average task amount of the tasks inside each AP is obtained by adding the data amounts of all tasks in the AP other than virtual tasks and dividing by the task count; and the average data amount transmitted between APs is obtained by summing the transmitted data amounts of the task links contained in an AP link and dividing by the number of task links.
Preferably, in step (2), the average completion time of the tasks inside an AP is calculated as the average task amount obtained in step (1) divided by the processing speed of the AP; and the average transmission time of data transmitted between APs is calculated as the average packet data amount obtained in step (1) divided by the total bandwidth of the corresponding AP link.
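The averaging rules of these two preferred steps can be written out directly. The following is an illustrative Python sketch with invented function names; the exclusion of zero-amount virtual tasks follows the description.

```python
# Steps (1)-(2) sketch: per-AP average task amount, average completion time,
# and average inter-AP transfer time.

def avg_task_amount(task_amounts):
    """Mean task amount inside one AP, virtual tasks (amount 0) excluded."""
    real = [a for a in task_amounts if a > 0]
    return sum(real) / len(real) if real else 0.0

def avg_completion_time(avg_amount, p_ap):
    """Average task amount divided by the AP's processing speed."""
    return avg_amount / p_ap

def avg_transfer_time(data_amounts, n_ch, b0):
    """Mean packet size divided by the total link bandwidth n_ch * b0."""
    return (sum(data_amounts) / len(data_amounts)) / (n_ch * b0)
```

In the worked example later in the description, AP1 holds tasks 1, 5, 6, 9 with amounts 0, 1378000, 1037000, 1755000 MI, so `avg_task_amount` gives the 1390000 MI of Table 5, and dividing by the AP processing speed of 100000 MIPS gives the 13.9 of Table 7.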
Preferably, in step (4), for the case where users in the system are congested mainly during communication, the congestion of an AP link is related to the total data-transmission time on the link. First one congested AP link is assumed, the path with the longest total data-transmission time, and the graph model for this case is obtained; next two congested AP links are assumed, and so on. Assuming k congested AP links ordered by total data-transmission time from high to low, the graph model for each case is obtained; for every graph model, the layering of step (3) yields the number of queued communication packets on each link. Assuming there are Ni paths, Ni graph models are obtained.
Preferably, in step (5), for the case where users in the system are congested mainly during communication, the communication queuing delay of each AP link is calculated from the congested packet counts; each AP-link communication queuing delay is assigned as the communication-resource waiting delay between the corresponding tasks in the graph model; the longest path from the head task to the tail task of each graph model is calculated; and finally the graph model with the largest longest-path value is selected as the prediction model for the case where users are congested mainly during communication and the system reaches its maximum processing capability, obtaining the makespan.
Preferably, in step (6), for the case where users in the system are congested mainly during computation, only one AP is assumed congested, with R × α congested tasks in front of it, while the congested task count of every other AP is 1; if there are Nap APs, Nap graph models are obtained.
Preferably, in step (7), for the case where users in the system are congested mainly during computation, the congested task counts waiting for computing resources in front of the APs are obtained from step (6); the computing-resource queuing delay of each AP is calculated and assigned as the computing-resource waiting delay in front of each task in the graph model; and finally the graph model with the largest longest-path value is selected as the prediction model for the case where users are congested mainly during computation and the system reaches its maximum processing capability, obtaining the makespan.
Preferably, in step (8), with the makespan obtained in steps (5) and (7), the minimum of makespan × λ and R is taken, where λ is the arrival rate; this minimum is fed back into steps (5) and (7) as the maximum accommodated user count to obtain a new makespan, and the iteration continues until stable, yielding the makespan under different arrival rates.
The beneficial effects of the present invention are: for the performance prediction of a large cloud computing system, a simple prediction framework is provided. The modular method can estimate, from the initial parameters, the congestion of the access points and channels in the cloud computing system and the average processing time of each user in the system. It provides a reference for questions such as system feasibility and parameter settings, substantially increasing the efficiency of cloud computing system design work.
Description of the drawings
Fig. 1 is a flow diagram of the prediction method of the present invention.
Fig. 2 is the 10-task workflow diagram of the present invention.
Fig. 3 shows the performance curves of the cloud computing system of the present invention under different arrival rates.
Fig. 4 is a flow diagram of the average task or packet processing-time estimation module of the present invention.
Fig. 5 is a flow diagram of the queuing estimation module and the graph-model selection module of the present invention.
Fig. 6 is a flow diagram of the solving module of the present invention.
Specific embodiment
As shown in Figs. 1, 3, 4, 5 and 6, a wireless cloud computing system performance prediction method specifically comprises the following steps:
(1) Determine the mapping from the workflow to the APs and calculate the average task amount of the tasks inside each AP. Assume p tasks t1, t2, ..., tp with task amounts I1, I2, ..., Ip are mapped to APn; the average task amount is then given by formula (1). Assume there are q packets from APn to APm, whose transmitted data amounts are Transferdata1, Transferdata2, ..., Transferdataq; the average data amount of packets transmitted between the APs is then given by formula (2).
(2) The average completion time of the tasks inside APn is given by formula (3), where Pap is the processing ability of one AP; assuming there are Nvm VMs (virtual machines) inside one AP, the processing ability of each VM is Pvm = Pap/Nvm. The average transmission time of data from APn to APm is given by formula (4), where Nch is the number of wireless channels in the APn-to-APm link and B0 is the bandwidth of these channels.
(3) Layer the task links of the workflow: the layer of a task link is defined as the shortest hop count from the head task to the successor task of that link. As shown in Fig. 1, the task link from task 1 to task 2 is set as the first layer and the task link from task 2 to task 8 is the second layer; task links of the same layer are regarded as the same type.
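Under the layering rule above (layer of a link = shortest hop count from the head task to the link's successor), a plain breadth-first search reproduces the layering of Table 9. The sketch below uses illustrative names; the edge list is the 10-task workflow of Fig. 2 as read off Table 2.

```python
from collections import deque

# Step (3) sketch: BFS over the workflow DAG; the first visit of a node
# gives its minimum hop count from the head task.

def layer_links(edges, head):
    succ = {}
    for u, v in edges:
        succ.setdefault(u, []).append(v)
    dist = {head: 0}
    q = deque([head])
    while q:
        u = q.popleft()
        for v in succ.get(u, []):
            if v not in dist:            # first visit = shortest hop count
                dist[v] = dist[u] + 1
                q.append(v)
    return {(u, v): dist[v] for (u, v) in edges}

fig2_edges = [(1, 2), (1, 3), (1, 4), (1, 5), (2, 6), (2, 8), (3, 7),
              (4, 6), (4, 7), (4, 8), (5, 8), (6, 9), (7, 9), (8, 9), (9, 10)]
layers = layer_links(fig2_edges, 1)      # e.g. layers[(1, 2)] == 1
```

The result assigns links 1→2 through 1→5 to layer 1, links into tasks 6, 7, 8 to layer 2, links into task 9 to layer 3, and 9→10 to layer 4, consistent with Table 9.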
(4) Assume the case where users in the system are congested mainly during communication; the congestion of an AP link is related to the total data-transmission time on the link. First assume one congested AP link, the path with the longest total data-transmission time; the number of congested data packets is R × α, and the graph model for this case is obtained. Next assume two congested AP links, the ones with the longest and second-longest total data-transmission time, and likewise obtain the graph model for that case; and so on. Assume k congested AP links, ordered by total data-transmission time from high to low, and calculate the queued packet count of each congested AP link; let the numbers of task links below them be N1, N2, ..., Nk respectively. All these task links have Q types (types are explained in step 3), so the queued packet counts of the congested AP links are (R × α/Q) × N1, (R × α/Q) × N2, ..., (R × α/Q) × Nk, and the graph model for this case is obtained. Finally, assuming there are Nl paths, Nl graph models are obtained.
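The (R × α/Q) × Ni allocation rule in this step is simple enough to state as code. The function name is invented for illustration; the arguments follow the description.

```python
# Step (4) sketch: with R * alpha queued packets split over Q link types,
# the i-th congested AP link (sorted by total transmission time, high to
# low) queues (R * alpha / Q) * N_i packets, N_i being its task-link count.

def queued_packets(R, alpha, Q, link_task_counts):
    return [(R * alpha / Q) * n for n in link_task_counts]
```

For instance, with R = 100 users, α = 1, Q = 4 link types and two congested links carrying 2 and 1 task links, `queued_packets(100, 1, 4, [2, 1])` gives [50.0, 25.0].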
(5) From the congested packet counts, calculate the communication queuing delay of each AP link and assign it as the communication-resource waiting delay between the corresponding tasks in the graph model; calculate the longest path from the head task to the tail task of each graph model; finally select the graph model with the largest longest-path value as the prediction model for the case where users are congested mainly during communication and the system reaches its maximum processing capability, obtaining the makespan.
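The longest-path computation over a graph model (a DAG whose node weights are task processing times and whose edge weights are communication plus queuing delays) can be done with a topological-order dynamic program. This is a generic sketch, not the patent's exact procedure.

```python
# Step (5) sketch: longest head-to-tail path in a weighted DAG, O(V + E).

def longest_path(node_w, edge_w, head, tail):
    succ, indeg = {}, {}
    for (u, v) in edge_w:
        succ.setdefault(u, []).append(v)
        indeg[v] = indeg.get(v, 0) + 1
    order, stack = [], [n for n in node_w if indeg.get(n, 0) == 0]
    while stack:                         # Kahn's algorithm
        u = stack.pop()
        order.append(u)
        for v in succ.get(u, []):
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    dist = {n: float("-inf") for n in node_w}
    dist[head] = node_w[head]
    for u in order:                      # relax edges in topological order
        for v in succ.get(u, []):
            cand = dist[u] + edge_w[(u, v)] + node_w[v]
            if cand > dist[v]:
                dist[v] = cand
    return dist[tail]
```

For a toy 4-task model with node weights {1: 0, 2: 5, 3: 3, 4: 0} and edge weights {(1, 2): 1, (1, 3): 2, (2, 4): 1, (3, 4): 1}, the longest head-to-tail path is 0 + 1 + 5 + 1 + 0 = 7.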
(6) Assume the case where users in the system are congested mainly during computation. From the analysis of a large number of simulation results, assume that only one AP is congested, with R × α congested tasks, while the congested task count of every other AP is 1; if there are Nap APs, Nap graph models are obtained.
(7) From step (6), obtain the congested task counts waiting for computing resources in front of the APs; calculate the computing-resource queuing delay of each AP and assign it as the computing-resource waiting delay in front of each task in the graph model; finally select the graph model with the largest longest-path value as the prediction model for the case where users are congested mainly during computation and the system reaches its maximum processing capability, obtaining the makespan.
(8) Substitute the makespan obtained in the above process into formulas (5) and (6) respectively and solve, obtaining the makespan under different arrival rates; in formula (5), f represents the process of steps 3, 4, 5 or steps 6, 7, in which only R is a variable, denoted Ruser.
Makespan = f(Ruser × α)  (5)
Ruser = min{λ × Makespan, R}  (6)
The above steps are now illustrated with a 10-task workflow example; the 10-task workflow shown in Fig. 2 is a graph model. As specific parameters, the processing speed of one AP is set to Pap = 100000 MIPS, and each AP has 10 VMs, i.e. Nvm = 10, so the processing speed of a single VM is Pvm = Pap/Nvm = 10000 MIPS; for data transmission, the bandwidth is B0 = 10^6 bit/s and Nch = 1.
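The derived parameters of this example can be checked directly. In the sketch below, the 7853276-bit packet is the AP1-to-AP4 entry of Table 6 later in the description; variable names are illustrative.

```python
# Parameter check for the worked example: P_ap = 100000 MIPS with N_vm = 10
# VMs gives P_vm = 10000 MIPS per VM; a 7853276-bit packet takes about
# 7.85 s over the N_ch * B0 = 10^6 bit/s link.

p_ap, n_vm = 100_000, 10
p_vm = p_ap / n_vm               # single-VM processing speed, MIPS
b0, n_ch = 10**6, 1
t_tx = 7_853_276 / (n_ch * b0)   # transmission time in seconds
```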
The task amount I corresponding to each task number is shown in Table 1; the unit of task amount is MI (million instructions).
Table 1
Task number 1 2 3 4 5 6 7 8 9 10
Task amount 0 1339000 1383000 1336000 1378000 1037000 1059000 1088000 1755000 0
The data amounts transmitted between the tasks are shown in Table 2 (unit: bit). In the table, a data amount of 1 indicates that a precedence relation exists between two tasks but the transmitted data amount is 0; a very small transmission amount of 1 is used instead.
Table 2
Task number 1 2 3 4 5 6 7 8 9 10
1 0 1 1 1 1 0 0 0 0 0
2 0 0 0 0 0 8334624 0 8873456 0 0
3 0 0 0 0 0 0 8517569 0 0 0
4 0 0 0 0 0 7511132 8340796 8324756 0 0
5 0 0 0 0 0 0 0 7853276 0 0
6 0 0 0 0 0 0 0 0 8326168 0
7 0 0 0 0 0 0 0 0 8861129 0
8 0 0 0 0 0 0 0 0 8315294 0
9 0 0 0 0 0 0 0 0 0 1
10 0 0 0 0 0 0 0 0 0 0
In Fig. 2, each task node has a corresponding weight, which is the value in Table 1 divided by Pvm, and the weight corresponding to each edge is the value in Table 2 divided by B0.
The task allocation result is shown in Table 3, where task 1 and task 10 are virtual tasks that do not take part in the average-amount calculation.
Table 3
AP labels 1 2 3 4
Comprising task 1,5,6,9 3,7 2 4,8,10
The task link that AP links are included is as shown in table 4:
Table 4
Substituting into the formulas, the average task amount of the tasks inside each AP is obtained as shown in Table 5 (unit: MI); the average data amount of packets transmitted between APs is shown in Table 6:
Table 5
AP labels 1 2 3 4
Task amount 1390000 1221000 1339000 1212000
Table 6
AP labels 1 2 3 4
1 / / / 7853276
2 8861129 / / /
3 8334624 / / 8873456
4 7913213 8340796 / /
Substituting into the formulas, the average processing time of the tasks inside each AP is obtained as shown in Table 7 (unit: seconds); the average transmission time of packets transmitted between APs is shown in Table 8:
Table 7
AP labels 1 2 3 4
Processing time (s) 13.9 12.21 13.39 12.12
Table 8
Task link layering result is as shown in table 9:
Table 9
Task number 1 2 3 4 5 6 7 8 9 10
1 / 1 1 1 1 / / / / /
2 / / / / / 2 / 2 / /
3 / / / / / / 2 / / /
4 / / / / / 2 2 2 / /
5 / / / / / / / 2 / /
6 / / / / / / / / 3 /
7 / / / / / / / / 3 /
8 / / / / / / / / 3 /
9 / / / / / / / / / 4
10 / / / / / / / / / /
Adding the corresponding communication-queuing and computation-queuing delays from the tables into Fig. 1 yields 6 corresponding graph models; the longest paths of the 6 graph models, obtained by calculation, are shown in Table 10:
Table 10
Graph model 1 2 3 4 5 6
Longest path value 2020.911 2905.377 2905.377 2905.377 2905.377 2905.377
Then, in order, the second graph model is selected as the prediction graph model for the case where the users in the system are congested mainly during communication.
Similarly, for the case where users are congested during computation, the longest paths obtained for the 4 graph models are shown in Table 11. The first graph model is then chosen as the prediction model for the case where the users in the system are congested mainly during computation; the finally obtained graph model is shown in Fig. 4.
Table 11
Graph model 1 2 3 4
Longest path value 6026.155 5335.322 3156.195 5296.029
Solving formulas (5) and (6) yields the performance curve of the wireless cloud computing system; the performance refers to the average processing time of each user in the system, as shown in Fig. 5.
A compute-intensive application can be decomposed into a set of tasks, and the logical relations formed between the tasks are called a workflow. By mapping the workflow onto APs (wireless access points), the communication and computing resources can be allocated and the processing of the workflow completed. When a large number of users arrive, a large number of workflows are processed in parallel in the cloud computing system. For each workflow of each user, a graph model can be established: before each task and link of the workflow, computation and communication delays can arise; then, by finding the longest path from the head task (task 1 in Fig. 2) to the tail task (task 10 in Fig. 2) in the graph model, an approximate processing time (makespan) for that user is obtained. This graph model provides the foundation for predicting the performance of the whole system.
Second, assume the wireless cloud computing system can accommodate at most R users. When the user arrival rate reaches a certain value, the number of users accommodated by the system stays at R, and the system reaches the "saturation state" of its maximum processing capability; the processing time of each user no longer grows as the arrival rate increases, so the system performance under this saturation state needs to be predicted.
Third, the generation of delays under the saturation state is analyzed: while a task is pending or data is waiting to be sent, it may wait for computing or communication resources, which generates delay. Through the mapping from tasks to APs, the average processing time of a task inside an AP and the average data-transmission time between APs can be obtained, so the prediction of waiting times in the graph model becomes the prediction of the number of tasks queued in front of an AP and of the number of data packets queued on a link between APs. A factor α is introduced, with unit per user, so that R × α represents the number of queued tasks or packets; in this process α is taken as 1.
Finally, the model is divided into two cases, users congested mainly during communication and users congested mainly during computation, considered separately. For the case of congestion mainly during communication, no computing-resource queuing delay is assumed; the congestion of an AP link is related to the total data-transmission time on the link. First assume one congested AP link, the path with the longest total transmission time, and obtain the corresponding graph model; next assume two congested paths, those with the longest and second-longest total transmission time; and so on. Assuming Ni AP links, the queued packet count of each congested AP link is determined by the layered allocation of task links, and Ni graph models are obtained; the graph model with the largest longest-path value is finally selected as the prediction model for communication congestion under the saturation state. For the case of congestion mainly during computation, no communication queuing delay is assumed; only one AP is assumed congested, with R × α congested tasks, while the congested task count of every other AP is 1, so with Nap APs, Nap graph models are obtained; the graph model with the largest longest-path value is finally selected as the prediction model for computation congestion when the system reaches its maximum processing capability. With the makespan obtained by the above procedure, the makespan under each arrival rate can be obtained by solving the pair of functions.
Although the present invention has been illustrated and described with regard to preferred embodiments, those skilled in the art will understand that various changes and modifications can be made to the present invention without departing from the scope defined by the claims.

Claims (8)

1. A wireless cloud computing system performance prediction method, characterized by comprising the following steps:
(1) determining the mapping from a workflow to APs, calculating the average task amount of the tasks inside each AP, and calculating the average data amount transmitted between APs;
(2) calculating the average completion time of the tasks inside each AP and the average transmission time of data transmitted between APs;
(3) layering the task links of the workflow, the layer of a task link being defined as the shortest hop count from the head task to the successor task of that link;
(4) for the case where users in the system are congested mainly during communication, the congestion of an AP link being related to the total data-transmission time on the link, assuming k congested AP links ordered by total data-transmission time from high to low, calculating the queued packet count of each congested AP link, and obtaining the graph model for this case;
(5) from the congested packet counts calculated in step (4), calculating the communication queuing delay of each AP link and the longest path from the head task to the tail task of each graph model, and finally selecting the graph model with the largest longest-path value as the prediction model for the case where users are congested mainly during communication and the system reaches its maximum processing capability, obtaining the makespan;
(6) for the case where users in the system are congested mainly during computation, assuming that the wireless cloud computing system can accommodate at most R users, introducing a factor α with unit per user so that R·α represents the number of queued tasks or packets, and assuming that only one AP is congested, with R·α congested tasks, while the congested task count of every other AP is 1;
(7) from the congested task counts of step (6), calculating the computing-resource queuing delay of each AP and supplementing the computing-resource waiting time of the task nodes in the graph model, and finally selecting the graph model with the largest longest-path value as the prediction model for the case where users are congested mainly during computation and the system reaches its maximum processing capability, obtaining the makespan;
(8) with the makespan obtained in steps (5) and (7), taking the minimum of makespan·λ and R, where λ is the arrival rate, feeding this minimum back into steps (5) and (7) as the maximum accommodated user count to obtain a new makespan, and iterating until stable, thereby obtaining the makespan under different arrival rates.
2. cloud computing system performance prediction method as claimed in claim 1 wireless, it is characterised in that in step (1), confirm work Make stream to the mapping relations of AP, calculate the average task amount of each AP internal task, will be in AP outside virtual task all Divided by task quantity after the data volume addition of business;The average amount of transmission data between AP is calculated, that is, is wrapped in seeking certain AP link Divided by task number of links after the task link transmitted data amount sum for containing.
3. cloud computing system performance prediction method as claimed in claim 1 wireless, it is characterised in that in step (2), calculate AP The average completion time of internal task, i.e., processing speed of the average task amount divided by AP inside the AP for being obtained with step (1);Calculate The average transmission time of the transmission data between AP, i.e., between the AP for being obtained with step (1), average data bag data amount is divided by corresponding A P Between the total bandwidth transmitted.
4. cloud computing system performance prediction method as claimed in claim 1 wireless, it is characterised in that in step (4), it is assumed that be The transmission time of the user's main congestion total data between the situation of communication process, the jam situation of AP links and link in system It is relevant, first assume there is an AP link congestion, the congestion path is the most long path of total data transmission time, then can obtain this Graph model in the case of kind;It is next assumed that there are two AP link congestions, go down always in the same manner, it is assumed that there are k bar AP link congestions, gather around The order of stifled AP links is arranged from high to low according to data overall transmission time in link, obtains graph model in this case, for Every kind of graph model obtains communication queue number-of-packet in each link, it is assumed that YouNiTiao roads by the delamination in step (3) Footpath, can obtain Ni graph model.
5. cloud computing system performance prediction method as claimed in claim 1 wireless, it is characterised in that in step (5), it is assumed that be The main congestion of user in system calculates the communication platoon of every AP link in the situation of communication process by congestion data bag number AP link communications queuing delay is assigned to the communication resource in graph model between correspondence task and waits time delay, calculates each by team's time delay Longest path of the graph model first to be appointed to an office business to tail task, the graph model for finally selecting longest path value maximum is used as the user in system Main congestion is in communication process and system reaches the forecast model under maximum processing capability, obtains makespan.
6. The wireless cloud computing system performance prediction method as claimed in claim 1, characterized in that in step (6), it is assumed that the users in the system are mainly congested during the calculation process; only one AP is congested, and the number of congested tasks before that AP is R × α, while the number of congested tasks at every other AP is 1; then, with Nap APs, Nap graph models can be obtained.
7. The wireless cloud computing system performance prediction method as claimed in claim 1, characterized in that in step (7), it is assumed that the users in the system are mainly congested during the calculation process; the number of congested tasks waiting for computing resources before each AP is obtained through step (6), the computing-resource queuing delay of each AP is calculated, and the queuing delay before the AP is assigned as the pre-computation computing-resource waiting delay of each task in the graph model; the graph model with the largest longest-path value is finally selected as the prediction model for the case where the users in the system are mainly congested during calculation and the system reaches its maximum processing capability, thereby obtaining the makespan.
8. The wireless cloud computing system performance prediction method as claimed in claim 1, characterized in that in step (8), for the makespan obtained in steps (5) and (7), the minimum of makespan × λ and R is taken, where λ is the arrival rate; this minimum is fed back into steps (5) and (7) as the system's maximum capacity, and the iteration continues until it is stable, thereby obtaining the makespan under different arrival rates.
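The iteration in claim 8 is a fixed-point loop: the admitted load is min(makespan × λ, R), which is fed back into the makespan computation until it stabilizes. A hedged sketch, where `makespan_of` is a stand-in for the full graph-model computation of steps (5) and (7) and the toy linear model is invented:

```python
# Fixed-point sketch of the step (8) iteration; makespan_of is a stand-in.

def stable_makespan(lam, R, makespan_of, iters=100, tol=1e-9):
    """Iterate load -> min(makespan(load) * lam, R) to a fixed point,
    then return the makespan at the stable load."""
    load = R                              # start from the nominal capacity
    for _ in range(iters):
        new_load = min(makespan_of(load) * lam, R)
        if abs(new_load - load) < tol:    # stable: stop iterating
            break
        load = new_load
    return makespan_of(load)

# Toy model: makespan grows linearly with load.
ms = stable_makespan(lam=0.5, R=10.0, makespan_of=lambda n: 1.0 + 0.1 * n)
print(ms)  # converges near 1.0526 (fixed point of n = 0.5 * (1 + 0.1 * n))
```

Repeating this for a range of λ values yields the makespan-versus-arrival-rate curve the claim describes.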
CN201610878631.2A 2016-09-30 2016-09-30 A kind of wireless cloud computing system performance prediction method Active CN106535242B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610878631.2A CN106535242B (en) 2016-09-30 2016-09-30 A kind of wireless cloud computing system performance prediction method

Publications (2)

Publication Number Publication Date
CN106535242A true CN106535242A (en) 2017-03-22
CN106535242B CN106535242B (en) 2019-10-11

Family

ID=58331425

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610878631.2A Active CN106535242B (en) 2016-09-30 2016-09-30 A kind of wireless cloud computing system performance prediction method

Country Status (1)

Country Link
CN (1) CN106535242B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115237727A (en) * 2022-09-21 2022-10-25 云账户技术(天津)有限公司 Method and device for determining most congested sublinks, electronic equipment and storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150113542A1 (en) * 2013-10-17 2015-04-23 Nec Laboratories America, Inc. Knapsack-based sharing-aware scheduler for coprocessor-based compute clusters
US20150207859A1 (en) * 2014-01-22 2015-07-23 Ford Global Technologies, Llc Vehicle-specific computation management system for cloud computing
CN105159762A (en) * 2015-08-03 2015-12-16 冷明 Greedy strategy based heuristic cloud computing task scheduling method
CN105490959A (en) * 2015-12-15 2016-04-13 上海交通大学 Heterogeneous bandwidth virtual data center embedding realization method based on congestion avoiding
CN105740124A (en) * 2016-02-01 2016-07-06 南京邮电大学 Redundant data filtering method oriented to cloud computing monitoring system
CN105975342A (en) * 2016-04-29 2016-09-28 广东工业大学 Improved cuckoo search algorithm based cloud computing task scheduling method and system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YUMENG ZHANG et al.: "Congestion integrated control in virtualized clouds", 2014 IEEE International Conference on Progress in Informatics and Computing *
WANG ZHUO et al.: "A local search algorithm for large-scale cloud computing load balancing", SCIENTIA SINICA Informationis *

Also Published As

Publication number Publication date
CN106535242B (en) 2019-10-11

Similar Documents

Publication Publication Date Title
CN103747059B (en) A kind of cloud computing server cluster network support method towards many tenants and system
CN108566659A (en) A kind of online mapping method of 5G networks slice based on reliability
CN104540234B (en) A kind of associated task scheduling mechanism synchronously constrained based on CoMP under C RAN frameworks
CN103931262B (en) A kind of data dispatching method and equipment
CN108694077A (en) Based on the distributed system method for scheduling task for improving binary system bat algorithm
CN107203412A (en) A kind of cloud resource method for optimizing scheduling that particle cluster algorithm is improved based on membranous system
CN108512772A (en) Quality-of-service based data center's traffic scheduling method
CN107948083A (en) A kind of SDN data centers jamming control method based on enhancing study
Chakraborty et al. Sustainable task offloading decision using genetic algorithm in sensor mobile edge computing
CN104731528A (en) Construction method and system for storage service of cloud computing block
CN106598727B (en) A kind of computational resource allocation method and system of communication system
CN109873772A (en) Stream scheduling method, device, computer equipment and storage medium based on stable matching
CN106604284A (en) Method and device for allocating heterogeneous network resources
CN108605017A (en) Inquiry plan and operation perception communication buffer management
CN108055701A (en) A kind of resource regulating method and base station
CN106028453B (en) Wireless dummy network resource cross-layer scheduling mapping method based on queueing theory
CN113535393B (en) Computing resource allocation method for unloading DAG task in heterogeneous edge computing
CN104469851B (en) Balanced handling capacity and the resource allocation methods of delay in a kind of LTE downlinks
CN105760227A (en) Method and system for resource scheduling in cloud environment
CN104811467B (en) The data processing method of aggreggate utility
Kaur et al. Latency and network aware placement for cloud-native 5G/6G services
CN104009904B (en) The virtual network construction method and system of facing cloud platform big data processing
CN105594158B (en) The configuration method and device of resource
Jain et al. Optimal task offloading and resource allotment towards fog-cloud architecture
CN106535242A (en) Wireless cloud computing system performance prediction method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210326

Address after: 201306 building C, No. 888, Huanhu West 2nd Road, Lingang New Area, Pudong New Area, Shanghai

Patentee after: Shanghai Hanxin Industrial Development Partnership (L.P.)

Address before: 210088 No. 6 Dongda Road, Taishan New Village, Pukou District, Nanjing City, Jiangsu Province

Patentee before: SOUTHEAST University

TR01 Transfer of patent right

Effective date of registration: 20230913

Address after: 201615 room 301-6, building 6, no.1158, Jiuting Central Road, Jiuting Town, Songjiang District, Shanghai

Patentee after: White box (Shanghai) Microelectronics Technology Co.,Ltd.

Address before: 201306 building C, No. 888, Huanhu West 2nd Road, Lingang New Area, Pudong New Area, Shanghai

Patentee before: Shanghai Hanxin Industrial Development Partnership (L.P.)
