CN109005130B - Network resource allocation scheduling method and device - Google Patents

Network resource allocation scheduling method and device

Info

Publication number
CN109005130B
Authority
CN
China
Prior art keywords
task
value
predicted
time
arrival
Prior art date
Legal status
Active
Application number
CN201810726208.XA
Other languages
Chinese (zh)
Other versions
CN109005130A (en)
Inventor
朱晓敏
包卫东
陈俊杰
张国良
吴冠霖
闫辉
杨骋
张雄涛
张亮
Current Assignee
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date
Filing date
Publication date
Application filed by National University of Defense Technology
Priority to CN201810726208.XA
Publication of CN109005130A
Application granted
Publication of CN109005130B
Legal status: Active


Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04L — TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00 — Traffic control in data switching networks
    • H04L 47/70 — Admission control; Resource allocation
    • H04L 47/83 — Admission control; Resource allocation based on usage prediction
    • H04L 67/00 — Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 — Protocols
    • H04L 67/10 — Protocols in which an application is distributed across nodes in the network


Abstract

The invention discloses a network resource allocation scheduling method and device. The method comprises: predicting a predicted value of task arrival based on a preset prediction algorithm; redeploying the network resources correspondingly according to the predicted value, so that enough processing resources are available when the task-amount peak of a future task arrives; and, when the predicted arrival time of the future task is reached, readjusting the parameters of the task classifier and the prediction algorithm based on the difference between the number of actually received tasks and the predicted arrival number of the future tasks. The method and the device combine a moving-average prediction method for normal conditions with a trend-extrapolation prediction method for emergency conditions, detect emergencies in time, and provide a flexible and adaptive resource reservation strategy; by utilizing resources to the maximum extent and improving the resource guarantee rate of normal and emergency workloads, they can adapt effectively to workload bursts in resource allocation.

Description

Network resource allocation scheduling method and device
Technical Field
The invention relates to the technical field of cloud computing, and in particular to a network resource allocation scheduling method and device.
Background
Cloud computing has become one of the hottest topics in computer science in the past decade. As virtualization has advanced, cloud computing now allows on-demand network access to shared pools of configurable computing resources, quickly provided to clients over the internet as services no longer limited to traditional infrastructure as a service (IaaS), platform as a service (PaaS), or software as a service (SaaS), but also including data as a service (DaaS), analytics as a service (AaaS), and so forth. In modern computing clouds, workloads burst more and more frequently and are difficult to predict. For example, when a celebrity posts a surprise tweet or an online retailer pushes a discount at a certain moment, a sudden amount of work may land on the relevant web site. If the computing resources are not properly and timely reconfigured, the website or application may crash, causing customer dissatisfaction or even financial loss, which is undesirable for both cloud providers and customers. Typically, cloud providers resort to deploying excessive resources in the face of sudden workload spikes, but this increases overhead costs to customers and wastes unnecessary resources. Therefore, a new network resource allocation scheduling solution is needed.
Disclosure of Invention
In view of the above, a technical problem to be solved by the present invention is to provide a method and an apparatus for scheduling network resource allocation.
According to an aspect of the present invention, a network resource configuration scheduling method is provided, including: predicting a predicted value of task arrival based on a preset prediction algorithm; wherein the predicted values include: the predicted arrival time and predicted arrival number of future tasks; according to the predicted value, the network resources are re-deployed correspondingly, so that enough processing resources are available when the peak value of the task amount of the future task comes; wherein the network resources include: a physical machine PM and a virtual machine VM; when the predicted arrival time of the future task is reached, readjusting the task classifier and the parameters of the prediction algorithm based on the difference between the number of actually received tasks and the predicted arrival number of the future tasks.
According to another aspect of the present invention, there is provided a network resource configuration scheduling apparatus, including: the trend prediction module is used for predicting a predicted value of task arrival based on a preset prediction algorithm; wherein the predicted values include: the predicted arrival time and predicted arrival number of future tasks; the resource reservation module is used for re-deploying the network resources correspondingly according to the predicted values so as to ensure that enough processing resources are available when the task quantity peak value of the future task comes; wherein the network resources include: a physical machine PM and a virtual machine VM; and the parameter adjusting module is used for readjusting the parameters of the task classifier and the prediction algorithm based on the difference value between the number of the actually received tasks and the predicted arrival number of the future tasks when the predicted arrival time of the future tasks is reached.
The network resource allocation scheduling method and device predict the arrival of future tasks in each task cluster, redeploy PMs and VMs correspondingly according to the predicted value, and readjust the parameters of the task classifier and the prediction algorithm based on the difference between the number of actually received tasks and the predicted arrival number of future tasks. The method combines a moving-average prediction method for normal conditions with a trend-extrapolation prediction method for emergency conditions, detects emergencies in time, and provides a flexible and adaptive resource reservation strategy; by utilizing resources to the maximum extent and improving the resource guarantee rate of normal and emergency workloads, it can adapt effectively to workload bursts in resource allocation.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a flowchart illustrating a network resource allocation scheduling method according to an embodiment of the present invention;
FIGS. 2A-2F are schematic diagrams illustrating distance calculations for task grouping in an embodiment of a network resource allocation scheduling method of the present invention;
FIG. 3 is a schematic diagram illustrating a simulated workload spike of a Gaussian curve according to an embodiment of the network resource allocation scheduling method of the present invention;
FIGS. 4A-4D are schematic diagrams illustrating different predicted results of four groups of tasks in an embodiment of a network resource allocation scheduling method according to the present invention;
fig. 5 is a block diagram illustrating a network resource allocation scheduling apparatus according to an embodiment of the present invention.
Fig. 6 is a schematic block diagram of an embodiment of a network resource allocation scheduling apparatus in an actual scenario.
Fig. 7 is a block diagram of another embodiment of a network resource allocation scheduling apparatus according to the present invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
Fig. 1 is a schematic flowchart of an embodiment of a network resource allocation scheduling method according to the present invention, as shown in fig. 1:
step 101, predicting a predicted value of a task arrival based on a preset prediction algorithm, wherein the predicted value comprises: the predicted time of arrival and the predicted number of arrivals for the future task.
Step 102, performing corresponding redeployment on network resources according to the predicted values so as to enable enough processing resources to be available when a task amount peak value of a future task comes, wherein the network resources comprise: a physical machine PM and a virtual machine VM.
And 103, when the predicted arrival time of the future task is reached, readjusting the parameters of the task classifier and the prediction algorithm based on the difference value between the number of the actually received tasks and the predicted arrival number of the future tasks.
The tasks requiring processing are grouped through a task classifier to generate at least one task cluster. Whether a burst state has been entered is judged based on the number of tasks added to the task cluster per unit time: if so, a burst prediction algorithm is adopted to predict the predicted value of future task arrivals in the task cluster; if not, a conventional prediction algorithm is adopted.
In one embodiment, a task requesting processing is received, the tasks are grouped through a task classifier, and the tasks are distributed to a task cluster with the highest similarity. The tasks include service request tasks, calculation request tasks, and the like. The task can be a request submitted to a data center by a terminal user, and can be large-scale scientific calculation, webpage access operation, conventional operations such as data reading, data analysis, data processing, data storage and the like.
Based on a preset prediction algorithm, the predicted values of future task arrivals in the plurality of task clusters are predicted respectively, where the predicted values include the predicted arrival time and the predicted arrival number of the future tasks. The predicted arrival time and arrival number can be represented as a curve in a coordinate system, with the slope of the curve representing the workload arrival rate. The predicted arrival time and number of future tasks can be predicted continuously under two scenes with different characteristics, namely a normal scene and a burst scene.
The network resource allocation scheduling method of the above embodiment provides an integrated method of adaptive workload prediction and resource reservation to implement a solution to traffic bursts encountered in resource allocation (TRIERS). With adaptive prediction, the upcoming workload is continuously predicted in normal and burst scenes respectively, and when a burst is detected, the physical machines PM and virtual machines VM are redeployed correspondingly according to the predicted number and predicted arrival time of the incoming tasks.
In one embodiment, there may be multiple ways to group tasks by task classifiers. For example, historical data of tasks are selected, the historical data are pre-grouped by adopting a k-means clustering method to obtain a plurality of task clusters, and a cluster characteristic attribute value of each task cluster is obtained. When a task is received, the characteristic attribute value of the task is obtained, and the cluster characteristic attribute value and the characteristic attribute value comprise arrival time, calculation length, deadline requirement, required memory size and the like. The task classifier respectively calculates the Mahalanobis distance between the characteristic attribute value and the characteristic attribute values of the clusters, and distributes the task to the task cluster corresponding to the minimum Mahalanobis distance.
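The assignment step described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the cluster centroids, the (diagonal) covariance values, and the attribute scales are invented for the example, and only the three attributes (length, deadline, memory) are used.

```python
import numpy as np

def mahalanobis(x, centroid, inv_cov):
    # Mahalanobis distance between a task's attribute vector and a cluster centroid
    diff = x - centroid
    return float(np.sqrt(diff @ inv_cov @ diff))

def assign_task(task, centroids, inv_cov):
    # assign a task (length, deadline, memory) to the cluster at minimum distance
    dists = [mahalanobis(task, c, inv_cov) for c in centroids]
    return int(np.argmin(dists))

# illustrative centroids in (length, deadline, memory) space
centroids = np.array([[100.0, 10.0, 512.0], [5000.0, 60.0, 4096.0]])
# illustrative inverse covariance (diagonal, i.e. independent attributes)
inv_cov = np.linalg.inv(np.diag([1e4, 25.0, 1e5]))

task = np.array([120.0, 12.0, 600.0])
cluster = assign_task(task, centroids, inv_cov)  # nearest to the first centroid
```

In a real deployment the covariance matrix would be estimated from historical tasks, as the description explains below.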
A workload burst is usually related to sudden attention to some hot topic or special event, and it can be inferred that the attributes of tasks in such a period differ from those under normal conditions; thus different tasks exhibit characteristic arrival-rate curves and other characteristics such as CPU, memory, and running time. The number of clusters for grouping may be preset in the range of 2 to 5, so that the grouping result can represent the distinguishing properties of tasks belonging to a normal or a burst scene. A broad set of historical data needs to be selected as training data to obtain the features of each task cluster, which serve as the classification standard; the composition of each task cluster changes over time, and the grouping parameters are adjusted and reset according to overall performance. When implementing task grouping, a suitable set of task attributes is considered first. The most influential factors of an arriving task are its given computation length, required memory size, and expected running time, since they determine the subsequent workload distribution and ultimately the quality of the overall performance.
Each given task $t_i$ in the task set $T$ can be modeled as $t_i = (a_i, l_i, d_i, m_i)$, where $a_i$, $l_i$, $d_i$ and $m_i$ represent the arrival time, computation length, expected deadline, and memory size of task $t_i$, respectively. The upcoming tasks or requests are pre-grouped on a coarser scale using the low-complexity clustering method k-means. Historical data within a certain time are selected and grouped with the clustering method proposed herein, and the features of each group are extracted to serve as the screening conditions of the task classifier in subsequent real-time classification. The tasks are divided into several feature groups (task clusters), and an arriving task is assigned to the task cluster with the highest similarity score. In general, the similarity score between a task and a task cluster is the Euclidean distance, in the vector space of the attributes listed above, between the task and the centroid of the cluster. However, for real-world tasks these attributes are usually interrelated, and ignoring the relationships between the attributes may affect the accuracy of the grouping result to some extent.
The present invention uses the Mahalanobis distance instead to calculate the similarity score. The Mahalanobis distance measures the dissimilarity between two identically distributed vectors, taking into account the overall clustering and the covariance matrix of the relationships among the task attributes. Furthermore, the Mahalanobis distance is scale-invariant, i.e. independent of the measurement scale. The traditional similarity calculation uses the Euclidean distance, as shown in equation 1-1:

$$d_E(t_i, t_j) = \sqrt{(l_i - l_j)^2 + (d_i - d_j)^2 + (m_i - m_j)^2} \tag{1-1}$$

where $d_E(t_i, t_j)$ is the Euclidean distance between two individual tasks $t_i$ and $t_j$; $l_i$, $d_i$, $m_i$ are the attributes/dimensions of task $i$, representing the task length, the deadline requirement, and the required memory size, and $l_j$, $d_j$, $m_j$ have the corresponding meanings for task $j$. Equation 1-1 represents the calculation of the Euclidean distance. Using the Mahalanobis distance instead, as shown in equation 1-2, eliminates the correlation between the three selected attributes and reveals the distinctive features of each group of tasks. Equation 1-2 represents the way the Mahalanobis distance is calculated:

$$d_M(t_i, t_j) = \sqrt{(v_i - v_j)^{\top} \Sigma^{-1} (v_i - v_j)} \tag{1-2}$$

where $d_M(t_i, t_j)$ is the Mahalanobis distance between the two individual tasks $t_i$ and $t_j$, $v_i$ is the three-element vector containing $l_i$, $d_i$, $m_i$, $v_j$ has the corresponding meaning for task $j$, and $\Sigma^{-1}$ is the inverse of the covariance matrix of the selected historical data. Equations 1-3 and 1-4 define the covariance matrix $\Sigma$:

$$\sigma_{ij} = \operatorname{cov}(t_i, t_j) = E[(t_i - \mu_i)(t_j - \mu_j)] \tag{1-3}$$

$$\Sigma = \begin{pmatrix} \sigma_{11} & \cdots & \sigma_{1n} \\ \vdots & \ddots & \vdots \\ \sigma_{n1} & \cdots & \sigma_{nn} \end{pmatrix} \tag{1-4}$$

where $\mu$ is the mean over all tasks, $\Sigma$ denotes the covariance matrix of the selected historical data, and $\sigma_{ij}$ is an entry of that matrix, namely the covariance between a pair of tasks; together the entries represent the overall variance. $\operatorname{cov}(\cdot,\cdot)$ is the covariance of two vectors and $E(\cdot)$ is the expectation.
A representative covariance matrix is first calculated using historical data samples containing multiple workload-spike periods and then updated in real time (periodically). The tasks are grouped with the modified clustering method to obtain the features of each cluster. In the tests, the technique known as k-means++ was used to select suitable initial cluster centers to accelerate subsequent convergence. FIGS. 2A-2F compare the characteristics of the tasks in a group under the different similarity calculation methods, including their computation length, required deadline, and required memory size. The results of the Euclidean distance calculation are shown on the left in FIGS. 2A, 2C and 2E, and those of the Mahalanobis distance on the right in FIGS. 2B, 2D and 2F. Taking FIGS. 2A and 2B as an example, each scatter point represents a task at the time point mapped on the x-axis, with its computation length on the primary y-axis on the left; the secondary y-axis counts the tasks arriving at the same time point, plotted in black at the bottom. Clearly, the tasks in emergency situations are mostly marked in red, while the others are normal. The method using the Mahalanobis distance labels more tasks as "burst" than the method using the Euclidean distance, which lengthens the detected burst to some extent; that is, the Mahalanobis method is more sensitive to the arrival of a workload peak than the Euclidean one. Similar implications can be drawn from comparing FIGS. 2C and 2D, and FIGS. 2E and 2F. The Euclidean method tends to narrow the "bursty workload" domain, classifying fuzzy regions into the "standard" domain, while the Mahalanobis method emphasizes the linear relationship between attributes, thereby expanding the detection range of "bursty workload". Thus, the Mahalanobis method is better able to predict bursts.
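The pre-grouping step can be sketched as a simplified k-means with k-means++-style seeding. This is an illustration under invented data, not the patent's implementation: the two synthetic task groups, their centers, and their spreads are assumptions, and only Euclidean distances are used inside the clustering loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans_pp_init(X, k):
    # k-means++ seeding: later centers are drawn with probability proportional
    # to the squared distance from the nearest already-chosen center
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

def kmeans(X, k, iters=20):
    # standard Lloyd iterations after k-means++ seeding
    C = kmeans_pp_init(X, k)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
        C = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, C

# two well-separated synthetic task groups: (length, deadline, memory)
X = np.vstack([rng.normal([100, 10, 512], 5.0, size=(50, 3)),
               rng.normal([5000, 60, 4096], 5.0, size=(50, 3))])
labels, centers = kmeans(X, 2)
```

The seeding spreads initial centers apart, which is exactly why the description notes it accelerates subsequent convergence.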
In one embodiment, under normal conditions the workload fluctuates around an average without any sharp increase, which suits time-series model prediction. Emergencies, however, bring more frequent and sharp rises and falls in tasks or requests, usually in the form of an exponential increase in load over a relatively short period, which then quickly reverts to the original level. The moving average method has the disadvantage that its hysteresis problem is greatly magnified when attempting to predict a burst, so a trend extrapolation prediction method is employed. Trend extrapolation predicts not the value at a single future point but the trend. Several common mathematical functions and curves were selected and tested to simulate workload spikes, including exponential functions, Gompertz curves, Gaussian curves, etc.; the experimental results show that the Gaussian curve is the most suitable for workload bursts, as shown in FIG. 3.
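A Gaussian spike shape can be fitted without a curve-fitting library by noting that the logarithm of a Gaussian is a parabola, so a quadratic least-squares fit on the log-counts recovers the parameters. The sample series below is invented, and the parameter names $b_1, b_2, b_3$ follow the prediction parameters used later in this description; this is a hedged sketch, not the patent's fitting procedure.

```python
import numpy as np

def fit_gaussian(t, counts):
    # fit counts ~ b1 * exp(-(t - b2)^2 / (2 * b3^2)) via a parabola on log(counts):
    # log y = a*t^2 + b*t + c  with  a = -1/(2*b3^2), b = b2/b3^2, c = log b1 - b2^2/(2*b3^2)
    mask = counts > 0
    a, b, c = np.polyfit(t[mask], np.log(counts[mask]), 2)
    b3 = np.sqrt(-1.0 / (2.0 * a))   # spike width
    b2 = -b / (2.0 * a)              # spike peak time
    b1 = np.exp(c - b ** 2 / (4.0 * a))  # spike peak height
    return b1, b2, b3

# invented noiseless spike: peak 100 tasks at t = 10, width 3
t = np.arange(0, 20.0)
counts = 100.0 * np.exp(-((t - 10.0) ** 2) / (2.0 * 3.0 ** 2))
b1, b2, b3 = fit_gaussian(t, counts)
```

On noisy data one would smooth the arrival curve first, as the description suggests, before fitting.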
In one embodiment, there may be a plurality of methods for predicting the predicted value of future task arrivals in the plurality of task clusters based on the preset prediction algorithm. For example, whether a task cluster has entered a burst state is judged from the number of tasks added to the task cluster per unit time: if so, a burst prediction algorithm is used to predict the predicted value of future task arrivals in the cluster; if not, a conventional prediction algorithm is used. Within a unit time, if the acceleration of the increase in the number of tasks in the task cluster keeps growing, or the number of times this acceleration exceeds a preset acceleration threshold exceeds a preset count threshold, the task cluster is determined to have entered the burst state; after the burst state has been entered, if the acceleration of the increase in the number of tasks keeps decreasing, or the number of times it falls below the acceleration threshold exceeds the count threshold, the cluster is determined to have returned to the normal state. The burst prediction algorithm predicts the arrival of future tasks in the task cluster with the Gaussian trend curve of equation 1-5:

$$e_{t+s} = b_1 \exp\!\left(-\frac{(t + s - b_2)^2}{2 b_3^2}\right) \tag{1-5}$$

where $s$ is the prediction window duration: at time $t$, equation 1-5 predicts the value $s$ unit times later, i.e. the predicted value $e_{t+s}$ at time $t+s$; $b_1$, $b_2$, $b_3$ are prediction parameters that can be selected according to the results of various tests, and $\exp(\cdot)$ is the exponential function with the natural base $e$.

The conventional prediction algorithm predicts the arrival of future tasks in the task cluster with the weighted moving average of equation 1-6:

$$e_{t+s} = \frac{\sum_{i=1}^{window} w_i \, c_{t-i}}{\sum_{i=1}^{window} w_i} \tag{1-6}$$

where, again, $s$ is the prediction window duration, and $e_{t+s}$ is the predicted number of task arrivals at time $t+s$ based on the arrivals observed up to time $t$; $window$ is the set number of historical data points influencing the predicted value; $w_i$ is the influence weight of the $i$-th most recent historical data point, with $i$ referring to the $i$-th time point counted back from time $t$; $w_i$ can be set as $w_i = (window - i)^n$, where $n$ is a natural number greater than 1, to ensure that data at historical times closer to the present have a greater impact on the predicted value; and $c_{t-i}$ is the number of task arrivals at time $t-i$. These values are used to calculate the predicted value at time $t+s$. An acceleration threshold is set for the hybrid prediction method as the switch between the two prediction models during the transition between normal and emergency conditions. After removing apparent outliers and smoothing the arrival curve within a preset short time window, it can be observed that the increase in the number of tasks per unit time accelerates when a burst is about to occur. Thus, on the one hand, as soon as the acceleration exceeds the threshold several consecutive times, the prediction method takes this as the signal of an imminent burst; on the other hand, accelerations that stay below the threshold or do not meet the above condition are flagged as fluctuations at the normal level, and the prediction mode quickly switches back to the moving average, and vice versa.
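The two predictors and the acceleration-based switch can be sketched compactly. The threshold values, the window length, and the exponent n below are illustrative choices, and the Gaussian branch assumes parameters b1, b2, b3 have already been fitted elsewhere:

```python
import numpy as np

def weighted_moving_average(history, window=5, n=2):
    # eq. 1-6 style: weights w_i = (window - i)^n favor recent counts
    w = np.array([(window - i) ** n for i in range(1, window + 1)], dtype=float)
    recent = np.array(history[-window:][::-1], dtype=float)  # recent[0] = c_{t-1}
    return float((w * recent).sum() / w.sum())

def gaussian_forecast(t_plus_s, b1, b2, b3):
    # eq. 1-5 style trend extrapolation on a previously fitted Gaussian spike
    return b1 * np.exp(-((t_plus_s - b2) ** 2) / (2.0 * b3 ** 2))

def is_burst(counts, accel_threshold=0.32, times_threshold=3):
    # burst when the second difference (acceleration of arrivals) exceeds the
    # threshold several consecutive times; otherwise treat as normal fluctuation
    accel = np.diff(np.asarray(counts, dtype=float), n=2)
    run = 0
    for a in accel:
        run = run + 1 if a > accel_threshold else 0
        if run >= times_threshold:
            return True
    return False

steady = [10, 10, 11, 10, 10, 11, 10]  # normal-level fluctuation
surge = [1, 2, 4, 8, 16, 32, 64]       # exponential-looking rise
```

A scheduler would call `is_burst` each tick and route to `gaussian_forecast` or `weighted_moving_average` accordingly.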
In one embodiment, the two-scene prediction method is applied to each task cluster after grouping. First, tasks are arranged in a grouping pool, and the task classifier labels and assigns subsequent tasks according to their attributes and their similarity scores with the clusters. Using the features of each pre-grouped cluster as the basis, after a new task arrives the classifier reads its attribute values, calculates the distance to the attribute values of every task cluster (the mean of all individual features of the cluster), classifies the task into the group with the minimum distance, and updates that group's attribute values. At each time interval, the different task clusters are sent to the prediction pool to predict the trend of the arrival rate of each particular cluster. Finally, when the actual time reaches the predicted time point, the difference between the actual and predicted values is packaged as feedback, and the parameters of the prediction method are readjusted. Whether the task state at a given moment is "burst" or "normal" is judged from the historical per-unit-time arrival rate, and the corresponding method is used for prediction. After a period of time, part of the previously predicted task arrival rates can be compared with the actual arrival rates, and the differences are used as feedback to improve the prediction. For example, if 75% of the predicted values in a period are lower than the actual values, the ratio of the difference between prediction and actuality in that period to the predicted values is added to subsequent predictions as an adjustment. FIGS. 4A-4D show the different prediction results for four groups of tasks. For the task cluster in which emergencies are mainly detected, the optimal threshold is an increase of 0.32 arrivals per minute.
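The 75% feedback rule above can be sketched as follows; the function name is ours, and scaling the next prediction by the mean relative shortfall is one plausible reading of "the ratio of the difference to the predicted values is added":

```python
import numpy as np

def feedback_adjust(predicted, actual, next_pred, quota=0.75):
    # if at least `quota` of past predictions undershot the actual arrivals,
    # scale the next prediction up by the mean relative shortfall
    predicted = np.asarray(predicted, dtype=float)
    actual = np.asarray(actual, dtype=float)
    if (predicted < actual).mean() >= quota:
        ratio = ((actual - predicted) / predicted).mean()
        return next_pred * (1.0 + ratio)
    return next_pred

# 3 of 4 predictions undershot -> next prediction is corrected upward
adjusted = feedback_adjust([10, 10, 10, 10], [12, 12, 12, 8], 10.0)
```

When fewer predictions undershoot, the next prediction is left unchanged.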
For the other three groups, almost no protruding spikes are detected, as shown in FIGS. 4B, 4C and 4D. The pre-grouping step separates the characteristic "bursty tasks" from normal ones; moreover, the "bursty tasks" also have a characteristic arrival curve, and if appropriate thresholds are set accordingly, the prediction method will produce a more suitable prediction curve. In one embodiment, a corresponding PM or VM is allocated to a newly added task in a task cluster according to the predicted value and the characteristics of the task cluster, where the characteristics include memory requirements, computation length, and the like. In the normal state, the reserved space of each PM is kept at the minimum reservation threshold; when a burst state occurs, the reserved space of the PMs is increased and/or new PMs are started with reserved space, to meet the burst increase in workload. The constraints that must be satisfied when allocating a PM or VM include:
$$\sum_{k=1}^{N} f_{j,k} \le f_j \quad\text{and}\quad \sum_{k=1}^{N} m_{j,k} \le m_j$$

where $f_j$ is the maximum CPU value of the $j$-th PM and $f_{j,k}$ is the CPU value of the $k$-th virtual machine on the $j$-th PM; $m_j$ is the maximum memory value of the $j$-th PM and $m_{j,k}$ is the memory value of the $k$-th virtual machine on the $j$-th PM; and $N$ is the number of virtual machines on the $j$-th PM.
When a task is marked as an emergency, a plurality of virtual machines are searched in a preset order; for each, it is judged whether the constraints are still met after assigning the task to that virtual machine: if so, the task is assigned to it; if not, the next virtual machine is searched. If the reserved space of a PM was increased when switching from the normal state to the burst state, it is judged whether the CPU occupation or memory occupation of the tasks allocated to the PM exceeds a set threshold at any running time; if so, tasks allocated to the PM are migrated, starting from the task with the smallest CPU or memory occupation, to other open PMs or to newly opened PMs, until the CPU and memory occupation of the tasks on the PM no longer exceed the set threshold at any running time.
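The sequential search under the CPU/memory constraints can be sketched as a first-fit scan over the PMs; the capacities and VM sizes below are made up for illustration:

```python
def can_host(pm, vms, new_vm):
    # constraint check: total VM CPU and memory must fit within the PM capacity
    cpu = sum(v[0] for v in vms) + new_vm[0]
    mem = sum(v[1] for v in vms) + new_vm[1]
    return cpu <= pm[0] and mem <= pm[1]

def first_fit(pms, placements, new_vm):
    # search PMs in the preset order; place the VM on the first feasible one
    for j, pm in enumerate(pms):
        if can_host(pm, placements[j], new_vm):
            placements[j].append(new_vm)
            return j
    return -1  # no PM can host it: open a new PM / expand the reservation

pms = [(8.0, 16.0), (16.0, 32.0)]   # (CPU capacity, memory capacity) per PM
placements = [[(6.0, 8.0)], []]     # VMs already running on each PM
chosen = first_fit(pms, placements, (4.0, 8.0))  # PM 0 lacks CPU headroom
```

In the example, PM 0 would exceed its CPU capacity (6 + 4 > 8), so the VM lands on PM 1.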
In one embodiment, configuring for peak workload guarantees task completion times, but it compromises the elastic advantage of virtualization and may lower resource utilization. Resource allocation should consider the normal and burst workload scenes separately, take the characteristics of each individual cluster into account, and adjust resource allocation in real time according to the corresponding workload size, thereby maximizing the utilization of the whole system. On the one hand, the features of the previous clusters and the heterogeneity of the PMs are fully utilized: the tasks of different clusters are distributed according to the distinguishing features of their clusters; for example, tasks with higher memory requirements are given to higher-configuration PMs, and tasks with lower computation length to relatively more occupied PMs, so that resource availability is best. On the other hand, the reservation strategy depends to a large extent on the prediction result: since all PMs are scheduled in advance to be fully occupied, whenever the traffic workload suddenly surges, additional physical or virtual machines need to be reserved.
The amount of reservation depends on the difference between the normal workload and the burst workload. For the normal case, the headspace of each PM is compressed to a conditional minimum. If a sudden situation occurs, the system will be redeployed, opening a sufficient number of new PMs, and leaving enough headroom to handle the surge in workload. The reservation strategy is adopted to realize the balance of resource utilization rate, operation stability and high efficiency.
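One way to sketch this reservation sizing is to keep a minimal headroom fraction per PM in the normal case and a larger one under burst, then open enough PMs that the remaining usable capacity covers the predicted load. The headroom fractions, capacities, and load figures here are invented for illustration, not values from the patent:

```python
import math

def pms_to_open(mode, predicted_load, pm_capacity,
                normal_headroom=0.1, burst_headroom=0.3):
    # usable capacity per PM after subtracting the reserved headroom;
    # under burst, more headroom is reserved to absorb the surge
    frac = burst_headroom if mode == "burst" else normal_headroom
    usable = pm_capacity * (1.0 - frac)
    # open enough PMs that the usable capacity covers the predicted load
    return max(1, math.ceil(predicted_load / usable))

normal_pms = pms_to_open("normal", 46.0, 10.0)  # tight headroom, fewer PMs
burst_pms = pms_to_open("burst", 46.0, 10.0)    # extra headroom, more PMs
```

The burst mode opens more PMs for the same predicted load, trading utilization for stability, which matches the balance the reservation strategy aims for.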
For example, consider a virtualized cloud of m physical hosts, or PMs. Each PM is characterized as h_j = (r_j, o_j, f_j), where r_j is the memory size of the j-th PM h_j, f_j is the CPU value of h_j, and o_j is an attribute of the j-th host; VMs are then deployed on the host h_j. A VM is modeled as vm_{j,k} = (f_{j,k}, r_{j,k}), where f_{j,k} and r_{j,k} are the CPU performance and the memory required by vm_{j,k}, respectively. The reservation policy optimizes the proportion of tasks completed under the constraint that the total demand of the virtual machines on the same PM cannot violate the capacity of that PM. These constraints are formalized as:
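The PM/VM model described above can be illustrated with a small sketch. This is only an illustration under stated assumptions, not the patent's implementation; the class and field names are my own, and the capacity check implements the constraint stated in words below (sums of VM CPU and memory must not exceed the host's maxima).

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    cpu: float  # f_{j,k}: CPU performance required by the VM
    mem: float  # r_{j,k}: memory required by the VM

@dataclass
class PM:
    cpu_max: float  # f_j: maximum CPU value of the host
    mem_max: float  # maximum memory value of the host
    vms: list = field(default_factory=list)

    def fits(self, vm: VM) -> bool:
        """Would adding `vm` keep the summed CPU and memory of all VMs
        on this host within the host's maxima (Equations 1-7, 1-8)?"""
        cpu_used = sum(v.cpu for v in self.vms)
        mem_used = sum(v.mem for v in self.vms)
        return (cpu_used + vm.cpu <= self.cpu_max and
                mem_used + vm.mem <= self.mem_max)

pm = PM(cpu_max=8.0, mem_max=16.0)
pm.vms.append(VM(cpu=4.0, mem=8.0))
print(pm.fits(VM(cpu=4.0, mem=8.0)))  # True: exactly fills the host
print(pm.fits(VM(cpu=4.1, mem=1.0)))  # False: CPU constraint violated
```
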
Σ_{k=1}^{N} f_{j,k} ≤ f_j    (Equations 1-7)

Σ_{k=1}^{N} m_{j,k} ≤ m_j    (Equations 1-8)

Here f_j refers to the maximum CPU value of the j-th host, f_{j,k} to the CPU value of the k-th virtual machine on the j-th host, and N to the number of virtual machines on the j-th host; Equations 1-7 state that the sum of the CPU values of all virtual machines allocated on a host cannot exceed that host's maximum CPU value. m_j refers to the maximum memory value of the j-th host and m_{j,k} to the memory value of the k-th virtual machine on the j-th host; Equations 1-8 state that the sum of the memory values of all virtual machines allocated on a host cannot exceed that host's maximum memory value. In one embodiment, the pseudo code of Algorithm 1, the proposed heuristic for the reservation policy under these constraints, is as follows. In Algorithm 1: 1. the tasks newly arrived at each moment are grouped, and the task arrival rate after time s is predicted from the current moment and the arrival rates at the moments of the look-back window; 2. whether the task state has changed, i.e., burst or normal, is judged from the predicted value; 3. if the task state changes, the prediction method and the resource reservation method are switched at the same time; 4. resource allocation is performed, distributing the tasks to the virtual machines of each host; 5. after a period of time, the historical data are updated, clustering is performed again, and the covariance matrix is updated. Algorithm 1 is an overall flow presentation of the whole method.
[Algorithm 1 pseudo code (image not reproduced)]
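The five steps of Algorithm 1 can be sketched as a control loop. The sketch below is hypothetical: the burst test uses second differences of arrival counts as a stand-in for the patent's acceleration criterion, and the grouping, prediction, and allocation steps are only indicated by comments.

```python
def detect_burst(arrival_counts, accel_threshold=5, times_threshold=2):
    # Stand-in burst criterion: the growth acceleration (discrete second
    # difference of arrival counts) exceeded the threshold more than
    # `times_threshold` times within the window.
    accels = [arrival_counts[i + 1] - 2 * arrival_counts[i] + arrival_counts[i - 1]
              for i in range(1, len(arrival_counts) - 1)]
    return sum(a > accel_threshold for a in accels) > times_threshold

def algorithm1_step(state, arrival_counts):
    # Step 1 (grouping of newly arrived tasks) is assumed done upstream.
    # Step 2: judge the task state from the recent arrival counts.
    new_state = "burst" if detect_burst(arrival_counts) else "normal"
    if new_state != state:
        # Step 3: on a state change, switch the prediction method and
        # redeploy reservations (cf. Algorithm 2).
        pass
    # Step 4: allocate tasks to the VMs of each host (cf. Algorithm 3).
    # Step 5: periodically re-cluster and update the covariance matrix.
    return new_state

print(algorithm1_step("normal", [10, 11, 12, 11, 12, 13]))    # steady arrivals
print(algorithm1_step("normal", [10, 20, 40, 80, 160, 320]))  # surging arrivals
```
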
The function Redeployment given in Algorithm 2 attempts to move the available resources so as to handle the upcoming burst during resource allocation. The pseudo code of Algorithm 2 is as follows:
[Algorithm 2 pseudo code (image not reproduced)]
From the above, in Algorithm 2: 1. if the environment transitions from the "normal" condition to the "burst" condition, a check of each virtual machine is started; 2. because of the transition to the "burst" condition, the reservation value of the host increases, and some tasks that originally met the constraint conditions need to be transferred to other machines; 3. the machines whose tasks occupy CPU or memory beyond the threshold are found, and the transfer starts from the task with the smallest occupancy; transfer to a machine that is already started and does not exceed its resource reservation is considered first, and if no such machine exists, a new machine is started. Algorithm 2 supplements one step of Algorithm 1 and mainly solves the readjustment of the host group's resources during the state transition.
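The migration step just described can be sketched as follows. This is a simplified stand-in, not the patent's pseudo code: a host is reduced to a capacity and a list of task loads, and the threshold models the increased reservation in the burst state.

```python
def redeploy(hosts, threshold):
    """Move tasks, smallest occupancy first, off hosts that exceed
    `threshold * cap`; prefer already-started hosts with spare room,
    open a new host only when none fits."""
    opened = list(hosts)
    for h in hosts:
        h["tasks"].sort()  # smallest occupancy first
        while sum(h["tasks"]) > threshold * h["cap"]:
            task = h["tasks"].pop(0)
            target = next((o for o in opened
                           if o is not h and
                           sum(o["tasks"]) + task <= threshold * o["cap"]),
                          None)
            if target is None:  # no started host fits: open a new one
                target = {"cap": h["cap"], "tasks": []}
                opened.append(target)
            target["tasks"].append(task)
    return opened

hosts = [{"cap": 10, "tasks": [6, 3, 2]}, {"cap": 10, "tasks": [1]}]
result = redeploy(hosts, threshold=0.8)
print([sorted(h["tasks"]) for h in result])  # [[6], [1, 2, 3]]
```

With a reservation threshold of 0.8, the first host (load 11 of capacity 10) sheds its two smallest tasks to the second host, which still has room; no new host is needed.
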
Algorithm 3 is used to schedule an upcoming task into an appropriate VM according to its characteristic attributes. The pseudo code of Algorithm 3 is as follows:
[Algorithm 3 pseudo code (image not reproduced)]
From the above, in Algorithm 3: 1. if the task is marked as a "burst" condition, each virtual machine is searched in turn so as to distribute the task to a suitable machine; 2. if neither the CPU nor the memory of the machine exceeds the threshold after the task is accounted for, the task is allocated to that virtual machine. Algorithm 3 supplements one step of Algorithm 1 and mainly solves task allocation during the state transition.
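The first-fit search of Algorithm 3 can be sketched as below. Again this is an illustrative stand-in with invented structures, not the patent's pseudo code; a VM is reduced to CPU/memory capacities and current loads.

```python
def schedule(task, vms, threshold=1.0):
    """Place `task` on the first VM, in the predetermined search order,
    whose CPU and memory stay within `threshold` of capacity after
    accepting the task; return the VM's name, or None if no VM fits."""
    for vm in vms:
        if (vm["cpu_used"] + task["cpu"] <= threshold * vm["cpu_cap"] and
                vm["mem_used"] + task["mem"] <= threshold * vm["mem_cap"]):
            vm["cpu_used"] += task["cpu"]
            vm["mem_used"] += task["mem"]
            return vm["name"]
    return None  # no VM satisfies the constraints

vms = [
    {"name": "vm0", "cpu_cap": 4, "mem_cap": 8, "cpu_used": 3.5, "mem_used": 6},
    {"name": "vm1", "cpu_cap": 4, "mem_cap": 8, "cpu_used": 1.0, "mem_used": 2},
]
print(schedule({"cpu": 1.0, "mem": 2.0}, vms))  # vm0 would exceed CPU -> "vm1"
```
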
In one embodiment, the metrics that measure system performance are considered from three aspects: the guarantee ratio, resource utilization, and total energy consumption:
[Equation 1-9: guarantee ratio GR (image not reproduced)]

[Equation 1-10: resource utilization RU (image not reproduced)]

[Equation 1-11: active power consumption of a powered-on host (image not reproduced)]

[Equation 1-12: total energy consumption of an open host from time st to time et (image not reproduced)]
Here, Equations 1-9 define the task completion rate (GR, Guarantee Ratio), i.e., the percentage of all tasks that can be completed on time, where x_{i,j,k} represents whether task t_i is completed in time in the k-th VM of the j-th PM, and n represents the total number of tasks. Equations 1-10 define Resource Utilization (RU), i.e., the ratio of the total length of all completed tasks to the total amount of resources of the opened hosts, where y_{i,j,k} represents whether task t_i is completed in the k-th VM of the j-th PM. Equations 1-11 state that the power consumption of a powered-on host is proportional to the third power of its operating frequency. Equations 1-12 give the total power consumption of an open host from time st to time et.
Here x_{i,j,k} and y_{i,j,k} respectively indicate whether the signed task t_i has been completed on time, and completed at all, in the k-th VM of the j-th PM; wt_j represents the active time of PM h_j; and c_j^t ∈ {0,1} represents whether h_j is valid at time t: it is 1 when h_j is valid and 0 otherwise. The guarantee ratio in Equations 1-9 represents the task completion rate, which is the primary goal. x_{i,j,k} means that the task is completed before its deadline, whereas a completed task in Equations 1-10 means that the task is completed regardless of its running time. y_{i,j,k} enters the ratio of the total length of all arrived tasks on all PMs to the total active time of the running CPUs. The total power consumption represents the overall power consumption of the started PMs as a whole and is essentially calculated from the CPU usage. Power consumption can be divided into two states, one idle and the other active. For PM h_j, the dynamic active power consumption p_j^active of the CPU can be approximately described by Equations 1-11. Let s_i be the fraction of power consumed by an idling PM (e.g., 50% or 60%) and p_j^max the maximum power consumption of host h_j when fully utilized. The total energy consumption tec_j of h_j from time st to time et can then be approximated by Equations 1-12.
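Since the original equation images are not reproduced, the following sketch computes the three metrics from a simplified reading of the textual definitions above. It is an approximation under stated assumptions, not the patent's exact formulas; in particular, the cubic utilization term in `host_power` is my rendering of the "third power of operating frequency" remark.

```python
def guarantee_ratio(on_time, n):
    # Eq. 1-9 (approximate): fraction of the n tasks finished before deadline
    return on_time / n

def resource_utilization(completed_lengths, host_cpu_time):
    # Eq. 1-10 (approximate): total length of completed tasks over the
    # resource-time of the opened hosts
    return sum(completed_lengths) / host_cpu_time

def host_power(p_max, util, idle_frac=0.5):
    # Eq. 1-11 (approximate): an idle floor (idle_frac * p_max, e.g. 50%)
    # plus a dynamic part growing with the cube of utilization/frequency
    return idle_frac * p_max + (1 - idle_frac) * p_max * util ** 3

print(guarantee_ratio(90, 100))                  # 0.9
print(resource_utilization([30, 20, 10], 100))   # 0.6
print(host_power(200.0, 1.0))                    # 200.0 at full utilization
```
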
The network resource allocation scheduling method of the invention was compared quantitatively with other existing methods. The data sets used in the experiments come from the Google cluster traces and the World of Warcraft data set, which represents a video game service. The World of Warcraft data set is a trace of the World of Warcraft online game, containing records of 1107 days between January 2006 and January 2009, from which several hours of experimental records were extracted. In the experiments, it was assumed that the system provided five different types of virtual machines and that the number of virtual machines of each configuration was unlimited. The experiment was divided into two parts. The first part shows the accuracy of the prediction method in the network resource allocation scheduling method of the invention by predicting the actual workload statistics; the relative error is less than 5%. The second part evaluates the proposed network resource allocation scheduling method of the invention with metrics including VM time and availability. The results show that the network resource allocation scheduling method performs well under emergency conditions.
The network resource allocation scheduling method of the invention addresses resource allocation under traffic emergencies and combines a moving-average prediction method for the normal condition with a trend-extrapolation prediction method for emergencies to predict an upcoming workload burst. Upcoming tasks are characterized in clusters, their arrival rates are predicted independently per cluster, and the trend prediction of the tasks is converted into a burst signal so as to change the reservation strategy. A large number of experiments were carried out, and the experimental results show that the network resource allocation scheduling method is superior to other methods in terms of guarantee ratio, total energy consumption, and resource utilization.
In one embodiment, as shown in fig. 5, the present invention provides a network resource allocation scheduling apparatus 50, including: a trend prediction module 51, a resource reservation module 52, and a parameter adjustment module 53. The trend prediction module 51 predicts a predicted value of task arrivals based on a preset prediction algorithm, the predicted value including: the predicted arrival time and the predicted arrival number of future tasks. The resource reservation module 52 redeploys the network resources according to the predicted values, so that there are enough processing resources when the task-volume peak of the future tasks arrives; the network resources include: a physical machine PM and a virtual machine VM. The parameter adjustment module 53 readjusts the parameters of the task classifier and of the prediction algorithm, when the predicted arrival time of the future tasks is reached, based on the difference between the number of actually received tasks and the predicted arrival number of the future tasks.
As shown in fig. 6, the network resource allocation scheduling apparatus may also include: and the task grouping module is used for grouping the tasks requested to be processed through the task classifier to generate at least one task cluster. The trend prediction module 51 judges whether to enter a burst state based on the number of tasks added in the task cluster in unit time, if so, predicts a predicted value of the arrival of a future task in the task cluster by using a burst prediction algorithm, and if not, predicts the predicted value of the arrival of the future task in the task cluster by using a conventional prediction algorithm. The task grouping module receives the tasks requiring processing, performs grouping processing on the tasks through the task classifier, and distributes the tasks to the task cluster with the highest similarity; the tasks include: service request tasks, computation request tasks, and the like. The trend prediction module 51 predicts predicted values of future task arrivals in the plurality of task clusters respectively based on a preset prediction algorithm, and the predicted values include arrival prediction time and arrival prediction number of the future tasks.
In one embodiment, the task grouping module selects historical data of tasks, pre-groups the historical data using a k-means clustering method to obtain a plurality of task clusters, and obtains the cluster characteristic attribute value of each task cluster. When the task grouping module receives a task, it obtains the characteristic attribute values of the task; the characteristic attribute values include: arrival time, computation length, deadline requirement, required memory size, etc. The task classifier calculates the Mahalanobis distance between the task's characteristic attribute values and the characteristic attribute values of each cluster, and the task is allocated to the task cluster corresponding to the minimum Mahalanobis distance. Within a unit time, if the acceleration of the increase in the number of tasks in a task cluster increases continuously, or the number of times the acceleration value exceeds a preset acceleration threshold exceeds a preset number-of-times threshold, the trend prediction module 51 determines that the burst state has been entered. After determining that the burst state has been entered, the trend prediction module 51 returns to the normal state if it determines that the acceleration of the increase in the number of tasks in the task cluster decreases continuously, or that the number of times the acceleration value falls below the acceleration threshold exceeds the number-of-times threshold.
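The Mahalanobis-distance assignment step can be sketched as follows. To keep the example dependency-free, the inverse covariance matrix is supplied directly (here the identity, which reduces the distance to Euclidean); all feature values are invented for illustration.

```python
import math

def mahalanobis(x, center, cov_inv):
    # sqrt(d^T * cov_inv * d) with d = x - center
    d = [a - b for a, b in zip(x, center)]
    n = len(d)
    s = sum(d[r] * cov_inv[r][c] * d[c] for r in range(n) for c in range(n))
    return math.sqrt(s)

def assign(task, centers, cov_inv):
    # allocate the task to the cluster at minimum Mahalanobis distance
    dists = [mahalanobis(task, c, cov_inv) for c in centers]
    return dists.index(min(dists))

# Features: (arrival time, computation length, deadline, required memory),
# standardized; centers come from the k-means pre-grouping.
centers = [[0.0, 0.0, 0.0, 0.0], [2.0, 2.0, 2.0, 2.0]]
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
print(assign([1.8, 2.1, 1.9, 2.2], centers, identity))  # nearer cluster 1
```

In practice the inverse covariance matrix would be re-estimated from the updated historical data at each re-clustering step, as the description above indicates.
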
The trend prediction module 51 adopts a burst prediction algorithm to predict the predicted value of the future task arrival in the task cluster as follows:
[burst prediction formula: exponential trend extrapolation with parameters b1, b2, b3 (image not reproduced)]
The trend prediction module 51 adopts a conventional prediction algorithm to predict the predicted value of future task arrivals in the task cluster as follows:
[conventional prediction formula: weighted moving average over the look-back window (image not reproduced)]
where s is the prediction window duration; at time t, e_{t+s} is the predicted number of task arrivals at time t+s based on the number of task arrivals at time t; b_1, b_2, b_3 are prediction parameters; exp() is the exponential function with the natural base e; window is the set number of historical data points affecting the predicted value; W_in is the influence weight of the historical data of the in-th task on the predicted value; i refers to the time length corresponding to the in-th time point counted back from time t; and c_{t-i} refers to the number of task arrivals at time t-i.
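Because the formula images are not reproduced here, the sketch below gives one plausible reading of the two predictors from the symbol definitions: the conventional predictor as a weighted moving average over the `window` most recent arrival counts, and the burst predictor as an exponential trend extrapolation in b1, b2, b3. Both concrete forms are my assumptions, not the patent's exact equations.

```python
import math

def predict_normal(counts, weights):
    # Weighted moving average: e_{t+s} = sum over the look-back window of
    # W_in * c_{t-i}, most recent count weighted first.
    window = len(weights)
    recent = counts[-window:]  # c_{t-window+1} ... c_t
    return sum(w * c for w, c in zip(weights, reversed(recent)))

def predict_burst(t, s, b1, b2, b3):
    # Assumed exponential trend extrapolation:
    # e_{t+s} = b1 * exp(b2 * (t + s)) + b3
    return b1 * math.exp(b2 * (t + s)) + b3

counts = [10, 12, 11, 13, 14]
print(predict_normal(counts, weights=[0.5, 0.3, 0.2]))  # 0.5*14 + 0.3*13 + 0.2*11
print(round(predict_burst(t=0, s=2, b1=10, b2=0.5, b3=0), 2))
```
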
The resource reservation module 52 allocates a corresponding PM or VM to the newly added task in the task cluster according to the predicted value and the characteristics of the task cluster, where the characteristics include: memory requirements, calculated length, etc. For the normal state, the resource reservation module 52 sets the reserved space of the PM as the minimum reserved space threshold, and when the burst state occurs, increases the reserved space of the PM and/or starts a new PM and reserves the reserved space to meet the burst increase of the workload; constraints that are satisfied when making an allocation of a PM or VM include:
Σ_{k=1}^{N} f_{j,k} ≤ f_j

and

Σ_{k=1}^{N} m_{j,k} ≤ m_j

where f_j denotes the maximum CPU value of the j-th PM, f_{j,k} the CPU value of the k-th virtual machine on the j-th PM, m_j the maximum memory value of the j-th PM, m_{j,k} the memory value of the k-th virtual machine on the j-th PM, and N the number of virtual machines on the j-th PM. When a task is marked as an emergency, the resource reservation module 52 searches the plurality of virtual machines in a predetermined order and judges whether a virtual machine still satisfies the constraint conditions after the task is allocated to it; if so, the task is allocated to that virtual machine, and if not, the next virtual machine is searched. If the reserved space of a PM is increased when the PM transitions from the normal state to the burst state, the resource reservation module 52 judges whether the CPU occupancy or the memory occupancy of the tasks allocated to the PM exceeds a set threshold at any running time; if so, tasks allocated to the PM are transferred, starting from the task with the smallest CPU or memory occupancy, to other opened PMs or to a newly opened PM, until the CPU and memory occupancies of the tasks allocated to the PM no longer exceed the set threshold at any running time.
In one embodiment, as shown in fig. 7, a network resource allocation scheduling apparatus is provided, which may include a memory 71 and a processor 72, where the memory 71 is used for storing instructions, the processor 72 is coupled to the memory 71, and the processor 72 is configured to execute the network resource allocation scheduling method described above based on the instructions stored in the memory 71. The memory 71 may be a high-speed RAM memory, a non-volatile memory, or the like, and may also be a memory array. The memory 71 may also be partitioned into blocks, and the blocks may be combined into virtual volumes according to certain rules. The processor 72 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the network resource allocation scheduling method of the invention. In one embodiment, the present invention provides a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the network resource allocation scheduling method of any of the above embodiments.
The method and system of the present invention may be implemented in a number of ways. For example, the methods and systems of the present invention may be implemented in software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustrative purposes only, and the steps of the method of the present invention are not limited to the order specifically described above unless specifically stated otherwise. Furthermore, in some embodiments, the present invention may also be embodied as a program recorded in a recording medium, the program including machine-readable instructions for implementing a method according to the present invention. Thus, the present invention also covers a recording medium storing a program for executing the method according to the present invention. The description of the present invention has been presented for purposes of illustration and description, and is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to practitioners skilled in this art. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.

Claims (8)

1. A network resource allocation scheduling method is characterized by comprising the following steps:
grouping the tasks requiring processing through a task classifier to generate at least one task cluster;
judging whether to enter a burst state or not based on the number of tasks added in the task cluster in unit time, if so, predicting a predicted value of the arrival of a future task in the task cluster by adopting a burst prediction algorithm, and if not, predicting the predicted value of the arrival of the future task in the task cluster by adopting a conventional prediction algorithm; wherein the predicted values include: the predicted arrival time and predicted arrival number of future tasks;
the network resources are correspondingly re-deployed according to the predicted values, so that enough processing resources are available when the peak value of the task amount of future tasks comes; wherein the network resources include: a physical machine PM and a virtual machine VM;
readjusting parameters of the task classifier and the prediction algorithm based on a difference between an actual number of received tasks and a predicted number of arrival tasks when a predicted time of arrival for the future task is reached, including:
based on each group of characteristics of clustering pre-grouping, the task classifier reads the attribute value of a new task after the new task arrives, and performs distance calculation with the attribute characteristic values of all task clusters so as to divide the new task into the group with the minimum distance value;
updating the attribute characteristic value of the group with the minimum distance value;
sending a different task cluster to a prediction pool at each time interval to predict an arrival rate of each particular task cluster;
and when the actual time reaches the predicted time point, packing the difference value between the actual value and the predicted value as feedback, and readjusting the parameters of the prediction method.
2. The method of claim 1, further comprising:
distributing corresponding PM or VM for the newly added task in the task cluster according to the predicted value and the characteristics of the task cluster, wherein the characteristics comprise: memory requirement and calculation length;
for a normal state, the reserved space of the PM is a minimum reserved space threshold value; when the emergent state occurs, the reserved space of the PM is increased and/or a new PM is started and reserved for meeting the requirement of sudden increase of the workload;
wherein, the constraint conditions satisfied when performing allocation of PM or VM include:
Σ_{k=1}^{N} f_{j,k} ≤ f_j

and

Σ_{k=1}^{N} m_{j,k} ≤ m_j

wherein f_j denotes the maximum CPU value of the j-th PM, f_{j,k} refers to the CPU value of the k-th virtual machine on the j-th PM, m_j refers to the maximum memory value of the j-th PM, m_{j,k} refers to the memory value of the k-th virtual machine on the j-th PM, and N is the number of virtual machines on the j-th PM.
3. The method of claim 2, further comprising:
when the task is marked as a sudden situation, searching a plurality of virtual machines in sequence according to a preset sequence, and judging whether the virtual machines meet the constraint condition after the task is distributed;
if yes, the task is distributed to the virtual machine, and if not, the next virtual machine is searched continuously;
if the reserved space of the PM is increased when the PM is converted from the normal state to the burst state, judging whether the CPU occupation amount or the memory occupation amount of the tasks allocated to the PM at any running time exceeds a set threshold value, and if so, transferring the tasks allocated to the PM, starting from the task with the minimum CPU occupation amount or the minimum memory occupation amount, to other opened PMs or to a newly opened PM, until the CPU occupation amount and the memory occupation amount of the tasks allocated to the PM at any running time do not exceed the set threshold value.
4. The method of claim 1, further comprising:
the predicted value for predicting the arrival of the future task in the task cluster by adopting the burst prediction algorithm is as follows:
[burst prediction formula: exponential trend extrapolation with parameters b1, b2, b3 (image not reproduced)]
the predicted value for predicting the arrival of the future task in the task cluster by adopting the conventional prediction algorithm is as follows:
[conventional prediction formula: weighted moving average over the look-back window (image not reproduced)]
where s is the prediction window duration; at time t, e_{t+s} is the predicted value of the number of task arrivals at time t+s according to the number of task arrivals at time t; b_1, b_2, b_3 are prediction parameters; exp() is the exponential function with the natural base e; window is the set number of historical data points affecting the predicted value; W_in is the influence weight of the historical data of the in-th task on the predicted value; i refers to the time length corresponding to the in-th time point counted back from time t; and c_{t-i} refers to the number of task arrivals at time t-i.
5. A network resource allocation scheduling apparatus, comprising:
the trend prediction module is used for grouping the tasks requested to be processed through the task classifier to generate at least one task cluster;
judging whether to enter a burst state or not based on the number of tasks added in the task cluster in unit time, if so, predicting a predicted value of the arrival of a future task in the task cluster by adopting a burst prediction algorithm, and if not, predicting the predicted value of the arrival of the future task in the task cluster by adopting a conventional prediction algorithm; wherein the predicted values include: the predicted arrival time and predicted arrival number of future tasks;
the resource reservation module is used for carrying out corresponding redeployment on the network resources according to the predicted value so as to ensure that enough processing resources are available when the task amount peak value of the future task comes; wherein the network resources include: a physical machine PM and a virtual machine VM;
a parameter adjusting module, configured to readjust parameters of the task classifier and the prediction algorithm based on a difference between an actually received number of tasks and a predicted number of arrival of the future task when the predicted arrival time of the future task is reached, including:
based on each group of characteristics of clustering pre-grouping, the task classifier reads the attribute value of a new task after the new task arrives, and performs distance calculation with the attribute characteristic values of all task clusters so as to divide the new task into the group with the minimum distance value;
updating the attribute characteristic value of the group with the minimum distance value;
sending a different task cluster to the prediction pool at each time interval to predict the arrival rate of each particular task cluster;
and when the actual time reaches the predicted time point, packing the difference value between the actual value and the predicted value as feedback, and readjusting the parameters of the prediction method.
6. The apparatus of claim 5,
the resource reservation module is configured to allocate a corresponding PM or VM to a newly added task in the task cluster according to the predicted value and the characteristics of the task cluster, where the characteristics include: memory requirement and calculation length; for a normal state, the reserved space of the PM is a minimum reserved space threshold value; when the emergent state occurs, the reserved space of the PM is increased and/or a new PM is started and reserved for meeting the requirement of sudden increase of the workload;
wherein, the constraint conditions satisfied when allocating PM or VM include:
Σ_{k=1}^{N} f_{j,k} ≤ f_j

and

Σ_{k=1}^{N} m_{j,k} ≤ m_j

wherein f_j denotes the maximum CPU value of the j-th PM, f_{j,k} refers to the CPU value of the k-th virtual machine on the j-th PM, m_j refers to the maximum memory value of the j-th PM, m_{j,k} refers to the memory value of the k-th virtual machine on the j-th PM, and N is the number of virtual machines on the j-th PM.
7. The apparatus of claim 6,
the resource reservation module is used for searching the plurality of virtual machines in sequence according to a preset sequence when the task is marked as an emergency, and judging whether the virtual machine meets the constraint condition after the task is allocated to the virtual machine; if yes, the task is distributed to the virtual machine, and if not, the next virtual machine is searched continuously;
and the resource reservation module is used for increasing the reserved space of the PM and judging whether the CPU occupation amount or the memory occupation amount of the task allocated to the PM at any running time exceeds a set threshold value or not when the PM is converted from a normal state to a burst state, and if so, starting from the task with the minimum CPU occupation amount or the minimum memory occupation amount, transferring the task allocated to the PM to other opened PMs or transferring the task allocated to the PM to a newly opened PM until the CPU occupation amount and the memory occupation amount of the task allocated to the PM at any running time do not exceed the set threshold value.
8. The apparatus of claim 5,
the trend prediction module is used for predicting the predicted value of the future task arrival in the task cluster by adopting a burst prediction algorithm, and comprises the following steps:
[burst prediction formula: exponential trend extrapolation with parameters b1, b2, b3 (image not reproduced)]
the trend prediction module is used for predicting the predicted value of the future task arrival in the task cluster by adopting a conventional prediction algorithm, and comprises the following steps:
[conventional prediction formula: weighted moving average over the look-back window (image not reproduced)]
where s is the prediction window duration; at time t, e_{t+s} is the predicted number of task arrivals at time t+s based on the number of task arrivals at time t; b_1, b_2, b_3 are prediction parameters; exp() is the exponential function with the natural base e; window is the set number of historical data points affecting the predicted value; W_in is the influence weight of the historical data of the in-th task on the predicted value; i refers to the time length corresponding to the in-th time point counted back from time t; and c_{t-i} refers to the number of task arrivals at time t-i.
CN201810726208.XA 2018-07-04 2018-07-04 Network resource allocation scheduling method and device Active CN109005130B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810726208.XA CN109005130B (en) 2018-07-04 2018-07-04 Network resource allocation scheduling method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810726208.XA CN109005130B (en) 2018-07-04 2018-07-04 Network resource allocation scheduling method and device

Publications (2)

Publication Number Publication Date
CN109005130A CN109005130A (en) 2018-12-14
CN109005130B true CN109005130B (en) 2022-05-10

Family

ID=64598178

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810726208.XA Active CN109005130B (en) 2018-07-04 2018-07-04 Network resource allocation scheduling method and device

Country Status (1)

Country Link
CN (1) CN109005130B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110519386B (en) * 2019-08-30 2022-04-19 中国人民解放军国防科技大学 Elastic resource supply method and device based on data clustering in cloud environment
CN110865872B (en) * 2019-11-14 2022-07-08 北京京航计算通讯研究所 Virtualized cluster resource scheduling system based on resource rationalization application
CN114124733B (en) * 2020-08-27 2024-05-14 中国电信股份有限公司 Service flow prediction method and device
CN113098710B (en) * 2021-03-26 2022-07-12 北京赛博云睿智能科技有限公司 Network resource operation parameter self-adjusting and optimizing method and device
CN114417577A (en) * 2021-12-30 2022-04-29 浙江省科技信息研究院 Cross-platform resource scheduling and optimization control method
CN115174695B (en) * 2022-07-18 2024-01-26 中软航科数据科技(珠海横琴)有限公司 Scheduling system and method for distributed network resources
CN116033584B (en) * 2023-02-03 2023-10-20 阿里巴巴(中国)有限公司 Air interface resource scheduling method, network access equipment and communication network
CN116880401B (en) * 2023-07-28 2024-09-20 江苏道达智能科技有限公司 Automatic stereoscopic warehouse control system and method

Citations (2)

Publication number Priority date Publication date Assignee Title
CN103150215A (en) * 2013-02-04 2013-06-12 浙江大学 CPU (Central Processing Unit) resource utilization forecasting method of fine grit under virtual environment
JP2014048778A (en) * 2012-08-30 2014-03-17 Oki Electric Ind Co Ltd Demand prediction device, demand prediction method, and program

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
JP2014048778A (en) * 2012-08-30 2014-03-17 Oki Electric Ind Co Ltd Demand prediction device, demand prediction method, and program
CN103150215A (en) * 2013-02-04 2013-06-12 浙江大学 CPU (Central Processing Unit) resource utilization forecasting method of fine grit under virtual environment

Non-Patent Citations (2)

Title
Cao Lingling. Research on Resource Allocation and Task Scheduling for Green Cloud Computing. China Masters' Theses Full-text Database, Information Science and Technology. 2015. *
Research on Resource Allocation and Task Scheduling for Green Cloud Computing; Cao Lingling; China Masters' Theses Full-text Database, Information Science and Technology; 20150515; see chapters 3-4 *

Also Published As

Publication number Publication date
CN109005130A (en) 2018-12-14

Similar Documents

Publication Publication Date Title
CN108984301B (en) Self-adaptive cloud resource allocation method and device
CN109005130B (en) Network resource allocation scheduling method and device
US10620839B2 (en) Storage pool capacity management
US8869160B2 (en) Goal oriented performance management of workload utilizing accelerators
CN108027889B (en) Training and scheduling method for incremental learning cloud system and related equipment
JP5218390B2 (en) Autonomous control server, virtual server control method and program
US20160299697A1 (en) Workload-aware i/o scheduler in software-defined hybrid storage system
CN111176852A (en) Resource allocation method, device, chip and computer readable storage medium
US20120221730A1 (en) Resource control system and resource control method
CN104168318A (en) Resource service system and resource distribution method thereof
US10884667B2 (en) Storage controller and IO request processing method
US10216543B2 (en) Real-time analytics based monitoring and classification of jobs for a data processing platform
CN105607952B (en) Method and device for scheduling virtualized resources
CN110262897B (en) Hadoop calculation task initial allocation method based on load prediction
CN109408230B (en) Docker container deployment method and system based on energy consumption optimization
CN116244085A (en) Kubernetes cluster container group scheduling method, device and medium
CN109005052B (en) Network task prediction method and device
CN110796591A (en) GPU card using method and related equipment
TW202215248A (en) Method of operating storage system, and method of partitioning tier of storage resources
CN117251275A (en) Multi-application asynchronous I/O request scheduling method, system, equipment and medium
US10430312B2 (en) Method and device for determining program performance interference model
CN112882805A (en) Profit optimization scheduling method based on task resource constraint
US11416152B2 (en) Information processing device, information processing method, computer-readable storage medium, and information processing system
CN115499513A (en) Data request processing method and device, computer equipment and storage medium
CN110580192B (en) Container I/O isolation optimization method in mixed scene based on service characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant