CN116594855A - Virtual machine load prediction method based on missing value filling - Google Patents

Virtual machine load prediction method based on missing value filling

Info

Publication number
CN116594855A
CN116594855A (application CN202310520971.8A)
Authority
CN
China
Prior art keywords
virtual machine
vector
data
value
missing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310520971.8A
Other languages
Chinese (zh)
Inventor
高岩 (Gao Yan)
孙汉玺 (Sun Hanxi)
刘凯 (Liu Kai)
Original Assignee
东北大学 (Northeastern University)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 东北大学 (Northeastern University)
Priority to CN202310520971.8A
Publication of CN116594855A
Legal status: Pending

Classifications

    • G06F11/3433: Recording or statistical evaluation of computer activity, for performance assessment, for load management
    • G06F11/301: Monitoring arrangements where the computing system is a virtual computing platform, e.g. logically partitioned systems
    • G06F11/3051: Monitoring the configuration of the computing system or component, e.g. presence of processing resources, peripherals, I/O links, software programs
    • G06F11/3055: Monitoring the status of the computing system or component, e.g. on, off, available, not available
    • G06F18/15: Statistical pre-processing, e.g. techniques for normalisation or restoring missing data
    • G06N3/045: Combinations of networks
    • G06N3/047: Probabilistic or stochastic networks
    • G06N3/0475: Generative networks
    • G06N3/084: Backpropagation, e.g. using gradient descent
    • G06N3/094: Adversarial learning
    • G06F2201/815: Indexing scheme relating to error detection, error correction and monitoring; virtual systems
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The invention provides a virtual machine load prediction method based on missing value filling, belonging to the technical field of cloud computing. The concurrent access quantity and the resource quantity of the virtual machine whose load is to be predicted in a certain period are taken as known values, and the virtual machine load to be predicted is taken as a missing value; the load is then filled in by a missing value filling method, so that the load prediction task is accomplished and the virtual machine load prediction problem is converted into a missing virtual machine load value filling problem. A GAIN-based virtual machine load prediction model, GAIN-VMLP, is then constructed and trained, and the trained GAIN-VMLP model is used to predict the virtual machine load. The method can effectively solve the virtual machine load prediction problem in cloud computing, thereby providing effective support for the elastic scaling of virtual machine resources in cloud computing.

Description

Virtual machine load prediction method based on missing value filling
Technical Field
The invention belongs to the technical field of cloud computing, and particularly relates to a virtual machine load prediction method based on missing value filling.
Background
With the development of cloud computing, more and more software systems are deployed as SaaS to provide services in a cloud environment. SaaS-layer applications (i.e., cloud applications) provide services to users by signing Service Level Agreements (SLAs) with them.
Typically, cloud application providers need to rent resources (e.g., virtual machines) provided by cloud resource providers, deploy cloud applications, and provide users with services that meet SLA requirements. The performance of a cloud application deployed on a virtual machine (VM) is closely related to the resource amount (such as CPU type and core number, memory capacity, network bandwidth, system disk type and capacity, etc.) and the load (such as CPU utilization, memory occupancy, etc.) of the virtual machine on which the cloud application runs; if virtual machines with the same configuration carry different loads, the performance of the cloud applications running on them also differs. The load of a virtual machine is influenced by factors such as its resource amount and the concurrent access amount of users. Under the same concurrent user access amount, if the virtual machine resource amount is large, the load is likely to be low and the performance good; if the resource amount is small, the load is likely to be high and the performance poor.
Cloud application providers always wish to rent resources and deploy cloud applications at minimal resource usage cost while providing users with services that meet SLA requirements. To guarantee the performance of the cloud application at minimal resource usage cost, the load of the virtual machine needs to be kept within a certain range, and if the load exceeds that range, the virtual machine resources need to be adjusted dynamically. To ensure as far as possible that the load of an initially provisioned or adjusted virtual machine stays within the limited range, a model that predicts the virtual machine load from the virtual machine resource amount and the concurrent user access amount needs to be established and used to calculate how much the virtual machine resource amount must be adjusted under the current concurrent access amount; thus, how to predict the virtual machine load based on the virtual machine resource amount and the concurrent user access amount becomes a key problem of the elastic adjustment of virtual machine resources.
Virtual machine load prediction based on the virtual machine resource amount and the concurrent user access amount is a regression problem, with the virtual machine resource amount and the concurrent access amount as input and the virtual machine load as output.
Disclosure of Invention
Aiming at the defects existing in the prior art, the invention provides a virtual machine load prediction method based on missing value filling; the configuration information and concurrency of the virtual machine are taken as known values, the virtual machine load to be predicted is taken as a missing value, and the missing value is filled using GAIN (Generative Adversarial Imputation Nets).
A virtual machine load prediction method based on missing value filling specifically comprises the following steps:
step 1: converting the virtual machine load prediction problem into a missing virtual machine load value filling problem;
the virtual machine load prediction is to predict the load of the virtual machine in a certain period according to the concurrent access quantity and the resource quantity of the virtual machine in the period; the indexes of the concurrent access quantity, the resource quantity and the load are set according to the actual application requirements;
the concurrent access quantity and the resource quantity of the virtual machine to be subjected to load prediction in a certain period are taken as known values, the virtual machine load to be predicted is taken as a missing value, and the virtual machine load is filled by using a missing value filling method, so that a task of predicting the virtual machine load is realized, and the virtual machine load prediction problem is converted into a missing virtual machine load value filling problem; the method comprises the following steps:
the method for converting the virtual machine load prediction problem into the missing virtual machine load value filling problem by predicting m virtual machine loads according to the actual application requirements by adopting concurrent access quantity and n virtual machine resource quantity indexes comprises the following steps:
n virtual machine resource quantity indexes, virtual machine concurrent access quantity and m virtual machine load indexes of the virtual machine jointly form a virtual machine state described by k (k=n+1+m) indexes; the first n items are virtual machine resource amounts, the n+1th item is concurrent access amount, and the last m items are virtual machine loads to be predicted;
the specific value of each state index of the virtual machine in a certain time period forms a k-dimensional virtual machine state vector; predicting the virtual machine load based on the virtual machine resource amount and the user concurrent access amount aiming at a virtual machine of the load to be predicted, namely predicting the value of the m items by using the first n+1 items of the virtual machine state vector; taking the first n+1 items of the virtual machine state vector as known values, taking the last m items as missing values, filling the missing values to complete the prediction work of the virtual machine load, and converting the virtual machine load prediction problem into a missing virtual machine load value filling problem;
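A minimal sketch of the conversion described above, in Python with NumPy; the marker value −1 and the concrete index counts (n = 2 resource indexes, m = 2 load indexes) are illustrative assumptions, not part of the patent:

```python
import numpy as np

# Hypothetical sketch of step 1: recast a load-prediction query as a
# missing-value-filling input. The marker value -1 and the index counts
# are illustrative only.
def make_state_vector(resources, concurrency, m, missing_marker=-1.0):
    """Build the k-dimensional state vector (k = n + 1 + m): the first
    n items are resource amounts, item n+1 is the concurrent access
    amount, and the last m items (the loads to predict) are missing."""
    return np.array(list(resources) + [concurrency]
                    + [missing_marker] * m, dtype=float)

x = make_state_vector(resources=[4, 8.0], concurrency=120, m=2)
# the two trailing -1 slots are the virtual machine loads to fill in
```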
step 2: constructing a GAIN-VMLP (virtual machine load prediction model) based on GAIN;
the GAIN-VMLP takes the virtual machine resource quantity and the concurrent access quantity (namely the first n+1 items of the virtual machine state vector) as known values, takes the virtual machine load to be predicted (namely the last m items of the virtual machine state vector) as a missing value, converts the virtual machine load prediction problem into a missing virtual machine load value filling problem, and fills the missing virtual machine load value by adopting GAIN, thereby completing the prediction of the virtual machine load, and constructing a GAIN-VMLP model for virtual machine load prediction;
the filling of the missing virtual machine load value by GAIN is specifically: fitting data for the missing virtual machine load value are generated by a generator, and a discriminator judges whether the data are real, thereby forming the adversarial objective;
the input of the GAIN-VMLP model is a k-dimensional virtual machine state vector with a missing virtual machine load value (the last m items are virtual machine loads to be predicted), and the output is a k-dimensional virtual machine state vector with a virtual machine load prediction value (the last m items are virtual machine loads for which results have been predicted);
the method for constructing the GAIN-VMLP based virtual machine load prediction model comprises the following specific processes:
step S1: designing an input data vector;
the missing items in the virtual machine state vector are marked by a special value that is not in the value range of any index, forming the input data vector X; for a k-dimensional virtual machine state vector, the first n+1 items (the virtual machine resource amounts and the concurrent access amount) are known, while the last m items representing the virtual machine load are missing;
step S2: designing a mask vector;
marking the missing data positions with a mask vector; the k-dimensional vector M = (M_1, ..., M_k) has components valued 0 or 1, where 1 indicates that the value of the corresponding component in X is not missing and 0 indicates that it is missing; each X corresponds to one M, and the components of M are set according to X: when component X_i (i ∈ [1, k]) in X is 0, the corresponding component M_i in M is also 0; when X_i is not 0, M_i is 1; this forms the mask vector M;
step S3: designing a random noise vector;
initially padding the missing data using random noise; a k-dimensional random noise vector Z = (Z_1, ..., Z_k) is randomly generated, each component taking a value in [0, 1], thereby forming the random noise vector Z;
step S4: designing a prompt vector;
the prompt vector strengthens the adversarial process between the generator and the discriminator: it reveals to the discriminator part of the missing-position information of the original data, making the discriminator pay more attention to the prompted part while forcing the generator to generate more realistic data; the prompt vector H is generated as follows:
step S4.1: firstly, generating a random vector B; the k-dimensional vector B = (B_1, ..., B_k) has components valued 0 or 1, generated as follows: randomly choose a number p from {1, ..., k}, set the p-th component of B to 0, and set the remaining components to 1;
step S4.2: generating the prompt vector H from B; the k-dimensional vector H = (H_1, ..., H_k) has components valued 0, 0.5 or 1 and is generated as shown in formula 1:

H = B ⊙ M + 0.5(1 − B)    (1)

where ⊙ denotes the element-wise (point-wise) product of vectors;
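Steps S4.1 and S4.2 can be sketched as follows (NumPy; `make_prompt` is a hypothetical helper name):

```python
import numpy as np

# Hypothetical sketch of steps S4.1-S4.2: build the prompt vector
# H = B * M + 0.5 * (1 - B) element-wise, where B is 1 everywhere except
# at one randomly chosen position p, which is set to 0.
def make_prompt(m, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    b = np.ones_like(m)
    b[rng.integers(len(m))] = 0.0     # hide one position from the discriminator
    return b * m + 0.5 * (1.0 - b)    # 0.5 marks the hidden position
```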
step S5: designing a generator G;
a generator is used in the GAIN-VMLP to generate the prediction data; the input of the generator is the k-dimensional input data vector X with missing values, the mask vector M and the random noise vector Z, and the output, produced through three fully connected layers, is the data vector X̄ containing the virtual machine load prediction values; since the generator predicts not only the values at the positions to be predicted but also the values of the original input data, the output vector should both make the predicted values deceive the discriminator and keep the outputs at the originally known positions close to their true values; the two loss functions of the generator are therefore shown in formulas 2 and 3:

L_G = −Σ_{i=1..k} (1 − M_i) · log(D̂_i)    (2)

L_M = Σ_{i=1..k} M_i · (X̄_i − X_i)²    (3)

where M denotes the mask vector; D̂ = (D̂_1, ..., D̂_k) denotes the discrimination result of the discriminator, whose i-th component is the estimated probability that the i-th component of the data is real (observed) rather than generated by the generator; X denotes the input data vector; X̄ denotes the data vector generated by the generator; and k denotes the dimension of the data vectors;
the goal of the generator G is to minimize the weighted sum of the two loss functions over each training batch, as shown in formula 4:

min_G Σ_{j=1..K_G} ( L_G^(j) + α · L_M^(j) )    (4)

where K_G denotes the number of training samples in each batch when the generator G is trained by gradient descent, and α is a hyperparameter;
step S6: designing a discriminator D;
the GAIN-VMLP uses a discriminator to determine whether data are real data from the dataset or fake data produced by the generator; the input of the discriminator is the output X̂ of the generator (with the observed components taken from X) together with the prompt vector H, and its output, produced through three fully connected layers, is the discrimination result D̂ expressed in probability form;
the loss function of the discriminator is shown in formula 5:

L_D = −Σ_{i=1..k} ( M_i · log(D̂_i) + (1 − M_i) · log(1 − D̂_i) )    (5)

where M denotes the mask vector and D̂ denotes the discrimination result of the discriminator, whose i-th component is the estimated probability that the i-th component of the data is real rather than generated;
based on formula 5, the training criterion of the discriminator for judging the authenticity of the predicted virtual machine load is shown in formula 6:

min_D Σ_{j=1..K_D} L_D^(j)    (6)

where K_D denotes the number of training samples in each batch when the discriminator D is trained by gradient descent;
step S7: carrying out standardized design on input data;
each dimension of the GAIN-VMLP input data is normalized and mapped into the interval [0, 1]; the normalization formula is shown in formula 7:

x' = (x − x_min) / (x_max − x_min)    (7)

where x_min and x_max denote the minimum and maximum values of the corresponding dimension in the dataset;
step 3: training a GAIN-VMLP model;
the training process of the GAIN-VMLP model is as follows:
step 3.1: processing the training data set; the training data set consists of multiple pieces of data obtained by running the virtual machine or performing benchmark tests, each piece consisting of n virtual machine resource quantity index values, the concurrent access quantity and m virtual machine load index values; a proportion of the pieces in the data set are randomly selected and their virtual machine load index values are emptied to represent missing values;
step 3.2: generating input data vectors; according to the standardized design of input data, each piece of data in the data set is normalized using formula 7, and the items whose load value is empty are marked with −1, thereby forming a data set X consisting of multiple input data vectors, i.e. normalized k-dimensional virtual machine state vectors;
step 3.3: setting a batch processing size s; training the GAIN-VMLP model using a small batch gradient descent method; the number of virtual machine state vectors input into the GAIN-VMLP per batch will be controlled according to a batch size (batch size) parameter s;
step 3.4: calculating a mask vector; generating a mask vector M for each data vector X in the data set X according to a mask vector design method, thereby forming a mask set M of the data set X;
step 3.5: and (3) performing discriminant optimization training: the optimized training process of the discriminator is as follows:
step 3.5.1: selecting s data vectors from the data set X, and simultaneously selecting the corresponding s mask vectors from the mask set M;
step 3.5.2: generating s independent, identically distributed random noise vectors Z;
step 3.5.3: generating s independent, identically distributed random vectors B;
step 3.5.4: using the generator to generate, from the s data vectors, s data vectors containing the filled values;
step 3.5.5: updating and training the discriminator D by gradient descent based on formula 5;
step 3.6: performing generator optimization training; the generator optimization training process is as follows:
step 3.6.1: selecting s data vectors from the data set X, and simultaneously selecting the corresponding s mask vectors from the mask set M;
step 3.6.2: generating s independent, identically distributed random noise vectors Z;
step 3.6.3: generating s independent, identically distributed random vectors B;
step 3.6.4: generating s prompt vectors based on formula 1;
step 3.6.5: updating and training the generator G by gradient descent based on formula 4;
step 4: predicting the load of the virtual machine by using the trained GAIN-VMLP model;
setting the virtual machine resource quantity indexes and virtual machine load indexes according to the actual application requirements, combining them with the given concurrent access quantity to form a virtual machine state vector, and inputting the vector into the GAIN-VMLP model; the model output is the virtual machine load prediction result, thereby realizing the prediction of the virtual machine load.
The invention has the following beneficial technical effects:
1. A virtual machine load prediction method based on missing value filling is constructed, the model-building problem in the prediction method is discussed, and the virtual machine load in cloud computing is predicted; the selected model and related parameters are determined according to the experimental results and the actual application requirements.
2. The invention discloses a virtual machine load prediction method based on missing value filling and introduces the prediction process and prediction algorithm of the model; experimental comparison and analysis show that the method performs well in both prediction accuracy and stability.
3. A virtual machine load prediction method based on missing value filling is provided, and the problem is solved with the missing value filling algorithm GAIN, so that the virtual machine load prediction problem in cloud computing can be effectively solved, thereby providing effective support for the elastic scaling of virtual machine resources in cloud computing.
Drawings
FIG. 1 is a schematic diagram of a virtual machine state vector for a GAIN-VMLP model according to an embodiment of the present invention;
FIG. 2 is a mask vector diagram of a GAIN-VMLP model according to an embodiment of the present invention;
FIG. 3 is a diagram of a GAIN-VMLP model architecture according to an embodiment of the present invention;
FIG. 4 shows the result of predicting CPU load according to the embodiment of the invention;
FIG. 5 shows the result of predicting the load of RAM according to the embodiment of the invention;
FIG. 6 shows experimental results of the influence of different numbers of data set samples on the algorithm in the embodiment of the invention;
FIG. 7 illustrates experimental results of the influence of different distribution forms of the data set on algorithm accuracy according to the embodiment of the present invention;
FIG. 8 shows experimental results of the influence of different iteration times on algorithm accuracy in the algorithm of the embodiment of the invention;
FIG. 9 shows experimental results of test sets with different proportions for different full-connection layers in 10000 rounds of iteration according to the embodiment of the invention;
FIG. 10 shows experimental results of test sets with different proportions when different full connection layers are iterated for 20000 rounds according to the embodiment of the invention;
FIG. 11 shows experimental results of test sets of different proportions for different full-connection layers in 30000 rounds of iteration;
FIG. 12 shows experimental results of the influence of different hyper-parameters alpha on algorithm accuracy in the embodiment of the invention;
FIG. 13 shows experimental results of the influence of different predicted positions on algorithm accuracy according to the embodiment of the present invention.
Detailed Description
The invention is further described below with reference to the drawings and examples;
a virtual machine load prediction method based on missing value filling specifically comprises the following steps:
step 1: converting the virtual machine load prediction problem into a missing virtual machine load value filling problem;
the virtual machine load prediction is to predict the load of the virtual machine in a certain period according to the concurrent access quantity and the resource quantity of the virtual machine in that period; the resource quantity of the virtual machine comprises several indexes such as CPU clock frequency, CPU core number, memory capacity, internal network bandwidth, external network bandwidth, system disk type and system disk capacity, and which indexes are specifically adopted depends on the virtual machine resource description provided by the cloud service provider and the actual application requirements; the load of the virtual machine comprises several indexes such as CPU utilization rate, memory utilization rate, IO consumption, internal network bandwidth utilization rate and external network bandwidth utilization rate, and which indexes are specifically predicted depends on the actual application requirements;
the concurrent access quantity and the resource quantity of the virtual machine to be subjected to load prediction in a certain period are taken as known values, the virtual machine load to be predicted is taken as a missing value, and the virtual machine load is filled by using a missing value filling method, so that a task of predicting the virtual machine load is realized, and the virtual machine load prediction problem is converted into a missing virtual machine load value filling problem; the method comprises the following steps:
the method for converting the virtual machine load prediction problem into the missing virtual machine load value filling problem by predicting m virtual machine loads according to the actual application requirements by adopting concurrent access quantity and n virtual machine resource quantity indexes comprises the following steps:
n virtual machine resource quantity indexes, virtual machine concurrent access quantity and m virtual machine load indexes of the virtual machine jointly form a virtual machine state described by k (k=n+1+m) indexes; the first n items are virtual machine resource amounts, the n+1th item is concurrent access amount, and the last m items are virtual machine loads to be predicted;
the specific values of the state indexes of the virtual machine in a certain time period form a k-dimensional virtual machine state vector, as shown in figure 1; predicting the virtual machine load based on the virtual machine resource amount and the user concurrent access amount aiming at a virtual machine of the load to be predicted, namely predicting the value of the m items by using the first n+1 items of the virtual machine state vector; taking the first n+1 items of the virtual machine state vector as known values, taking the last m items as missing values, filling the missing values to complete the prediction work of the virtual machine load, and converting the virtual machine load prediction problem into a missing virtual machine load value filling problem;
the embodiment of the invention is explained with an example that uses 2 virtual machine resource quantity indexes (CPU core number and memory capacity) and the concurrent access quantity to predict 2 virtual machine loads (CPU utilization rate and memory utilization rate); the 2 virtual machine resource quantity indexes, the concurrent access quantity and the 2 virtual machine load indexes jointly form a virtual machine state described by 5 indexes; the first 2 items of the virtual machine state are the CPU core number and memory capacity, the 3rd item is the concurrent access quantity, and the last 2 items are the CPU utilization rate and memory utilization rate; for a virtual machine whose load is to be predicted, the first 3 items of the virtual machine state vector are known values and the last 2 items are missing values, and the load prediction can be completed by filling in the last 2 missing values, thereby converting the virtual machine load prediction problem into a missing virtual machine load value filling problem.
Step 2: constructing a GAIN-VMLP (virtual machine load prediction model) based on GAIN; (GAIN for VM Load Prediction); as shown in fig. 3;
the GAIN-VMLP takes the virtual machine resource quantity and the concurrent access quantity (namely the first n+1 items of the virtual machine state vector) as known values, takes the virtual machine load to be predicted (namely the last m items of the virtual machine state vector) as a missing value, converts the virtual machine load prediction problem into a missing virtual machine load value filling problem, and fills the missing virtual machine load value by adopting GAIN, thereby completing the prediction of the virtual machine load, and constructing a GAIN-VMLP model for virtual machine load prediction;
the filling of the missing virtual machine load value by adopting GAIN specifically comprises the following steps: generating fitting data of the missing virtual machine load value through a generator, and judging whether the data are real through a discriminator so as to achieve the aim of countermeasure;
the input of the GAIN-VMLP model is a k-dimensional virtual machine state vector with a missing virtual machine load value (the last m items are virtual machine loads to be predicted), and the output is a k-dimensional virtual machine state vector with a virtual machine load prediction value (the last m items are virtual machine loads for which results have been predicted);
the method for constructing the GAIN-VMLP based virtual machine load prediction model comprises the following specific processes:
step S1: designing an input data vector;
the missing items in the virtual machine state vector are marked by a special value (the special value is not in the value range of any index) to form an input data vector X; for a k-dimensional virtual machine state vector, where the first n+1 terms representing the amount of concurrent access and the amount of virtual machine resources are known, while the last m terms representing the virtual machine load are missing;
step S2: designing a mask vector; the mask vector is shown in fig. 2;
marking the missing data positions through a mask vector; the k-dimensional vector M = (M_1, ..., M_k) has components valued 0 or 1, wherein 1 indicates that the value of the corresponding component in X is not missing, and 0 indicates that the value of the corresponding component in X is missing; each X corresponds to one M, and the values of the components in M are set according to X: when the component X_i (i ∈ [1, k]) in X is 0, the component M_i in the corresponding M is also 0; when the component X_i (i ∈ [1, k]) in X is not 0, the component M_i in the corresponding M is 1, thereby forming the mask vector M;
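A minimal sketch of the mask-vector construction, again assuming -1 as the hypothetical missing-value marker:

```python
import numpy as np

MISSING = -1.0  # assumed marker for absent load values

def mask_vector(x):
    """M_i = 1 where X_i is observed, 0 where it carries the missing marker."""
    return (x != MISSING).astype(float)

x = np.array([4.0, 8.0, 150.0, MISSING, MISSING])
print(mask_vector(x))  # [1. 1. 1. 0. 0.]
```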
step S3: designing a random noise vector;
initially padding the missing data using random noise; a k-dimensional random noise vector Z = (Z_1, ..., Z_k) is randomly generated, with each component taking values in [0, 1], thereby forming the random noise vector Z;
step S4: designing a prompt vector;
the prompt vector intensifies the countermeasure process of the generator and the discriminator, prompts the discriminator for partial missing information of the original data, enables the discriminator to pay more attention to the prompt part of the prompt vector, and simultaneously forces the generator to generate more real data; the generation method of the prompt vector H is as follows:
step S4.1: firstly, generating a random vector B; each component of the k-dimensional vector B = (B_1, ..., B_k) takes the value 0 or 1; the component values are generated as follows: a number p is randomly drawn from {1, ..., k}, the value of the p-th component in B is set to 0, and the values of the remaining components are set to 1;
step S4.2: generating the prompt vector H according to B; each component of the k-dimensional vector H = (H_1, ..., H_k) takes the value 0, 0.5 or 1, and H is generated as shown in equation 1:
H=B⊙M+0.5(1-B) (1)
where ⊙ denotes the element-wise (Hadamard) product of vectors;
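Equation 1 can be sketched directly; the random position p and the 0/0.5/1 value pattern follow steps S4.1 and S4.2 (numpy, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

def hint_vector(m, rng):
    """H = B ⊙ M + 0.5 (1 - B), where B has exactly one randomly chosen zero."""
    k = m.shape[0]
    b = np.ones(k)
    b[rng.integers(k)] = 0.0   # step S4.1: hide the mask at one random position p
    return b * m + 0.5 * (1.0 - b)  # step S4.2: equation 1

m = np.array([1.0, 1.0, 1.0, 0.0, 0.0])
h = hint_vector(m, rng)
# every component is 0, 0.5 or 1; exactly one component is 0.5
```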
step S5: designing a generator G;
a generator is used in the GAIN-VMLP to generate prediction data; the input of the generator is the k-dimensional input data vector X with missing values, the mask vector M and the random noise vector Z, and through three fully connected layers it outputs the data vector x̂ carrying the virtual machine load predicted values; because the generator predicts not only the values at the positions to be predicted but also the values of the original input data, the output vector must both make the predicted values deceive the discriminator and make the output at the originally observed positions approximate the true values; the two loss functions of the generator are shown in equations 2 and 3:
where m represents the mask vector; b represents the random vector; D̂ represents the discrimination result of the discriminator, indicating the probability that the data generated by the generator are non-missing data; x represents the input data vector; x̂ represents the data vector generated by the generator; k represents the dimension of the data vector x;
the goal of generator G is to minimize the weighted sum of the two loss functions, as shown in equation 4:
where K_G represents the number of training samples in each batch when the generator G is trained by the gradient descent method, and alpha is a hyperparameter;
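Equations 2-4 appear only as images in the source; judging from the variable list above (m, b, x, x̂, D̂, k, K_G, alpha), they presumably take the standard GAIN form, reproduced here as an assumption rather than a verbatim transcription:

```latex
% Assumed eq. 2 - adversarial loss on the imputed (missing) positions:
\mathcal{L}_G = -\sum_{i=1}^{k} (1 - m_i)\,\log \hat{D}_i(\hat{x}, h)

% Assumed eq. 3 - reconstruction loss on the observed positions:
\mathcal{L}_M = \sum_{i=1}^{k} m_i \,(x_i - \hat{x}_i)^2

% Assumed eq. 4 - batch-averaged weighted sum minimised by G:
\min_G \; \frac{1}{K_G} \sum_{j=1}^{K_G} \left( \mathcal{L}_G^{(j)} + \alpha\, \mathcal{L}_M^{(j)} \right)
```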
step S6: designing a discriminator D;
the GAIN-VMLP uses a discriminator to determine whether data are real data in the data set or false data generated by the generator; the input of the discriminator is the generator output x̂ and the prompt vector H, and through three fully connected layers it outputs the judgment result D̂ expressed in probability form;
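A minimal numpy sketch of a three-fully-connected-layer generator of the shape described above; the layer widths, activations and weight initialisation are assumptions, since the patent does not specify them:

```python
import numpy as np

rng = np.random.default_rng(42)
k = 5  # state-vector dimension, as in the 5-index example

def dense(d_in, d_out):
    """One fully connected layer's parameters (assumed Gaussian init)."""
    return rng.normal(0, 0.1, (d_in, d_out)), np.zeros(d_out)

# Input is [X⊙M + Z⊙(1-M), M]: noise fills the missing slots, mask appended
W1, b1 = dense(2 * k, k)
W2, b2 = dense(k, k)
W3, b3 = dense(k, k)

def generator(x, m, z):
    x_tilde = x * m + z * (1.0 - m)                 # noise-initialised input
    h = np.tanh(np.concatenate([x_tilde, m]) @ W1 + b1)
    h = np.tanh(h @ W2 + b2)
    x_bar = 1.0 / (1.0 + np.exp(-(h @ W3 + b3)))    # sigmoid: data in [0,1]
    return x * m + x_bar * (1.0 - m)                # keep observed, impute rest

x = np.array([0.5, 0.8, 0.3, 0.0, 0.0])
m = np.array([1.0, 1.0, 1.0, 0.0, 0.0])
z = rng.uniform(0, 1, k)
x_hat = generator(x, m, z)
# observed positions pass through unchanged; missing ones are imputed in (0,1)
```

The discriminator would be an analogous three-layer network mapping (x̂, H) to per-component probabilities D̂.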
The loss function of the discriminator is shown in equation 5:
where m represents the mask vector; b represents the random vector; D̂ represents the discrimination result of the discriminator, indicating the probability that the data generated by the generator are non-missing data;
based on equation 5, the training criterion of the discriminator for judging the authenticity of the predicted virtual machine load is shown in equation 6:
where K_D represents the number of training samples in each batch when the discriminator D is trained by the gradient descent method;
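Equations 5-6 are likewise images in the source; in the standard GAIN formulation, which the variable list suggests, the discriminator loss and training criterion read (assumed forms):

```latex
% Assumed eq. 5 - cross-entropy between the true mask and D's estimate:
\mathcal{L}_D = -\sum_{i=1}^{k} \left[ m_i \log \hat{D}_i(\hat{x}, h)
              + (1 - m_i) \log\bigl(1 - \hat{D}_i(\hat{x}, h)\bigr) \right]

% Assumed eq. 6 - batch-averaged criterion minimised by D:
\min_D \; \frac{1}{K_D} \sum_{j=1}^{K_D} \mathcal{L}_D^{(j)}
```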
step S7: carrying out standardized design on input data;
carrying out standardization processing on each dimension of the GAIN-VMLP input data, mapping the data into the [0,1] interval; the standardization formula is shown in equation 7:
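Equation 7 is an image in the source; the described mapping of each dimension into [0,1] is presumably the usual min-max normalisation, sketched here:

```python
import numpy as np

def minmax_normalize(data):
    """Map each column of `data` into [0, 1]: x' = (x - min) / (max - min)."""
    lo = data.min(axis=0)
    hi = data.max(axis=0)
    return (data - lo) / np.where(hi > lo, hi - lo, 1.0)  # guard constant columns

data = np.array([[2.0, 100.0],
                 [4.0, 200.0],
                 [6.0, 300.0]])
print(minmax_normalize(data))
# [[0.  0. ]
#  [0.5 0.5]
#  [1.  1. ]]
```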
step 3: training a GAIN-VMLP model;
the training process of the GAIN-VMLP model is as follows:
step 3.1: processing the training data set; the training data set consists of a plurality of pieces of data obtained by running the virtual machine or performing benchmark tests, wherein each piece of data consists of n virtual machine resource quantity index values, the concurrent access quantity and m virtual machine load index values; a plurality of pieces of data in the data set are randomly selected according to a proportion, and their virtual machine load index values are emptied to represent missing values;
step 3.2: generating input data vectors; according to the standardized design of the input data, each piece of data in the data set is standardized by adopting equation 7, and the items with empty load values in each piece of data are marked with -1, thereby forming a data set X consisting of a plurality of standardized k-dimensional virtual machine state vectors as input data vectors;
step 3.3: setting a batch processing size s; training the GAIN-VMLP model using a small batch gradient descent method; the number of virtual machine state vectors input into the GAIN-VMLP per batch will be controlled according to a batch size (batch size) parameter s; if s is set to 4, then 4 virtual machine state vectors will be selected for each batch to be entered into the GAIN-VMLP for processing.
Step 3.4: calculating a mask vector; generating a mask vector M for each data vector X in the data set X according to a mask vector design method, thereby forming a mask set M of the data set X;
step 3.5: performing discriminator optimization training; the optimization training process of the discriminator is as follows:
step 3.5.1: selecting s data vectors from the data set X, and simultaneously selecting the mask vectors corresponding to these s data vectors from the mask set M;
Step 3.5.2: generating s independent random noise Z distributed in the same way;
step 3.5.3: generating s independent random vectors B distributed in the same way;
step 3.5.4: generating s data vectors from the s selected data vectors by using the generator;
Step 3.5.5: updating and training the discriminator D based on equation 5 by using the gradient descent method;
step 3.6: performing generator optimization training; the generator optimization training process is as follows:
step 3.6.1: selecting s data vectors from the data set X, and simultaneously selecting the mask vectors corresponding to these s data vectors from the mask set M;
Step 3.6.2: generating s independent random noise Z distributed in the same way;
step 3.6.3: generating s independent random vectors B distributed in the same way;
step 3.6.4: generating s prompt vectors based on equation 1;
Step 3.6.5: updating and training the generator G based on equation 4 by using the gradient descent method;
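The per-batch data preparation shared by steps 3.5 and 3.6 can be sketched as follows (numpy, illustrative only; batch size s = 4 and k = 5 are assumed values matching the examples above):

```python
import numpy as np

rng = np.random.default_rng(7)
s, k = 4, 5  # batch size and state-vector dimension

X = rng.uniform(0, 1, (100, k))
X[:, -2:] = -1.0                       # last m = 2 items cleared (missing loads)
M = (X != -1.0).astype(float)          # mask set for the data set (step 3.4)

def sample_batch(X, M, rng):
    idx = rng.integers(0, X.shape[0], s)            # steps 3.5.1 / 3.6.1
    Z = rng.uniform(0, 1, (s, k))                   # steps 3.5.2 / 3.6.2
    B = np.ones((s, k))                             # steps 3.5.3 / 3.6.3
    B[np.arange(s), rng.integers(0, k, s)] = 0.0    # one hidden position per row
    H = B * M[idx] + 0.5 * (1.0 - B)                # step 3.6.4 (equation 1)
    return X[idx], M[idx], Z, H

x_b, m_b, z_b, h_b = sample_batch(X, M, rng)
# one gradient step on D (eq. 5) and one on G (eq. 4) would follow per batch
```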
step 4: predicting the load of the virtual machine by using the trained GAIN-VMLP model;
setting a virtual machine resource quantity index and a virtual machine load index according to actual application requirements, combining given concurrency quantity to form a virtual machine state vector, inputting the virtual machine state vector into a GAIN-VMLP model, and obtaining a model output result which is a virtual machine load prediction result, thereby realizing the prediction of the load of the virtual machine.
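Step 4 can be sketched as follows; `predict_load` and the toy stand-in model are hypothetical names used only to illustrate the assembled-vector-in, imputed-loads-out interface:

```python
import numpy as np

MISSING = -1.0  # assumed missing-value marker

def predict_load(model, resources, concurrency, m=2):
    """Assemble a state vector with the m load items missing and impute them.
    `model` is a trained GAIN-VMLP generator-style callable (assumed interface)."""
    x = np.concatenate([resources, [concurrency], np.full(m, MISSING)])
    mask = (x != MISSING).astype(float)
    z = np.random.default_rng(0).uniform(0, 1, x.shape[0])
    x_hat = model(x, mask, z)
    return x_hat[-m:]  # predicted CPU and memory utilisation

# toy stand-in for a trained model: returns 0.5 at every missing position
toy = lambda x, m, z: x * m + 0.5 * (1.0 - m)
print(predict_load(toy, np.array([4.0, 8.0]), 150.0))  # [0.5 0.5]
```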
The invention comprises a virtual machine load prediction method based on missing value filling, the prediction process of the model and the prediction algorithm; a method for predicting the CPU load and the RAM load of a virtual machine by adopting the GAIN algorithm is provided.
A virtual machine load prediction example based on missing value filling is as follows:
designing an experimental scheme;
1. Experimental data set. Data are acquired through the cloud platform, and it is assumed that only one service runs on each virtual machine. 16 static virtual machine resource allocation schemes are preset, and 9 concurrent user access frequency distributions (Poisson, normal, uniform, chi-square, mutation, exponential, gradually increasing, gradually decreasing and random) are simulated, so as to form different virtual machine resource load conditions (namely different virtual machine resource states); 100 pieces of data are generated for each access-frequency distribution under each allocation scheme, giving 14400 pieces of experimental data in total, which are divided into training data and test data.
2. Experimental protocol. Based on the data set, seven experimental schemes are designed by considering the influence of different parameters on algorithm operation and results, and Root Mean Square Error (RMSE) is used as an evaluation basis. The smaller the RMSE, the smaller the gap between the predicted result and the actual value. The definition of RMSE is shown in equation 9.
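Equation 9 is an image in the source; RMSE in its standard form is:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between actual and predicted load values."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

print(rmse([0.2, 0.4, 0.6], [0.2, 0.5, 0.6]))  # ≈ 0.0577
```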
Experiment 1: used for exploring, when the GAIN algorithm predicts the virtual machine load, the difference in precision between predicting the CPU load and the RAM load separately and predicting the two loads simultaneously. Eight sets of comparative experiments were designed, increasing the ratio of the test set from 0.1 to 0.8 with a step size of 0.1. In each group of experiments, the GAIN algorithm was used to predict the CPU load alone, the RAM load alone, and both the CPU and RAM loads simultaneously, and the RMSE was recorded.
Experiment 2: the method is used for exploring the influence of different numbers of data set samples in the GAIN algorithm on the algorithm. The test set ratio was set to 0.2, the sample size was gradually increased from 2000 to 18000, the step size was 2000, and the RMSE of the algorithm was recorded.
Experiment 3: used for exploring the influence of different distribution forms of the data set on the accuracy of the GAIN algorithm. Experiments were performed on the 9 distribution types, namely Poisson, normal, uniform, chi-square, mutation, exponential, increasing, decreasing and random distributions.
Experiment 4: the method is used for exploring the influence of different iteration times in the GAIN algorithm on the algorithm accuracy. Gradually increasing the iteration round number of the GAIN algorithm from 1000 times to 10000 times, wherein the step length is 1000, and recording the RMSE.
Experiment 5: the method is used for exploring the influence of different full connection layer numbers in the GAIN algorithm on algorithm accuracy. The default GAIN is three full connection layers, the number of full connection layers is increased from 3 to 5 in this experiment, the step size is 1, and the RMSE is recorded for different numbers of full connection layers.
Experiment 6: the method is used for exploring the influence of different super-parameters alpha in the GAIN algorithm on the accuracy of the algorithm. The hyper-parameter alpha affects the loss function of generator G, and the set of settings hyper-parameter alpha is incremented from 1 to 512.
Experiment 7: the method is used for exploring the influence of different positions to be predicted in the GAIN algorithm on the accuracy of the algorithm. The position to be predicted by the virtual machine prediction algorithm based on GAIN is the last two columns, namely, the CPU load and the RAM load are placed at the end. The experiment sets three groups of control experiments, the data to be predicted are respectively placed in the first two columns, the middle two columns and the last two columns of the data set, RMSE is respectively recorded, and the influence of different prediction positions on algorithm accuracy is explored.
Experimental results and analysis;
Experiment 1: a comparison between the results of the GAIN algorithm predicting the CPU load alone and the CPU load predictions obtained when predicting the CPU and RAM loads simultaneously is shown in fig. 4; the corresponding comparison for the RAM load is shown in fig. 5.
As can be seen from fig. 4 and 5, the GAIN algorithm predicts the CPU load and the RAM load simultaneously with almost the same accuracy as when predicting the two loads separately. Predicting both loads at once saves half of the training time with almost no loss of prediction accuracy, greatly reducing the cost of training the model.
Experiment 2: the test was performed according to the parameters of the experiment three in the previous section, and only the CPU load was predicted, and the test results are shown in fig. 6. As can be seen from fig. 6, as the data set increases, RMSE tends to decrease gradually, and the algorithm solution accuracy increases gradually.
Experiment 3: only the CPU load was predicted, and the experimental results are shown in fig. 7. As can be seen from fig. 7, under different concurrency distributions, the prediction errors of the GAIN algorithm and the ELM algorithm are relatively stable, the prediction errors of the BP algorithm and the DBN algorithm have relatively large fluctuation, and the GAIN algorithm has the smallest prediction error and the best effect, which indicates that the method provided herein has relatively high accuracy and stability in predicting the CPU load.
Experiment 4: only the CPU load was predicted, and the experimental results are shown in fig. 8. As can be seen from fig. 8, as the number of iteration rounds increases, the solution accuracy of the GAIN algorithm increases.
Experiment 5: only the CPU load was predicted, and the results are shown in fig. 9, 10 and 11. Within ten thousand rounds of iteration, the results of the five-fully-connected-layer GAIN algorithm fluctuate relatively strongly and its stability is relatively poor, while the algorithm is relatively stable within twenty thousand and thirty thousand rounds of iteration. When the number of iterations exceeds twenty-four thousand, the precision of the five-layer network is slightly higher than that of the three-layer network, the precision improvement being relatively large particularly at twenty-four thousand rounds.
Experiment 6: only the CPU load was predicted, and the results are shown in fig. 12. As can be seen from fig. 12, when alpha is below 128 the accuracy of the algorithm is relatively high; once alpha exceeds 128, the RMSE of the algorithm increases greatly and its accuracy drops sharply.
Experiment 7: only the CPU load was predicted, and the results are shown in fig. 13. As can be seen from fig. 13, under different test set ratios, the different prediction positions have little effect on the accuracy of the algorithm. When the test set ratio exceeds 0.5, the error of predicting at the middle position is slightly larger than at the other two positions.

Claims (9)

1. The virtual machine load prediction method based on missing value filling is characterized by comprising the following steps:
step 1: converting the virtual machine load prediction problem into a missing virtual machine load value filling problem;
step 2: constructing a GAIN-based virtual machine load prediction model GAIN-VMLP;
step 3: training a GAIN-VMLP model;
step 4: predicting the load of the virtual machine by using the trained GAIN-VMLP model;
setting a virtual machine resource quantity index and a virtual machine load index according to actual application requirements, combining given concurrency quantity to form a virtual machine state vector, inputting the virtual machine state vector into a GAIN-VMLP model, and obtaining a model output result which is a virtual machine load prediction result, thereby realizing the prediction of the load of the virtual machine.
2. The virtual machine load prediction method based on missing value filling according to claim 1, wherein the virtual machine load prediction in step 1 predicts the load of a virtual machine in a certain period according to the concurrent access amount and the resource amount of the virtual machine in the certain period; and setting indexes of the concurrent access quantity, the resource quantity and the load according to actual application requirements.
3. The virtual machine load prediction method based on missing value filling according to claim 1, wherein step 1 specifically comprises:
the concurrent access quantity and the resource quantity of the virtual machine to be subjected to load prediction in a certain period are taken as known values, the virtual machine load to be predicted is taken as a missing value, and the virtual machine load is filled by using a missing value filling method, so that a task of predicting the virtual machine load is realized, and the virtual machine load prediction problem is converted into a missing virtual machine load value filling problem; the method comprises the following steps:
assuming that, according to actual application requirements, the concurrent access quantity and n virtual machine resource quantity indexes are adopted to predict m virtual machine loads, the method for converting the virtual machine load prediction problem into the missing virtual machine load value filling problem comprises the following steps:
n virtual machine resource quantity indexes, virtual machine concurrent access quantity and m virtual machine load indexes of the virtual machine jointly form a virtual machine state described by k (k=n+1+m) indexes; the first n items are virtual machine resource amounts, the n+1th item is concurrent access amount, and the last m items are virtual machine loads to be predicted;
the specific value of each state index of the virtual machine in a certain time period forms a k-dimensional virtual machine state vector; predicting the virtual machine load based on the virtual machine resource amount and the user concurrent access amount aiming at a virtual machine of the load to be predicted, namely predicting the value of the m items by using the first n+1 items of the virtual machine state vector; and taking the first n+1 items of the virtual machine state vector as known values, taking the last m items as missing values, filling the missing values to complete the prediction work of the virtual machine load, and converting the virtual machine load prediction problem into a missing virtual machine load value filling problem.
4. The virtual machine load prediction method based on missing value filling according to claim 1, wherein in step 2 the GAIN-VMLP takes the virtual machine resource amount and the concurrent access amount, i.e. the first n+1 items of the virtual machine state vector, as known values, takes the virtual machine load to be predicted, i.e. the last m items of the virtual machine state vector, as missing values, converts the virtual machine load prediction problem into a missing virtual machine load value filling problem, and fills the missing virtual machine load values by adopting GAIN, thereby completing the prediction of the virtual machine load and constructing the GAIN-VMLP model for virtual machine load prediction;
the filling of the missing virtual machine load value by adopting GAIN specifically comprises the following steps: generating fitting data of the missing virtual machine load value through a generator, and judging whether the data are real through a discriminator so as to achieve the aim of countermeasure;
the input of the GAIN-VMLP model is the k-dimensional virtual machine state vector with the missing virtual machine load value, namely the last m items are virtual machine loads to be predicted, and the output is the k-dimensional virtual machine state vector with the virtual machine load predicted value, namely the last m items are virtual machine loads for which the result is predicted.
5. The virtual machine load prediction method based on missing value filling according to claim 1, wherein in the step 2, the virtual machine load prediction model GAIN-VMLP based on GAIN is constructed, and the specific process is as follows:
step S1: designing an input data vector;
the missing items in the virtual machine state vector are marked by a special value which is not in the value range of any index to form an input data vector X; for a k-dimensional virtual machine state vector, the first n+1 terms of concurrent access amount and virtual machine resource amount are known, while the last m terms representing virtual machine load are missing;
step S2: designing a mask vector;
marking the missing data positions through a mask vector; the k-dimensional vector M = (M_1, ..., M_k) has components valued 0 or 1, wherein 1 indicates that the value of the corresponding component in X is not missing, and 0 indicates that the value of the corresponding component in X is missing; each X corresponds to one M, and the values of the components in M are set according to X: when the component X_i (i ∈ [1, k]) in X is 0, the component M_i in the corresponding M is also 0; when the component X_i (i ∈ [1, k]) in X is not 0, the component M_i in the corresponding M is 1, thereby forming the mask vector M;
step S3: designing a random noise vector;
initially padding the missing data using random noise; a k-dimensional random noise vector Z = (Z_1, ..., Z_k) is randomly generated, with each component taking values in [0, 1], thereby forming the random noise vector Z;
step S4: designing a prompt vector;
the prompt vector intensifies the countermeasure process of the generator and the discriminator, prompts the discriminator for partial missing information of the original data, enables the discriminator to pay more attention to the prompt part of the prompt vector, and simultaneously forces the generator to generate more real data;
step S5: designing a generator G;
a generator is used in the GAIN-VMLP to generate prediction data; the input of the generator is the k-dimensional input data vector X with missing values, the mask vector M and the random noise vector Z, and through three fully connected layers it outputs the data vector x̂ carrying the virtual machine load predicted values; the output vector must both make the predicted values deceive the discriminator and make the output at the originally observed positions approximate the true values; the two loss functions of the generator are therefore shown in equations 2 and 3:
where m represents the mask vector; b represents the random vector; D̂ represents the discrimination result of the discriminator, indicating the probability that the data generated by the generator are non-missing data; x represents the input data vector; x̂ represents the data vector generated by the generator; k represents the dimension of the data vector x;
the goal of generator G is to minimize the weighted sum of the two loss functions, as shown in equation 4:
where K_G represents the number of training samples in each batch when the generator G is trained by the gradient descent method, and alpha is a hyperparameter;
step S6: designing a discriminator D;
the GAIN-VMLP uses a discriminator to determine whether data are real data in the data set or false data generated by the generator; the input of the discriminator is the generator output x̂ and the prompt vector H, and through three fully connected layers it outputs the judgment result D̂ expressed in probability form;
The loss function of the discriminator is shown in equation 5:
where m represents the mask vector; b represents the random vector; D̂ represents the discrimination result of the discriminator, indicating the probability that the data generated by the generator are non-missing data;
based on equation 5, the training criterion of the discriminator for judging the authenticity of the predicted virtual machine load is shown in equation 6:
where K_D represents the number of training samples in each batch when the discriminator D is trained by the gradient descent method;
step S7: carrying out standardized design on input data;
carrying out standardization processing on each dimension of the GAIN-VMLP input data, mapping the data into the [0,1] interval; the standardization formula is shown in equation 7:
6. The virtual machine load prediction method based on missing value filling of claim 5, wherein the generation method of the prompt vector H in step S4 is as follows:
step S4.1: firstly, generating a random vector B; each component of the k-dimensional vector B = (B_1, ..., B_k) takes the value 0 or 1; the component values are generated as follows: a number p is randomly drawn from {1, ..., k}, the value of the p-th component in B is set to 0, and the values of the remaining components are set to 1;
step S4.2: generating the prompt vector H according to B; each component of the k-dimensional vector H = (H_1, ..., H_k) takes the value 0, 0.5 or 1, and H is generated as shown in equation 1:
H=B⊙M+0.5(1-B) (1)
where ⊙ denotes the element-wise (Hadamard) product of vectors.
7. The virtual machine load prediction method based on missing value filling according to claim 1, wherein step 3 specifically comprises:
step 3.1: processing the training data set; the training data set consists of a plurality of pieces of data obtained by running the virtual machine or performing benchmark tests, wherein each piece of data consists of n virtual machine resource quantity index values, the concurrent access quantity and m virtual machine load index values; a plurality of pieces of data in the data set are randomly selected according to a proportion, and their virtual machine load index values are emptied to represent missing values;
step 3.2: generating input data vectors; according to the standardized design of the input data, each piece of data in the data set is standardized by adopting equation 7, and the items with empty load values in each piece of data are marked with -1, thereby forming a data set X consisting of a plurality of standardized k-dimensional virtual machine state vectors as input data vectors;
step 3.3: setting a batch processing size s; training the GAIN-VMLP model using a small batch gradient descent method; the number of virtual machine state vectors input into the GAIN-VMLP per batch will be controlled according to a batch size (batch size) parameter s;
step 3.4: calculating a mask vector; generating a mask vector M for each data vector X in the data set X according to a mask vector design method, thereby forming a mask set M of the data set X;
step 3.5: performing discriminator optimization training;
step 3.6: and performing generator optimization training.
8. The virtual machine load prediction method based on missing value filling of claim 7, wherein the optimization training process of the discriminator in step 3.5 is as follows:
step 3.5.1: selecting s data vectors from the data set X, and simultaneously selecting the mask vectors corresponding to these s data vectors from the mask set M;
Step 3.5.2: generating s independent random noise Z distributed in the same way;
step 3.5.3: generating s independent random vectors B distributed in the same way;
step 3.5.4: generating s data vectors from the s selected data vectors by using the generator;
Step 3.5.5: the discriminator D is updated and trained based on equation 5 using the gradient descent method.
9. The virtual machine load prediction method based on missing value filling of claim 7, wherein the optimization training process of the generator in step 3.6 is as follows:
step 3.6.1: selecting s data vectors from the data set X, and simultaneously selecting the mask vectors corresponding to these s data vectors from the mask set M;
Step 3.6.2: generating s independent random noise Z distributed in the same way;
step 3.6.3: generating s independent random vectors B distributed in the same way;
step 3.6.4: generating s prompt vectors based on equation 1;
Step 3.6.5: the training generator G is updated based on equation 4 using a gradient descent method.
CN202310520971.8A 2023-05-10 2023-05-10 Virtual machine load prediction method based on missing value filling Pending CN116594855A (en)

Publications (1)

Publication Number Publication Date
CN116594855A true CN116594855A (en) 2023-08-15



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination