CN117851031A - Cloud resource prediction method, device and medium based on CNN and GRU - Google Patents

Cloud resource prediction method, device and medium based on CNN and GRU

Info

Publication number
CN117851031A
CN117851031A (application CN202311660658.0A)
Authority
CN
China
Prior art keywords
cnn
model
gru
cloud resource
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311660658.0A
Other languages
Chinese (zh)
Inventor
王小刚
黄超
肖顿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Dianji University
Original Assignee
Shanghai Dianji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Dianji University filed Critical Shanghai Dianji University
Priority to CN202311660658.0A priority Critical patent/CN117851031A/en
Publication of CN117851031A publication Critical patent/CN117851031A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention relates to a cloud resource prediction method, device and medium based on CNN and GRU. The method comprises the following steps: acquiring historical cloud resource data and performing normalization and sliding-window creation; establishing a CNN-GRU model that takes the historical cloud resource data as input and outputs a cloud resource prediction result, wherein the CNN-GRU model comprises a CNN model, a GRU model and a weighted fusion output layer, and the weighted fusion output layer fuses the CNN and GRU outputs by a weighted average; defining the loss function of the CNN-GRU model as the mean squared error, taking the historical cloud resource data as the training set, and training the model with stochastic gradient descent to minimize the loss function; and taking historical cloud resource data as input, performing cloud resource prediction with the trained CNN-GRU model to generate a cloud resource usage prediction for a future time point. Compared with the prior art, the method achieves more accurate prediction.

Description

Cloud resource prediction method, device and medium based on CNN and GRU
Technical Field
The invention relates to the field of cloud resource prediction, in particular to a cloud resource prediction method, device and medium based on CNN and GRU.
Background
In cloud computing, efficient management of resources is critical to improving performance, reducing costs, and providing a good user experience. Currently, most cloud resource prediction methods combine historical cloud resource usage data to convert the planning of user cloud resource demand into a regression problem, so the configuration and optimized provisioning of computing resources largely relies on the analysis of resource usage data. Existing cloud computing resource usage prediction methods mostly rely on univariate time-series prediction models. These models consider only a single resource usage index, such as CPU utilization, memory utilization or network bandwidth utilization, make future predictions from the historical data of the target index, and provide only single-step predictions, ignoring the multidimensional relationships between variables and struggling to handle nonlinear relationships.
Disclosure of Invention
The invention aims to provide a cloud resource prediction method, device and medium based on CNN and GRU, which use a gated recurrent unit and a convolutional neural network and train a model on historical data of the cloud resources associated with users, such as resource utilization, performance indices and log data, so as to predict resource requirements over a future period more accurately and thereby help cloud service providers and users make better resource allocation decisions.
The aim of the invention can be achieved by the following technical scheme:
a cloud resource prediction method based on CNN and GRU comprises the following steps:
s1, acquiring historical cloud resource data, and carrying out normalization processing and sliding window creation;
s2, establishing a CNN-GRU model taking historical cloud resource data as input and a cloud resource prediction result as output, wherein the CNN-GRU model comprises a CNN model, a GRU model and a weighted fusion output layer, the output of the CNN model is used as the input of the GRU model, and the weighted fusion output layer fuses the outputs of the CNN model and the GRU model by using a weighted average method to obtain a cloud resource prediction result;
s3, defining a loss function of the CNN-GRU model by using a mean square error, training the model by using a random gradient descent method to minimize the loss function by taking historical cloud resource data as a training set, and adjusting model parameters;
and S4, taking historical cloud resource data as input, and carrying out cloud resource prediction by utilizing the trained CNN-GRU model to generate cloud resource use prediction at a future time point.
The historical cloud resource data comprises the average CPU usage, maximum CPU usage and maximum memory usage of the virtual machine.
The normalization process adopts Min-Max normalization.
The sliding window creation is specifically:
assume that normalized time-series data is represented as X normalized The size of the time window is T, the stride is S, and the subsequence of the ith time window is expressed as: x is X window =[X i ,X i+1 ,...,X i+T-1 ]Wherein X is i Is the i-th data point of the time series, i=1, 2,..n-t+1, N is the total length of the time series.
The CNN model comprises an input layer, a convolution layer, a downsampling layer, a fully connected layer and an output layer, wherein a separate convolution layer is designed for each input feature and each convolution layer is responsible for extracting the features of its corresponding input, and an attention mechanism is introduced between the convolution layer and the downsampling layer to dynamically adjust, through weights, the degree to which the convolution and downsampling operations are performed at each position.
The weighted fusion output layer utilizes a weighted average method to fuse the outputs of the CNN model and the GRU model, and the method specifically comprises the following steps:
assuming that the output of the GRU model is O GRU The output of the CNN model is O CNN The weighted fusion output layer obtains the final output O through weighted average combination final
O final =α*O GRU +(1-α)*O CNN
Where a is a weight parameter.
The loss function of the CNN-GRU model at time step t is calculated as the average of the squared differences between the true values and the model predictions:
L_t = (1/F) * Σ_{f=1}^{F} (y_{t,f} - ŷ_{t,f})^2
where y_{t,f} is the true value of the training data for the f-th feature at time step t, ŷ_{t,f} is the model prediction for the f-th feature at time step t, and F is the number of features.
In the training process of the CNN-GRU model, in each training iteration the gradient of the loss function with respect to the model parameters is computed and the parameters are updated in the negative gradient direction. For a single parameter ω the update is:
ω ← ω - η * ∂L/∂ω
where η is the learning rate, which controls the step length of the parameter update, and ∂L/∂ω is the partial derivative of the loss function with respect to the parameter ω.
For the parameter set θ of the entire model, the update rule is:
θ ← θ - η * ∇_θ L
where ∇_θ L is the gradient of the loss function with respect to the model parameters.
A cloud resource prediction device based on CNN and GRU, comprising a memory, a processor and a program stored in the memory, wherein the processor realizes the method when executing the program.
A storage medium having stored thereon a program which when executed performs a method as described above.
Compared with the prior art, the invention has the following beneficial effects:
(1) The GRU model adopted by the invention can better handle long-term dependencies in sequence data, so the model learns and predicts complex time-series patterns more effectively; it also enables dynamic model adjustment, automatically adjusting the model weights and parameters according to real-time cloud resource usage and trends so as to better adapt to changes across different time periods.
(2) Cloud resource usage may exhibit local and locally correlated patterns that traditional methods find difficult to capture. By introducing the convolutional neural network (CNN) model, the invention can effectively extract these local patterns and learn complex relationships in time and space, thereby capturing cloud resource usage patterns across different regions and times more accurately. This allows the model to understand the spatial distribution and temporal dynamics of cloud resources more comprehensively and better reflect the structure of the data.
(3) The method and device can accurately predict future cloud resource demand, improve cloud resource utilization, reduce energy consumption and cost, shorten user response time and improve the user experience.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of a CNN-GRU model structure according to the present invention;
FIG. 3 is a schematic diagram of a process for processing average CPU utilization using a CNN-GRU model in an embodiment of the invention.
Detailed Description
The invention will now be described in detail with reference to the drawings and specific examples. The present embodiment is implemented on the premise of the technical scheme of the present invention, and a detailed implementation manner and a specific operation process are given, but the protection scope of the present invention is not limited to the following examples.
In order to accurately predict future cloud resource demand so that a cloud service provider can effectively allocate and manage resources and avoid resource over- or under-provisioning, this embodiment provides a cloud resource prediction method based on a CNN (convolutional neural network) and a GRU (gated recurrent unit), as shown in fig. 1, comprising the following steps:
s1, acquiring historical cloud resource data, carrying out normalization processing and sliding window creation, and dividing time sequence data into a plurality of subsequences to be used as input of a CNN-GRU model.
In this embodiment, the historical cloud resource data includes average CPU usage, maximum CPU usage, and maximum memory usage of the virtual machine, which may be from a monitoring system, log file, or other data source. At the same time, features related to cloud resource usage are extracted, which may include:
time-dependent features: historical resource usage data, such as CPU usage at the previous time, memory usage at the previous day, etc.;
statistical characteristics: such as mean, variance, etc. within a sliding window.
The raw data extracted from the virtual machine is normalized, which improves the accuracy and training speed of the model and reduces the influence of outliers on the model. Normalizing the historical cloud resource usage data of virtual machines typically involves mapping data of different scales and ranges onto the same standard scale for better comparison and analysis. This embodiment uses the common Min-Max normalization, which maps the data linearly to a specified range, typically [0, 1]. The detailed Min-Max normalization formula is as follows:
Assuming the original data set is X, its minimum value is X_min, its maximum value is X_max, and the normalized data is denoted X_normalized, the Min-Max normalization formula is:
X_normalized = (X - X_min) / (X_max - X_min)
This formula maps the original data X into the range [0, 1]. The advantage of this approach is its simplicity; the disadvantage is its sensitivity to outliers, because it depends on the minimum and maximum of the data. If normalization to a range other than [0, 1] is desired, the following formula can be used:
X_scaled = X_normalized * (new_max - new_min) + new_min
where new_max and new_min are the maximum and minimum values of the required new range.
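By way of illustration, a minimal NumPy sketch of the Min-Max scaling described above; the function name and the column-wise handling of multi-feature data are illustrative assumptions, and in practice the scaling statistics would be computed on the training split only and reused for the validation and test data:

```python
import numpy as np

def min_max_normalize(x, new_min=0.0, new_max=1.0):
    """Min-Max scale each column of x into [new_min, new_max]."""
    x = np.asarray(x, dtype=float)
    x_min = x.min(axis=0)
    x_max = x.max(axis=0)
    scaled = (x - x_min) / (x_max - x_min)           # maps each column to [0, 1]
    return scaled * (new_max - new_min) + new_min    # optional re-mapping to another range
```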
Creating time windows over the data helps capture the time series, provides input features and increases the stability of the model. Creating sliding windows is a common way of dividing time-series data into multiple subsequences that serve as model inputs. The size of the time window and the stride defining the interval between windows are the two key parameters, where the window size defines the length of each subsequence. Assume the normalized time-series data is denoted X_normalized, the size of the time window is T and the stride is S; the subsequence of the i-th time window is then expressed as X_window = [X_i, X_{i+1}, ..., X_{i+T-1}], where X_i is the i-th data point of the time series, i = 1, 2, ..., N-T+1, and N is the total length of the time series. Each time window is therefore a subsequence of length T extracted from the original sequence, and S is the stride between windows, i.e. the starting point of the next window is the starting point of the current window plus the stride.
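A corresponding sketch of the sliding-window creation, again as one possible implementation; window_size corresponds to T and stride to S in the formula above:

```python
import numpy as np

def make_windows(series, window_size, stride=1):
    """Cut a normalized (N, F) series into overlapping windows of length T with stride S."""
    series = np.asarray(series)
    windows = [series[start:start + window_size]
               for start in range(0, len(series) - window_size + 1, stride)]
    return np.stack(windows)                         # shape: (number of windows, T, F)
```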
S2, establishing a CNN-GRU model taking historical cloud resource data as input and a cloud resource prediction result as output.
The GRU is a recurrent neural network (RNN) variant suited to time-series data; its gating mechanism allows it to effectively capture and learn long-term dependencies in time-series data, which makes it excel at processing time-related cloud resource usage data. The convolutional neural network (CNN) is used to extract spatial information from the time-series data: through the convolution layers, the model can capture local patterns and features in the resource usage data and improve the representational capacity of the data. By fusing the temporal and spatial information, the invention fully exploits the advantages of deep learning and improves the accuracy and efficiency of cloud resource demand prediction.
As shown in fig. 2, the CNN-GRU model includes a CNN model for extracting local features, a GRU model for capturing long-term dependencies in the time-series data, and a weighted fusion output layer.
In the CNN model, the present embodiment uses multiple convolution layers to extract different types of features, including the average CPU usage, the maximum memory usage, and so on. Each convolution layer can learn feature representations at different levels of abstraction. Specifically, a separate convolution layer can be designed for each input (average CPU usage, maximum CPU usage and maximum memory usage) so that the network learns the characteristics of each input separately; each convolution layer is then responsible for extracting the features of its corresponding input. Input layer: represents the input data, each node representing one feature of the input data. Convolution layer: performs convolution operations for extracting features from the input data; the convolution kernel slides over the input data and produces a convolution feature map. Downsampling layer (pooling layer): reduces the spatial dimension of the feature map while preserving important information; common pooling operations include max pooling and average pooling. Fully connected layer: connects all nodes and is used to learn global features. Output layer: the final output layer of the CNN model, which typically uses a softmax activation function for the probability output of classification problems. Furthermore, this embodiment adds an attention mechanism between the convolution layer and the downsampling layer in order to dynamically adjust the pooling using the output of the attention mechanism. Specifically, the output of the attention mechanism can be used as a weight that determines the importance of each position; these weights can then adjust the strength of the convolution and pooling operations at each position, or decide whether to perform these operations at all. The advantage of this approach is that the network can dynamically decide the degree to which the convolution and pooling operations are performed at each position, thereby adapting more flexibly to the characteristics of the input data.
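The following Keras sketch illustrates one possible reading of this design: a separate Conv1D branch per input feature with a simple sigmoid attention gate inserted between the convolution and the pooling layer. Layer sizes, names and the exact form of the gate are illustrative assumptions only.

```python
from tensorflow.keras import layers, Model

T = 24                                                # window length (assumed)
feature_names = ["avg_cpu", "max_cpu", "max_mem"]     # the three inputs described above

inputs, branches = [], []
for name in feature_names:
    inp = layers.Input(shape=(T, 1), name=name)
    conv = layers.Conv1D(32, 3, padding="same", activation="relu")(inp)
    # attention gate: per-position weights in [0, 1] that scale the convolution output
    # before pooling, so downsampling contributes less where the weights are small
    gate = layers.Dense(32, activation="sigmoid")(conv)
    gated = layers.Multiply()([conv, gate])
    pooled = layers.MaxPooling1D(pool_size=2)(gated)
    inputs.append(inp)
    branches.append(pooled)

merged = layers.Concatenate(axis=-1)(branches)        # per-feature maps stacked along channels
cnn_backbone = Model(inputs, merged, name="per_feature_cnn")
```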
The output of the CNN model serves as the input to the GRU model. In the GRU model, x_t denotes the input at the current time step and h_{t-1} denotes the hidden state of the previous time step. The reset gate r_t determines whether the previous hidden state is ignored, and the update gate z_t controls how much of the previous hidden-state information is retained at the current time step. The new hidden state h_t of the current time step is a function of h_{t-1} and the input x_t. As shown in FIG. 2, if the task is complex and a more powerful model is needed to capture abstract features and patterns in the data, a fully connected layer is added after the GRU model. The flattened output of the gated recurrent unit is connected to a fully connected layer, whose output is:
FC_output = σ(W_FC * Flatten(h_t) + b_FC)
where W_FC is the weight matrix and b_FC is the bias term.
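For reference, one common convention for the gate computations behind r_t, z_t and h_t; the explicit forms are not spelled out above, so these standard GRU equations are given only as an assumption of what FIG. 2 depicts:

```latex
\begin{aligned}
r_t &= \sigma\left(W_r x_t + U_r h_{t-1} + b_r\right) \\
z_t &= \sigma\left(W_z x_t + U_z h_{t-1} + b_z\right) \\
\tilde{h}_t &= \tanh\left(W_h x_t + U_h (r_t \odot h_{t-1}) + b_h\right) \\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t
\end{aligned}
```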
The weighted fusion output layer fuses the outputs of the CNN model and the GRU model by a weighted average to obtain the cloud resource prediction result. Assuming the output of the GRU model is O_GRU and the output of the CNN model is O_CNN, the weighted fusion output layer obtains the final output O_final as the weighted combination:
O_final = α * O_GRU + (1 - α) * O_CNN
where α is a weight parameter. O_final represents the predictions of the average CPU usage, maximum CPU usage and maximum memory usage of the virtual machine over the following period of time.
In one embodiment, the process of weighted averaging is as follows:
and respectively calling the GRU model and the CNN model, inputting the input sequence into the CNN model to obtain an output model cnn_output of the CNN, and inputting the sequence output by the CNN model into the GRU model to obtain the output GRU _output of the GRU model. Defining weight parameters: and setting a weight parameter alpha to represent the weight proportion of GRU output and CNN output. The final output final_output is calculated using the defined weight parameters. Here, a weighted average method is used, where the GRU output is multiplied by the weight alpha, the CNN output is multiplied by the weight (1-alpha), and then the two are added to obtain the final output.
S3, defining the loss function of the CNN-GRU model using the mean squared error, taking historical cloud resource data as the training set, training the model with stochastic gradient descent to minimize the loss function, and adjusting the model parameters.
The loss function of the CNN-GRU model at time step t is calculated as the average of the squared differences between the true values and the model predictions:
L_t = (1/F) * Σ_{f=1}^{F} (y_{t,f} - ŷ_{t,f})^2
where y_{t,f} is the true value of the training data for the f-th feature at time step t, ŷ_{t,f} is the model prediction for the f-th feature at time step t, and F is the number of features.
Stochastic gradient descent is an optimization algorithm used to minimize the loss function. In the training process of the CNN-GRU model, in each training iteration the gradient of the loss function with respect to the model parameters is computed and the parameters are updated in the negative gradient direction. For a single parameter ω the update is:
ω ← ω - η * ∂L/∂ω
where η is the learning rate, which controls the step length of the parameter update, and ∂L/∂ω is the partial derivative of the loss function with respect to the parameter ω.
For the parameter set θ of the entire model, the update rule is:
θ ← θ - η * ∇_θ L
where ∇_θ L is the gradient of the loss function with respect to the model parameters.
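As a toy illustration of the loss and the update rule above (a sketch only; in practice a deep-learning framework computes the gradients automatically):

```python
import numpy as np

def step_loss(y_true_t, y_pred_t):
    """Mean of the squared differences over the F features at one time step t."""
    y_true_t = np.asarray(y_true_t, dtype=float)
    y_pred_t = np.asarray(y_pred_t, dtype=float)
    return np.mean((y_true_t - y_pred_t) ** 2)

def sgd_update(theta, grad, eta=0.01):
    """One stochastic gradient descent step: theta <- theta - eta * grad."""
    return theta - eta * grad
```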
In this embodiment, X_train and y_train are assumed to be the training data, where the shape of X_train is (number of samples, time steps, number of features) and the shape of y_train is (number of samples, number of target features). A GRU model is created using the Sequential model; it contains a GRU layer with 64 units whose input shape is (X_train.shape[1], X_train.shape[2]), i.e. the time steps and the number of features. A CNN model is created using the Sequential model; it contains a Conv1D layer with 32 filters and a kernel size of 3 using the 'relu' activation function, followed by a Dense layer with 64 units, also with the 'relu' activation. The outputs of the two models are combined via the merge_output variable, where merge_output is obtained by applying a linear activation function in a Dense layer. The final model is compiled with the SGD optimizer, setting the learning rate and using the mean squared error as the loss function, and model training is performed on the training data X_train and y_train.
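A hedged sketch of the embodiment just described, written with the Keras functional API so that the two branches can be merged; the text above describes two Sequential models, and alpha, the learning rate, the number of epochs and the batch size are placeholder values:

```python
from tensorflow.keras import layers, Model
from tensorflow.keras.optimizers import SGD

# X_train: (samples, time steps, features) and y_train: (samples, target features) assumed given
T, F = X_train.shape[1], X_train.shape[2]
n_targets = y_train.shape[1]
alpha = 0.6                                           # GRU/CNN fusion weight (placeholder)

inputs = layers.Input(shape=(T, F))
conv = layers.Conv1D(32, 3, padding="same", activation="relu")(inputs)   # CNN branch
cnn_vec = layers.Dense(64, activation="relu")(layers.GlobalAveragePooling1D()(conv))
gru_vec = layers.GRU(64)(conv)                        # GRU consumes the CNN feature sequence

cnn_out = layers.Dense(n_targets, activation="linear")(cnn_vec)
gru_out = layers.Dense(n_targets, activation="linear")(gru_vec)
merge_output = layers.Lambda(
    lambda t: alpha * t[0] + (1.0 - alpha) * t[1])([gru_out, cnn_out])    # weighted fusion

model = Model(inputs, merge_output)
model.compile(optimizer=SGD(learning_rate=0.01), loss="mse")
model.fit(X_train, y_train, epochs=50, batch_size=32, validation_split=0.1)
```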
In addition, this embodiment uses validation data to evaluate the performance of the model with indices such as the mean squared error, root mean squared error and mean absolute error, and tunes the hyper-parameters of the model on the validation set to improve performance. The generalization ability of the model is finally evaluated on the test set to ensure prediction accuracy on future data. Model performance is also monitored periodically to ensure it remains effective, and the model can be fine-tuned or retrained over time.
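A short NumPy sketch of the evaluation indices mentioned above; the function name and return format are illustrative assumptions:

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Mean squared error, root mean squared error and mean absolute error."""
    err = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    mse = float(np.mean(err ** 2))
    return {"mse": mse, "rmse": float(np.sqrt(mse)), "mae": float(np.mean(np.abs(err)))}
```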
And S4, taking historical cloud resource data as input, and carrying out cloud resource prediction by utilizing the trained CNN-GRU model to generate cloud resource use prediction at a future time point.
By accurately predicting future cloud resource demand, the invention enables cloud service providers to allocate and manage resources more effectively and avoid resource over- or under-provisioning. This helps improve the resource utilization of the whole cloud computing system, reduce cost and improve performance. Effective resource prediction can also help users plan resource purchases, avoid unnecessary resource waste and reduce costs on the cloud platform, which is of great importance to individual users, enterprises and organizations.
The embodiment also provides a cloud resource prediction device based on CNN and GRU, which comprises a memory, a processor and a program stored in the memory, wherein the processor realizes the method when executing the program.
The above functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
The foregoing describes in detail preferred embodiments of the present invention. It should be understood that numerous modifications and variations can be made in accordance with the concepts of the invention by one of ordinary skill in the art without undue burden. Therefore, all technical solutions which can be obtained by logic analysis, reasoning or limited experiments based on the prior art by a person skilled in the art according to the inventive concept shall be within the scope of protection defined by the claims.

Claims (10)

1. The cloud resource prediction method based on the CNN and the GRU is characterized by comprising the following steps of:
s1, acquiring historical cloud resource data, and carrying out normalization processing and sliding window creation;
s2, establishing a CNN-GRU model taking historical cloud resource data as input and a cloud resource prediction result as output, wherein the CNN-GRU model comprises a CNN model, a GRU model and a weighted fusion output layer, the output of the CNN model is used as the input of the GRU model, and the weighted fusion output layer fuses the outputs of the CNN model and the GRU model by using a weighted average method to obtain a cloud resource prediction result;
s3, defining a loss function of the CNN-GRU model by using a mean square error, training the model by using a random gradient descent method to minimize the loss function by taking historical cloud resource data as a training set, and adjusting model parameters;
and S4, taking historical cloud resource data as input, and carrying out cloud resource prediction by utilizing the trained CNN-GRU model to generate cloud resource use prediction at a future time point.
2. The cloud resource prediction method based on CNN and GRU according to claim 1, wherein the historical cloud resource data includes the average CPU usage, maximum CPU usage and maximum memory usage of the virtual machine.
3. The cloud resource prediction method based on CNN and GRU according to claim 1, wherein the normalization process adopts Min-Max normalization.
4. The cloud resource prediction method based on CNN and GRU according to claim 1, wherein the sliding window creation specifically comprises:
assuming the normalized time-series data is denoted X_normalized, the size of the time window is T and the stride is S, the subsequence of the i-th time window is expressed as X_window = [X_i, X_{i+1}, ..., X_{i+T-1}], where X_i is the i-th data point of the time series, i = 1, 2, ..., N-T+1, and N is the total length of the time series.
5. The cloud resource prediction method based on CNN and GRU according to claim 1, wherein the CNN model includes an input layer, a convolution layer, a downsampling layer, a full connection layer and an output layer, wherein the convolution layer is designed for each input feature, each convolution layer is responsible for extracting the feature of the corresponding input, and an attention mechanism is introduced between the convolution layer and the downsampling layer for dynamically adjusting the degree of performing the convolution and downsampling operations at each position through weights.
6. The cloud resource prediction method based on CNN and GRU according to claim 1, wherein the weighted fusion output layer fusing the outputs of the CNN model and the GRU model by a weighted average specifically comprises:
assuming the output of the GRU model is O_GRU and the output of the CNN model is O_CNN, the weighted fusion output layer obtains the final output O_final by a weighted average:
O_final = α * O_GRU + (1 - α) * O_CNN
where α is a weight parameter.
7. The cloud resource prediction method based on CNN and GRU according to claim 1, wherein the loss function of the CNN-GRU model at time step t is calculated as the average of the squared differences between the true values and the model predictions:
L_t = (1/F) * Σ_{f=1}^{F} (y_{t,f} - ŷ_{t,f})^2
where y_{t,f} is the true value of the training data for the f-th feature at time step t, ŷ_{t,f} is the model prediction for the f-th feature at time step t, and F is the number of features.
8. The cloud resource prediction method based on CNN and GRU according to claim 1, wherein in the training process of the CNN-GRU model, in each training iteration the gradient of the loss function with respect to the model parameters is computed and the parameters are updated in the negative gradient direction; for a single parameter ω the update is:
ω ← ω - η * ∂L/∂ω
where η is the learning rate, which controls the step length of the parameter update, and ∂L/∂ω is the partial derivative of the loss function with respect to the parameter ω;
for the parameter set θ of the entire model, the update rule is:
θ ← θ - η * ∇_θ L
where ∇_θ L is the gradient of the loss function with respect to the model parameters.
9. A cloud resource prediction device based on CNN and GRU, comprising a memory, a processor, and a program stored in the memory, wherein the processor implements the method of any one of claims 1-8 when executing the program.
10. A storage medium having a program stored thereon, wherein the program, when executed, implements the method of any of claims 1-8.
CN202311660658.0A 2023-12-05 2023-12-05 Cloud resource prediction method, device and medium based on CNN and GRU Pending CN117851031A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311660658.0A CN117851031A (en) 2023-12-05 2023-12-05 Cloud resource prediction method, device and medium based on CNN and GRU

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311660658.0A CN117851031A (en) 2023-12-05 2023-12-05 Cloud resource prediction method, device and medium based on CNN and GRU

Publications (1)

Publication Number Publication Date
CN117851031A true CN117851031A (en) 2024-04-09

Family

ID=90540950

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311660658.0A Pending CN117851031A (en) 2023-12-05 2023-12-05 Cloud resource prediction method, device and medium based on CNN and GRU

Country Status (1)

Country Link
CN (1) CN117851031A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination