CN113239624B - Short-term load prediction method, equipment and medium based on neural network combination model - Google Patents


Info

Publication number
CN113239624B
CN113239624B (application CN202110560339.7A)
Authority
CN
China
Prior art keywords: load, IMF, LSTM, CNN, ELM
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN202110560339.7A
Other languages
Chinese (zh)
Other versions
CN113239624A (en)
Inventor
陈曦
罗燎原
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changsha University of Science and Technology
Original Assignee
Changsha University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha University of Science and Technology filed Critical Changsha University of Science and Technology
Priority to CN202110560339.7A
Publication of CN113239624A
Application granted
Publication of CN113239624B

Classifications

    • G06F30/27: Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM]
    • G06N3/006: Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G06N3/044: Recurrent networks, e.g. Hopfield networks
    • G06N3/045: Combinations of networks
    • G06N3/08: Learning methods
    • G06Q10/04: Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q50/06: Energy or water supply
    • H02J3/003: Load forecast, e.g. methods or systems for forecasting future load demand
    • H02J2203/20: Simulating, e.g. planning, reliability check, modelling or computer assisted design [CAD]
    • Y04S10/50: Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications


Abstract

The embodiments of the invention relate to the technical field of electric power and disclose a short-term load prediction method, equipment and medium based on a neural network combination model. The method comprises the following steps: acquiring load data and normalizing the load data; decomposing the normalized load data with the CEEMDAN decomposition algorithm to obtain n intrinsic mode function components of different frequencies and a residual component; inputting the intrinsic mode function components and the residual component separately into a pre-trained CNN-LSTM-ELM prediction model to obtain preliminary prediction results; and superposing the preliminary prediction results, the superposed result serving as the load prediction result corresponding to the load data. Implementing the embodiments of the invention overcomes the insufficient accuracy of a single model and effectively improves the accuracy of load prediction.

Description

Short-term load prediction method, equipment and medium based on neural network combination model
Technical Field
The invention relates to the technical field of electric power, in particular to a short-term load prediction method, equipment and medium based on a neural network combination model.
Background
The electric power industry is an important basic industry bearing on the national economy and people's livelihood, and plays a vital role in economic development, industrial production and daily life. The main characteristics of electric power load are that electricity is difficult to store once produced, that short-term demand varies greatly, and that the load is influenced by many factors. Accurate short-term load forecasting therefore plays an important role for power companies in reducing wasted power, maintaining the balance between production and demand, reducing production costs, increasing economic returns, and supporting dispatch management and future capacity planning.
Domestic and foreign scholars have studied short-term load prediction for many years; approaches can be broadly divided into traditional methods based on statistical theory, machine learning methods, and model combination methods. Traditional methods include time series models. Machine learning methods include the support vector machine (SVM), recurrent neural network (RNN), extreme learning machine (ELM), long short-term memory network (LSTM), convolutional neural network (CNN), gated recurrent unit (GRU), and so on. Model combination methods fall mainly into two categories: one preprocesses the load sequence with algorithms such as empirical mode decomposition (EMD) and then predicts with other models; the other automatically optimizes the hyper-parameters of the prediction model with intelligent optimization algorithms such as particle swarm optimization (PSO) to improve prediction performance.
Electric power load is difficult to store once produced, its short-term demand varies greatly, and it is constantly influenced by unstable factors such as weather conditions, dynamic electricity prices and social activities, so the load sequence exhibits highly nonlinear and non-stationary characteristics. Empirical mode decomposition (EMD) can decompose highly nonlinear, non-stationary load data into several relatively stationary subsequences, but when abnormal events cause intermittency in the signal, EMD suffers from mode mixing. Support vector machines (SVM) fit nonlinear load data well but cannot handle large amounts of data and require complex, time-consuming manual feature extraction and selection. The convolutional neural network (CNN) can extract complex trend features, and the long short-term memory network (LSTM) and gated recurrent unit (GRU) can extract temporal features, but many parameters in these models are set by the researchers' experience, the uncertainty is high, and a single model cannot sufficiently learn the implicit characteristics of a load sequence.
Disclosure of Invention
In view of these deficiencies, the embodiments of the invention disclose a short-term load prediction method, equipment and medium based on a neural network combination model, which overcome the insufficient accuracy of a single model and effectively improve the accuracy of load prediction.
The embodiment of the invention discloses a short-term load prediction method based on a neural network combination model in a first aspect, which comprises the following steps:
acquiring load data and normalizing the load data;
decomposing the normalized load data with a CEEMDAN decomposition algorithm to obtain n intrinsic mode function components of different frequencies and a residual component;
inputting the intrinsic mode function components and the residual component separately into a pre-trained CNN-LSTM-ELM prediction model to obtain preliminary prediction results;
and superposing the preliminary prediction results, the superposed result serving as the load prediction result corresponding to the load data.
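The four steps above can be sketched end to end. This is an illustrative sketch only: `decompose` and `predict_component` are trivial placeholders standing in for the CEEMDAN decomposition and the trained CNN-LSTM-ELM sub-models, so that only the normalize, decompose, predict-per-component, superpose flow of the claim is shown.

```python
import numpy as np

def normalize(x):
    # Min-max normalization to [0, 1], as in the patent's preprocessing step.
    return (x - x.min()) / (x.max() - x.min()), x.min(), x.max()

def decompose(x, n=3):
    # Placeholder for CEEMDAN: split the signal into n crude "components"
    # plus a residual, so that the components sum back to the signal.
    parts = [x / (n + 1)] * n
    res = x - sum(parts)
    return parts + [res]

def predict_component(c):
    # Placeholder for one CNN-LSTM-ELM sub-model: naive persistence forecast.
    return c[-1]

x = np.array([120.0, 135.0, 128.0, 150.0, 142.0])   # toy load series
xn, lo, hi = normalize(x)
components = decompose(xn)
preds = [predict_component(c) for c in components]
# Superpose the per-component predictions, then invert the normalization.
forecast = sum(preds) * (hi - lo) + lo
```

Because the placeholder components sum exactly back to the normalized signal, the superposed persistence forecast reproduces the last observed load value.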
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the training of the CNN-LSTM-ELM prediction model includes:
acquiring load sample data and carrying out normalization processing on the load sample data;
decomposing the normalized load sample data with the CEEMDAN decomposition algorithm to obtain a plurality of decomposition components, namely n intrinsic mode function components IMF_1–IMF_n of different frequencies and a residual component res;
determining the time scale of the intrinsic mode function components IMF_1–IMF_n and establishing the CNN-LSTM-ELM prediction model, where the CNN-LSTM-ELM prediction model comprises a CNN-LSTM prediction model and an ELM model;
carrying out hyper-parameter optimization on the CNN-LSTM-ELM prediction model using PSO to obtain an optimized CNN-LSTM-ELM prediction model;
inputting the intrinsic mode function components IMF_1–IMF_n and the residual component res separately into the CNN-LSTM prediction model to obtain first output parameters, and inputting each first output parameter into the corresponding ELM model to obtain second output parameters;
superposing the second output parameters to obtain a training result;
and after inverse normalization of the training result, comparing it with the real result using one or more error evaluation indices until the error is smaller than a preset threshold, at which point training of the CNN-LSTM-ELM prediction model is complete.
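The inverse normalization and error evaluation in this last training step can be sketched as follows. RMSE and MAPE are assumed here as the error evaluation indices, since the patent does not name specific indices at this point; the numbers are toy values.

```python
import numpy as np

def denormalize(x_norm, x_min, x_max):
    # Invert the min-max mapping used during preprocessing.
    return x_norm * (x_max - x_min) + x_min

def rmse(y_true, y_pred):
    # Root mean square error.
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def mape(y_true, y_pred):
    # Mean absolute percentage error, in percent.
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

x_min, x_max = 100.0, 200.0                 # extremes of the training set
pred_norm = np.array([0.20, 0.50, 0.80])    # model output in [0, 1]
y_true = np.array([125.0, 145.0, 185.0])    # real loads

y_pred = denormalize(pred_norm, x_min, x_max)
print(rmse(y_true, y_pred), mape(y_true, y_pred))
```

In training, these metrics would be compared against the preset threshold to decide whether another round of optimization is needed.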
As an optional implementation manner, in the first aspect of the embodiment of the present invention, performing normalization processing on the load sample data includes:
mapping the load sample data to [0, 1] using the formula:
x' = (x − x_min) / (x_max − x_min)
where x' is the normalized data, x_min and x_max are respectively the minimum and maximum values in the load sample data set, and x is the original load sample data.
As an optional implementation manner, in the first aspect of the embodiment of the present invention, the decomposing the load sample data after the normalization processing by using a CEEMDAN decomposition algorithm to obtain a plurality of decomposition components includes:
constructing the test load sequence of the ith trial by adding white noise of different amplitudes obeying a normal distribution to the load sample data:
x_i(t) = x(t) + ε_0·ω_i(t)
where ε_0 is the noise coefficient, ω_i(t) is the white noise added in the ith trial, 1 ≤ i ≤ P, and P is the number of signal sequences, i.e. the total number of trials on the load sample data;
decomposing each test load sequence with the EMD method and averaging the components IMF_11–IMF_1P obtained from the P signal sequences to obtain the first mean value, taken as the first intrinsic mode function IMF_1:
IMF_1(t) = (1/P) · Σ_{i=1}^{P} IMF_1i(t)
The first residual sequence r_1(t) is:
r_1(t) = x(t) − IMF_1(t)
defining E_j(·) as the operator that extracts the jth IMF component of a sequence by EMD decomposition, decomposing the new load sequence r_1(t) + ε_1·E_1(ω_i(t)) and averaging to obtain the second mean value, taken as the second intrinsic mode function IMF_2:
IMF_2(t) = (1/P) · Σ_{i=1}^{P} E_1(r_1(t) + ε_1·E_1(ω_i(t)))
The second residual sequence is:
r_2(t) = r_1(t) − IMF_2(t)
repeating the above steps to obtain the remaining intrinsic mode function components IMF_3–IMF_n and the final residual component res:
IMF_k(t) = (1/P) · Σ_{i=1}^{P} E_{k-1}(r_{k-1}(t) + ε_{k-1}·E_{k-1}(ω_i(t)))
r_k(t) = r_{k-1}(t) − IMF_k(t)
where 3 ≤ k ≤ n, and r_n(t) is the residual component res.
As an alternative implementation manner, in the first aspect of the embodiment of the present invention, decomposing the test load sequence with the EMD method and obtaining IMF_11–IMF_1P from the P signal sequences comprises:
a first step of finding all extreme points of the test load sequence, fitting its upper and lower envelopes with a cubic spline interpolation function, and computing the mean of the two envelopes:
e_i(t) = (e_1i(t) + e_2i(t)) / 2
where e_1i(t) and e_2i(t) are respectively the upper and lower envelopes of the test load sequence in the ith trial, and e_i(t) is their mean;
subtracting this mean from the test load sequence to obtain the difference c_i(t):
c_i(t) = x_i(t) − e_i(t)
and a second step: if c_i(t) is less than or equal to a preset reference value, or the number of iterations reaches a preset limit, taking c_i(t) as IMF_1i; otherwise taking c_i(t) as the new test load sequence and repeating the first step until the obtained difference is less than or equal to the preset reference value or the iteration limit is reached.
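One iteration of the sifting described above can be sketched as follows. To keep the sketch dependency-free, linear interpolation (`np.interp`) stands in for the cubic spline interpolation named in the text, and extrema are located by simple neighbor comparison with the endpoints included to anchor the envelopes; both are illustrative simplifications.

```python
import numpy as np

def envelope_mean(x):
    """Mean e_i(t) of the upper and lower envelopes of x."""
    t = np.arange(len(x))
    # Local maxima/minima by neighbor comparison; endpoints anchor the fit.
    maxima = [0] + [i for i in range(1, len(x) - 1)
                    if x[i] >= x[i - 1] and x[i] >= x[i + 1]] + [len(x) - 1]
    minima = [0] + [i for i in range(1, len(x) - 1)
                    if x[i] <= x[i - 1] and x[i] <= x[i + 1]] + [len(x) - 1]
    upper = np.interp(t, t[maxima], x[maxima])   # e_1i(t), upper envelope
    lower = np.interp(t, t[minima], x[minima])   # e_2i(t), lower envelope
    return (upper + lower) / 2                   # e_i(t)

def sift_once(x):
    """One sifting iteration: c_i(t) = x_i(t) - e_i(t)."""
    return x - envelope_mean(x)

t = np.linspace(0, 1, 200)
x = np.sin(2 * np.pi * 10 * t) + 2 * t           # oscillation riding on a ramp
c = sift_once(x)
```

In a full EMD this step is repeated until the stopping criterion of the second step is met; here a single pass already strips most of the ramp from the oscillation.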
As an alternative implementation, in the first aspect of the embodiment of the present invention, inputting the intrinsic mode function components IMF_1–IMF_n and the residual component res separately into the CNN-LSTM prediction model to obtain the first output parameters comprises:
constructing n+1 convolutional neural network models and n+1 long short-term memory network models, where each convolutional neural network model consists of two convolutional layers, two pooling layers and a flatten layer, and each long short-term memory network model consists of two LSTM layers;
inputting the intrinsic mode function components IMF_1–IMF_n and the residual component res separately into the n+1 convolutional neural network models to obtain n+1 first output sub-parameters, where A_k is the kth first output sub-parameter and 1 ≤ k ≤ n+1; inputting IMF_1–IMF_n and the residual component res separately into the n+1 long short-term memory network models to obtain n+1 second output sub-parameters, where B_k is the kth second output sub-parameter;
and connecting A_k and B_k through two fully connected layers to output the first output parameter C_k, where C_k is the kth first output parameter and there are n+1 first output parameters in total.
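The branch-and-merge structure above (A_k from the CNN branch and B_k from the LSTM branch, joined through two fully connected layers to give C_k) can be sketched shape-wise in NumPy. The feature sizes and random weights here are purely illustrative, not the patent's trained parameters.

```python
import numpy as np

rng = np.random.default_rng(42)
relu = lambda z: np.maximum(z, 0.0)

# Illustrative feature vectors for one component k:
A_k = rng.standard_normal(32)   # CNN-branch output (e.g. flattened feature map)
B_k = rng.standard_normal(30)   # LSTM-branch output (e.g. last hidden state)

# Concatenate the two branches, then pass through two fully connected layers.
h = np.concatenate([A_k, B_k])                 # shape (62,)
W1, b1 = rng.standard_normal((50, 62)), np.zeros(50)
W2, b2 = rng.standard_normal((30, 50)), np.zeros(30)
C_k = relu(W2 @ relu(W1 @ h + b1) + b2)        # first output parameter C_k
```

C_k is then what gets handed to the kth ELM model in the next step; in the real model this whole block would be one of n+1 identical branches, one per decomposition component.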
As an optional implementation manner, in the first aspect of the embodiment of the present invention, inputting each first output parameter into the corresponding ELM model to obtain the second output parameters comprises:
constructing n+1 ELM models, inputting the n+1 first output parameters into the n+1 ELM models respectively, and outputting n+1 second output parameters.
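An extreme learning machine of the kind used here can be sketched in a few lines: random, fixed input-to-hidden weights, and a hidden-to-output layer solved in closed form by least squares (Moore-Penrose pseudoinverse) rather than by gradient descent. The toy data and layer sizes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=40):
    # Random input weights and biases are drawn once and never trained.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                 # hidden-layer activations
    beta = np.linalg.pinv(H) @ y           # output weights via pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: learn y = sum of the inputs.
X = rng.standard_normal((200, 5))
y = X.sum(axis=1)
W, b, beta = elm_train(X, y)
y_hat = elm_predict(X, W, b, beta)
```

The closed-form solve is what makes the ELM head cheap enough to replace the Dense output layers, which is the substitution the patent credits with improving accuracy.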
As an optional implementation manner, in the first aspect of the embodiment of the present invention, performing hyper-parameter optimization on the CNN-LSTM-ELM prediction model using PSO to obtain an optimized CNN-LSTM-ELM prediction model comprises:
S11, initializing the particle swarm algorithm parameters: the learning factors C1 and C2 are both set to 2, the inertia weight W is set to 0.8, the particle population N is set to 20, the particle dimension is set to 8, the maximum iteration number T is set to 100, r1 and r2 are random numbers in [0, 1], the maximum velocity V_max is set to 5, and the minimum velocity V_min is set to −5;
S12, setting the current iteration number m = 1 and computing the fitness value of the CNN-LSTM-ELM prediction model, where the fitness value is the mean square error of the model; sorting the fitness values, taking each particle as the local optimum of the current population, denoted p_best, and taking the particle with the smallest fitness value as the global optimum, denoted g_best;
S13, updating the particle velocity and position:
V_m = W·V_m + C1·r1·(p_best − X_m) + C2·r2·(g_best − X_m)
X_m = X_m + V_m
where X_m is the particle position at the mth iteration, V_m is the particle velocity at the mth iteration, and 1 ≤ m ≤ T;
S14, when the preset number of iterations is reached or the global optimum g_best satisfies the preset criterion, obtaining the CNN-LSTM-ELM prediction model with optimal hyper-parameters; otherwise setting m = m + 1 and repeating steps S12–S14 until all CNN-LSTM-ELM prediction models are obtained.
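Steps S11–S14 can be sketched on a toy fitness function (the sphere function standing in for the model's mean square error, since evaluating the real CNN-LSTM-ELM is out of scope here). The constants follow the text: C1 = C2 = 2, W = 0.8, N = 20, dimension 8, T = 100, velocity clipped to [−5, 5].

```python
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):
    # Stand-in for the model's mean square error (sphere function, minimum 0).
    return float(np.sum(x ** 2))

# Constants from step S11 of the text.
N, dim, T = 20, 8, 100
C1, C2, W = 2.0, 2.0, 0.8
v_min, v_max = -5.0, 5.0

X = rng.uniform(-10, 10, (N, dim))            # particle positions X_m
V = rng.uniform(v_min, v_max, (N, dim))       # particle velocities V_m
p_best = X.copy()                             # local optima p_best
p_val = np.array([fitness(x) for x in X])
g_best = p_best[p_val.argmin()].copy()        # global optimum g_best
initial_best = float(p_val.min())

for m in range(T):
    r1, r2 = rng.random((N, dim)), rng.random((N, dim))
    # V_m = W*V_m + C1*r1*(p_best - X_m) + C2*r2*(g_best - X_m), then clip.
    V = np.clip(W * V + C1 * r1 * (p_best - X) + C2 * r2 * (g_best - X),
                v_min, v_max)
    X = X + V                                  # X_m = X_m + V_m
    vals = np.array([fitness(x) for x in X])
    improved = vals < p_val
    p_best[improved], p_val[improved] = X[improved], vals[improved]
    g_best = p_best[p_val.argmin()].copy()
```

In the patent, each particle's 8 dimensions encode the hyper-parameters listed in S1131 (kernel channels, LSTM and Dense neuron counts, ELM hidden nodes, input length), and `fitness` would retrain and score the model at that setting.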
A second aspect of an embodiment of the present invention discloses an electronic device, including: a memory storing executable program code; a processor coupled with the memory; the processor calls the executable program codes stored in the memory for executing the short-term load prediction method based on the neural network combination model disclosed by the first aspect of the embodiment of the invention.
A third aspect of the embodiments of the present invention discloses a computer-readable storage medium storing a computer program, where the computer program enables a computer to execute a short-term load prediction method based on a neural network combination model disclosed in the first aspect of the embodiments of the present invention.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
(1) The invention decomposes nonlinear, non-stationary load data into several relatively stationary subsequences of different frequencies using the CEEMDAN algorithm, and mitigates the mode mixing that the EMD algorithm suffers under uncertain factors such as noise by adding white noise of different amplitudes obeying a normal distribution to the original sequence.
(2) The method combines the long short-term memory network (LSTM) and the convolutional neural network (CNN) to extract local trend features and temporal features, giving a better feature extraction effect than a single model.
(3) By replacing the Dense-layer network with an ELM network for prediction, the invention effectively improves model prediction accuracy.
(4) The method uses particle swarm optimization (PSO) to automatically search the hyper-parameters of the CNN-LSTM-ELM hybrid model, avoiding the selection of key parameters by experience and greatly improving model prediction accuracy.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without creative efforts.
FIG. 1 is a schematic flowchart of a short-term load prediction method based on a neural network combined model according to an embodiment of the present invention;
FIG. 2 is a schematic structural diagram of a CNN-LSTM-ELM prediction model training process disclosed in the embodiment of the present invention;
FIG. 3 is a schematic structural diagram of a CNN-LSTM-ELM prediction model disclosed in an embodiment of the present invention;
FIG. 4 is a schematic flowchart illustrating a process of performing hyper-parameter optimization on a CNN-LSTM-ELM prediction model according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the terms "first", "second", "third", "fourth", and the like in the description and the claims of the present invention are used for distinguishing different objects, and are not used for describing a specific order. The terms "comprises," "comprising," and any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
The embodiments of the invention disclose a short-term load prediction method, equipment and medium based on a neural network combination model. Short-term power load is predicted with a hybrid model of a convolutional neural network, a long short-term memory network and an extreme learning machine, using complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and particle swarm optimization; this overcomes the insufficient accuracy of a single model and effectively improves the accuracy of load prediction. The method is described in detail below with reference to the accompanying drawings.
Example one
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating a short-term load prediction method based on a neural network combination model according to an embodiment of the present invention. As shown in fig. 1, the short-term load prediction method based on the neural network combination model includes the following steps:
s110, a CNN-LSTM-ELM prediction model is created and trained.
In a preferred embodiment of the present invention, the CNN-LSTM-ELM prediction model includes a CNN-LSTM prediction model and an ELM model.
Referring to fig. 2, the training process includes the following steps:
and S111, acquiring load sample data and carrying out normalization processing on the load sample data.
The load sample data are short-term historical load data; the K historical load values before the prediction time are taken as the input sequence length of the CNN-LSTM-ELM prediction model, and a training set and a test set are constructed for each time component decomposed in step S112.
Carrying out normalization processing on the load sample data, wherein the normalization processing comprises the following steps:
mapping the load sample data to [0, 1] using equation (1):
x' = (x − x_min) / (x_max − x_min)    (1)
where x' is the normalized load sample data, x_min and x_max are respectively the minimum and maximum values in the load sample data set, and x is the original load sample data.
And S112, decomposing the load sample data after the normalization processing by using a CEEMDAN decomposition algorithm to obtain a plurality of decomposition components.
The CEEMDAN decomposition algorithm decomposes the original load sequence (i.e. the original load sample data) into several relatively stationary time-series components of different frequencies, namely the intrinsic mode functions IMF_1–IMF_n, while the remaining component is denoted the residual component res; the decomposition thus yields n intrinsic mode functions IMF_1–IMF_n and a residual component res.
The decomposition process is as follows:
(1) constructing the test load sequence of the ith trial by adding white noise of different amplitudes obeying a normal distribution to the original load sample data:
x_i(t) = x(t) + ε_0·ω_i(t)
where ε_0 is the noise coefficient, ω_i(t) is the white noise added in the ith trial, 1 ≤ i ≤ P, P is the number of signal sequences (the total number of trials on the load sample data), x_i(t) is the load sample data after the ith addition of white noise, denoted the test load sequence, and x(t) is the original load sample data;
(2) decomposing each test load sequence with the EMD method and averaging the components IMF_11–IMF_1P obtained from the P signal sequences to obtain the first mean value, taken as the first intrinsic mode function IMF_1:
IMF_1(t) = (1/P) · Σ_{i=1}^{P} IMF_1i(t)
The first residual sequence r_1(t) is:
r_1(t) = x(t) − IMF_1(t)
(3) defining E_j(·) as the operator that extracts the jth IMF component of a sequence by EMD decomposition, decomposing the new load sequence r_1(t) + ε_1·E_1(ω_i(t)) and averaging to obtain the second mean value, taken as the second intrinsic mode function IMF_2:
IMF_2(t) = (1/P) · Σ_{i=1}^{P} E_1(r_1(t) + ε_1·E_1(ω_i(t)))
The second residual sequence is:
r_2(t) = r_1(t) − IMF_2(t)
(4) repeating the above steps to obtain the remaining intrinsic mode function components IMF_3–IMF_n and the final residual component res:
IMF_k(t) = (1/P) · Σ_{i=1}^{P} E_{k-1}(r_{k-1}(t) + ε_{k-1}·E_{k-1}(ω_i(t)))
r_k(t) = r_{k-1}(t) − IMF_k(t)
where 3 ≤ k ≤ n, and r_n(t) is the residual component res.
Decomposing the test load sequence with the EMD method and obtaining IMF_11–IMF_1P from the P signal sequences may comprise the following steps:
a first step of finding all extreme points of the test load sequence, fitting its upper and lower envelopes with a cubic spline interpolation function, and computing the mean of the two envelopes:
e_i(t) = (e_1i(t) + e_2i(t)) / 2
where e_1i(t) and e_2i(t) are respectively the upper and lower envelopes of the test load sequence in the ith trial, and e_i(t) is their mean;
subtracting this mean from the test load sequence to obtain the difference c_i(t):
c_i(t) = x_i(t) − e_i(t)
and a second step: if c_i(t) is less than or equal to a preset reference value, or the number of iterations reaches a preset limit, taking c_i(t) as IMF_1i; otherwise taking c_i(t) as the new test load sequence and repeating the first step until the obtained difference is less than or equal to the preset reference value or the iteration limit is reached.
S113: perform hyper-parameter optimization on the CNN-LSTM-ELM prediction model using PSO to obtain the optimized CNN-LSTM-ELM prediction model.
The CNN-LSTM-ELM prediction model comprises a convolutional neural network model, a long short-term memory neural network model and fully connected layers; n + 1 CNN-LSTM-ELM prediction models are arranged in total.

Each convolutional neural network model consists of two convolutional layers, two pooling layers and a flatten layer; each convolutional layer uses a ReLU activation function, the convolution kernel size is 1 × 3, the convolution stride is 1, and the numbers of convolution kernel channels are 64 and 32, respectively.

Each long short-term memory neural network model consists of two LSTM layers; each has 30 hidden-layer nodes and uses a ReLU activation function.

Each fully connected block consists of three Dense layers, whose hidden-layer node counts are 50, 30 and 1, respectively.
Referring to fig. 3, the optimizing process includes the following steps:
S1131: initialize the parameters of the CNN-LSTM-ELM hybrid model. These parameters include the numbers of convolution kernel channels of the 2 convolutional layers in the CNN model, the numbers of neurons in the 2 LSTM layers, the numbers of neurons in the 2 Dense layers, the number of hidden-layer nodes in the ELM model, and the length of the model's input sequence.
S1132: initialize the particle swarm algorithm parameters: the learning factors C1 and C2 are both set to 2, the inertia weight W is set to 0.8, the particle population size N is set to 20, the particle dimension is set to 8, the maximum number of iterations T is set to 100, r1 and r2 are both random numbers in [0, 1], the maximum velocity V_max is set to 5, and the minimum velocity V_min is set to −5.

S1133: set the current iteration number m = 1 and calculate the fitness value of the CNN-LSTM-ELM prediction model, the fitness value being the mean square error of the CNN-LSTM-ELM prediction model. Sort the fitness values, take each particle as the local optimum of the current population, denoted p_best, and take the particle with the smallest fitness value as the global optimum, denoted g_best.
S1134: update the particle velocities and positions:

V_m = W·V_m + C1·r1·(p_best − X_m) + C2·r2·(g_best − X_m)

X_m = X_m + V_m

where X_m is the particle position at the m-th iteration, V_m is the particle velocity at the m-th iteration, and 1 ≤ m ≤ T.
S1135: when the preset number of iterations is reached or the global optimum g_best satisfies the preset limit, the CNN-LSTM-ELM prediction model with the optimal hyper-parameters is obtained; otherwise, set m = m + 1 and repeat steps S1133–S1135, until all CNN-LSTM-ELM prediction models are obtained.
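Steps S1131–S1135 follow the standard PSO velocity/position update given above. Below is a minimal numpy sketch, assuming a generic fitness callable in place of the mean square error of the trained model (training the n + 1 networks inside the loop is omitted); the parameter values mirror those listed in S1132.

```python
import numpy as np

def pso(fitness, dim=8, n_particles=20, w=0.8, c1=2.0, c2=2.0,
        v_max=5.0, max_iter=100, bounds=(-10.0, 10.0), seed=0):
    """Minimal PSO loop mirroring S1132-S1135 (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_particles, dim))          # positions
    V = rng.uniform(-v_max, v_max, (n_particles, dim))   # velocities
    p_best = X.copy()
    p_val = np.array([fitness(x) for x in X])            # per-particle best
    g_best = p_best[np.argmin(p_val)].copy()             # global best
    for _ in range(max_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # V_m = W*V_m + C1*r1*(p_best - X_m) + C2*r2*(g_best - X_m)
        V = w * V + c1 * r1 * (p_best - X) + c2 * r2 * (g_best - X)
        V = np.clip(V, -v_max, v_max)                    # velocity limits
        X = X + V                                        # X_m = X_m + V_m
        vals = np.array([fitness(x) for x in X])
        improved = vals < p_val
        p_best[improved], p_val[improved] = X[improved], vals[improved]
        g_best = p_best[np.argmin(p_val)].copy()
    return g_best, p_val.min()
```

In the patent the 8 particle dimensions would encode the hyper-parameters listed in S1131 (channel counts, neuron counts, ELM hidden nodes, input length), rounded to integers before each model evaluation.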
S114: input the intrinsic mode function components IMF_1–IMF_n and the residual res component into the CNN-LSTM prediction models, respectively, to obtain the first output parameters.

The intrinsic mode function components IMF_1–IMF_n and the residual res component are input into the n + 1 convolutional neural network models, respectively, to obtain n + 1 first output sub-parameters, where A_k is the k-th first output sub-parameter and 1 ≤ k ≤ n + 1. At the same time, the intrinsic mode function components IMF_1–IMF_n and the residual res component are input into the n + 1 long short-term memory neural network models, respectively, to obtain n + 1 second output sub-parameters, where B_k is the k-th second output sub-parameter.

A_k and B_k are concatenated through the fully connected layers, and the first output parameter C_k is output through the second Dense layer, where C_k is the k-th first output parameter; there are n + 1 first output parameters in total.
S115: input each first output parameter into the corresponding ELM model to obtain the second output parameters.

Construct n + 1 extreme learning machine (ELM) models, each with 20 hidden-layer nodes; input the n + 1 first output parameters into the n + 1 ELM models, respectively, and output n + 1 second output parameters. That is, the first output parameter C_k is input into the k-th ELM model to obtain the second output parameter D_k.
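An extreme learning machine solves its output weights in closed form rather than by gradient descent. Below is a minimal numpy sketch, assuming a sigmoid hidden activation (the patent does not state the activation) and the 20 hidden nodes mentioned above.

```python
import numpy as np

def elm_fit(X, y, n_hidden=20, seed=0):
    """ELM training sketch: random, fixed input weights; hidden layer
    activations; output weights solved by least squares (pseudo-inverse)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))           # hidden activations
    beta = np.linalg.pinv(H) @ y                     # output weights, one shot
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

Because the only trained parameters are the linear output weights, fitting the n + 1 ELM heads is cheap compared with the CNN-LSTM stages feeding them.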
S116: superpose all the second output parameters to obtain the training result, the training result being the sum of all the second output parameters.

S117: after inverse normalization of the training result, compare it with the real result using one or more error evaluation indices, until the error is smaller than a preset threshold, at which point the training of the CNN-LSTM-ELM prediction model is completed.

Common error evaluation indices include the mean absolute error (MAE), the root mean square error (RMSE) and the mean absolute percentage error (MAPE), among others, which are not limited here.
S120: acquire load data and normalize it.

The load data is the load data to be predicted, and may be the load data of the cycle following the load sample data; the normalization process is similar to that of step S111 and is not repeated here.
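The min-max normalization used in step S111 (the formula appears in claim 2) and the inverse normalization of step S117 can be sketched as:

```python
import numpy as np

def normalize(x):
    """Min-max scaling to [0, 1]: x* = (x - x_min) / (x_max - x_min)."""
    x = np.asarray(x, dtype=float)
    x_min, x_max = x.min(), x.max()
    return (x - x_min) / (x_max - x_min), x_min, x_max

def denormalize(x_star, x_min, x_max):
    """Inverse transform applied to predictions before error evaluation."""
    return np.asarray(x_star) * (x_max - x_min) + x_min
```

Note that x_min and x_max must be taken from the training sample and reused for the data to be predicted, so that the inverse transform maps predictions back to the original load scale.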
S130: decompose the normalized load data with the CEEMDAN decomposition algorithm to obtain n intrinsic mode function components of different frequencies and a residual component. This process is similar to step S112 above and is not repeated here.

S140: input the intrinsic mode function components and the residual component into the pre-trained CNN-LSTM-ELM prediction models, respectively, to obtain preliminary prediction results. This process is similar to steps S114 and S115 above and is not repeated here.

S150: superpose the prediction results; the superposed prediction result is taken as the load prediction result corresponding to the load data. This process is similar to step S116 above and is not repeated here.
From the above, the invention provides a short-term power load prediction method based on a hybrid model (CEEMDAN-CNN-LSTM-ELM-PSO) that combines complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN), a convolutional neural network, a long short-term memory neural network, an extreme learning machine, and particle swarm optimization. A particle swarm optimization (PSO) algorithm automatically searches for the optimal hyper-parameters of the hybrid model, overcoming the difficulty of selecting key parameters by experience. The convolutional neural network (CNN) extracts local trend features of the load data, the long short-term memory network (LSTM) extracts time-series features, the extracted features are combined and input into Dense layers for prediction, and an extreme learning machine (ELM) network replaces the last Dense layer for the final prediction. This overcomes the insufficient accuracy of a single model and effectively improves the accuracy of load prediction.
Example two
Referring to fig. 5, fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the disclosure. As shown in fig. 5, the electronic device may include:
a memory 210 storing executable program code;
a processor 220 coupled to the memory 210;
the processor 220 calls the executable program code stored in the memory 210 to perform part or all of the steps of a short-term load prediction method based on a neural network combined model disclosed in the first embodiment.
The embodiment of the invention discloses a computer readable storage medium which stores a computer program, wherein the computer program enables a computer to execute part or all of the steps of the short-term load prediction method based on the neural network combination model disclosed in the embodiment.
The embodiment of the invention also discloses a computer program product, wherein when the computer program product runs on a computer, the computer is enabled to execute part or all of the steps of the short-term load prediction method based on the neural network combined model disclosed in the embodiment.
The embodiment of the invention also discloses an application publishing platform, wherein the application publishing platform is used for publishing the computer program product, and when the computer program product runs on a computer, the computer is enabled to execute part or all of the steps in the short-term load prediction method based on the neural network combination model disclosed in the embodiment.
In various embodiments of the present invention, it should be understood that the sequence numbers of the processes do not mean the execution sequence necessarily in order, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated units, if implemented as software functional units and sold or used as a stand-alone product, may be stored in a computer accessible memory. Based on such understanding, the technical solution of the present invention, which is essential or contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product, which is stored in a memory and includes several requests for causing a computer device (which may be a personal computer, a server, or a network device, etc., and may specifically be a processor in the computer device) to execute part or all of the steps of the method according to the embodiments of the present invention.
In the embodiments provided herein, it should be understood that "B corresponding to a" means that B is associated with a from which B can be determined. It should also be understood, however, that determining B from a does not mean determining B from a alone, but may also be determined from a and/or other information.
Those of ordinary skill in the art will appreciate that some or all of the steps of the methods of the embodiments may be implemented by a program instructing associated hardware, and the program may be stored in a computer-readable storage medium, where the storage medium includes Read-Only Memory (ROM), Random Access Memory (RAM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), One-time Programmable Read-Only Memory (OTPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM), or other memory such as a magnetic disk, an optical disk or a tape memory, or any other computer-readable medium that can be used to carry or store data.
The method, the device and the medium for predicting the short-term load based on the neural network combined model disclosed by the embodiment of the invention are introduced in detail, a specific example is applied in the text to explain the principle and the implementation mode of the invention, and the description of the embodiment is only used for helping to understand the method and the core idea of the invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (8)

1. A short-term load prediction method based on a neural network combined model is characterized by comprising the following steps:
acquiring load data and carrying out normalization processing on the load data;
decomposing the load data after the normalization processing by using a CEEMDAN decomposition algorithm to obtain n intrinsic mode function components of different frequencies and a residual component;

inputting the intrinsic mode function components and the residual component into a pre-trained CNN-LSTM-ELM prediction model, respectively, to obtain preliminary prediction results;

superposing the preliminary prediction results, wherein the obtained superposed prediction result is used as the load prediction result corresponding to the load data;
wherein, the training of the CNN-LSTM-ELM prediction model comprises the following steps:
acquiring load sample data and carrying out normalization processing on the load sample data;
decomposing the load sample data after the normalization processing by using a CEEMDAN decomposition algorithm to obtain a plurality of decomposition components, wherein the decomposition components comprise n intrinsic mode function components IMF_1–IMF_n of different frequencies and a residual res component;

determining the time scales of the intrinsic mode function components IMF_1–IMF_n and establishing a CNN-LSTM-ELM prediction model, wherein the CNN-LSTM-ELM prediction model comprises a CNN-LSTM prediction model and an ELM model;
carrying out hyper-parameter optimization on the CNN-LSTM-ELM prediction model by using PSO to obtain an optimized CNN-LSTM-ELM prediction model;
inputting the intrinsic mode function components IMF_1–IMF_n and the residual res component into the CNN-LSTM prediction model, respectively, to obtain first output parameters, and inputting the respective first output parameters into the corresponding ELM models to obtain second output parameters;
superposing the second output parameters to obtain a training result;
after inverse normalization of the training result, comparing it with the real result by using one or more error evaluation indices, until the error is smaller than a preset threshold value, at which point the training of the CNN-LSTM-ELM prediction model is completed;
wherein inputting the intrinsic mode function components IMF_1–IMF_n and the residual res component into the CNN-LSTM prediction model, respectively, to obtain the first output parameters comprises:

constructing n + 1 convolutional neural network models and n + 1 long short-term memory neural network models, wherein each convolutional neural network model consists of two convolutional layers, two pooling layers and a flatten layer, and each long short-term memory neural network model consists of two LSTM layers;

inputting the intrinsic mode function components IMF_1–IMF_n and the residual res component into the n + 1 convolutional neural network models, respectively, to obtain n + 1 first output sub-parameters, wherein A_k is the k-th first output sub-parameter and 1 ≤ k ≤ n + 1; inputting the intrinsic mode function components IMF_1–IMF_n and the residual res component into the n + 1 long short-term memory neural network models, respectively, to obtain n + 1 second output sub-parameters, wherein B_k is the k-th second output sub-parameter;

concatenating A_k and B_k through two fully connected layers to output the first output parameter C_k, wherein C_k is the k-th first output parameter, and there are n + 1 first output parameters in total.
2. The short-term load prediction method based on neural network combination model according to claim 1, wherein the normalization processing of the load sample data includes:
mapping the load sample data to [0, 1] using the formula:
x* = (x − x_min) / (x_max − x_min)

wherein x* is the normalized data, x_min and x_max are respectively the minimum value and the maximum value in the load sample data set, and x is the original load sample data.
3. The short-term load prediction method based on the neural network combination model according to claim 2, wherein the decomposing the load sample data after the normalization processing by using a CEEMDAN decomposition algorithm to obtain a plurality of decomposition components comprises:
building the test load sequence of the i-th trial by adding white noise of different amplitudes obeying a normal distribution to the load sample data:

x_i(t) = x(t) + ε_0·ω_i(t)

wherein ε_0 is the noise coefficient, ω_i(t) is the amplitude of the white noise corresponding to the i-th trial, 1 ≤ i ≤ P, P is the number of signal sequences, corresponding to the total number of trials on the load sample data, x_i(t) is the load sample data obtained by superposing the white noise added for the i-th time on the original load sample data, denoted the test load sequence, and x(t) is the original load sample data;
decomposing the test load sequences by an EMD decomposition method, obtaining IMF_{11}–IMF_{1P} from the P signal sequences, and taking the mean value to obtain a first mean value, which serves as the first intrinsic mode function IMF_1:

IMF_1(t) = (1/P) Σ_{i=1}^{P} IMF_{1i}(t)

the first residual sequence r_1(t) being:

r_1(t) = x(t) − IMF_1(t)
defining E_j(·) as the j-th IMF component obtained by EMD decomposition, and decomposing the new load sequence r_1(t) + ε_1·E_1(ω_i(t)) to obtain a second mean value, which serves as the second intrinsic mode function IMF_2:

IMF_2(t) = (1/P) Σ_{i=1}^{P} E_1(r_1(t) + ε_1·E_1(ω_i(t)))

the second residual sequence being:

r_2(t) = r_1(t) − IMF_2(t)
repeating the above steps to obtain the remaining intrinsic mode function components IMF_3–IMF_n and the final residual res component:

IMF_k(t) = (1/P) Σ_{i=1}^{P} E_{k−1}(r_{k−1}(t) + ε_{k−1}·E_{k−1}(ω_i(t)))

r_n(t) = x(t) − Σ_{k=1}^{n} IMF_k(t)

wherein 3 ≤ k ≤ n, and r_n(t) is the residual res component.
4. The short-term load prediction method based on a neural network combination model according to claim 3, wherein decomposing the test load sequence by the EMD decomposition method and taking the mean of IMF′_{11}–IMF′_{1P} obtained from the P signal sequences comprises:
a first step of finding all extreme points of the test load sequence, fitting the upper and lower envelopes of the test load sequence with a cubic spline interpolation function, and calculating the mean of the two envelopes:

e_i(t) = (e_{1i}(t) + e_{2i}(t)) / 2

wherein e_{1i}(t) and e_{2i}(t) are respectively the upper and lower envelopes of the test load sequence in the i-th trial, and e_i(t) is the mean of the upper and lower envelopes of the test load sequence in the i-th trial; and subtracting the mean from the test load sequence to obtain the difference c_i(t):

c_i(t) = x_i(t) − e_i(t)

a second step of: if c_i(t) is less than or equal to a preset reference value, or the number of iterations reaches a preset iteration count, taking c_i(t) as IMF′_{1i}; otherwise, taking c_i(t) as the new test load sequence and repeating the first step until the obtained difference is less than or equal to the preset reference value or the number of iterations reaches the preset iteration count.
5. The short-term load prediction method based on the neural network combination model as claimed in claim 1, wherein the step of inputting the respective first output parameters into the corresponding ELM models to obtain the second output parameters comprises:
constructing n + 1 ELM models, inputting the n + 1 first output parameters into the n + 1 ELM models, respectively, and outputting n + 1 second output parameters.
6. The short-term load prediction method based on a neural network combination model according to any one of claims 1-4, wherein performing hyper-parameter optimization on the CNN-LSTM-ELM prediction model by using PSO to obtain the optimized CNN-LSTM-ELM prediction model comprises:
S11: initializing particle swarm algorithm parameters, wherein the learning factors C1 and C2 are both set to 2, the inertia weight W is set to 0.8, the particle population size N is set to 20, the particle dimension is set to 8, the maximum number of iterations T is set to 100, r1 and r2 are both random numbers in [0, 1], the maximum velocity V_max is set to 5, and the minimum velocity V_min is set to −5;

S12: setting the current iteration number m = 1, calculating the fitness value of the CNN-LSTM-ELM prediction model, the fitness value being the mean square error of the CNN-LSTM-ELM prediction model, sorting the fitness values, taking each particle as the local optimum of the current population, denoted p_best, and taking the particle with the smallest fitness value as the global optimum, denoted g_best;
S13: updating the particle velocities and positions:

V_m = W·V_m + C1·r1·(p_best − X_m) + C2·r2·(g_best − X_m)

X_m = X_m + V_m

wherein X_m is the particle position at the m-th iteration, V_m is the particle velocity at the m-th iteration, and 1 ≤ m ≤ T;
S14: when the preset number of iterations is reached or the global optimum g_best satisfies the preset limit, obtaining the CNN-LSTM-ELM prediction model with the optimal hyper-parameters; otherwise, setting m = m + 1 and repeating steps S12–S14 until all CNN-LSTM-ELM prediction models are obtained.
7. An electronic device, comprising: a memory storing executable program code; a processor coupled with the memory; the processor calls the executable program code stored in the memory for executing a short-term load prediction method based on a neural network combination model according to any one of claims 1 to 6.
8. A computer-readable storage medium storing a computer program, wherein the computer program causes a computer to execute a short-term load prediction method based on a neural network combination model according to any one of claims 1 to 6.
CN202110560339.7A 2021-05-21 2021-05-21 Short-term load prediction method, equipment and medium based on neural network combination model Expired - Fee Related CN113239624B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110560339.7A CN113239624B (en) 2021-05-21 2021-05-21 Short-term load prediction method, equipment and medium based on neural network combination model


Publications (2)

Publication Number Publication Date
CN113239624A CN113239624A (en) 2021-08-10
CN113239624B true CN113239624B (en) 2022-08-23

Family

ID=77138119

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110560339.7A Expired - Fee Related CN113239624B (en) 2021-05-21 2021-05-21 Short-term load prediction method, equipment and medium based on neural network combination model

Country Status (1)

Country Link
CN (1) CN113239624B (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113837441A (en) * 2021-08-24 2021-12-24 山东大学 Power load prediction method and system considering reconstruction accuracy after EEMD decomposition
CN113837480B (en) * 2021-09-29 2023-11-07 河北工业大学 Impact load prediction method based on improved GRU and differential error compensation
CN114066031A (en) * 2021-11-08 2022-02-18 国网山东综合能源服务有限公司 Day-by-day optimization scheduling method and system of comprehensive energy system
CN114091766B (en) * 2021-11-24 2024-04-12 东北电力大学 CEEMDAN-LSTM-based space load prediction method
CN115169232B (en) * 2022-07-11 2024-03-01 山东科技大学 Daily peak load prediction method, computer equipment and readable storage medium
CN115514439A (en) * 2022-09-26 2022-12-23 华工未来科技(江苏)有限公司 Channel air interface utilization rate prediction method, system, electronic equipment and medium
CN115907131A (en) * 2022-11-16 2023-04-04 国网宁夏电力有限公司经济技术研究院 Method and system for building electric heating load prediction model in northern area
CN115860277B (en) * 2023-02-27 2023-05-09 西安骏硕通信技术有限公司 Data center energy consumption prediction method and system
CN117744895A (en) * 2024-02-20 2024-03-22 山东华科信息技术有限公司 Thermodynamic load prediction method, device, equipment and storage medium

Citations (1)

Publication number Priority date Publication date Assignee Title
CN111784068A (en) * 2020-07-09 2020-10-16 北京理工大学 EEMD-based power load combined prediction method and device

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US11327475B2 (en) * 2016-05-09 2022-05-10 Strong Force Iot Portfolio 2016, Llc Methods and systems for intelligent collection and analysis of vehicle data
CN109242139A (en) * 2018-07-23 2019-01-18 华北电力大学 A kind of electric power day peak load prediction technique
CN109146183A (en) * 2018-08-24 2019-01-04 广东工业大学 Short-term impact load forecasting model method for building up based on signal decomposition and intelligent optimization algorithm
CN110363360A (en) * 2019-07-24 2019-10-22 广东工业大学 A kind of short-term wind power forecast method, device and equipment
CN111985692B (en) * 2020-07-22 2022-04-15 河海大学 CEEMDAN-based power load prediction method

Patent Citations (1)

Publication number Priority date Publication date Assignee Title
CN111784068A (en) * 2020-07-09 2020-10-16 北京理工大学 EEMD-based power load combined prediction method and device

Also Published As

Publication number Publication date
CN113239624A (en) 2021-08-10

Similar Documents

Publication Publication Date Title
CN113239624B (en) Short-term load prediction method, equipment and medium based on neural network combination model
Tian Short-term wind speed prediction based on LMD and improved FA optimized combined kernel function LSSVM
CN111222332B (en) Commodity recommendation method combining attention network and user emotion
Cui et al. Research on power load forecasting method based on LSTM model
CN107506868B (en) Method and device for predicting short-time power load
CN111814956B (en) Multi-task learning air quality prediction method based on multi-dimensional secondary feature extraction
CN111027772A (en) Multi-factor short-term load prediction method based on PCA-DBILSTM
CN113642225A (en) CNN-LSTM short-term wind power prediction method based on attention mechanism
CN116264388A (en) Short-term load prediction method based on GRU-LightGBM model fusion and Bayesian optimization
CN113222279A (en) Short-term load prediction method considering demand response
CN111861013A (en) Power load prediction method and device
Sarah et al. LSTM model to forecast time series for EC2 cloud price
CN111695024A (en) Object evaluation value prediction method and system, and recommendation method and system
CN111461445A (en) Short-term wind speed prediction method and device, computer equipment and storage medium
CN114548586A (en) Short-term power load prediction method and system based on hybrid model
CN116169670A (en) Short-term non-resident load prediction method and system based on improved neural network
Karny et al. Dealing with complexity: a neural networks approach
Damaševičius et al. Decomposition aided attention-based recurrent neural networks for multistep ahead time-series forecasting of renewable power generation
Wu et al. A forecasting model based support vector machine and particle swarm optimization
Priyatno et al. Feature selection using non-parametric correlations and important features on recursive feature elimination for stock price prediction
CN114358813B (en) Improved advertisement putting method and system based on field matrix factorization machine
Raximov et al. The importance of loss function in artificial intelligence
CN110956528B (en) Recommendation method and system for e-commerce platform
Li et al. Research on recommendation algorithm based on e-commerce user behavior sequence
CN113033903A (en) Fruit price prediction method, medium and equipment of LSTM model and seq2seq model

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220823