CN116822742A - Power load prediction method based on dynamic decomposition-reconstruction integrated processing - Google Patents


Info

Publication number
CN116822742A
CN202310819672.4A
Authority
CN
China
Prior art keywords
decomposition
osprey
algorithm
model
components
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310819672.4A
Other languages
Chinese (zh)
Inventor
张学东
张楚
陈杰
彭甜
赵环宇
葛宜达
陈佳雷
王熠炜
王政
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huaiyin Institute of Technology
Original Assignee
Huaiyin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huaiyin Institute of Technology filed Critical Huaiyin Institute of Technology
Priority to CN202310819672.4A priority Critical patent/CN116822742A/en
Publication of CN116822742A publication Critical patent/CN116822742A/en
Pending legal-status Critical Current


Classifications

    • G06Q10/04 — Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06F18/10 — Pattern recognition: pre-processing; data cleansing
    • G06F18/2131 — Feature extraction based on a transform domain processing, e.g. wavelet transform
    • G06F18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06F18/24 — Classification techniques
    • G06N3/006 — Artificial life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
    • G06N3/0418 — Neural network architecture using chaos or fractal principles
    • G06N3/045 — Combinations of networks
    • G06N3/0464 — Convolutional networks [CNN, ConvNet]
    • G06N3/08 — Learning methods
    • G06Q50/06 — Energy or water supply


Abstract

The invention discloses a power load prediction method based on dynamic decomposition-reconstruction integrated processing. The method first obtains and preprocesses power load data, then decomposes the data with the MLPT denoising decomposition and establishes a GCN-Reformer power load prediction model for the decomposed components, with the hyperparameters of the Reformer optimized by an improved osprey optimization algorithm (OOA). Low-precision components requiring secondary decomposition are selected according to the performance of each decomposed component on a verification set; all low-precision components are aggregated by permutation entropy into one high-complexity and one low-complexity component, which are then decomposed again with the wavelet packet decomposition (WPD) method. The GCN-Reformer model predicts each WPD component, and the indices of the WPD component prediction results determine whether a further decomposition is needed. Finally, all predicted components are accumulated to obtain the final power load prediction result. Compared with the prior art, the method can effectively improve the prediction accuracy of the power load.

Description

Power load prediction method based on dynamic decomposition-reconstruction integrated processing
Technical Field
The invention belongs to the technical field of power load prediction, and particularly relates to a power load prediction method based on dynamic decomposition-reconstruction integrated processing.
Background
As power demand changes from moment to moment, short-term load prediction is an important component of power system load prediction, a foundation for the safe and economic operation of the power system, and significant for electricity market transactions. At present, China is vigorously promoting the energy revolution to realize its dual-carbon targets, and the power industry is a major source of carbon emissions. Accurate short-term power load prediction can greatly improve the utilization rate of electric energy and reduce carbon emissions; conversely, a large load prediction error causes substantial operating cost and profit loss, and can even affect the operational reliability of the power system and the supply-demand balance of the electricity market.
The first category of existing methods is classical time-series methods, mainly comprising the exponential smoothing model method, the Kalman filtering method and the like; these consider the temporal relations among data but have limited capability for predicting nonlinear data and lack generality. The second category is statistical methods, such as those based on vector autoregressive models and multiple linear regression models, but they suffer from complex modeling.
Therefore, the third category, machine learning methods, has gradually become a research hotspot in recent years. However, power load data exhibit strong randomness and fluctuation, and the volume of load data is huge, so traditional methods have long training periods, complex operation and low prediction precision in practice. Decomposition-integration techniques can simplify complex data and extract data features, but a single decomposition cannot completely eliminate the randomness and irregularity in a time series, and some of the generated components retain dynamic complexity and irregular frequency ranges, which creates difficulty for the prediction model. Secondary decomposition techniques alleviate these problems to a certain extent, but some components may still be unstable and highly complex after secondary decomposition; moreover, the number of decompositions and components must be determined in advance and cannot be dynamically adjusted according to the data characteristics, so transferability is poor and prediction accuracy suffers.
Disclosure of Invention
The invention aims to: in order to solve the problems in the background art, the invention provides a power load prediction method based on dynamic decomposition-reconstruction integrated processing, improves the existing decomposition-reconstruction mode and can effectively improve the power load prediction precision.
The technical scheme is as follows: the invention provides a power load prediction method based on dynamic decomposition-reconstruction integrated processing, which comprises the following steps:
step 1: acquiring original data of an electric power system, preprocessing the acquired original data, and dividing the original data into a training set, a verification set and a test set;
step 2: performing primary decomposition on the data by adopting the MLPT denoising decomposition, decomposing the data into a plurality of intrinsic mode function (IMF) components;
step 3: establishing a power load prediction model GCN-Reformer based on a graph convolutional network and the Reformer model for each IMF component, and performing optimization training on the GCN-Reformer model;
step 4: improving the osprey optimization algorithm by adopting a hybrid mutation strategy of Tent chaotic mapping and Cauchy opposition-based learning to improve the global search capability of the algorithm, so as to obtain an improved osprey optimization algorithm;
step 5: optimizing the GCN-Reformer model with the improved osprey optimization algorithm to obtain the optimal hyperparameters; then predicting each component with the optimized model to obtain the prediction precision of the corresponding component, and selecting the low-precision components to be secondarily decomposed according to the performance of the decomposed components on the verification set, where a low-precision component requires re-decomposition if its prediction RMSE is greater than or equal to the average RMSE over all decomposed components;
step 6: aggregating the low-precision components that need re-decomposition into a high-complexity component and a low-complexity component by means of the permutation entropy (PE);
step 7: after classifying and aggregating the decomposed components with the permutation entropy, further carrying out secondary decomposition with the wavelet packet decomposition (WPD) method to generate new components;
step 8: inputting the secondary components obtained in step 7 into the trained and optimized GCN-Reformer model for prediction, and judging whether re-decomposition is needed according to the indices of the WPD component prediction results: if NRMSE > 10%, WPD decomposition is performed again, otherwise the obtained components are retained; finally, the prediction results of all components are accumulated to obtain the final short-term load prediction.
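The selection and stopping logic of steps 5 and 8 can be sketched as follows. This is a minimal illustration, not the patent's implementation; the helper names (`select_low_precision`, `needs_redecomposition`) are assumptions, and NRMSE is taken here relative to the range of the true series.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between two series."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def nrmse(y_true, y_pred):
    """Normalised RMSE, expressed relative to the range of the true series
    (one common convention; the patent does not fix the normalisation)."""
    y_true = np.asarray(y_true, dtype=float)
    return rmse(y_true, y_pred) / (y_true.max() - y_true.min())

def select_low_precision(component_rmses):
    """Step 5 criterion: indices of components whose validation RMSE is
    greater than or equal to the mean RMSE over all components."""
    r = np.asarray(component_rmses, dtype=float)
    return [i for i, v in enumerate(r) if v >= r.mean()]

def needs_redecomposition(y_true, y_pred, threshold=0.10):
    """Step 8 stopping rule: re-run WPD while NRMSE exceeds 10%."""
    return nrmse(y_true, y_pred) > threshold
```

In the dynamic loop, `select_low_precision` is applied to the validation RMSEs of the first-stage components, and `needs_redecomposition` is re-evaluated after each WPD pass until every component passes the 10% NRMSE threshold.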
Further, the step 1 of preprocessing the obtained raw data includes the following steps:
step 1.1: filling the missing values of the original data by Lagrange interpolation, and deleting abnormal data;
step 1.2: the processed raw data is divided into three parts, namely a 60% training set, a 20% verification set and a 20% test set, wherein the training set is used for model construction, the verification set is used for component selection for further decomposition and super-parameter selection, and the test set is used for model verification.
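The chronological split of step 1.2 can be sketched as follows. The function name and signature are illustrative assumptions; the essential point, since the data form a time series, is that the split preserves temporal order rather than shuffling.

```python
def chronological_split(data, ratios=(0.6, 0.2, 0.2)):
    """Split a load series in time order into training, validation and
    test sets (60%/20%/20% as in step 1.2); no shuffling is performed."""
    n = len(data)
    i = int(n * ratios[0])            # end of the training set
    j = i + int(n * ratios[1])        # end of the validation set
    return data[:i], data[i:j], data[j:]
```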
Further, the step 2 decomposes the data by adopting the MLPT denoising decomposition, comprising the following steps:
step 2.1: performing wavelet transformation on the original signal to obtain wavelet coefficients of a plurality of scales;
step 2.2: carrying out dominant trend decomposition on each wavelet coefficient to obtain dominant trend and detail signals on the scale;
step 2.3: adding the dominant trends of each scale to obtain the dominant trend of the hierarchy;
step 2.4: taking the dominant trend of the hierarchy as a part of the signal, taking the detail signal as noise, and filtering and removing the noise;
step 2.5: reconstructing the signal after noise removal;
step 2.6: repeating steps 2.2-2.5 until the signals of all levels are decomposed, obtaining the MLPT denoising decomposition of the original signal.
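The decompose/denoise/reconstruct cycle of steps 2.1-2.6 can be illustrated with a minimal sketch. A one-level Haar transform stands in for the MLPT decomposition (whose exact filters the text does not specify), and a simple threshold on the detail band stands in for the noise filtering of step 2.4; all names here are illustrative.

```python
import numpy as np

def haar_step(x):
    """One analysis level: split an even-length signal into a coarse
    trend band and a detail band (cf. steps 2.1-2.2)."""
    x = np.asarray(x, dtype=float)
    trend = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return trend, detail

def haar_inverse(trend, detail):
    """Reconstruct the signal from trend + detail (cf. step 2.5)."""
    x = np.empty(2 * len(trend))
    x[0::2] = (trend + detail) / np.sqrt(2.0)
    x[1::2] = (trend - detail) / np.sqrt(2.0)
    return x

def denoise(x, levels=2, thresh=0.1):
    """Recursively keep the dominant trend and zero small detail
    coefficients, mimicking the trend/noise split of steps 2.2-2.6."""
    if levels == 0 or len(x) < 2:
        return np.asarray(x, dtype=float)
    trend, detail = haar_step(x)
    detail = np.where(np.abs(detail) > thresh, detail, 0.0)  # drop "noise"
    return haar_inverse(denoise(trend, levels - 1, thresh), detail)
```

With `thresh=0.0` the transform is exactly invertible, which is a convenient sanity check before applying an actual threshold.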
Further, after the MLPT denoising decomposition, the data are decomposed into a plurality of IMF components; each IMF is used to build a power load prediction model based on a graph convolutional network and the Reformer model, and the GCN-Reformer model is optimized and trained, comprising the following steps:
step 3.1: constructing a continuous feature graph from the load data of each IMF according to a time sliding window as input, forming a power network model, and then embedding the power network model into the GCN as a graph structure; the spatial-structure convolution layer comprises two GCN layers, and the first GCN layer is used to analyze the topological structure of each region and extract spatial features;
step 3.2: the second GCN layer continues to extract information on the basis of the first layer; the information extracted at different moments is arranged into a time sequence and used as the input of the Reformer; a graph model is built for the real problem, and the graph convolutional neural network extracts hidden graph information by using the structural information of the connections between edges and vertices of the graph together with the attribute information attached to the graph structure;
for graph g= (V, E, L), the input signal X and the output signal Y are processed by the graph convolutional neural network in the following manner:
f(X,L)=Y (1)
wherein V is the set of nodes with N = |V| nodes, and E is the set of edges; L is the adjacency matrix of the graph, L ∈ R^(N×N), and element L_ij of matrix L represents the connection relation between nodes v_i and v_j in graph G; the forward propagation formula of the graph convolution is:
H^(1) = α(D̃^(−1/2) Ã D̃^(−1/2) H^(0) W^(1))  (2)
wherein Ã = L + E is the self-connection matrix, E is the N-order identity matrix, and D̃ is the diagonal degree matrix with D̃_ii = Σ_j Ã_ij; H^(1) ∈ R^(N×D) represents the output value of the first layer, where H^(0) = X; α represents the ReLU activation function; W^(1) represents the parameter values of the first layer;
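The propagation rule of formula (2) can be sketched directly. This is a generic single GCN layer with the renormalized adjacency, not the patent's full two-layer spatial-structure network; the function name is an assumption.

```python
import numpy as np

def gcn_layer(H, L, W):
    """One graph-convolution layer following Eq. (2):
    H1 = ReLU(D̃^{-1/2} (L + E) D̃^{-1/2} H0 W1),
    where L is the adjacency matrix and E the identity (self-connections)."""
    A_tilde = L + np.eye(L.shape[0])          # Ã = L + E: add self-connections
    d = A_tilde.sum(axis=1)                   # degrees of the augmented graph
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # D̃^{-1/2}
    return np.maximum(D_inv_sqrt @ A_tilde @ D_inv_sqrt @ H @ W, 0.0)  # ReLU
```

With an empty adjacency matrix the normalization reduces to the identity, so the layer degenerates to `ReLU(H @ W)`, a convenient sanity check.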
step 3.3: dividing the sequence output by the graph convolutional network into different hash buckets by using the locality-sensitive hashing (LSH) attention mechanism of the Reformer, and sorting according to the hash buckets, thereby obtaining the attention result and aggregating the global attributes of the data;
step 3.4: training the fusion model based on the graph convolutional network and the Reformer by using the training set and the verification set divided in step 1, and training and predicting the GCN-Reformer model with the decomposed IMF components.
Further, the step 4 adopts a hybrid mutation strategy of Tent chaotic mapping and Cauchy opposition-based learning to improve the osprey optimization algorithm, so as to improve the global search capability and optimization performance of the algorithm, comprising the following steps:
step 4.1: a uniformly distributed initial population is obtained by introducing the Tent chaotic map, whose expressions are:
Y_(i+1) = 2Y_i, if 0 ≤ Y_i < 0.5; Y_(i+1) = 2(1 − Y_i), if 0.5 ≤ Y_i ≤ 1  (4)
M_i = lb + Y_i(ub − lb)  (5)
wherein i = 1, 2, 3, …, N − 1; the first individual of the population is randomly generated, and the remaining N − 1 individuals are generated by formulas (4) and (5); M_i is the i-th individual of the initial population; Y_i is the i-th individual of the mapping space;
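Formulas (4)-(5) can be sketched as a short initialisation routine; the function name and the `seed` parameter are illustrative assumptions.

```python
import numpy as np

def tent_init(n, lb, ub, seed=0):
    """Tent-chaotic-map initialisation (Eqs. (4)-(5)): the first chaotic
    value is random, the rest follow the tent map, and each value is
    then scaled into the search interval [lb, ub]."""
    rng = np.random.default_rng(seed)
    y = np.empty(n)
    y[0] = rng.random()                          # first individual: random
    for i in range(n - 1):                       # Eq. (4): tent-map iteration
        y[i + 1] = 2 * y[i] if y[i] < 0.5 else 2 * (1 - y[i])
    return lb + y * (ub - lb)                    # Eq. (5): map into [lb, ub]
```

Because the tent map keeps its iterates in [0, 1], every generated individual lies inside the search bounds by construction.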
step 4.2: introducing a Cauchy mutation operator into the OOA algorithm, whose expression is:
O*_best(t+1) = O_best(t) + Cauchy(0,1) ⊗ O_best(t)  (6)
wherein Cauchy(0,1) is the standard Cauchy random distribution, O*_best(t+1) is the mutated optimal solution at the (t+1)-th iteration, and O_best(t) is the original position of the osprey at the t-th iteration;
step 4.3: introducing a reverse learning strategy into an OOA algorithm, capturing a reverse solution in a corresponding solution space according to a current solution, and guiding individual optimization by comparing the two solutions to reserve a better solution, wherein the expression is as follows:
O′_best(t) = k_1(ub + lb) − O_best(t)  (7)
wherein O′_best(t) is the opposite solution of the optimal individual at the t-th iteration, i.e. of the optimal position of the osprey, and k_1, k_2 are random numbers in [0, 1];
step 4.4: in summary, the formula of the Cauchy opposition-based learning hybrid mutation strategy is:
O_best^new(t) = O*_best(t+1) = O_best(t) + Cauchy(0,1) ⊗ O_best(t), if P > 0.5
O_best^new(t) = O′_best(t) = k_1(ub + lb) − O_best(t), if P ≤ 0.5  (8)
wherein P is a random probability subject to a uniform distribution. When P > 0.5, the algorithm mutates the optimal solution with the Cauchy operator and quickly escapes from local optima, ensuring steady optimization of the algorithm; when P ≤ 0.5, the algorithm perturbs the current optimal solution with the opposition-based learning strategy, whose opposite solutions enlarge the exploration range of the population and increase the probability that an individual approaches the target position, and the dynamically changing random value k_1 improves the optimization speed of the algorithm to a certain extent;
step 4.5: after the hybrid mutation strategy is finished, a greedy rule compares the fitness values of the two solutions and keeps the dominant individual, with the formula:
O_best(t+1) = O_best^new(t), if f(O_best^new(t)) < f(O_best(t)); O_best(t+1) = O_best(t), otherwise  (9)
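The hybrid mutation of steps 4.2-4.5 — Cauchy mutation, opposition-based candidate, then greedy retention — can be sketched in one routine. The function name is an assumption and minimisation of the fitness is assumed; scalar bounds are used for brevity.

```python
import numpy as np

def mixed_mutation(o_best, fitness, lb, ub, rng=None):
    """Hybrid perturbation of the current best solution (Eqs. (6)-(9)):
    with probability 0.5 apply Cauchy mutation, otherwise generate an
    opposition-based candidate; keep the mutant only if it is fitter."""
    if rng is None:
        rng = np.random.default_rng()
    if rng.random() > 0.5:
        # Eq. (6): Cauchy mutation around the current best
        candidate = o_best + rng.standard_cauchy(o_best.shape) * o_best
    else:
        # Eq. (7): opposition-based candidate with random k1 in [0, 1]
        candidate = rng.random() * (ub + lb) - o_best
    candidate = np.clip(candidate, lb, ub)
    # Eq. (9): greedy retention of the dominant individual
    return candidate if fitness(candidate) < fitness(o_best) else o_best
```

Because of the greedy comparison, repeated application can never worsen the fitness of the retained best solution.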
further, the step 5 optimizes the GCN-reform model by using an improved hawk optimization algorithm to obtain an optimal super parameter, wherein the super parameter comprises a learning rate, a multi-head multi-path local sensitive hash head number and a hidden layer size, the super parameter corresponds to an fitness value in the algorithm, and after the algorithm continuously iterates to calculate the optimal fitness value, the optimal super parameter value is output, and the super parameter method for optimizing the model by calculating the fitness value comprises the following steps:
step 5.1: firstly, initializing corresponding parameters of the algorithm, as shown in a formula,
o ij =lb j +r·(ub j -lb j ) (11)
wherein o_ij is an individual, lb_j is the lower optimization bound, ub_j is the upper optimization bound, and r is a random number in [0, 1];
step 5.2: during the search phase, for each osprey, the positions of the other individuals are regarded as underwater fish, and the fish set of an osprey is given by:
FP_i = { O_k | k ∈ {1, 2, …, N}, F_k < F_i } ∪ { O_best }  (12)
wherein O_k is the position of the k-th osprey, FP_i is the fish set of the i-th osprey, F_k and F_i are the fitness values of the corresponding fish positions, and O_best is the optimal position of the osprey;
the osprey randomly detects the position of one of the fish and attacks it; on the basis of simulating the movement of the osprey toward the fish, the new position of the corresponding osprey is calculated by formulas (13-1) and (13-2), and if the new position is better, it replaces the previous position of the osprey according to formula (14):
O_i^P1 = O_i + r·(SF_i − I·O_i)  (13-1)
O_i^P1 = min(max(O_i^P1, lb), ub)  (13-2)
O_i = O_i^P1, if F_i^P1 < F_i; O_i is retained, otherwise  (14)
wherein O_i^P1 is the first position update of the i-th osprey, SF_i is the fish selected for the osprey, r is a random number in [0, 1], I is one of {1, 2}, F_i^P1 is the fitness of the updated position, and O_i is the current position of the osprey;
step 5.3: in the exploitation stage, after catching a fish, the osprey carries it to a suitable position to eat it. For each member of the population, a new random position suitable for eating is calculated by formula (15); then, if the value of the objective function is improved at the new position, the previous position of the corresponding osprey is replaced according to formula (16), and finally the optimal position, i.e. the optimal fitness value of the OOA algorithm corresponding to the hyperparameters of the model, is obtained:
O_i^P2 = O_i + (lb_j + r·(ub_j − lb_j)) / t  (15)
O_i = O_i^P2, if F_i^P2 < F_i; O_i is retained, otherwise  (16)
wherein O_i^P2 is the second position update of the osprey, lb_j is the lower optimization bound, ub_j is the upper optimization bound, t is the iteration number, F_i^P2 is the fitness of the updated position, and O_i is the current position of the i-th osprey;
step 5.4: optimizing the GCN-Reformer model with the improved OOA algorithm, and then predicting the decomposed IMF components to obtain the prediction result of each IMF.
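The two OOA phases of steps 5.2-5.3 can be sketched as one population iteration on a generic minimisation problem. This is an illustrative sketch with assumed names, scalar bounds, and a plain objective in place of the model-training fitness of step 5; it follows Eqs. (12)-(16) with greedy acceptance in both phases.

```python
import numpy as np

def ooa_iteration(pop, fitness, lb, ub, t, rng=None):
    """One osprey-optimization iteration: each osprey first moves toward a
    randomly detected "fish" (a fitter individual or the global best),
    then toward a random feeding position whose step shrinks with t."""
    if rng is None:
        rng = np.random.default_rng()
    fit = np.array([fitness(o) for o in pop])
    best = pop[int(np.argmin(fit))]
    new_pop = pop.copy()
    for i, o in enumerate(pop):
        # Phase 1 (Eqs. (12)-(14)): attack a randomly selected fish
        fish = [pop[k] for k in range(len(pop)) if fit[k] < fit[i]] + [best]
        sf = fish[rng.integers(len(fish))]
        I = rng.integers(1, 3)                       # I in {1, 2}
        cand = np.clip(o + rng.random() * (sf - I * o), lb, ub)
        if fitness(cand) < fit[i]:
            new_pop[i] = cand
        # Phase 2 (Eqs. (15)-(16)): carry the fish to a feeding position
        cand = np.clip(new_pop[i] + (lb + rng.random() * (ub - lb)) / t, lb, ub)
        if fitness(cand) < fitness(new_pop[i]):
            new_pop[i] = cand
    return new_pop
```

Both phases only accept improving moves, so the best fitness in the population is non-increasing across iterations.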
Further, the classification of the first-decomposition components in step 6 using the permutation entropy (PE) comprises the following steps:
step 6.1: classifying the low-precision components using PE; performing phase-space reconstruction on the load sequence {x_t, t = 1, 2, …, N} to obtain the matrix X:
X = [ x_1  x_(1+τ)  …  x_(1+(d−1)τ) ;  x_2  x_(2+τ)  …  x_(2+(d−1)τ) ;  … ;  x_k  x_(k+τ)  …  x_(k+(d−1)τ) ]  (17)
wherein τ represents the time lag, d represents the embedding dimension, and k = N − (d−1)τ represents the number of reconstructed subsequences;
step 6.2: set X j Is the j-th row vector of the matrix X, is ordered in descending order, and is used for researching the information entropy H of PE according to the probability P of calculating each size relation arrangement p The method comprises the steps of carrying out a first treatment on the surface of the In total d-! The possible ordering of the embedding dimensions d, pi j and PE, relative frequencies are expressed as:
wherein ,πj Is X j In a sequential manner, f (pi j ) Is pi j Frequency of occurrence in time series;
step 6.3: PE is computed and normalized as:
H_p(d) = −Σ_(j=1)^(k) P(π_j) ln P(π_j)  (19)
0 ≤ H_p(d) / ln(d!) ≤ 1  (20)
step 6.4: the information entropy H_p of PE is calculated according to the above expressions; if the probabilities of all ordinal patterns are equal, the normalized H_p takes the value 1. If the PE of a component is greater than a given threshold θ, the component exhibits a high-complexity character; otherwise it exhibits a low-complexity character. The components are thus classified into 3 classes: high precision, high-complexity low precision, and low-complexity low precision.
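Formulas (17)-(20) can be sketched as a single routine: embed the series, count the ordinal patterns, and normalise the Shannon entropy by ln(d!). The function name and defaults are assumptions; ascending `argsort` ranks are used for the patterns, which only relabels them and leaves the entropy unchanged.

```python
import numpy as np
from collections import Counter
from math import factorial, log

def permutation_entropy(x, d=3, tau=1):
    """Normalised permutation entropy: embed with dimension d and lag tau
    (Eq. (17)), compute ordinal-pattern frequencies (Eq. (18)), then the
    Shannon entropy scaled by ln(d!) so the result lies in [0, 1]
    (Eqs. (19)-(20))."""
    x = np.asarray(x, dtype=float)
    k = len(x) - (d - 1) * tau            # number of reconstructed subsequences
    patterns = Counter(
        tuple(np.argsort(x[i:i + (d - 1) * tau + 1:tau])) for i in range(k)
    )
    probs = np.array(list(patterns.values())) / k
    h = -np.sum(probs * np.log(probs))    # Eq. (19)
    return float(h / log(factorial(d)))   # Eq. (20): normalise by ln(d!)
```

A strictly monotone series produces a single pattern and hence entropy 0, while a highly irregular series approaches 1; comparing this value against the threshold θ gives the high/low complexity label of step 6.4.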
Further, the step 7 of performing a secondary decomposition with the WPD method to generate new components comprises the following steps:
step 7.1: performing WPD secondary decomposition on the low-complexity and high-complexity components under the low-precision components;
step 7.2: on the basis of the wavelet transform, the WPD method further decomposes the high-frequency sub-band in addition to the low-frequency sub-band when decomposing the signal at each level, finally computes the optimal signal decomposition path by minimizing a cost function, and decomposes the original signal along that path. Similarly, in the binary wavelet packet transform, the scale function and the wavelet function of adjacent levels satisfy the recurrence relations:
μ_(2n)(t) = √2 Σ_k h_k μ_n(2t − k)
μ_(2n+1)(t) = √2 Σ_k g_k μ_n(2t − k)  (21)
wherein μ_n is the wavelet packet function, h_k is the low-pass filter and g_k is the high-pass filter.
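The defining property of WPD — both sub-bands are split at every level — can be sketched as follows. This minimal version uses Haar filters and omits the cost-function path selection; the function name and filter defaults are assumptions.

```python
import numpy as np

def wpd(x, level, h=None, g=None):
    """Full binary wavelet-packet decomposition (cf. Eq. (21)): unlike the
    plain wavelet transform, *both* the low-pass and high-pass sub-bands
    are split at every level. Haar filters are used for simplicity."""
    h = np.array([1.0, 1.0]) / np.sqrt(2.0) if h is None else h   # low pass
    g = np.array([1.0, -1.0]) / np.sqrt(2.0) if g is None else g  # high pass
    nodes = [np.asarray(x, dtype=float)]
    for _ in range(level):
        next_nodes = []
        for node in nodes:
            # filter then downsample by 2 for each sub-band
            next_nodes.append(np.convolve(node, h)[1::2])
            next_nodes.append(np.convolve(node, g)[1::2])
        nodes = next_nodes
    return nodes                      # 2**level sub-band components
```

With orthonormal Haar filters the decomposition preserves the signal energy, which makes the sub-band split easy to verify.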
The beneficial effects are that:
1. Existing decomposition-integration approaches must determine the number of decompositions and components in advance, so their transferability is relatively poor. The invention measures the validity of the decomposition based on the predictability and complexity of the components and adjusts the data decomposition and reconstruction process accordingly, enabling the dynamic decomposition-reconstruction technique to fully and effectively extract and simplify the raw data.
2. Addressing the different data processing modes of single models, the invention proposes fusing a graph convolutional network with the Reformer: a continuous feature graph is constructed from the load data as input, the power network model built from it is embedded into the GCN as a graph structure, and the network parameters are converted into an adjacency matrix in the GCN so as to extract local temporal features, while the Reformer model establishes the connection between global changes and the local features. This model fusion method can effectively improve the generalization capability and the accuracy of the model.
3. To address problems of the osprey optimization algorithm such as its slow convergence rate and its tendency to fall into local optima, the current optimal solution is perturbed with a hybrid mutation strategy fusing Cauchy mutation and opposition-based learning, so that the algorithm maintains larger population diversity during local search and takes less time to find the optimal position. It escapes local extrema more easily, while the accelerated population search avoids a concentrated distribution of individuals at initialization.
Drawings
FIG. 1 is a power load prediction flow chart;
FIG. 2 is a schematic view of GCN extracted spatial features;
FIG. 3 is a schematic flow chart of the GCN-Reformer fusion model provided by the invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings. The following examples are only for more clearly illustrating the technical aspects of the present invention, and are not intended to limit the scope of the present invention.
The invention discloses a power load prediction method based on dynamic decomposition-reconstruction integrated processing, which is shown in fig. 1 and specifically comprises the following steps:
step 1: the method comprises the steps of obtaining original data of the power system, preprocessing the obtained original data, and dividing the original data into a training set, a verification set and a test set.
Step 1.1: and supplementing the missing values for the original data by using a Lagrange interpolation method, and deleting the abnormal data.
Step 1.2: the processed raw data was split into three parts, namely training set (60%), validation set (20%) and test set (20%). The training set is used for model construction, the verification set is used for component selection for further decomposition and super-parameter selection, and the test set is used for model verification.
Step 2: the data are decomposed by the MLPT denoising decomposition, comprising the following steps:
Step 2.1: performing wavelet transformation on the original signal to obtain wavelet coefficients at multiple scales. The wavelet transform is a local transform in the time and frequency domains that can effectively extract information from the signal, performing multi-scale refinement analysis of a function or signal through operations such as dilation and translation.
Step 2.2: and carrying out dominant trend decomposition on each wavelet coefficient to obtain dominant trend and detail signals on the scale.
Step 2.3: and adding the dominant trends of each scale to obtain the dominant trend of the hierarchy.
Step 2.4: the dominant trend of the hierarchy is taken as a part of the signal, the detail signal is taken as noise, and the noise is filtered and removed.
Step 2.5: and reconstructing the signal after noise removal.
Step 2.6: repeating steps 2.2-2.5 until the signals of all levels are decomposed, obtaining the MLPT denoising decomposition of the original signal.
Step 3: after the MLPT denoising decomposition, the data are decomposed into a plurality of IMF components; a power load prediction model based on a graph convolutional network and the Reformer model is established with each IMF, the GCN-Reformer model is optimally trained, and, while training the model, the low-precision components to be secondarily decomposed are selected according to the performance of the decomposed components on the verification set, comprising the following steps:
Step 3.1: constructing a continuous feature graph from the load data of each IMF according to a time sliding window as input, and then embedding the power network model into the GCN as a graph structure; the spatial-structure convolution layer comprises two GCN layers, and the first GCN layer is used to analyze the topological structure of each region and extract spatial features;
Step 3.2: the second GCN layer continues to extract information on the basis of the first layer, and the information extracted at different moments is arranged into a time sequence and used as the input of the Reformer;
for graph g= (V, E, L), the input signal X and the output signal Y are processed by the graph convolutional neural network in the following manner:
f(X,L)=Y (1)
wherein V is the set of nodes with N = |V| nodes, and E is the set of edges; L is the adjacency matrix of the graph, L ∈ R^(N×N), and element L_ij of matrix L represents the connection relation between nodes v_i and v_j in graph G; the forward propagation formula of the graph convolution is:
in the formula ,e is an N-order unit square matrix, the self-connectivity matrix D is a diagonal matrix, and the self-connectivity matrix D is a diagonal matrix>H 1 ∈R N×D Representing the output value of the first layer, where H 0 =x; alpha represents a ReLU activation function; w (W) 1 Representing the parameter value of the first layer.
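A minimal NumPy sketch of the propagation rule in equation (2), assuming an undirected adjacency matrix and ReLU activation (layer sizes and weight initialization are illustrative, not taken from the patent):

```python
import numpy as np

def gcn_layer(adjacency, features, weights):
    """One graph-convolution layer implementing
    H_{l+1} = ReLU(D^{-1/2} (A + I) D^{-1/2} H_l W_l),
    where I adds self-connections and D is the resulting degree matrix."""
    a_hat = adjacency + np.eye(adjacency.shape[0])        # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    propagated = d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weights
    return np.maximum(propagated, 0.0)                    # ReLU activation
```

Stacking two such layers, as in steps 3.1-3.2, lets each node aggregate information from its two-hop neighbourhood.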
Step 3.3: use the Reformer's locality-sensitive hashing (LSH) attention to divide the sequence output by the graph convolutional network into different hash buckets, sort by bucket to obtain the attention result, and aggregate the global attributes of the data.
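The bucketing idea behind LSH attention can be illustrated with random-hyperplane hashing: the sign pattern of a few random projections serves as the bucket id, so similar vectors tend to land in the same bucket and attention is computed only within buckets. This is a sketch of the hashing step alone, not the full Reformer attention mechanism.

```python
import numpy as np

def lsh_buckets(vectors, n_hashes=4, seed=0):
    """Angular LSH: project each vector onto random hyperplanes and
    pack the sign bits into an integer bucket id per vector."""
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((vectors.shape[1], n_hashes))
    bits = (vectors @ planes) > 0           # one sign bit per hyperplane
    return (bits * (2 ** np.arange(n_hashes))).sum(axis=1)
```

Identical vectors always share a bucket, while a vector and its negation always land in different buckets (all sign bits flip).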
Step 3.4: train the fusion model based on the graph convolutional network and the Reformer with the training and validation sets divided in step 1, and use the decomposed IMF components to train and test the GCN-Reformer model.
Step 4: improve the osprey optimization algorithm (OOA) by adopting a hybrid mutation strategy of Tent chaotic mapping and Cauchy reverse learning, so as to improve the global search capability and optimization performance of the algorithm, comprising the following steps:
step 4.1: introduce the Tent chaotic map to obtain a uniformly distributed initial population; the expressions are:
Y_{i+1} = 2Y_i, 0 ≤ Y_i < 0.5;  Y_{i+1} = 2(1 − Y_i), 0.5 ≤ Y_i ≤ 1  (4)
M_i = lb + Y_i (ub − lb)  (5)
where i = 1, 2, 3, …, N−1; the first individual of the population is generated randomly, and the remaining N−1 individuals are generated by formulas (4) and (5); M_i is the i-th individual of the initial population; Y_i is the i-th individual in the mapping space.
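Step 4.1's chaotic initialization, assuming the standard Tent map form for formula (4), can be sketched as:

```python
import numpy as np

def tent_init(pop_size, dim, lb, ub, seed=0):
    """Tent-map population initialization: the first chaotic value is
    random, the rest follow Y_{i+1} = 2*Y_i (Y_i < 0.5) or 2*(1 - Y_i),
    then each Y_i is scaled into the search interval [lb, ub]."""
    rng = np.random.default_rng(seed)
    y = np.empty((pop_size, dim))
    y[0] = rng.random(dim)
    for i in range(1, pop_size):
        y[i] = np.where(y[i - 1] < 0.5, 2 * y[i - 1], 2 * (1 - y[i - 1]))
    return lb + y * (ub - lb)               # formula (5)
```

Because the Tent map keeps its values inside [0, 1], every individual is guaranteed to lie within the search bounds.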
Step 4.2: introduce a Cauchy mutation operator into the OOA algorithm; the expression is:
O'_best(t+1) = O_best(t) + O_best(t) · cauchy(0, 1)  (6)
where cauchy(0, 1) is a standard Cauchy random variable, O'_best(t+1) is the optimal solution at the (t+1)-th iteration, and O_best(t) is the optimal position of the osprey before mutation.
Step 4.3: introduce a reverse (opposition-based) learning strategy into the OOA algorithm: capture the reverse solution of the current solution in the corresponding solution space, and guide the individual search by comparing the two solutions and keeping the better one. The expression is:
O'_best(t) = k_1 (ub + lb) − O_best(t)  (7)
where O'_best(t) is the reverse solution of the optimal individual at the t-th iteration, i.e. of the optimal position of the osprey, and k_1, k_2 are random numbers in [0, 1].
Step 4.4: in summary, the Cauchy reverse-learning hybrid mutation strategy is:
O'_best(t+1) = O_best(t) + O_best(t) · cauchy(0, 1), P > 0.5  (8)
O'_best(t+1) = k_1 (ub + lb) − O_best(t), P ≤ 0.5  (9)
where P is a random probability subject to a uniform distribution. When P > 0.5, the algorithm mutates the optimal solution with the Cauchy operator and quickly escapes from local optima, ensuring steady optimization. When P ≤ 0.5, the algorithm perturbs the current optimal solution with the reverse learning strategy; the reverse solution enlarges the exploration range of the population and increases the probability that an individual approaches the target position, and the dynamically changing random value k_1 improves the optimization speed to a certain extent.
Step 4.5: after the hybrid mutation, a greedy comparison of fitness values keeps the dominant individual:
O_best(t+1) = O'_best(t+1), if f(O'_best(t+1)) < f(O_best(t)); otherwise O_best(t+1) = O_best(t)  (10)
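Steps 4.2-4.5 combine into a single mutation-plus-greedy-selection routine. The sketch below is a hedged interpretation (the exact branch probabilities and bound handling in the patent may differ); `fitness` stands for whatever objective the optimizer minimizes.

```python
import numpy as np

def hybrid_mutation(best, fitness, lb, ub, rng):
    """Cauchy / reverse-learning hybrid mutation with greedy selection.
    With probability 0.5 perturb the current best with a Cauchy operator,
    otherwise take its opposition-based (reverse-learning) solution;
    keep whichever of {candidate, best} is fitter."""
    if rng.random() > 0.5:
        # Cauchy mutation: heavy-tailed jump away from local optima
        candidate = best + best * rng.standard_cauchy(best.shape)
    else:
        # reverse learning: O' = k1*(ub + lb) - O
        k1 = rng.random()
        candidate = k1 * (ub + lb) - best
    candidate = np.clip(candidate, lb, ub)
    return candidate if fitness(candidate) < fitness(best) else best
```

The greedy comparison guarantees the returned individual is never worse than the input, which is what keeps the mutation "steady".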
Step 5: optimize the GCN-Reformer model with the improved osprey optimization algorithm to obtain the optimal hyperparameters, which include the learning rate, the number of multi-head multi-round locality-sensitive hashing heads, and the hidden-layer size. The hyperparameters correspond to fitness values in the algorithm; after the algorithm iterates to the optimal fitness value, the optimal hyperparameters are output. The method of optimizing the model hyperparameters by computing fitness values comprises the following steps:
step 5.1: first initialize the corresponding parameters of the algorithm, as shown in the formula:
o_ij = lb_j + r · (ub_j − lb_j)  (11)
where o_ij is the j-th dimension of the i-th individual, lb_j is the lower bound of the search space, ub_j is the upper bound, and r is a random number in [0, 1].
Step 5.2: in the exploration phase, for each osprey, the positions of the other ospreys are regarded as fish under the water; the fish set of an osprey is given by:
FP_i = { O_k | k ∈ {1, 2, …, N}, F_k < F_i } ∪ { O_best }  (12)
where O_k is the position of the k-th osprey, FP_i is the fish set of the i-th osprey, F_k and F_i are fitness values, and O_best is the optimal osprey position.
The osprey randomly detects the position of one of these fish and attacks it. Based on the simulated movement of the osprey toward the fish, the new position of the osprey is calculated by (13-1) and (13-2); if the new position is better, it replaces the previous position of the osprey according to (14):
O_ij^{P1} = O_ij + r · (SF_j − I · O_ij)  (13-1)
O_ij^{P1} = lb_j if O_ij^{P1} < lb_j;  O_ij^{P1} = ub_j if O_ij^{P1} > ub_j  (13-2)
O_i = O_i^{P1} if F_i^{P1} < F_i; otherwise O_i unchanged  (14)
where O_i^{P1} is the first position update of the i-th osprey, SF is the fish selected by the osprey, r is a random number in [0, 1], I is one of {1, 2}, F_i^{P1} is the fitness of the updated position, and O_i is the current position of the osprey.
Step 5.3: in the exploitation phase, after catching a fish, the osprey carries it to a suitable position to eat. For each member of the population, a new random position suitable for eating is calculated with formula (15); if the objective value improves at the new position, it replaces the previous position according to formula (16). The final optimal position, i.e. the optimal fitness value of the OOA algorithm, corresponds to the model hyperparameters:
O_ij^{P2} = O_ij + (lb_j + r · (ub_j − lb_j)) / t  (15)
O_i = O_i^{P2} if F_i^{P2} < F_i; otherwise O_i unchanged  (16)
where O_i^{P2} is the second position update of the osprey, lb_j and ub_j are the lower and upper bounds, t is the iteration number, F_i^{P2} is the fitness of the updated position, and O_i is the current position of the osprey.
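The two OOA phases of steps 5.1-5.3 can be sketched as a minimization loop. This follows the published OOA update rules as a sketch; in the patent's setting the fitness function would be the GCN-Reformer validation error, whereas here a plain objective stands in for it.

```python
import numpy as np

def ooa_minimize(fitness, dim, lb, ub, pop_size=20, iters=50, seed=0):
    """Osprey Optimization Algorithm sketch: phase 1 (exploration) moves
    each osprey toward a randomly detected 'fish' (a better individual or
    the global best); phase 2 (exploitation) moves toward a random nearby
    feeding position whose radius shrinks as (ub - lb) / t.
    Greedy replacement keeps only improvements."""
    rng = np.random.default_rng(seed)
    pop = lb + rng.random((pop_size, dim)) * (ub - lb)   # formula (11)
    fit = np.array([fitness(p) for p in pop])
    for t in range(1, iters + 1):
        best = pop[fit.argmin()].copy()
        for i in range(pop_size):
            # phase 1: fish set = strictly better individuals plus the best
            better = [pop[k] for k in range(pop_size) if fit[k] < fit[i]]
            fish = better[rng.integers(len(better))] if better else best
            I = rng.integers(1, 3)                        # I in {1, 2}
            cand = np.clip(pop[i] + rng.random(dim) * (fish - I * pop[i]), lb, ub)
            if fitness(cand) < fit[i]:
                pop[i], fit[i] = cand, fitness(cand)
            # phase 2: carry the fish to a suitable eating position
            cand = np.clip(pop[i] + (lb + rng.random(dim) * (ub - lb)) / t, lb, ub)
            if fitness(cand) < fit[i]:
                pop[i], fit[i] = cand, fitness(cand)
    return pop[fit.argmin()], float(fit.min())
```

In hyperparameter tuning, each dimension of an individual would encode one hyperparameter (learning rate, number of LSH heads, hidden size), clipped or rounded to valid values before evaluation.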
Step 5.4: optimize the GCN-Reformer model with the improved OOA algorithm, then predict the decomposed IMF components to obtain each IMF prediction result. A low-precision component that needs to be decomposed again is one whose prediction RMSE is greater than or equal to the average RMSE over all decomposed components.
Step 6: classify the components of the primary decomposition using permutation entropy (PE), comprising the following steps:
step 6.1: to classify the low-precision components with PE, perform a phase-space reconstruction of the load sequence {x_t, t = 1, 2, …, N} to obtain the matrix X:
X = [ x_1, x_{1+τ}, …, x_{1+(d−1)τ}; x_2, x_{2+τ}, …, x_{2+(d−1)τ}; …; x_k, x_{k+τ}, …, x_{k+(d−1)τ} ]  (17)
where τ is the time lag, d is the embedding dimension, and k = N − (d−1)τ is the number of reconstructed subsequences;
step 6.2: let X_j be the j-th row vector of the matrix X, sorted in descending order; for embedding dimension d there are d! possible orderings π_j, and the probability P of each ordering is estimated from the relative frequency
P(π_j) = f(π_j) / k  (18)
where π_j is the ordinal pattern of X_j and f(π_j) is the number of occurrences of π_j in the time series; the information entropy of PE is H_p = −Σ_j P(π_j) ln P(π_j);
step 6.3: PE is normalized as:
H_p = H_p / ln(d!)  (19)
step 6.4: calculate the information entropy H_p of PE from the above expressions; if the probabilities of all ordinal patterns are equal, H_p takes the value 1. If the PE of a component is greater than a given threshold θ, the component exhibits high complexity; otherwise it exhibits low complexity. The components are thereby divided into three classes: high precision, high-complexity low precision, and low-complexity low precision.
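Steps 6.1-6.4 amount to the standard normalized permutation entropy computation, which might be implemented as:

```python
import numpy as np
from math import factorial, log

def permutation_entropy(series, d=3, tau=1):
    """Normalized permutation entropy: embed the series (equation (17)),
    count each window's ordinal pattern, estimate pattern probabilities
    (equation (18)), and normalize the Shannon entropy by ln(d!)
    (equation (19)) so the result lies in [0, 1]."""
    x = np.asarray(series, dtype=float)
    k = len(x) - (d - 1) * tau              # number of reconstructed windows
    patterns = {}
    for j in range(k):
        window = x[j : j + d * tau : tau]
        pattern = tuple(np.argsort(window))  # ordinal pattern pi_j
        patterns[pattern] = patterns.get(pattern, 0) + 1
    probs = np.array(list(patterns.values())) / k
    h_p = -np.sum(probs * np.log(probs))
    return h_p / log(factorial(d))           # normalization to [0, 1]
```

A monotone sequence has a single ordinal pattern and PE 0; a long i.i.d. random sequence uses all d! patterns almost equally and its PE approaches 1, which is what the threshold θ in step 6.4 discriminates.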
Step 7: perform a secondary decomposition with the WPD (wavelet packet decomposition) method to generate new components, comprising the following steps:
step 7.1: apply WPD secondary decomposition to both the low-complexity and the high-complexity low-precision components.
step 7.2: on the basis of the wavelet transform, when decomposing the signal at each level, the WPD method further splits the high-frequency sub-band in addition to the low-frequency sub-band; an optimal signal decomposition path is then found by minimizing a cost function, and the original signal is decomposed along that path. As in the dyadic wavelet packet transform, the scale function and wavelet function of adjacent levels satisfy the recurrence relations:
μ_{2n}(t) = √2 Σ_k h_k μ_n(2t − k)  (20)
μ_{2n+1}(t) = √2 Σ_k g_k μ_n(2t − k)  (21)
where μ is the wavelet packet function, h_k is the low-pass filter, and g_k is the high-pass filter.
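A Haar-based sketch of step 7.2's key property, splitting both the low-frequency and the high-frequency sub-band at every level (the Haar filters are an assumption; the patent does not fix the wavelet basis in this excerpt, and the cost-function path selection is omitted):

```python
import numpy as np

def wpd(signal, level):
    """Haar wavelet packet decomposition: unlike the plain wavelet
    transform, every node -- low-frequency AND high-frequency -- is
    split again at each level, giving 2**level sub-band signals."""
    nodes = [np.asarray(signal, dtype=float)]
    for _ in range(level):
        next_nodes = []
        for s in nodes:
            low = (s[0::2] + s[1::2]) / np.sqrt(2)    # h_k: low-pass branch
            high = (s[0::2] - s[1::2]) / np.sqrt(2)   # g_k: high-pass branch
            next_nodes += [low, high]
        nodes = next_nodes
    return nodes
```

Because the Haar filter bank is orthonormal, the total energy of the sub-bands equals the energy of the original signal, a useful sanity check when re-decomposing components.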
Step 8: after the WPD decomposition, all decomposed components are fed to the GCN-Reformer model trained and optimized with the improved OOA algorithm, and every component is predicted by the fusion model. Whether a component needs to be decomposed again is judged from the NRMSE of its prediction result: if NRMSE > 10%, WPD decomposition continues; if NRMSE ≤ 10%, the predictions of all components are accumulated to obtain the final short-term load prediction. NRMSE is the root-mean-square error between the predicted and actual values divided by the range of the actual values; it evaluates the accuracy of the prediction model, and the smaller its value, the higher the accuracy.
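The stopping criterion of step 8 needs only an NRMSE helper, using the range-normalized RMSE definition given above:

```python
import numpy as np

def nrmse(actual, predicted):
    """NRMSE: RMSE between prediction and truth, divided by the
    range (max - min) of the actual values."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))
    return rmse / (actual.max() - actual.min())

def needs_redecomposition(actual, predicted, threshold=0.10):
    """Step 8 decision rule: re-run WPD while NRMSE exceeds 10%."""
    return nrmse(actual, predicted) > threshold
```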
The foregoing embodiments are merely illustrative of the technical concept and features of the present invention, and are intended to enable those skilled in the art to understand the present invention and to implement the same, not to limit the scope of the present invention. All equivalent changes or modifications made according to the spirit of the present invention should be included in the scope of the present invention.

Claims (8)

1. A power load prediction method based on dynamic decomposition-reconstruction integrated processing, characterized by comprising the following steps:
step 1: acquiring original data of an electric power system, preprocessing the acquired original data, and dividing the original data into a training set, a verification set and a test set;
step 2: performing a primary decomposition of the data by MLPTdenoise decomposition, decomposing the data into a plurality of IMF components;
step 3: establishing, for each IMF component, a power load prediction model GCN-Reformer based on a graph convolutional network and the Reformer model, and performing optimization training on the GCN-Reformer model;
step 4: improving the osprey optimization algorithm by adopting a hybrid mutation strategy of Tent chaotic mapping and Cauchy reverse learning to improve the global search capability of the algorithm, obtaining an improved osprey optimization algorithm;
step 5: optimizing the GCN-Reformer model with the improved osprey optimization algorithm to obtain the optimal hyperparameters; then predicting each component with the optimized model to obtain the prediction accuracy of the corresponding component, and selecting the low-precision components requiring secondary decomposition according to the performance of the decomposed components on the validation set, wherein the RMSE of a low-precision component that needs to be decomposed again is greater than or equal to the average RMSE over all decomposed components;
step 6: aggregating the low-precision components that need to be decomposed again into high-complexity and low-complexity classes using permutation entropy (PE);
step 7: after classifying and aggregating the decomposed components with permutation entropy, performing a secondary decomposition with the WPD method to generate new components;
step 8: inputting the secondary components obtained in step 7 into the trained and optimized GCN-Reformer model for prediction, and judging from the NRMSE of the WPD component prediction results whether re-decomposition is needed: if NRMSE > 10%, WPD decomposition is performed again; otherwise, the prediction results of all components are accumulated to obtain the final short-term load prediction.
2. The power load prediction method based on the dynamic decomposition-reconstruction integrated process according to claim 1, wherein the preprocessing of the obtained raw data in step 1 includes the steps of:
step 1.1: filling in the missing values of the raw data by Lagrange interpolation, and deleting abnormal data;
step 1.2: the processed raw data is divided into three parts, namely a 60% training set, a 20% verification set and a 20% test set, wherein the training set is used for model construction, the verification set is used for component selection for further decomposition and super-parameter selection, and the test set is used for model verification.
3. The power load prediction method based on dynamic decomposition-reconstruction integrated processing according to claim 1, wherein step 2 decomposes the data using MLPTdenoise decomposition, comprising the following steps:
step 2.1: performing a wavelet transform on the original signal to obtain wavelet coefficients at a plurality of scales;
step 2.2: performing dominant-trend decomposition on each wavelet coefficient to obtain the dominant trend and detail signals at that scale;
step 2.3: adding the dominant trends of each scale to obtain the dominant trend of the hierarchy;
step 2.4: taking the dominant trend of the hierarchy as part of the signal, treating the detail signal as noise, and filtering the noise out;
step 2.5: reconstructing the signal after noise removal;
step 2.6: repeating steps 2.2-2.5 until the signals of all layers have been decomposed, obtaining the MLPTdenoise decomposition of the original signal.
4. The power load prediction method based on dynamic decomposition-reconstruction integrated processing according to claim 1, wherein after the MLPTdenoise decomposition the data is decomposed into a plurality of IMF components, each IMF is used to build a power load prediction model based on a graph convolutional network and the Reformer model, and the GCN-Reformer model is optimized and trained, comprising the following steps:
step 3.1: constructing, for each IMF, a continuous feature graph from the load data with a time sliding window as input to form a power-network model, then embedding the power-network model into the GCN as a graph structure, wherein the spatial convolution block comprises two GCN layers, the first GCN layer analyzing the topological structure of each region and extracting spatial features;
step 3.2: the second GCN layer continuing to extract information on top of the first GCN layer, the information extracted at different time steps being arranged into a time sequence as input to the Reformer; a graph model is built for the real problem, and the graph convolutional neural network extracts hidden graph information from the structural information of the edge-vertex connections and the attribute information attached to the graph structure;
for a graph G = (V, E, L), the graph convolutional neural network processes the input signal X into the output signal Y as:
f(X, L) = Y  (1)
where V is the set of N nodes, E is the set of edges, and L is the adjacency matrix of the graph, L ∈ R^{N×N}; element L_ij of L represents the connection between nodes v_i and v_j in G; the forward-propagation rule of the graph convolution is:
H_{l+1} = α( D̃^{-1/2} (L + E_N) D̃^{-1/2} H_l W_l )  (2)
where E_N is the N-order identity matrix, Ã = L + E_N is the self-connection matrix, D̃ is the diagonal degree matrix with D̃_ii = Σ_j Ã_ij, H_l ∈ R^{N×D} is the output value of the l-th layer with H_0 = X, α is the ReLU activation function, and W_l is the parameter matrix of the l-th layer;
step 3.3: using the Reformer's locality-sensitive hashing (LSH) attention to divide the sequence output by the graph convolutional network into different hash buckets, sorting by bucket to obtain the attention result and aggregate the global attributes of the data;
step 3.4: training the fusion model based on the graph convolutional network and the Reformer with the training and validation sets divided in step 1, and using the decomposed IMF components to train and test the GCN-Reformer model.
5. The power load prediction method based on dynamic decomposition-reconstruction integrated processing according to claim 1, wherein step 4 improves the osprey optimization algorithm by adopting a hybrid mutation strategy of Tent chaotic mapping and Cauchy reverse learning to improve the global search capability and optimization performance of the algorithm, comprising the following steps:
step 4.1: introducing the Tent chaotic map to obtain a uniformly distributed initial population, with the expressions:
Y_{i+1} = 2Y_i, 0 ≤ Y_i < 0.5;  Y_{i+1} = 2(1 − Y_i), 0.5 ≤ Y_i ≤ 1  (4)
M_i = lb + Y_i (ub − lb)  (5)
where i = 1, 2, 3, …, N−1; the first individual of the population is generated randomly, and the remaining N−1 individuals are generated by formulas (4) and (5); M_i is the i-th individual of the initial population; Y_i is the i-th individual in the mapping space;
step 4.2: introducing a Cauchy mutation operator into the OOA algorithm, with the expression:
O'_best(t+1) = O_best(t) + O_best(t) · cauchy(0, 1)  (6)
where cauchy(0, 1) is a standard Cauchy random variable, O'_best(t+1) is the optimal solution at the (t+1)-th iteration, and O_best(t) is the optimal position of the osprey before mutation;
step 4.3: introducing a reverse learning strategy into the OOA algorithm: capturing the reverse solution of the current solution in the corresponding solution space, and guiding the individual search by comparing the two solutions and keeping the better one, with the expression:
O'_best(t) = k_1 (ub + lb) − O_best(t)  (7)
where O'_best(t) is the reverse solution of the optimal individual at the t-th iteration, i.e. of the optimal position of the osprey, and k_1, k_2 are random numbers in [0, 1];
step 4.4: in summary, the Cauchy reverse-learning hybrid mutation strategy is:
O'_best(t+1) = O_best(t) + O_best(t) · cauchy(0, 1), P > 0.5  (8)
O'_best(t+1) = k_1 (ub + lb) − O_best(t), P ≤ 0.5  (9)
where P is a random probability subject to a uniform distribution; when P > 0.5 the algorithm mutates the optimal solution with the Cauchy operator and quickly escapes from local optima to ensure steady optimization; when P ≤ 0.5 the algorithm perturbs the current optimal solution with the reverse learning strategy, the reverse solution enlarging the exploration range of the population and increasing the probability that an individual approaches the target position, while the dynamically changing random value k_1 improves the optimization speed to a certain extent;
step 4.5: after the hybrid mutation, a greedy comparison of fitness values keeps the dominant individual:
O_best(t+1) = O'_best(t+1), if f(O'_best(t+1)) < f(O_best(t)); otherwise O_best(t+1) = O_best(t)  (10)
6. The power load prediction method based on dynamic decomposition-reconstruction integrated processing according to claim 5, wherein step 5 optimizes the GCN-Reformer model with the improved osprey optimization algorithm to obtain the optimal hyperparameters, the hyperparameters comprising the learning rate, the number of multi-head multi-round locality-sensitive hashing heads, and the hidden-layer size; the hyperparameters correspond to fitness values in the algorithm, and after the algorithm iterates to the optimal fitness value the values of the optimal hyperparameters are output; the method of optimizing the model hyperparameters by computing fitness values comprises the following steps:
step 5.1: first initializing the corresponding parameters of the algorithm, as shown in the formula
o_ij = lb_j + r · (ub_j − lb_j)  (11)
where o_ij is the j-th dimension of the i-th individual, lb_j is the lower bound of the search space, ub_j is the upper bound, and r is a random number in [0, 1];
step 5.2: in the exploration phase, for each osprey, regarding the positions of the other ospreys as fish under the water, the fish set of an osprey being
FP_i = { O_k | k ∈ {1, 2, …, N}, F_k < F_i } ∪ { O_best }  (12)
where O_k is the position of the k-th osprey, FP_i is the fish set of the i-th osprey, F_k and F_i are fitness values, and O_best is the optimal osprey position;
the osprey randomly detects the position of one of these fish and attacks it; based on the simulated movement of the osprey toward the fish, the new position of the osprey is calculated by (13-1) and (13-2), and if the new position is better it replaces the previous position of the osprey according to (14):
O_ij^{P1} = O_ij + r · (SF_j − I · O_ij)  (13-1)
O_ij^{P1} = lb_j if O_ij^{P1} < lb_j;  O_ij^{P1} = ub_j if O_ij^{P1} > ub_j  (13-2)
O_i = O_i^{P1} if F_i^{P1} < F_i; otherwise O_i unchanged  (14)
where O_i^{P1} is the first position update of the i-th osprey, SF is the fish selected by the osprey, r is a random number in [0, 1], I is one of {1, 2}, F_i^{P1} is the fitness of the updated position, and O_i is the current position of the osprey;
step 5.3: in the exploitation phase, after catching a fish the osprey carries it to a suitable position to eat; for each member of the population a new random position suitable for eating is calculated with formula (15), and if the objective value improves at the new position it replaces the previous position according to formula (16); the final optimal position, i.e. the optimal fitness value of the OOA algorithm, corresponds to the model hyperparameters:
O_ij^{P2} = O_ij + (lb_j + r · (ub_j − lb_j)) / t  (15)
O_i = O_i^{P2} if F_i^{P2} < F_i; otherwise O_i unchanged  (16)
where O_i^{P2} is the second position update of the osprey, lb_j and ub_j are the lower and upper bounds, t is the iteration number, F_i^{P2} is the fitness of the updated position, and O_i is the current position of the osprey;
step 5.4: optimizing the GCN-Reformer model with the improved OOA algorithm, and then predicting the decomposed IMF components to obtain each IMF prediction result.
7. The power load prediction method based on dynamic decomposition-reconstruction integrated processing according to claim 1, wherein step 6 classifies the components of the primary decomposition using permutation entropy (PE), comprising the following steps:
step 6.1: to classify the low-precision components with PE, performing a phase-space reconstruction of the load sequence {x_t, t = 1, 2, …, N} to obtain the matrix X:
X = [ x_1, x_{1+τ}, …, x_{1+(d−1)τ}; x_2, x_{2+τ}, …, x_{2+(d−1)τ}; …; x_k, x_{k+τ}, …, x_{k+(d−1)τ} ]  (17)
where τ is the time lag, d is the embedding dimension, and k = N − (d−1)τ is the number of reconstructed subsequences;
step 6.2: letting X_j be the j-th row vector of the matrix X, sorted in descending order; for embedding dimension d there are d! possible orderings π_j, and the probability P of each ordering is estimated from the relative frequency
P(π_j) = f(π_j) / k  (18)
where π_j is the ordinal pattern of X_j and f(π_j) is the number of occurrences of π_j in the time series; the information entropy of PE is H_p = −Σ_j P(π_j) ln P(π_j);
step 6.3: PE is normalized as H_p = H_p / ln(d!)  (19);
step 6.4: calculating the information entropy H_p of PE from the above expressions, H_p taking the value 1 when the probabilities of all ordinal patterns are equal; if the PE of a component is greater than a given threshold θ, the component exhibits high complexity, otherwise low complexity; the components are divided into three classes: high precision, high-complexity low precision, and low-complexity low precision.
8. The power load prediction method based on dynamic decomposition-reconstruction integrated processing according to claim 1, wherein step 7 performs the secondary decomposition with the WPD method to generate new components, comprising the following steps:
step 7.1: performing WPD secondary decomposition on both the low-complexity and the high-complexity low-precision components;
step 7.2: on the basis of the wavelet transform, when decomposing the signal at each level, the WPD method further splitting the high-frequency sub-band in addition to the low-frequency sub-band, then finding an optimal signal decomposition path by minimizing a cost function and decomposing the original signal along that path; as in the dyadic wavelet packet transform, the scale function and wavelet function of adjacent levels satisfy the recurrence relations
μ_{2n}(t) = √2 Σ_k h_k μ_n(2t − k)  (20)
μ_{2n+1}(t) = √2 Σ_k g_k μ_n(2t − k)  (21)
where μ is the wavelet packet function, h_k is the low-pass filter, and g_k is the high-pass filter.
CN202310819672.4A 2023-07-05 2023-07-05 Power load prediction method based on dynamic decomposition-reconstruction integrated processing Pending CN116822742A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310819672.4A CN116822742A (en) 2023-07-05 2023-07-05 Power load prediction method based on dynamic decomposition-reconstruction integrated processing


Publications (1)

Publication Number Publication Date
CN116822742A true CN116822742A (en) 2023-09-29

Family

ID=88127349

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310819672.4A Pending CN116822742A (en) 2023-07-05 2023-07-05 Power load prediction method based on dynamic decomposition-reconstruction integrated processing

Country Status (1)

Country Link
CN (1) CN116822742A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117973647A (en) * 2024-04-02 2024-05-03 南方电网数字电网研究院股份有限公司 Comprehensive energy system load prediction method and device and computer equipment


Similar Documents

Publication Publication Date Title
Wang et al. The study and application of a novel hybrid forecasting model–A case study of wind speed forecasting in China
CN111860982A (en) Wind power plant short-term wind power prediction method based on VMD-FCM-GRU
CN112988723A (en) Traffic data restoration method based on space self-attention-diagram convolution cyclic neural network
CN112884236B (en) Short-term load prediction method and system based on VDM decomposition and LSTM improvement
CN111967183A (en) Method and system for calculating line loss of distribution network area
CN116562908A (en) Electric price prediction method based on double-layer VMD decomposition and SSA-LSTM
CN112434891A (en) Method for predicting solar irradiance time sequence based on WCNN-ALSTM
CN116822742A (en) Power load prediction method based on dynamic decomposition-reconstruction integrated processing
CN113420868A (en) Traveling salesman problem solving method and system based on deep reinforcement learning
CN116187835A (en) Data-driven-based method and system for estimating theoretical line loss interval of transformer area
CN114169251A (en) Ultra-short-term wind power prediction method
CN111723516A (en) Multi-target seawater intrusion management model based on adaptive DNN (deep dynamic network) substitution model
CN115766125A (en) Network flow prediction method based on LSTM and generation countermeasure network
CN116169670A (en) Short-term non-resident load prediction method and system based on improved neural network
CN117114184A (en) Urban carbon emission influence factor feature extraction and medium-long-term prediction method and device
CN116629431A (en) Photovoltaic power generation amount prediction method and device based on variation modal decomposition and ensemble learning
CN113762591B (en) Short-term electric quantity prediction method and system based on GRU and multi-core SVM countermeasure learning
CN113128666A (en) Mo-S-LSTMs model-based time series multi-step prediction method
CN117455070A (en) Traditional Chinese medicine production data management system based on big data
CN113033898A (en) Electrical load prediction method and system based on K-means clustering and BI-LSTM neural network
CN112418504A (en) Wind speed prediction method based on mixed variable selection optimization deep belief network
Yang et al. Host load prediction based on PSR and EA-GMDH for cloud computing system
CN116632834A (en) Short-term power load prediction method based on SSA-BiGRU-Attention
CN117093885A (en) Federal learning multi-objective optimization method integrating hierarchical clustering and particle swarm
CN112465253B (en) Method and device for predicting links in urban road network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination