CN116596044A - Power generation load prediction model training method and device based on multi-source data - Google Patents
- Publication number
- CN116596044A (application number CN202310878115.XA)
- Authority
- CN
- China
- Prior art keywords
- training
- sample
- power generation
- generation load
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0464—Convolutional networks [CNN, ConvNet]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0499—Feedforward networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/06—Energy or water supply
-
- H—ELECTRICITY
- H02—GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
- H02J—CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
- H02J3/00—Circuit arrangements for ac mains or ac distribution networks
- H02J3/003—Load forecast, e.g. methods or systems for forecasting future load demand
-
- H—ELECTRICITY
- H02—GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
- H02J—CIRCUIT ARRANGEMENTS OR SYSTEMS FOR SUPPLYING OR DISTRIBUTING ELECTRIC POWER; SYSTEMS FOR STORING ELECTRIC ENERGY
- H02J3/00—Circuit arrangements for ac mains or ac distribution networks
- H02J3/004—Generation forecast, e.g. methods or systems for forecasting future energy generation
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S10/00—Systems supporting electrical power generation, transmission or distribution
- Y04S10/50—Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Abstract
The application relates to the technical field of power generation load prediction, and provides a power generation load prediction model training method and device based on multi-source data. After training data sets corresponding to different power generation load intervals are obtained, a preset sample expansion algorithm performs interpolation between each training sample and its neighbor sample to produce a new training sample for each, thereby generating a new training data set. A convolutional neural network tuned by an evolutionary algorithm then extracts features from each training sample in the new training data set to obtain the current training set. Finally, the power generation load prediction model to be trained is trained on the sample features and corresponding sample labels in the current training set to obtain a trained power generation load prediction model. The method improves both the accuracy and the efficiency of power generation load prediction.
Description
Technical Field
The application relates to the technical field of power generation load prediction, in particular to a power generation load prediction model training method and device based on multi-source data.
Background
Power generation load prediction is a key link in a power system: it forecasts the grid's power demand over a future period from factors such as historical data and weather information. Load prediction plays an important role in power system planning, scheduling and market operations. Accurate power generation load prediction helps ensure stable operation of the power system, reduce operating costs and improve energy utilization efficiency.
With the continued development of the power market and the increasing complexity of power systems, accurately predicting the power generation load has become a critical issue in power system scheduling and operation. Traditional load prediction methods rely mainly on historical load data, but their accuracy is limited because the load is influenced by many factors such as climate, holidays and economic development. In addition, conventional prediction methods have the following disadvantages:
(1) Single data source: the traditional prediction method mainly relies on historical load data, and other data information related to the load, such as weather data, calendar information and the like, are not fully utilized, so that a prediction result is easily affected by abnormal fluctuation and atypical events.
(2) Model limitations: many conventional prediction methods employ linear models (e.g., linear regression, autoregressive, etc.) that perform poorly when dealing with nonlinear data.
In order to overcome the above problems, researchers have attempted to employ new methods for power generation load prediction, such as the introduction of machine learning and deep learning techniques. Although these methods improve the prediction accuracy to a certain extent, certain disadvantages exist, such as higher requirements on data quality and longer model training time.
Disclosure of Invention
The embodiment of the application aims to provide a power generation load prediction model training method and device based on multi-source data, which are used for solving the problems existing in the prior art and improving the accuracy and the prediction efficiency of power generation load prediction.
In a first aspect, a method for training a power generation load prediction model based on multi-source data is provided, and the method may include:
acquiring training data sets corresponding to different power generation load intervals, wherein training samples corresponding to each power generation load interval in the training data sets are multi-source power generation load data, and the multi-source power generation load data comprise data vectors with different data characteristics;
performing interpolation calculation on each training sample and a neighboring sample of the corresponding training sample by adopting a preset sample expansion algorithm to obtain a new training sample corresponding to the corresponding training sample so as to generate a new training data set, wherein the training sample and the corresponding new training sample correspond to the same power generation load interval, and the Euclidean distance between the neighboring sample of the training sample and the corresponding training sample is shortest;
Performing feature extraction on each training sample in the new training data set by adopting a convolutional neural network of an evolution algorithm to obtain a current training set, wherein the current training set comprises sample features corresponding to each training sample in the new training data set;
based on the sample features and the corresponding sample labels in the current training set, training a power generation load prediction model to be trained to obtain a trained power generation load prediction model; the power generation load prediction model to be trained processes the sample features and the corresponding sample labels using either a prediction classification algorithm based on a variational autoencoder, or an extreme learning machine algorithm optimized by an adaptive-weight particle swarm optimization algorithm, to predict the corresponding power generation load interval; and the sample label is the power generation load interval corresponding to the corresponding sample feature.
In a second aspect, a power generation load prediction model training apparatus based on multi-source data is provided, the apparatus may include:
the system comprises an acquisition unit, a data processing unit and a data processing unit, wherein the acquisition unit is used for acquiring training data sets corresponding to different power generation load intervals, training samples corresponding to each power generation load interval in the training data sets comprise multi-source power generation load data, and the multi-source power generation load data comprise data vectors with different data characteristics;
The sample expansion unit is used for carrying out interpolation calculation on each training sample and the adjacent samples of the corresponding training sample by adopting a preset sample expansion algorithm to obtain a new training sample corresponding to the corresponding training sample so as to generate a new training data set, wherein the training sample and the corresponding new training sample correspond to the same power generation load interval, and the Euclidean distance between the adjacent samples of the training sample and the corresponding training sample is shortest;
the feature extraction unit is used for extracting features of each training sample in the new training data set by adopting a convolutional neural network of an evolution algorithm to obtain a current training set, wherein the current training set comprises sample features corresponding to each training sample in the new training data set;
the training unit is used for training the power generation load prediction model to be trained based on the features of each sample in the current training set and the corresponding sample labels, to obtain a trained power generation load prediction model; the power generation load prediction model to be trained processes the sample features and the corresponding sample labels using either a prediction classification algorithm based on a variational autoencoder, or an extreme learning machine algorithm optimized by an adaptive-weight particle swarm optimization algorithm, to predict the corresponding power generation load interval; and the sample label is the power generation load interval corresponding to the corresponding sample feature.
In a third aspect, an electronic device is provided, the electronic device comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory are in communication with each other via the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any one of the above first aspects when executing a program stored on a memory.
In a fourth aspect, a computer-readable storage medium is provided, in which a computer program is stored which, when being executed by a processor, carries out the method steps of any of the first aspects.
According to the method provided by the application, after training data sets corresponding to different power generation load intervals are obtained, the training samples corresponding to each power generation load interval are multi-source power generation load data comprising data vectors with different data features. A preset sample expansion algorithm performs interpolation between each training sample and its neighbor sample to obtain a new training sample, thereby generating a new training data set; the training sample and the corresponding new training sample correspond to the same power generation load interval, and the neighbor sample has the shortest Euclidean distance to the corresponding training sample. A convolutional neural network tuned by an evolutionary algorithm then performs feature extraction on each training sample in the new training data set to obtain the current training set, which comprises the sample features corresponding to each training sample. Based on these sample features and the corresponding sample labels, the power generation load prediction model to be trained is trained to obtain a trained model; the model processes the sample features and labels using either a prediction classification algorithm based on a variational autoencoder, or an extreme learning machine algorithm optimized by an adaptive-weight particle swarm optimization algorithm, to predict the corresponding power generation load interval; the sample label is the power generation load interval corresponding to the corresponding sample feature.
The method improves the accuracy and the prediction efficiency of the power generation load prediction.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and should not be considered as limiting the scope, and other related drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a power generation load prediction model training method based on multi-source data provided by an embodiment of the application;
fig. 2 is a schematic structural diagram of a power generation load prediction model training device based on multi-source data according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments, but not all embodiments of the present application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The application context of the power generation load prediction may include:
1. Power system planning: power system planning needs to allocate resources reasonably according to predicted load demand, including construction of power plants, capacity expansion of power grids, improvement of transmission and distribution facilities, and the like. Accurate power generation load prediction provides a reliable basis for power system planning so that future power requirements can be met.
2. Power scheduling: power dispatching needs to adjust the output of power plants in real time according to real-time load prediction information to ensure stable operation of the power system. Accurate power generation load prediction can reduce dispatching costs and improve the operating efficiency of the power system.
3. Power market operation: power generation load prediction is critical to the operation of the power market. Market participants develop investment, production and trading strategies based on the predicted load demand. Accurate prediction can reduce market risk and improve the efficiency of market operation.
4. Energy saving and emission reduction: accurate power generation load prediction helps the power system schedule resources reasonably, thereby reducing energy consumption and emissions and achieving energy-saving and emission-reduction targets.
5. Integration of distributed and renewable energy: with the rapid development of distributed and renewable energy, power generation load prediction is becoming increasingly important for their scheduling and optimization. Accurate prediction can improve the utilization efficiency of distributed and renewable energy and reduce the impact on the environment.
In summary, the power generation load prediction is of great importance in various aspects of the power system. By improving the accuracy of power generation load prediction, a better decision basis can be provided for planning, scheduling and market operation of the power system, so that the stable operation of the power system is ensured, and the energy utilization efficiency is improved.
According to the power generation load prediction model training method based on multi-source data provided by the application, after the obtained training data sets corresponding to different power generation load intervals are preprocessed, a preset sample expansion algorithm is used to expand some or all training samples in the training data sets to obtain a new training data set. Specifically, in the power generation load prediction task, training samples are expanded by a dynamic-weight SMOTE algorithm based on a decision tree and local aggregation degree (L-SMOTE), yielding richer training samples and improving the accuracy of the model.
Then, a convolutional neural network tuned by an evolutionary algorithm extracts features from the training samples in the new training data set, and a power generation load prediction model is constructed based on the extracted features and corresponding labels, so that the power generation load prediction result for the current test data set can be obtained quickly, improving both the accuracy and efficiency of prediction. The data preprocessing includes data cleaning, outlier processing, missing-value filling and the like, to ensure data quality.
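As a rough illustration of the evolutionary component, the sketch below runs a minimal elitist mutate-and-select loop over a real-valued genome. In the application the genome would encode CNN structure or hyperparameters and the fitness would measure feature-extraction quality; both are stubbed here with a toy objective, so this is an assumption-laden sketch rather than the application's actual procedure.

```python
import random

def evolve(fitness, dim=3, pop_size=8, generations=30, seed=0):
    """Minimal elitist evolutionary loop: mutate every individual with
    Gaussian noise, then keep the best pop_size of parents + children.
    Lower fitness is better."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        children = [[g + rng.gauss(0.0, 0.1) for g in p] for p in pop]
        pop = sorted(pop + children, key=fitness)[:pop_size]  # elitist selection
    return pop[0]

# Toy stand-in objective: minimize the squared norm (optimum is the origin).
best = evolve(lambda v: sum(g * g for g in v))
print(sum(g * g for g in best))
```

Because selection is elitist, the best fitness in the population never worsens across generations, which is why even this crude loop steadily approaches the optimum on a smooth objective.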
Therefore, the power generation load prediction model training method based on multi-source data overcomes the following shortcomings of the prior art:
1. single data source: the prior art may rely solely on historical load data, ignoring meteorological data, calendar information, and other factor data that may affect the power generation load, which may result in models that fail to fully utilize the information of the various data sources when predicting.
2. Data imbalance problem: the prior art may employ a conventional SMOTE algorithm or other simple data expansion methods when dealing with sample imbalance problems, which may not solve the data imbalance problem well, resulting in the prediction performance being affected.
3. The feature extraction effect is limited: the prior art may employ conventional manual feature extraction methods or neural networks based on fixed structures in terms of feature extraction, which may have limited effectiveness in terms of feature extraction, resulting in poor predictive performance.
4. The prediction model is constructed singly: the prior art may provide only one implementation in terms of predictive model construction, which may not adapt well to the dataset in some cases, resulting in poor predictive performance.
5. Insufficient parameter optimization: the prior art may adopt traditional optimization algorithms for parameter tuning. Such algorithms may search for optimal parameters inefficiently, leading to long model training times, and may also become trapped in local optima, preventing the best prediction performance from being reached.
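Point 5 is what the application's adaptive-weight particle swarm optimization addresses. The sketch below is a generic textbook PSO with a linearly decreasing inertia weight (one common "adaptive weight" scheme); the application's exact variant is not detailed in this excerpt, so treat all parameter values here as illustrative assumptions.

```python
import random

def pso(objective, dim=2, swarm_size=10, iters=40, seed=1):
    """Particle swarm optimization minimizing `objective`. Each particle is
    pulled toward its personal best (pbest) and the global best (gbest);
    the inertia weight w decays linearly from 0.9 toward 0.4."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(swarm_size)]
    vel = [[0.0] * dim for _ in range(swarm_size)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=objective)[:]
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters  # decreasing inertia weight
        for i in range(swarm_size):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * r1 * (pbest[i][d] - pos[i][d])
                             + 2.0 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=objective)[:]
    return gbest

# Toy objective: sphere function; the swarm should settle near the origin.
best = pso(lambda v: sum(x * x for x in v))
print(sum(x * x for x in best))
```

A high inertia weight early on favors global exploration, while the decayed weight later favors local refinement, which is the usual rationale for making the weight adaptive.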
The preferred embodiments of the present application will be described below with reference to the accompanying drawings of the specification, it being understood that the preferred embodiments described herein are for illustration and explanation only, and not for limitation of the present application, and embodiments of the present application and features of the embodiments may be combined with each other without conflict.
Fig. 1 is a flow chart of a power generation load prediction model training method based on multi-source data according to an embodiment of the present application. As shown in fig. 1, the method may include:
step S110, training data sets corresponding to different power generation load intervals are obtained.
The training samples corresponding to each power generation load interval in the training data set are multi-source power generation load data. The multi-source power generation load data may include data vectors of different data characteristics.
The multi-source power generation load data may include historical load data, meteorological data (such as temperature, humidity, wind speed and weather type), calendar information (such as day of week, holiday and working day) and other factor data that may affect the power generation load. The different data features are the data attributes in the multi-source power generation load data, and a data vector is a vector composed of the feature values of those attributes. For example, if the different data features include temperature, wind speed, day of week and weather type, the n data vectors (i.e., n training samples) corresponding to the multi-source power generation load data may be represented as: x1 = [10 °C, 300 m/s, Wednesday, cloudy]; x2 = [15 °C, 200 m/s, Tuesday, sunny]; …; xn = [20 °C, 150 m/s, Saturday, cloudy]. Each training sample belongs to one sample class, i.e., one power generation load interval.
Furthermore, the data preprocessing such as data cleaning, abnormal value processing, missing value filling and the like can be performed on the multi-source power generation load data corresponding to different power generation load sections, so that the data quality is ensured.
Therefore, by integrating historical load data, meteorological data, calendar information and other factor data which can influence the power generation load, the data base of the prediction model is improved, and the model can fully utilize the information of various data sources.
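To make the data-vector idea concrete, the sketch below encodes one such multi-source sample as a fixed-length numeric vector. The feature names, category lists and one-hot scheme are illustrative assumptions for the example, not prescribed by the application.

```python
# Hypothetical feature vocabularies (assumptions, not from the application).
WEEKDAYS = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
WEATHER = ["sunny", "cloudy", "rainy"]

def encode_sample(temp_c, wind_speed, weekday, weather):
    """Numeric features pass through; categorical features are one-hot
    encoded so the sample becomes a fixed-length numeric data vector."""
    vec = [float(temp_c), float(wind_speed)]
    vec += [1.0 if weekday == d else 0.0 for d in WEEKDAYS]
    vec += [1.0 if weather == w else 0.0 for w in WEATHER]
    return vec

x1 = encode_sample(10, 300, "Wed", "cloudy")
print(len(x1))  # 2 numeric + 7 weekday + 3 weather = 12
```

Encoding every sample this way gives all of them the same dimensionality, which the later Euclidean-distance and interpolation steps require.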
Step S120, interpolation calculation is performed on each training sample and its neighbor sample by adopting a preset sample expansion algorithm, to obtain a new training sample corresponding to each training sample and thereby generate a new training data set.
In a power generation load prediction task, data for some specific load levels or states may be relatively scarce, leading to class imbalance in the data set. Imbalanced data causes a machine learning model to focus excessively on the majority classes during training and thus perform poorly on the minority classes. Moreover, when the amount of training data is small, the model may overfit, i.e., perform well on the training set but poorly on the test set and in practical applications. Expansion of the training samples is therefore required.
In a specific implementation, a decision tree is adopted to divide the training samples according to the data features in the multi-source data, obtaining a plurality of regions, each of which contains at least one training sample;
for each region, the local aggregation degree of each sample in the region and the average local aggregation degree of the region are obtained;
the interpolation coefficient of each training sample is determined based on its local aggregation degree and the average local aggregation degree of its region;
for each training sample, a new training sample is generated by interpolating between the training sample and its neighbor sample according to the interpolation coefficient. The training sample and the corresponding new training sample correspond to the same power generation load interval, and the neighbor sample is the sample with the shortest Euclidean distance to the corresponding training sample.
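The interpolation step can be sketched in a few lines. The fixed coefficient below stands in for the dynamically computed one (in L-SMOTE it would be derived from the local aggregation degree), so this is an illustrative simplification.

```python
import math

def nearest_neighbor(x, candidates):
    """Neighbor with the shortest Euclidean distance to x, as the method requires."""
    return min(candidates, key=lambda c: math.dist(x, c))

def interpolate(x, neighbor, coeff):
    """New sample on the segment between x and its neighbor:
    x_new = x + coeff * (neighbor - x), with coeff in (0, 1)."""
    return [xi + coeff * (ni - xi) for xi, ni in zip(x, neighbor)]

samples = [[1.0, 2.0], [1.2, 2.1], [5.0, 8.0]]
x = samples[0]
nb = nearest_neighbor(x, samples[1:])
x_new = interpolate(x, nb, 0.5)
print(x_new)  # midpoint between x and its nearest neighbor
```

The new sample inherits the label (power generation load interval) of the original sample, since it lies between two points of the same class region.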
In a specific embodiment, the preset sample expansion algorithm of the present application can be a dynamic-weight SMOTE algorithm based on decision tree and local aggregation (abbreviated as L-SMOTE). The main innovation of L-SMOTE is that sample expansion is realized more adaptively and efficiently by combining a decision tree, local aggregation and dynamic weights:
(1) The decision tree is used to divide the training samples x1-xn in the training data set into a plurality of sub-regions; here the CART decision tree algorithm is used. Taking the CART decision tree as an example, the dividing process is described in detail:
The CART decision tree builds a tree structure by recursively bipartitioning the data set. In the present application, CART uses the Gini index as the dividing basis. For classification problems, the CART decision tree uses the Gini index to select the best classification feature (i.e., a data feature, such as weather type) and classification point (i.e., a feature value, such as cloudy or sunny). The Gini index reflects the purity of the sample classes within a sub-region.
Given a feature $a$ and a division point $v$, the Gini index under this division condition can be calculated as:

$$\mathrm{Gini}(D, a, v) = \sum_{n} \frac{|D_n|}{|D|} \left( 1 - \sum_{k=1}^{K} p_{nk}^2 \right)$$

where $|D|$ represents the total number of training samples in the training data set, $|D_n|$ represents the number of samples in the $n$-th sub-region, $K$ represents the number of sample categories, and $p_{nk}$ represents the proportion of samples of the $k$-th category in the $n$-th sub-region.
For each feature, all possible division points are traversed and the Gini index is calculated; the feature with the minimum Gini index and its corresponding division point are selected to divide the training data set:

$$(a^*, v^*) = \underset{a,\, v}{\arg\min}\; \mathrm{Gini}(D, a, v)$$

where $a^*$ represents the feature corresponding to the division point $v^*$, and $\mathrm{Gini}(D, a^*, v^*)$ represents the Gini index corresponding to the division point $v^*$ and its feature $a^*$.
Then, the training data set is recursively divided by the method, and a CART decision tree is constructed.
In each division step, the Gini index (or, for regression, the squared error) of all features and their possible division points is first calculated, and then the feature and division point that minimize it are selected to divide the data set. This process is repeated until a stopping condition is satisfied.
Common stop conditions include: 1) The number of samples within the node is less than a predefined threshold; 2) Sample class purity within a node reaches a predefined threshold; 3) The depth of the tree reaches a predefined threshold.
Further, to prevent the decision tree from overfitting, the CART decision tree is pruned at the same time. The purpose of pruning is to remove part of the subtrees in the tree to reduce model complexity. Common pruning methods include pre-pruning and post-pruning.
Pre-pruning: in the process of constructing the decision tree, when the stopping condition is satisfied, the partitioning of the current node is immediately stopped. Pre-pruning can effectively reduce model complexity but may result in under-fitting.
Post-pruning: the complete decision tree is built first; a validation set then evaluates the contribution of each subtree to prediction performance, and subtrees with small contributions are removed. Post-pruning may achieve better performance than pre-pruning, but at greater computational overhead. The validation set is a proportion of training samples selected from the training data set, and has labels and sample categories.
Through construction and pruning of the CART decision tree, the training data set is divided into a plurality of sub-regions (leaf nodes). These sub-regions serve as the basis for the subsequent dynamic-weight SMOTE algorithm based on decision tree and local aggregation (L-SMOTE).
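The Gini-based split selection described above can be sketched as follows. This is a minimal illustration of CART's splitting criterion, not the patent's implementation; the exhaustive search over unique feature values is an assumption.

```python
import numpy as np

def gini(labels):
    """Gini impurity of a label array: 1 - sum_k p_k^2."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def gini_split(X, y, feature, threshold):
    """Weighted Gini index of splitting on X[:, feature] <= threshold."""
    left = X[:, feature] <= threshold
    n = len(y)
    g = 0.0
    for mask in (left, ~left):
        if mask.sum() > 0:
            g += mask.sum() / n * gini(y[mask])
    return g

def best_split(X, y):
    """Exhaustively search the (feature, threshold) pair minimizing the Gini index."""
    best = (None, None, np.inf)
    for f in range(X.shape[1]):
        for v in np.unique(X[:, f]):
            g = gini_split(X, y, f, v)
            if g < best[2]:
                best = (f, v, g)
    return best
```

CART would apply `best_split` recursively on each resulting partition until a stopping condition (node size, purity, or depth) is met.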
(2) The local concentration of each training sample is calculated.
For each training sample $x_i$, its $k$-nearest-neighbor set $N_k(x_i)$ is first computed, and the local concentration $C(x_i)$ is then calculated from the Euclidean distances $d(x_i, x_j)$ between $x_i$ and the samples $x_j \in N_k(x_i)$, where the number of neighbor samples $k$ is set manually.
(3) Calculating dynamic weights: for each training sample $x_i$, the dynamic weight $w_i$ is calculated according to its local concentration $C(x_i)$ and the average local concentration $\bar{C}$, where the average local concentration $\bar{C}$ is the mean of the local concentrations of the samples in each sub-region.
Further, in each sub-region, the dynamic weight $w_i$ of a training sample $x_i$ is used as its interpolation coefficient. Specifically, when calculating the new sample $x_i^{new}$ for training sample $x_i$, the dynamic weight $w_i$ is used as the interpolation coefficient:

$$x_i^{new} = x_i + w_i \cdot (x_{nn} - x_i)$$

where $x_{nn}$ represents the neighbor of training sample $x_i$, i.e., the training sample with the shortest Euclidean distance to $x_i$.
The L-SMOTE algorithm of the application has the following innovation points:
1. dividing the data set by combining the decision tree: the data set is divided into a plurality of sub-regions by dividing the data set using a CART decision tree. In this way, customized sample expansion can be performed for each sub-region based on the characteristics of the sub-region, thereby achieving better performance over the entire data set.
2. Local aggregation considerations are introduced: in the SMOTE algorithm, sample expansion is performed in a neighbor-based manner, which easily results in over-expansion of the high-density region samples. The L-SMOTE algorithm adjusts the sample expansion of each sub-region according to the local aggregation degree, and avoids generating excessive new samples in a high-density region.
3. Using dynamic weight adjustment augmentation strategies: the L-SMOTE algorithm generates a new sample with better representativeness according to different characteristics of the subareas through dynamic weight adjustment expansion strategies. The dynamic weight can enable the algorithm to be more flexible, and the adaptability to different subareas is improved.
The L-SMOTE algorithm solves the following disadvantages of the traditional SMOTE algorithm:
1. solves the problem of excessive expansion: the conventional SMOTE algorithm may generate too many new samples in the high density region, resulting in an overfitting. By introducing local aggregation considerations and dynamic weight adjustments, the L-SMOTE algorithm can avoid over-expanding samples in high density regions.
2. The quality of sample expansion is improved: the new samples generated by the traditional SMOTE algorithm may not reflect the true distribution of the data set well. The L-SMOTE algorithm combines decision tree partitioning and local aggregation information, and can generate a new sample with better representativeness, thereby improving model performance.
3. Better adapt to different regional characteristics: the data set is divided through the CART decision tree, and the L-SMOTE algorithm can carry out customized sample expansion according to the characteristics of the subareas, so that better performance is obtained on the whole data set. The conventional SMOTE algorithm lacks such adaptivity.
Therefore, the sample expansion scheme of the application combines decision trees, local aggregation degree and dynamic weight to realize more self-adaptive and efficient sample expansion, and effectively solves the problem of unbalanced data.
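The per-sub-region expansion described above can be sketched as follows. Since the patent's exact local-concentration and dynamic-weight formulas are rendered as images, the reciprocal-of-mean-neighbor-distance density and the inverse-concentration weight used here are assumptions chosen to match the stated behavior (dense regions get smaller weights):

```python
import numpy as np

def local_concentration(X, i, k=3):
    """Local concentration of sample i: reciprocal of the mean Euclidean
    distance to its k nearest neighbours (an assumed density proxy)."""
    d = np.linalg.norm(X - X[i], axis=1)
    nn = np.argsort(d)[1:k + 1]          # skip the sample itself
    return 1.0 / (d[nn].mean() + 1e-12)

def l_smote_region(X, k=3):
    """Generate one synthetic sample per original sample in a sub-region.
    The dynamic weight shrinks for samples in dense areas (assumed rule),
    avoiding over-expansion of high-density regions."""
    C = np.array([local_concentration(X, i, k) for i in range(len(X))])
    C_avg = C.mean()
    new = []
    for i in range(len(X)):
        w = min(1.0, C_avg / C[i])       # dense sample -> smaller weight
        d = np.linalg.norm(X - X[i], axis=1)
        nn = np.argsort(d)[1]            # nearest neighbour
        new.append(X[i] + w * (X[nn] - X[i]))
    return np.array(new)
```

Because the weight is capped at 1, each synthetic sample lies on the segment between an original sample and its nearest neighbor, so new samples stay inside the sub-region's data envelope.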
And step 130, performing feature extraction on each training sample in the new training data set by adopting a convolutional neural network optimized by an evolution algorithm, to obtain a current training set.
The current training set may include sample features corresponding to each training sample in the new training data set.
This step introduces innovations such as multi-objective optimization and adaptive crossover and mutation probabilities to improve optimization performance. The multi-objective optimization functions and the adaptive crossover and mutation probability functions involved in this step are described below.
(1) Multi-objective optimization function
In order to more comprehensively consider the generalization performance, complexity, stability and training time of the model, the present application adopts a multi-objective optimization strategy. Specifically, the prediction error, model complexity, prediction stability and training time of the model are taken as four optimization targets:

Target 1: minimize the prediction error $E$;
Target 2: minimize the model complexity $C$;
Target 3: maximize the prediction stability $S$;
Target 4: minimize the training time $T$.
Through multi-objective optimization, a balance can be found among different objectives, so that a convolutional neural network structure with higher prediction accuracy, lower complexity, higher stability and shorter training time is obtained.
1. Prediction error ($E$): by minimizing the prediction error, the prediction accuracy of the model can be improved. The prediction error may be measured by a loss function such as the mean square error (MSE):

$$E = \frac{1}{N} \sum_{i=1}^{N} (y_i - \hat{y}_i)^2$$

where $N$ represents the number of samples, $y_i$ represents the actual value of the $i$-th sample, and $\hat{y}_i$ represents the predicted value of the $i$-th sample.
2. Model complexity ($C$): the complexity of the model reflects its number of parameters and network structure. By minimizing model complexity, the risk of overfitting can be reduced and generalization capability improved. The model complexity in the present application is calculated as:

$$C = \sum_{l=1}^{L} W_l \cdot H_l \cdot K_l \cdot Ch_l$$

where $L$ represents the number of convolution layers, and $W_l$, $H_l$, $K_l$ and $Ch_l$ respectively represent the width, height, number of convolution kernels and number of channels of the $l$-th convolution layer.
3. Prediction stability ($S$): the prediction stability represents the robustness of the model to small changes in the input data. By maximizing prediction stability, the model can be ensured to behave stably across different data sets. Prediction stability in the present application is measured by the variance of the prediction error over multiple sub-data sets (so that maximizing $S$ minimizes this variance):

$$S = -\frac{1}{M} \sum_{m=1}^{M} (E_m - \bar{E})^2, \qquad E_m = \frac{1}{N_m} \sum_{i=1}^{N_m} (y_{m,i} - \hat{y}_{m,i})^2$$

where $M$ represents the number of sub-data sets, $N_m$ represents the number of samples in the $m$-th sub-data set, $y_{m,i}$ represents the actual value of the $i$-th sample in the $m$-th sub-data set, $\hat{y}_{m,i}$ represents its predicted value, and $\bar{E}$ represents the average of the prediction errors over all sub-data sets.
4. Training time ($T$): the training time represents the time cost required for model training. By minimizing training time, model training efficiency can be improved and deployment in practical applications accelerated. Training time is measured by recording the training time of the model under fixed hardware conditions, typically in seconds (s): $T = t_{train}$, where $t_{train}$ represents the training time of the model.
The four objective functions jointly form an optimization target of the convolutional neural network feature extraction method based on the improved evolution algorithm, so that balance in the aspects of prediction accuracy, model complexity, prediction stability, training time and the like is realized.
The fitness value $F$ corresponding to the prediction accuracy, model complexity, prediction stability and training time can be expressed as the sum of four terms, one per objective: $F = F_E + F_C + F_S + F_T$.
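The aggregation of the four objectives into one fitness value can be sketched as below. The reciprocal normalization of the minimized terms and the equal weights are assumptions for illustration, since the patent does not specify the aggregation form:

```python
def fitness(E, C, S, T, w=(0.25, 0.25, 0.25, 0.25)):
    """Aggregate four objectives into a single fitness value F.
    E, C, T are to be minimized, S maximized, so the minimized terms
    enter via 1/(1+x), mapping each to (0, 1]."""
    f_e = 1.0 / (1.0 + E)   # prediction error term
    f_c = 1.0 / (1.0 + C)   # model complexity term
    f_s = S                  # stability already "larger is better"
    f_t = 1.0 / (1.0 + T)   # training time term
    return w[0] * f_e + w[1] * f_c + w[2] * f_s + w[3] * f_t
```

A perfect individual (zero error, zero complexity, unit stability, zero time) scores 1.0 under these assumptions; any degradation in a minimized objective lowers F monotonically.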
(2) Adaptive crossover and mutation probability function
In order to improve the search capability of the genetic algorithm, adaptive crossover and mutation probabilities are introduced. Specifically, the crossover probability $P_c$ and mutation probability $P_m$ are set as variable parameters that are automatically adjusted according to the fitness value $F$ of the model, bounded by $P_{c,min}$ and $P_{c,max}$ (the minimum and maximum crossover probability) and by $P_{m,min}$ and $P_{m,max}$ (the minimum and maximum mutation probability), where $F$ denotes the fitness value of the current individual and $F_{max}$ denotes the maximum fitness value among the individuals in the population.
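The exact adjustment formulas are rendered as images in the original; a common linear rule consistent with the stated variables (a sketch, not the patent's formula) is:

```python
def adaptive_probs(F, F_max, pc_min=0.6, pc_max=0.9, pm_min=0.01, pm_max=0.1):
    """Adaptive crossover/mutation probabilities (assumed linear rule):
    individuals near the population's best fitness F_max are perturbed less,
    preserving good solutions, while weaker individuals are explored more."""
    r = F / F_max if F_max > 0 else 1.0
    pc = pc_max - (pc_max - pc_min) * r
    pm = pm_max - (pm_max - pm_min) * r
    return pc, pm
```

Under this rule the fittest individual receives (`pc_min`, `pm_min`) and the weakest receives (`pc_max`, `pm_max`), realizing the exploration/exploitation trade-off described above.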
In the specific implementation, the convolutional neural network to be trained is trained according to each individual in the initialized population and the new training data set, obtaining the current convolutional neural networks corresponding to different individuals and their network evaluation indexes; each individual comprises a network structure and corresponding network parameters for the convolutional neural network to be trained; the training data in the new training data set comprise training samples and corresponding sample labels; the network evaluation indexes may include prediction error, model complexity, prediction stability, and training time;
further, based on a preset multi-objective optimization algorithm and a preset self-adaptive crossover and mutation probability function, different individuals and corresponding network evaluation indexes are processed to generate a new population.
The specific implementation steps of generating the new population are as follows:
optimizing the network evaluation index of the current convolutional neural network based on a preset multi-objective optimization algorithm to obtain a first generation population, wherein the first generation population comprises at least one candidate individual meeting a preset fitness value;
Based on a preset self-adaptive crossover and mutation probability function, crossover and mutation operation is carried out on each candidate individual in the first generation population, so as to obtain a second generation population;
the obtained second-generation population is determined as the new initialized population, and execution returns to the step of training the convolutional neural network to be trained according to each individual in the initialized population and the new training data set to obtain the current convolutional neural networks corresponding to different individuals and their network evaluation indexes; the step of optimizing the network evaluation indexes of the current convolutional neural network based on the preset multi-objective optimization algorithm is then executed, iterating until a preset iteration termination condition is met; when the termination condition is met, the second-generation population obtained by the last iteration is determined as the new population.
After the new population is obtained, the optimal individual and the trained convolutional neural network can be determined from the individual with the optimal fitness value in the new population. The optimal fitness value is the maximum sum of the network evaluation indexes among individuals satisfying the target optimization conditions; that is, every individual in the new population satisfies the target optimization condition corresponding to each network evaluation index, and the trained convolutional neural network is the convolutional neural network corresponding to the optimal individual.
In some embodiments, the specific process of obtaining the current training set may include:
(1) Initializing a population, wherein each individual in the population is a structure and corresponding parameters of the convolutional neural network.
(2) Based on the new training data set, training the convolutional neural network to be trained to obtain the prediction error, model complexity, prediction stability and training time of each individual in the population, and using the prediction error, model complexity, prediction stability and training time as evaluation indexes of multi-objective optimization.
(3) Individuals meeting preset optimization criteria are selected through a multi-objective optimization algorithm (an improved NSGA-II), forming a first-generation population composed of at least one candidate individual satisfying the preset fitness value.
(4) And performing crossover and mutation operations on each candidate individual in the first generation population according to the adaptive crossover and mutation probability function to generate a second generation population.
(5) Repeating (2) - (4) until the termination condition is met (e.g., the number of iterations reaches a preset value or the fitness value converges).
(6) And selecting an individual with the optimal fitness value from the final population obtained by the final iterative execution as the structure and the parameters of the optimal convolutional neural network.
Wherein, the multi-objective optimization algorithm adopted in the above (3) is an improved NSGA-II algorithm, and the detailed steps of the algorithm are as follows:
1. Initialization: generate an initial population $P_0$ containing $N$ individual solutions, and set the iteration counter $t = 0$.
2. Evaluation: compute the objective function values and constraint violations of all solutions in population $P_t$, and sort the solutions according to the non-dominated sorting method of NSGA-II.
3. Selection: use a binary tournament selection strategy to select $N$ individual solutions to constitute the offspring population $Q_t$. To this end, two solutions are first drawn at random from $P_t$, and the winner is then determined by comparing non-dominated rank and crowding distance.
4. Crossover and mutation: apply crossover and mutation operations to the solutions in the offspring population $Q_t$ to generate a new population $Q_t'$. The crossover operation uses simulated binary crossover (SBX) and the mutation operation uses polynomial mutation.
5. Neighborhood search: for each solution $x$ in population $Q_t'$, perform a neighborhood search with probability $p_{ns}$. The neighborhood search operation includes the following steps:
1) randomly select a solution $x'$ satisfying $d(x, x') \le r$, where $d$ represents the distance between solution vectors and $r$ is the dynamically adjusted neighborhood search radius;
2) generate a new solution $x''$ by crossover and mutation operations;
3) if $x''$ lies on the Pareto front and is better than $x$, replace $x$ with $x''$.
In order to realize neighborhood searches of different granularity at different stages, the invention introduces a dynamic adjustment mechanism so that the neighborhood search radius $r$ gradually decreases as the number of iterations increases. Specifically, the radius $r_t$ at iteration $t$ is computed from the current iteration number $t$ and the initial neighborhood search radius $r_0$ so that it decays over the run. By dynamically adjusting the neighborhood search radius, the improved NSGA-II can achieve an adaptive balance between global and local search.
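The decay schedule itself is rendered as an image in the original; a linear decay consistent with the stated behavior (radius $r_0$ at the start, shrinking as iterations advance) is one plausible sketch:

```python
def neighborhood_radius(r0, t, t_max):
    """Dynamically adjusted neighborhood search radius (assumed linear decay):
    starts at r0 and shrinks toward 0 as iteration t approaches t_max, moving
    the search from coarse global exploration to fine local refinement."""
    return r0 * (1.0 - t / t_max)
```

Other monotone decays (e.g., exponential) would serve the same purpose; the key property is that early iterations search widely and later iterations refine locally.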
6. Environment selection: combine the parent population $P_t$ and the offspring population $Q_t'$ into a new population $R_t$. Perform non-dominated sorting on $R_t$ and calculate crowding distances, then select the best $N$ individuals to form the new population $P_{t+1}$.
7. Termination check: check whether a termination condition (e.g., a preset maximum number of iterations or a convergence criterion) is met. If the termination condition is satisfied, end the algorithm; otherwise, set $t = t + 1$ and return to step 2.
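The non-dominated sorting used in steps 2 and 6 can be sketched compactly. This is a generic (non-optimized) version for illustration, assuming all objectives are minimized; NSGA-II's fast variant achieves the same fronts with lower complexity:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective (all minimized)
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objs):
    """Return the list of Pareto fronts, each a list of indices into objs."""
    fronts, remaining = [], set(range(len(objs)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objs[j], objs[i])
                            for j in remaining if j != i)]
        fronts.append(sorted(front))
        remaining -= set(front)
    return fronts
```

Front 0 is the current Pareto-optimal set; environment selection fills $P_{t+1}$ front by front, breaking ties within a front by crowding distance.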
The above embodiment has several advantages:
1. multi-objective optimization: by introducing multi-objective optimization (prediction accuracy, model complexity, prediction stability and training time), the method can comprehensively consider a plurality of performance indexes when searching the convolutional neural network structure and parameters, so that a better balance is found among the indexes. This helps to avoid overfitting, reduce model complexity, improve prediction stability, while shortening training time.
2. Adaptive crossover and mutation probabilities: the method introduces a strategy for adaptively adjusting the crossover and mutation probabilities, so that the search process can dynamically balance exploration and exploitation at different stages according to the fitness values of the current population, thereby improving the search efficiency and solution quality of the genetic algorithm.
3. Convolutional neural network structure and parameter optimization based on evolution algorithm: the method optimizes the structure and parameters of the convolutional neural network by adopting an evolution algorithm, so that the model can be automatically adapted to different power generation load data characteristics, and the feature extraction effect and the prediction accuracy are improved.
4. Neighborhood search strategy and dynamically adjusted neighborhood search radius: by introducing a neighborhood search strategy and dynamically adjusting the search radius, the improved NSGA-II algorithm can better balance global and local search, improving convergence speed and solution quality. The improvement is broadly applicable: it can be applied to the feature extraction problem in this field and to other multi-objective optimization problems.
5. Model generalization performance: the convolutional neural network model obtained by optimization of the method has better generalization performance, so that higher prediction accuracy can be still maintained when new power generation load data are faced. This makes the method robust in practical applications.
And 140, training the power generation load prediction model to be trained based on the characteristics of each sample in the current training set and the corresponding sample labels to obtain a trained power generation load prediction model.
The power generation load prediction model to be trained is used for processing each sample characteristic and a corresponding sample label by adopting a prediction classification algorithm based on a variation self-encoder or an extreme learning machine algorithm based on a self-adaptive weight optimizing particle optimization algorithm to predict a corresponding power generation load interval; the sample label is a power generation load section corresponding to the corresponding sample characteristic.
A. If the power generation load prediction model to be trained adopts an extreme learning machine algorithm based on a self-adaptive weight optimizing particle optimization algorithm, the step of training the power generation load prediction model to be trained based on each sample feature and corresponding sample label in the current training set to obtain a trained power generation load prediction model may include:
initializing an optimized particle population of the extreme learning machine by adopting a self-adaptive weight strategy, wherein each optimized particle in the initialized optimized particle population corresponds to one particle weight; each optimizing particle comprises a learning machine weight and a corresponding bias;
Aiming at the iterative training of the current round, calculating the significance data of each optimizing particle according to an objective function; wherein the objective function is a function related to the prediction accuracy of the current training set in the previous round of iterative training; when the previous round is the first round, the prediction precision in the iterative training of the previous round is 0;
updating the particle weight of each optimizing particle based on a preset particle weight updating algorithm;
when the iteration ending condition is met, the optimizing particle corresponding to the particle weight obtained in the last iteration is acquired and determined as the optimal optimizing particle;
the extreme learning machine carrying the learning machine weight and corresponding bias of the optimal optimizing particle is then determined as the trained power generation load prediction model.
In some embodiments, the objective function relates the significance data $F(x_i)$ of the $i$-th optimizing particle at spatial position $x_i$ to the initial saliency data $I_0$, the association coefficient $\gamma$, the distance $d_{ij}$ between the $i$-th and $j$-th optimizing particles, and the prediction accuracy $A_t$ of the $t$-th iteration.
In some embodiments, the preset particle weight update algorithm may include a first update algorithm and a second update algorithm;
The first update algorithm updates the spatial position of each optimizing particle: the spatial position $x_i^{t+1}$ of the $i$-th optimizing particle at the $(t+1)$-th iteration is computed from its spatial positions $x_i^{t}$ and $x_i^{t-1}$ at the $t$-th and $(t-1)$-th iterations, a step-size factor $\alpha$, and a random perturbation in the $t$-th iteration.
The second update algorithm updates the particle weights: the particle weight $w_i^{t+1}$ of the $i$-th optimizing particle at the $(t+1)$-th iteration is computed from its particle weight $w_i^{t}$ at the $t$-th iteration and the maximum number of iterations $T_{max}$.
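The update formulas themselves are only partially recoverable from the text; as an illustrative sketch of a position update of this family (assuming a step toward the best-known position plus a small Gaussian perturbation, with `alpha` as the step-size factor and `sigma` as the perturbation scale, both hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def update_particle(x_t, x_best, alpha=0.5, sigma=0.01):
    """One assumed position-update step: move the particle from its current
    position x_t toward the best-known position x_best with step factor
    alpha, then add a random perturbation of scale sigma."""
    return x_t + alpha * (x_best - x_t) + sigma * rng.standard_normal(x_t.shape)
```

With `sigma = 0` the rule reduces to a pure contraction toward the best position; the perturbation term keeps the swarm exploring.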
Further, the extreme learning machine (ELM) is a single-hidden-layer feedforward neural network, whose hidden-layer weights and biases are optimized by the adaptive-weight optimizing particle algorithm. The optimized ELM contains the following components:
Input layer: receives the feature vector $X$ of a sample.
Hidden layer: comprises $L$ hidden-layer neurons, where the $j$-th neuron has weight $w_j$, bias $b_j$, and activation function $g(\cdot)$.
Output layer: outputs the predicted value, where $C$ is the number of categories.
The hidden-layer weights $w_j$ and biases $b_j$ are optimized using the adaptive-weight optimizing particle algorithm. The objective function is set to minimize the classification error, iteratively finding the optimal $w_j$ and $b_j$.
The output of the optimized ELM can be expressed as:

$$\hat{Y} = H\beta$$

where $H$ is the hidden-layer output matrix and $\beta$ is the output weight matrix.
(1) Calculating the hidden-layer output matrix $H$.

The elements of the hidden-layer output matrix are defined as:

$$H_{ij} = g(w_j \cdot x_i + b_j)$$
(2) Solving the output weight matrix $\beta$ using the least-squares method.

To obtain the optimal output weight matrix $\beta$, it is solved by regularized least squares:

$$\beta = (H^T H + \lambda I)^{-1} H^T T$$

where $T$ is the target value matrix, $\lambda$ is a regularization parameter, and $I$ is an identity matrix.
(3) Calculating the predicted value $\hat{Y}$ and evaluating model performance.

After $\beta$ is obtained, the predicted value $\hat{Y}$ is computed as:

$$\hat{Y} = H^* \beta^*$$

where $H^*$ is the hidden-layer output matrix determined by the trained parameters and $\beta^*$ is the output weight matrix determined by the trained parameters.
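The ELM forward pass and regularized least-squares solve described above can be sketched as follows (a minimal illustration with random hidden weights in place of the particle-optimized ones; `tanh` as $g(\cdot)$ and the layer sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

def elm_fit(X, T, n_hidden=20, lam=1e-3):
    """Train a single-hidden-layer ELM: random hidden weights W and biases b,
    then a regularized least-squares solve for the output weights beta."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                # hidden-layer output matrix H
    # beta = (H^T H + lam I)^{-1} H^T T
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Predicted value: Y_hat = H beta."""
    return np.tanh(X @ W + b) @ beta
```

In the patent's scheme the hidden weights and biases come from the optimal optimizing particle rather than a random draw; only the `beta` solve is unchanged.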
Therefore, the extreme learning machine power generation load prediction classification algorithm optimized based on the adaptive weight optimizing particle algorithm has the following characteristics and advantages:
1. the self-adaptive weight strategy can improve the flexibility of the optimizing particle algorithm in searching the solution space, thereby improving the optimizing performance.
2. Through optimizing the hidden layer weight and bias, the generalization capability and the prediction accuracy of the ELM model are improved.
B. If the power generation load prediction model to be trained adopts a prediction classification algorithm based on a variation self-encoder, the step of training the power generation load prediction model to be trained based on each sample feature and corresponding sample label in the current training set to obtain a trained power generation load prediction model may include:
(1) First, the variational auto-encoder (VAE) compresses the data set $X$ to low-dimensional representations $z$. That is, the input data are mapped to a mean vector $\mu$ and a variance vector $\sigma^2$, and a set of random codes $z$ is sampled from the mean and variance.
Specifically, the VAE is trained by minimizing the reconstruction error and the KL divergence between the encoder distribution (defined by $\mu$ and $\sigma^2$) and the prior.
Specifically, the decoder $p_\theta(x \mid z)$ is set to be a conditional Gaussian distribution, namely:

$$p_\theta(x \mid z) = \mathcal{N}\bigl(x;\, \mu_\theta(z),\, \Sigma_\theta(z)\bigr)$$

where $\mu_\theta(z)$ and $\Sigma_\theta(z)$ are respectively the mean vector and covariance matrix output by the decoder network.
For a given sample $x$, the encoder $q_\phi(z \mid x)$ maps it to the latent space $z$:

$$z = \mu_\phi(x) + \sigma_\phi(x) \odot \epsilon, \qquad \epsilon \sim \mathcal{N}(0, I)$$

where $\mu_\phi(x)$ and $\sigma_\phi(x)$ are respectively the mean vector and standard deviation vector output by the encoder $q_\phi$, and $\epsilon$ is a noise vector sampled from the standard normal distribution.
Then, the decoder $p_\theta$ maps $z$ back to the original data space and the reconstruction error is calculated from the squared deviations between the sample features and the decoder means, scaled by the standard deviations:

$$L_i = \sum_{j=1}^{D} \frac{(x_{ij} - \mu_{ij})^2}{2\sigma_{ij}^2}$$

where $j$ denotes the $j$-th feature, $D$ is the total number of features, $\mu_{ij}$ and $\sigma_{ij}$ are the mean and standard deviation of the $j$-th feature of the $i$-th sample, and $L_i$ denotes the reconstruction error of the $i$-th sample.
The final goal is to minimize the sum of reconstruction errors over all samples plus the KL divergence between $q_\phi(z \mid x)$ and the prior $p(z)$:

$$\mathcal{L} = \sum_{i} L_i + D_{KL}\bigl(q_\phi(z \mid x)\,\|\,p(z)\bigr)$$

where $D_{KL}(q_\phi(z \mid x) \| p(z))$ denotes the KL divergence between the distribution $q_\phi(z \mid x)$ output by the encoder and the prior distribution $p(z)$. Using the re-parameterization technique, for a standard normal prior the KL divergence has the closed form:

$$D_{KL} = \frac{1}{2} \sum_{d=1}^{J} \left( \mu_d^2 + \sigma_d^2 - \log \sigma_d^2 - 1 \right)$$

where $\mu_d$ and $\sigma_d$ denote the $d$-th components of the encoder outputs and $J$ represents the dimension of the latent space.
Finally, by splicing the encoder outputs together, a compressed low-dimensional representation is obtained, in which each row represents the low-dimensional representation of one sample. In this way, low-dimensional representations $z$ that capture the data distribution characteristics are obtained.
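The re-parameterized sampling and the closed-form KL term above can be sketched as follows (a minimal NumPy illustration of the two formulas, with diagonal Gaussian encoder outputs assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, sigma):
    """z = mu + sigma * eps with eps ~ N(0, I): the re-parameterization
    trick, which keeps the sampling step differentiable in mu and sigma."""
    eps = rng.standard_normal(mu.shape)
    return mu + sigma * eps

def kl_to_standard_normal(mu, sigma):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over the
    J latent dimensions: 0.5 * sum(mu^2 + sigma^2 - log sigma^2 - 1)."""
    return 0.5 * np.sum(mu ** 2 + sigma ** 2 - np.log(sigma ** 2) - 1.0)
```

The KL term is zero exactly when the encoder outputs the standard normal ($\mu = 0$, $\sigma = 1$) and grows as the code distribution drifts away from the prior.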
(2) For the low-dimensional representations $z$, the prior distribution $p(z)$ and the posterior distribution $q(z \mid c)$ are calculated.

The probability density function of the prior distribution $p(z)$ is built from per-dimension densities, where $c$ is a category, $w_j$ represents the manually set weight of the $j$-th dimension of $z$, and $f_j$ represents the probability density function of the $j$-th dimension.
The posterior distribution $q(z \mid c)$ is estimated using a similar method.
Specifically, all samples are projected into the low-dimensional space, which is then clustered, and the probability density function of each cluster is estimated. The probability density function of the posterior distribution $q(z \mid c)$ is built from the conditional probability density functions of each dimension given the category $c$.
Then, a Gaussian distribution is used for the estimate, namely:

$$q(z \mid c) = \mathcal{N}(z;\, \mu_c,\, \Sigma_c)$$

where $\mu_c$ and $\Sigma_c$ are respectively the mean vector and covariance matrix of category $c$. They are solved by maximum likelihood estimation:

$$\mu_c = \frac{1}{N_c} \sum_{i:\, y_i = c} z_i, \qquad \Sigma_c = \frac{1}{N_c} \sum_{i:\, y_i = c} (z_i - \mu_c)(z_i - \mu_c)^T$$

where $N_c$ is the number of samples of category $c$ and $\mathbb{1}(\cdot)$ denotes the indicator function (so that $N_c = \sum_i \mathbb{1}(y_i = c)$).
Then, for each sample $z_i$, the category with the highest probability is selected as the prediction result $\hat{y}_i$:

$$\hat{y}_i = \underset{c}{\arg\max}\; q(z_i \mid c)$$
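The maximum-likelihood Gaussian fit and highest-density classification described above can be sketched as follows (a minimal illustration; the small diagonal regularizer on the covariance is an assumption added for numerical stability):

```python
import numpy as np

def fit_gaussians(Z, y):
    """Per-class maximum-likelihood mean vector and covariance matrix in the
    latent space; a tiny diagonal term keeps covariances invertible."""
    params = {}
    for c in np.unique(y):
        Zc = Z[y == c]
        mu = Zc.mean(axis=0)
        cov = np.cov(Zc, rowvar=False) + 1e-6 * np.eye(Z.shape[1])
        params[c] = (mu, cov)
    return params

def predict(Z, params):
    """Assign each sample to the class whose Gaussian density is highest."""
    def logpdf(z, mu, cov):
        d = z - mu
        return -0.5 * (d @ np.linalg.solve(cov, d) + np.log(np.linalg.det(cov)))
    classes = list(params)
    return np.array([classes[int(np.argmax([logpdf(z, *params[c])
                                            for c in classes]))]
                     for z in Z])
```

Constant terms of the log-density cancel in the arg-max, so they are omitted.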
It can be seen that, for the construction of the prediction model, the present application provides the two sets of embodiments described above: first, an extreme learning machine whose parameters are optimized by the adaptive-weight optimizing particle algorithm; second, a prediction classification algorithm based on an improved variational auto-encoder, in which the prior and posterior distributions of the VAE are constructed through a non-parametric density estimation method so as to better fit the data set. The two methods provide different embodiments for the construction of the prediction model, and both improve prediction performance.
Further, the trained model may be evaluated using a validation data set, and the predictive performance of the model may be assessed using common evaluation metrics, such as mean square error (MSE) and mean absolute error (MAE).
And then, the power generation load can be predicted, namely, the test data set is input into a trained prediction model, and a power generation load prediction result is obtained.
Corresponding to the method, the embodiment of the application also provides a power generation load prediction model training device based on multi-source data, as shown in fig. 2, the device comprises:
an obtaining unit 210, configured to obtain training data sets corresponding to different power generation load intervals, where training samples corresponding to each power generation load interval in the training data sets include multi-source power generation load data, and the multi-source power generation load data includes data vectors with different data features;
the sample expansion unit 220 is configured to perform interpolation calculation on each training sample and a neighboring sample of the corresponding training sample by using a preset sample expansion algorithm, so as to obtain a new training sample corresponding to the corresponding training sample, so as to generate a new training data set, where the training sample corresponds to the same power generation load interval as the corresponding new training sample, and the euclidean distance between the neighboring sample of the training sample and the corresponding training sample is the shortest;
the feature extraction unit 230 is configured to perform feature extraction on each training sample in the new training data set by using a convolutional neural network of an evolution algorithm, so as to obtain a current training set, where the current training set includes sample features corresponding to each training sample in the new training data set;
The training unit 240 is configured to train the power generation load prediction model to be trained based on each sample feature and a corresponding sample label in the current training set, so as to obtain a trained power generation load prediction model; the power generation load prediction model to be trained is used for processing the sample features and the corresponding sample labels by adopting a prediction classification algorithm based on a variational auto-encoder, or an extreme learning machine algorithm based on an adaptive-weight optimizing particle optimization algorithm, to predict a corresponding power generation load interval; and the sample label is a power generation load interval corresponding to the corresponding sample feature.
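The interpolation performed by the sample expansion unit 220 can be pictured as nearest-neighbour interpolation over the feature vectors; the sketch below uses a fixed interpolation coefficient, whereas the patent additionally derives the coefficient from decision-tree regions and local concentration, so the coefficient and toy samples here are assumptions:

```python
import numpy as np

def expand(samples, coef=0.5, rng=None):
    """For each sample, interpolate toward its Euclidean nearest neighbour."""
    rng = rng or np.random.default_rng(0)
    samples = np.asarray(samples, dtype=float)
    new = []
    for i, s in enumerate(samples):
        d = np.linalg.norm(samples - s, axis=1)
        d[i] = np.inf                         # exclude the sample itself
        nn = samples[np.argmin(d)]            # nearest neighbour (shortest distance)
        lam = coef * rng.random()             # interpolation coefficient in [0, coef)
        new.append(s + lam * (nn - s))        # new sample on the segment s -> nn
    return np.vstack([samples, new])          # original + synthetic samples

grown = expand([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(grown.shape)  # → (6, 2)
```

Because each synthetic sample lies between an original sample and its nearest neighbour, it inherits the same power generation load interval as the sample it was generated from.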
The functions of each functional unit of the power generation load prediction model training device based on the multi-source data provided by the embodiment of the application can be realized through the steps of the method, so that the specific working process and the beneficial effects of each unit in the power generation load prediction model training device based on the multi-source data provided by the embodiment of the application are not repeated here.
The embodiment of the application also provides an electronic device, as shown in fig. 3, which includes a processor 310, a communication interface 320, a memory 330 and a communication bus 340, wherein the processor 310, the communication interface 320 and the memory 330 complete communication with each other through the communication bus 340.
A memory 330 for storing a computer program;
the processor 310 is configured to execute the program stored in the memory 330, and implement the following steps:
acquiring training data sets corresponding to different power generation load intervals, wherein training samples corresponding to each power generation load interval in the training data sets are multi-source power generation load data, and the multi-source power generation load data comprise data vectors with different data characteristics;
performing interpolation calculation on each training sample and a neighboring sample of the corresponding training sample by adopting a preset sample expansion algorithm to obtain a new training sample corresponding to the corresponding training sample so as to generate a new training data set, wherein the training sample and the corresponding new training sample correspond to the same power generation load interval, and the Euclidean distance between the neighboring sample of the training sample and the corresponding training sample is shortest;
performing feature extraction on each training sample in the new training data set by adopting a convolutional neural network of an evolution algorithm to obtain a current training set, wherein the current training set comprises sample features corresponding to each training sample in the new training data set;
based on the sample characteristics and the corresponding sample labels in the current training set, training a power generation load prediction model to be trained to obtain a trained power generation load prediction model; the power generation load prediction model to be trained is used for processing the sample features and the corresponding sample labels by adopting a prediction classification algorithm based on a variational auto-encoder, or an extreme learning machine algorithm based on an adaptive-weight optimizing particle optimization algorithm, to predict a corresponding power generation load interval; and the sample label is a power generation load interval corresponding to the corresponding sample characteristic.
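The evolutionary-algorithm feature-extraction step can be pictured as a genetic search over network structure; the sketch below evolves a (layers, filters) pair under a surrogate fitness trading prediction error against model complexity, with an adaptive mutation probability. The surrogate fitness, crossover scheme, and probabilities are illustrative assumptions, not the patent's multi-objective algorithm:

```python
import random

random.seed(42)

def fitness(ind):
    """Surrogate for the combined evaluation index: deeper/wider nets get a
    lower pretend error, penalised by a complexity term (parameter count)."""
    layers, filters = ind
    error = 1.0 / (layers * filters)
    complexity = 0.001 * layers * filters ** 2
    return -(error + complexity)              # maximise negative cost

def crossover(a, b):
    """One-point crossover: layers from one parent, filters from the other."""
    return (a[0], b[1])

def mutate(ind, p):
    """Perturb each gene with probability p, respecting lower bounds."""
    layers, filters = ind
    if random.random() < p:
        layers = max(1, layers + random.choice([-1, 1]))
    if random.random() < p:
        filters = max(4, filters + random.choice([-4, 4]))
    return (layers, filters)

pop = [(random.randint(1, 6), random.choice([8, 16, 32, 64])) for _ in range(12)]
for gen in range(20):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:6]                          # keep the fitter half
    p_mut = 0.1 + 0.4 * gen / 20               # adaptive mutation probability
    children = [mutate(crossover(random.choice(parents), random.choice(parents)),
                       p_mut) for _ in range(6)]
    pop = parents + children
best = max(pop, key=fitness)
print(best)
```

In the patent the evaluation index additionally covers prediction stability and training time, and the selection uses a multi-objective optimization algorithm rather than this single scalar fitness.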
The communication bus mentioned above may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in the figure, but this does not mean that there is only one bus or only one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The memory may include random access memory (RAM), or may include non-volatile memory (NVM), for example at least one disk storage. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
Since the implementation manner and the beneficial effects of the solution to the problem of each device of the electronic apparatus in the foregoing embodiment may be implemented by referring to each step in the embodiment shown in fig. 1, the specific working process and the beneficial effects of the electronic apparatus provided by the embodiment of the present application are not repeated herein.
In yet another embodiment of the present application, a computer readable storage medium is provided, in which instructions are stored, which, when run on a computer, cause the computer to perform the multi-source data-based power generation load prediction model training method according to any one of the above embodiments.
In yet another embodiment of the present application, a computer program product containing instructions that, when run on a computer, cause the computer to perform the multi-source data based power generation load prediction model training method of any of the above embodiments is also provided.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as methods, systems, or computer program products. Accordingly, embodiments of the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present application may take the form of a computer program product on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, those skilled in the art may make additional alterations and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the present application.
It will be apparent to those skilled in the art that various modifications and variations can be made in the embodiments of the present application without departing from the spirit or scope of the embodiments of the application. Thus, if such modifications and variations of the embodiments in the present application fall within the scope of the embodiments of the present application and the equivalent techniques thereof, such modifications and variations are also intended to be included in the embodiments of the present application.
Claims (10)
1. A method for training a power generation load prediction model based on multi-source data, the method comprising:
acquiring training data sets corresponding to different power generation load intervals, wherein training samples corresponding to each power generation load interval in the training data sets are multi-source power generation load data, and the multi-source power generation load data comprise data vectors with different data characteristics;
performing interpolation calculation on each training sample and a neighboring sample of the corresponding training sample by adopting a preset sample expansion algorithm to obtain a new training sample corresponding to the corresponding training sample so as to generate a new training data set, wherein the training sample and the corresponding new training sample correspond to the same power generation load interval, and the Euclidean distance between the neighboring sample of the training sample and the corresponding training sample is shortest;
performing feature extraction on each training sample in the new training data set by adopting a convolutional neural network of an evolution algorithm to obtain a current training set, wherein the current training set comprises sample features corresponding to each training sample in the new training data set;
based on the sample characteristics and the corresponding sample labels in the current training set, training a power generation load prediction model to be trained to obtain a trained power generation load prediction model; the power generation load prediction model to be trained is used for processing the sample features and the corresponding sample labels by adopting a prediction classification algorithm based on a variational auto-encoder, or an extreme learning machine algorithm based on an adaptive-weight optimizing particle optimization algorithm, to predict a corresponding power generation load interval; and the sample label is a power generation load interval corresponding to the corresponding sample characteristic.
2. The method of claim 1, wherein performing interpolation computation on each training sample and a neighboring sample of the corresponding training sample by using a preset sample expansion algorithm to obtain a new training sample corresponding to the corresponding training sample, comprises:
dividing the training samples by adopting a decision tree according to the data characteristics in the multi-source data to obtain a plurality of areas, wherein each area comprises at least one training sample;
for each region, acquiring the local concentration of each sample in the region and the average local concentration of the region;
determining interpolation coefficients of the corresponding training samples based on the local concentration of each training sample and the average local concentration of the region;
and aiming at each training sample, performing interpolation calculation on the training sample and the neighbor training sample of the training sample according to the interpolation coefficient of the training sample, so as to obtain a new training sample corresponding to the training sample.
3. The method of claim 1, wherein performing feature extraction on each training sample in the new training dataset using a convolutional neural network of an evolutionary algorithm to obtain a current training set, comprising:
training the convolutional neural network to be trained according to each individual in the initialized population and the new training data set, and obtaining current convolutional neural networks corresponding to different individuals and network evaluation indexes of the current convolutional neural networks; each individual comprises a network structure and corresponding network parameters of the convolutional neural network to be trained; and the network evaluation indexes comprise a prediction error, model complexity, prediction stability and training time;
Processing the different individuals and corresponding network evaluation indexes based on a preset multi-objective optimization algorithm and a preset self-adaptive cross and variation probability function to generate a new population;
and determining an optimal individual and a trained convolutional neural network based on the individual with the optimal fitness value in the new population, wherein the optimal fitness value is the maximum sum of all network evaluation indexes meeting the target optimization condition, and the trained convolutional neural network is the convolutional neural network corresponding to the optimal individual.
4. The method of claim 3, wherein processing the different individuals and corresponding network evaluation metrics based on a pre-set multi-objective optimization algorithm and a pre-set adaptive crossover and mutation probability function to generate a new population comprises:
optimizing the network evaluation index of the current convolutional neural network based on a preset multi-objective optimization algorithm to obtain a first generation population, wherein the first generation population comprises at least one candidate individual meeting a preset fitness value;
based on a preset self-adaptive crossover and mutation probability function, crossover and mutation operation is carried out on each candidate individual in the first generation population to obtain a second generation population;
Determining the obtained second generation population as a new initialized population, and returning to the execution step: training the convolutional neural network to be trained according to each individual in the initialized population and the new training data set, and obtaining current convolutional neural networks corresponding to different individuals and network evaluation indexes of the current convolutional neural networks; and executing the following steps: optimizing the network evaluation index of the current convolutional neural network based on a preset multi-objective optimization algorithm until a preset iteration termination condition is met;
and if the iteration execution meets the preset iteration termination condition, determining a second generation population obtained by the last iteration execution as the new population.
5. The method of claim 1, wherein if the power generation load prediction model to be trained adopts an extreme learning machine algorithm based on an adaptive weight optimizing particle optimization algorithm, training the power generation load prediction model to be trained based on each sample feature and a corresponding sample label in the current training set to obtain a trained power generation load prediction model, comprising:
initializing an optimized particle population of the extreme learning machine by adopting a self-adaptive weight strategy, wherein each optimizing particle in the initialized optimized particle population corresponds to one particle weight; and each optimizing particle comprises a learning machine weight and a corresponding bias;
aiming at the iterative training of the current round, calculating the significance data of each optimizing particle according to an objective function; the objective function is a function related to the prediction precision of the current training set in the previous round of iterative training; when the previous round is the first round, the prediction precision in the iterative training of the previous round is 0;
updating the particle weight of each optimizing particle based on a preset particle weight updating algorithm;
when the iteration ending condition is met, acquiring optimizing particles corresponding to the particle weights obtained by the last iteration execution, and determining the optimizing particles as optimal optimizing particles and an extreme learning machine carrying the optimal optimizing particles;
and determining the extreme learning machine carrying the weight and the corresponding bias of the learning machine in the optimal optimizing particles as a trained power generation load prediction model.
6. The method of claim 5, wherein the objective function is represented as:
wherein $F_i$ denotes the significance data of the $i$-th optimizing particle at its position in space, $F_0$ denotes the initial significance data, $\gamma$ denotes the association coefficient, $r_{ij}$ denotes the distance between the $i$-th optimizing particle and the $j$-th optimizing particle, and $P_t$ denotes the prediction accuracy of the $t$-th iteration.
7. The method of claim 5, wherein the predetermined particle weight update algorithm comprises a first update algorithm and a second update algorithm;
the first update algorithm is expressed as:;
wherein $x_i^{t+1}$ denotes the spatial position of the $i$-th optimizing particle at the $(t+1)$-th iteration, $x_i^{t}$ denotes the spatial position of the $i$-th optimizing particle at the $t$-th iteration, $x_j^{t}$ denotes the spatial position of the $j$-th optimizing particle at the $t$-th iteration, $\alpha$ denotes the step size factor, and $\varepsilon_t$ denotes the random perturbation in the $t$-th iteration;
the second update algorithm is expressed as:;
wherein $T_{\max}$ denotes the maximum number of iterations, $w_i^{t+1}$ denotes the particle weight of the $i$-th optimizing particle at the $(t+1)$-th iteration, and $w_i^{t}$ denotes the particle weight of the $i$-th optimizing particle at the $t$-th iteration.
8. A power generation load prediction model training device based on multi-source data, the device comprising:
the system comprises an acquisition unit, a data processing unit and a data processing unit, wherein the acquisition unit is used for acquiring training data sets corresponding to different power generation load intervals, training samples corresponding to each power generation load interval in the training data sets comprise multi-source power generation load data, and the multi-source power generation load data comprise data vectors with different data characteristics;
The sample expansion unit is used for carrying out interpolation calculation on each training sample and the adjacent samples of the corresponding training sample by adopting a preset sample expansion algorithm to obtain a new training sample corresponding to the corresponding training sample so as to generate a new training data set, wherein the training sample and the corresponding new training sample correspond to the same power generation load interval, and the Euclidean distance between the adjacent samples of the training sample and the corresponding training sample is shortest;
the feature extraction unit is used for extracting features of each training sample in the new training data set by adopting a convolutional neural network of an evolution algorithm to obtain a current training set, wherein the current training set comprises sample features corresponding to each training sample in the new training data set;
the training unit is used for training the power generation load prediction model to be trained based on the characteristics of each sample in the current training set and the corresponding sample labels, to obtain a trained power generation load prediction model; the power generation load prediction model to be trained is used for processing the sample features and the corresponding sample labels by adopting a prediction classification algorithm based on a variational auto-encoder, or an extreme learning machine algorithm based on an adaptive-weight optimizing particle optimization algorithm, to predict a corresponding power generation load interval; and the sample label is a power generation load interval corresponding to the corresponding sample characteristic.
9. An electronic device, characterized in that the electronic device comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are in communication with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the multi-source data-based power generation load prediction model training method of any one of claims 1 to 7 when executing a program stored on a memory.
10. A computer readable storage medium, wherein a computer program is stored in the computer readable storage medium, and when the computer program is executed by a processor, the method for training the power generation load prediction model based on the multi-source data according to any one of claims 1 to 7 is implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310878115.XA CN116596044B (en) | 2023-07-18 | 2023-07-18 | Power generation load prediction model training method and device based on multi-source data |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116596044A true CN116596044A (en) | 2023-08-15 |
CN116596044B CN116596044B (en) | 2023-11-07 |
Family
ID=87608394
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310878115.XA Active CN116596044B (en) | 2023-07-18 | 2023-07-18 | Power generation load prediction model training method and device based on multi-source data |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116596044B (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116956197A (en) * | 2023-09-14 | 2023-10-27 | 山东理工昊明新能源有限公司 | Deep learning-based energy facility fault prediction method and device and electronic equipment |
CN117318055A (en) * | 2023-12-01 | 2023-12-29 | 山东理工昊明新能源有限公司 | Power load prediction model processing method and device, electronic equipment and storage medium |
CN117807434A (en) * | 2023-12-06 | 2024-04-02 | 中国信息通信研究院 | Communication data set processing method and device |
CN117833243A (en) * | 2024-03-06 | 2024-04-05 | 国网山东省电力公司信息通信公司 | Method, system, equipment and medium for predicting short-term demand of electric power |
CN118092403A (en) * | 2024-04-23 | 2024-05-28 | 广汽埃安新能源汽车股份有限公司 | Electric control detection model training method, electric control system detection method and device |
CN118211126A (en) * | 2024-05-22 | 2024-06-18 | 国网山东省电力公司蒙阴县供电公司 | Training method and device for photovoltaic power generation device fault prediction model |
CN118296473A (en) * | 2024-06-06 | 2024-07-05 | 广汽埃安新能源汽车股份有限公司 | Motor control system stability evaluation method, device, storage medium and equipment |
CN118349899A (en) * | 2024-06-18 | 2024-07-16 | 国网山东省电力公司青岛供电公司 | Load prediction method and system considering multi-source heterogeneous data fusion |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112396234A (en) * | 2020-11-20 | 2021-02-23 | 国网经济技术研究院有限公司 | User side load probability prediction method based on time domain convolutional neural network |
CN112633577A (en) * | 2020-12-23 | 2021-04-09 | 西安建筑科技大学 | Short-term household electrical load prediction method, system, storage medium and equipment |
CN114781692A (en) * | 2022-03-24 | 2022-07-22 | 国网河北省电力有限公司衡水供电分公司 | Short-term power load prediction method and device and electronic equipment |
WO2023035564A1 (en) * | 2021-09-08 | 2023-03-16 | 广东电网有限责任公司湛江供电局 | Load interval prediction method and system based on quantile gradient boosting decision tree |
CN116245259A (en) * | 2023-05-11 | 2023-06-09 | 华能山东发电有限公司众泰电厂 | Photovoltaic power generation prediction method and device based on depth feature selection and electronic equipment |
CN116316599A (en) * | 2023-03-28 | 2023-06-23 | 广东电网有限责任公司东莞供电局 | Intelligent electricity load prediction method |
Non-Patent Citations (1)
Title |
---|
QI Tingting; LI Jianqi: "Short-term load forecasting of microgrid based on improved machine learning algorithm", Journal of Hunan University of Arts and Science (Natural Science Edition), No. 03 *
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116956197A (en) * | 2023-09-14 | 2023-10-27 | 山东理工昊明新能源有限公司 | Deep learning-based energy facility fault prediction method and device and electronic equipment |
CN116956197B (en) * | 2023-09-14 | 2024-01-19 | 山东理工昊明新能源有限公司 | Deep learning-based energy facility fault prediction method and device and electronic equipment |
CN117318055A (en) * | 2023-12-01 | 2023-12-29 | 山东理工昊明新能源有限公司 | Power load prediction model processing method and device, electronic equipment and storage medium |
CN117318055B (en) * | 2023-12-01 | 2024-03-01 | 山东理工昊明新能源有限公司 | Power load prediction model processing method and device, electronic equipment and storage medium |
CN117807434A (en) * | 2023-12-06 | 2024-04-02 | 中国信息通信研究院 | Communication data set processing method and device |
CN117833243A (en) * | 2024-03-06 | 2024-04-05 | 国网山东省电力公司信息通信公司 | Method, system, equipment and medium for predicting short-term demand of electric power |
CN117833243B (en) * | 2024-03-06 | 2024-05-24 | 国网山东省电力公司信息通信公司 | Method, system, equipment and medium for predicting short-term demand of electric power |
CN118092403A (en) * | 2024-04-23 | 2024-05-28 | 广汽埃安新能源汽车股份有限公司 | Electric control detection model training method, electric control system detection method and device |
CN118211126A (en) * | 2024-05-22 | 2024-06-18 | 国网山东省电力公司蒙阴县供电公司 | Training method and device for photovoltaic power generation device fault prediction model |
CN118296473A (en) * | 2024-06-06 | 2024-07-05 | 广汽埃安新能源汽车股份有限公司 | Motor control system stability evaluation method, device, storage medium and equipment |
CN118349899A (en) * | 2024-06-18 | 2024-07-16 | 国网山东省电力公司青岛供电公司 | Load prediction method and system considering multi-source heterogeneous data fusion |
Also Published As
Publication number | Publication date |
---|---|
CN116596044B (en) | 2023-11-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN116596044B (en) | Power generation load prediction model training method and device based on multi-source data | |
Lipu et al. | Artificial intelligence based hybrid forecasting approaches for wind power generation: Progress, challenges and prospects | |
Dong et al. | Wind power prediction based on recurrent neural network with long short-term memory units | |
CN111861013B (en) | Power load prediction method and device | |
CN109359786A (en) | A kind of power station area short-term load forecasting method | |
CN110969290A (en) | Runoff probability prediction method and system based on deep learning | |
CN111027772A (en) | Multi-factor short-term load prediction method based on PCA-DBILSTM | |
CN113469426A (en) | Photovoltaic output power prediction method and system based on improved BP neural network | |
CN106600041A (en) | Restricted Boltzmann machine-based photovoltaic power generation short-term power probability prediction method | |
CN112711896B (en) | Complex reservoir group optimal scheduling method considering multi-source forecast error uncertainty | |
CN114169434A (en) | Load prediction method | |
CN114511132A (en) | Photovoltaic output short-term prediction method and prediction system | |
CN117994986B (en) | Traffic flow prediction optimization method based on intelligent optimization algorithm | |
CN114676923A (en) | Method and device for predicting generated power, computer equipment and storage medium | |
CN111697560B (en) | Method and system for predicting load of power system based on LSTM | |
Fan et al. | Multi-objective LSTM ensemble model for household short-term load forecasting | |
CN115600729A (en) | Grid load prediction method considering multiple attributes | |
Čurčić et al. | Gaining insights into dwelling characteristics using machine learning for policy making on nearly zero-energy buildings with the use of smart meter and weather data | |
CN116454875A (en) | Regional wind farm mid-term power probability prediction method and system based on cluster division | |
CN112036598A (en) | Charging pile use information prediction method based on multi-information coupling | |
CN117060408B (en) | New energy power generation prediction method and system | |
Zhu et al. | Wind power interval and point prediction model using neural network based multi-objective optimization | |
CN103745275A (en) | Photovoltaic system electricity generation power prediction method and device | |
CN113191526A (en) | Short-term wind speed interval multi-objective optimization prediction method and system based on random sensitivity | |
CN117439053A (en) | Method, device and storage medium for predicting electric quantity of Stacking integrated model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
TA01 | Transfer of patent application right |
Effective date of registration: 20231011 Address after: No.61 Fushan Road, Xintai Economic Development Zone, Tai'an City, Shandong Province 271200 Applicant after: Huaneng Shandong Taifeng New Energy Co.,Ltd. Applicant after: Zhongtai power plant of Huaneng Shandong Power Generation Co.,Ltd. Address before: 271200 No.2 xinkuang Road, Zhangzhuang community, Xinwen street, Xintai City, Tai'an City, Shandong Province Applicant before: Zhongtai power plant of Huaneng Shandong Power Generation Co.,Ltd. |
GR01 | Patent grant | ||
GR01 | Patent grant |