CN112330052A - Distribution transformer load prediction method - Google Patents

Distribution transformer load prediction method

Info

Publication number
CN112330052A
CN112330052A (application CN202011308820.9A)
Authority
CN
China
Prior art keywords
prediction
wavelet
load
selecting
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011308820.9A
Other languages
Chinese (zh)
Inventor
梁朔
欧阳健娜
秦丽文
陈绍南
李珊
周杨珺
李欣桐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Electric Power Research Institute of Guangxi Power Grid Co Ltd
Original Assignee
Electric Power Research Institute of Guangxi Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electric Power Research Institute of Guangxi Power Grid Co Ltd filed Critical Electric Power Research Institute of Guangxi Power Grid Co Ltd
Priority to CN202011308820.9A
Publication of CN112330052A
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/04Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/06Energy or water supply

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Economics (AREA)
  • Strategic Management (AREA)
  • Human Resources & Organizations (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Marketing (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Development Economics (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Probability & Statistics with Applications (AREA)
  • Game Theory and Decision Science (AREA)
  • Public Health (AREA)
  • Water Supply & Treatment (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a distribution transformer load prediction method comprising the following steps: clustering historical distribution transformer load curves with a fuzzy C-means algorithm to obtain sets of historically similar days; counting the number of external feature groups in each cluster; classifying the day to be predicted according to the similar-day cluster it belongs to, then selecting and constructing a training sample set and applying wavelet decomposition to obtain subsequence training sample sets; training SVM models with the subsequence training sample sets; solving each wavelet subsequence of the predicted daily load with the trained SVMs; and superposing the wavelet subsequence predictions to obtain the predicted daily load of the distribution transformer. The method fully considers the influence of multiple factors on the load curve. By adopting a wavelet support vector machine algorithm it inherits the advantages of both the SVM algorithm and wavelet analysis: the solving process reaches a global optimum, the method is well suited to small sample data, and feeding wavelet-decomposed load curves into the SVM as training samples improves prediction accuracy.

Description

Distribution transformer load prediction method
Technical Field
The invention relates to an electric power system, in particular to a distribution transformer load prediction method.
Background
Management of an electric power system covers many aspects, but load prediction has always been a key task. Accurate load prediction allows unit start-up and shut-down to be scheduled economically and reasonably, reducing generation cost and improving economic benefit. Continuously improving load prediction methods therefore makes power load predictions increasingly accurate, which is of great significance to the power system. Depending on the required time span and the operational decisions to be supported, load forecasting can be divided into short-term, medium-term and long-term forecasts: energy system planning generally requires long-term forecasts, maintenance planning and fuel supply require medium-term forecasts, and daily operation of the energy system requires short-term forecasts.
Among existing load prediction methods, the expert system approach requires a large effort to build its knowledge base and generalizes poorly; artificial neural networks learn slowly and can become trapped in local minima; and wavelet analysis alone fails to take the influence of various external factors on the load into account.
Disclosure of Invention
The invention aims to overcome the defects of the prior art, and provides a distribution transformer load prediction method, which has the following specific technical scheme:
a distribution load prediction method is characterized by comprising the following steps:
clustering the distribution transformer historical load curve by adopting a fuzzy C-means algorithm to obtain a historical similar day set;
counting the number of external feature groups in each cluster;
classifying the day to be predicted according to the similar-day cluster it belongs to, and selecting and constructing a training sample set;
training the parameters of an SVM model with the training set, and solving each wavelet subsequence of the distribution transformer prediction day;
and superposing the wavelet subsequence prediction results to obtain the predicted daily load of the distribution transformer.
The clustering of the historical load curve of the distribution transformer by adopting the fuzzy C-means algorithm comprises the following steps:
1.1 Assume the data set containing n load curve objects is X = {x1, x2, …, xn}, where each object x_i (i = 1, 2, …, n) has m characteristic attributes. The membership matrix of the data set X is defined as U = [u_ij] with dimension c × n, where c is the number of clusters and u_ij is the degree of membership of the j-th object x_j of the data set X to the i-th cluster; the higher u_ij, the more likely the object falls into that cluster. The memberships u_ij satisfy:
1) for any object j (j = 1, 2, …, n):

$$\sum_{i=1}^{c} u_{ij} = 1$$

2) 0 ≤ u_ij ≤ 1, 1 ≤ i ≤ c, 1 ≤ j ≤ n;
1.2 From the data set X = {x1, x2, …, xn}, randomly select k objects as the initial cluster centers of the k clusters, recorded as c1, c2, …, ck; the k clusters of the data set X are C1, C2, …, Ck;
1.3 Taking the initial cluster centers c1, c2, …, ck as the division reference, initially divide the data set X and calculate the membership u_ij of each object to form the membership matrix U, using:

$$u_{ij} = \frac{1}{\sum_{k=1}^{c} \left( \frac{d_{ij}}{d_{kj}} \right)^{\frac{2}{m-1}}}$$

where m ∈ [1, ∞) is the fuzziness index and

$$d_{ij} = \lVert c_i - x_j \rVert$$

is the distance from object x_j to cluster center c_i (i = 1, 2, …, c);
1.4 Calculate the new cluster centers c_i' and re-cluster according to the membership degree of each object, using:

$$c_i' = \frac{\sum_{j=1}^{n} u_{ij}^{m}\, x_j}{\sum_{j=1}^{n} u_{ij}^{m}}$$
1.5 Calculate the value of the dissimilarity cost function J and compare it with the set threshold ε: if J < ε, terminate the algorithm and record c_i' as the final cluster centers; otherwise repeat 1.3 and 1.4 until the condition is met. The cost function is:

$$J = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m}\, d_{ij}^{2}$$
the statistical clustering of the number of the extrinsic feature groups comprises:
counting the number of external feature groups consisting of date types, daily average temperature, weather conditions and season types in each cluster;
categorizing each single external feature group, namely comparing its number of occurrences in each cluster and selecting the cluster with the largest count as the final classification of that external feature group.
The selecting and constructing of the training sample set comprises:
1.1 Select a prediction day for the distribution transformer and determine the final classification according to the date type, temperature interval, weather condition and season type of the prediction day;
1.2 Select the training samples: taking the historical dates of the same category as the prediction day in the final cluster as the reference, select training samples from the historical similar-day cluster and construct the input sample X = [L(-21), L(-14), L(-7), L(-3), L(-2), L(-1)], where L is load curve data in the similar-day cluster and (-N) denotes the day N days before the prediction day within the same category; if such a date does not exist in the category, the closest date in the same category is used instead;
1.3 Construct the training sample set: take the selected load data as the input sample X and the load curve data of the prediction day as the output sample Y, then reselect a prediction day and repeat the selection step until a complete training sample set is formed;
1.4 Apply wavelet decomposition and single-branch reconstruction to the training sample set to form the corresponding four load subsequence training sample sets [d1, d2, d3, a3];
1.4.1 Wavelet decomposition and single-branch reconstruction comprise: selecting the wavelet basis function and the number of decomposition levels. According to the trade-off among time-frequency compact support, vanishing moments and regularity, Daubechies 4 (db4) is selected as the wavelet basis function. For the decomposition scale of the wavelet transform: if the scale is too small, the detailed information of each frequency component of the load sequence cannot be obtained; if it is too large, more models are needed to predict the components, each model introduces some error into the prediction result, and the computation slows down. Considering the number of features selected for the SVM samples and the prediction efficiency, the method decomposes over 3 levels; the decomposition formula is:
$$W_f(a,b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{+\infty} f(t)\, \psi\!\left(\frac{t-b}{a}\right) dt$$

where a is the frequency-related scale factor, b is the time-related translation factor, and W_f(a, b) is the wavelet component of scale a contained in the original signal f(t) at time b. The original load sequence is decomposed into 4 subsequences denoted [d1, d2, d3, a3], where d_j are the detail coefficients of level j and a_3 are the level-3 approximation coefficients, computed recursively (Mallat algorithm) as:

$$a_j[k] = \sum_{l} h[l-2k]\, a_{j-1}[l]$$

$$d_j[k] = \sum_{l} g[l-2k]\, a_{j-1}[l]$$

where h and g are the low-pass and high-pass decomposition filters of the db4 wavelet and a_0 is the original load sequence.
Training the parameters of an SVM model with the training set, to obtain each wavelet subsequence of the distribution transformer prediction day, comprises the following steps:
1.1 Select the model function and the kernel function. The distribution transformer load prediction addressed by the method is a regression problem, so the regression function of the SVM model is used:

$$f(x) = \sum_{i=1}^{n} (\alpha_i - \alpha_i^{*})\, K(x_i, x) + b$$

where K(x_i, x) is the kernel function, for which the method adopts the Gaussian radial basis function, b is the bias, and α_i, α_i* are the Lagrange multipliers;
1.2 Select and optimize the training parameters of the model. The SVM has only 3 training parameters, namely the sensitivity coefficient ε, the penalty factor c and the kernel parameter g. The method uses cross validation to optimize the SVM parameters: the penalty parameter c and the kernel parameter g are first varied over given ranges, the accuracy of each selected (c, g) combination is evaluated, and finally the c and g with the highest accuracy are chosen as the optimal parameters;
1.3 Input the load subsequence training sample sets and compute each wavelet subsequence of the distribution transformer prediction day separately.
The distribution transformer load prediction method provided by the invention first groups the load data into similar days with a fuzzy C-means algorithm and then refines the similar-day classification by counting external feature groups, so the influence of multiple factors on the load curve is fully considered. By adopting a wavelet support vector machine algorithm the method inherits the advantages of both the SVM algorithm and wavelet analysis, is well suited to small sample data, and improves the prediction accuracy of the model by feeding the wavelet-decomposed load data into the SVM for training. The embodiment provided by the invention is highly general, classifies well, avoids the classification result being unduly affected by the quantization coefficient, and improves prediction accuracy for small samples.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
Fig. 1 is a flow chart diagram of a distribution transformer load prediction method.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Examples
Referring to fig. 1, fig. 1 is a schematic flow chart of a distribution transformer load prediction method.
As shown in fig. 1, a distribution transformer load prediction method includes:
s101, clustering the historical load curve of the distribution transformer by adopting a fuzzy C-means algorithm to obtain a historical similar day set.
S102, counting the number of the external feature groups of the cluster.
S103, classifying according to similar days to which the days to be predicted belong, and selecting and constructing a training sample set.
S104, training the parameters of the SVM model with the training set, and solving each wavelet subsequence of the distribution transformer prediction day.
S105, superposing the subsequence prediction results to obtain the predicted daily load of the distribution transformer.
S101, clustering the distribution transformer historical load curves by adopting a fuzzy C-means algorithm to obtain the historical similar day sets;
S101-1 Assume the data set containing n load curve objects is X = {x1, x2, …, xn}, where each object x_i (i = 1, 2, …, n) has m characteristic attributes. The membership matrix of the data set X is defined as U = [u_ij] with dimension c × n, where c is the number of clusters and u_ij is the degree of membership of the j-th object x_j of the data set X to the i-th cluster; the higher u_ij, the more likely the object falls into that cluster. The memberships u_ij satisfy:
1) for any object j (j = 1, 2, …, n):

$$\sum_{i=1}^{c} u_{ij} = 1$$

2) 0 ≤ u_ij ≤ 1, 1 ≤ i ≤ c, 1 ≤ j ≤ n;
S101-2 From the data set X = {x1, x2, …, xn}, randomly select k objects as the initial cluster centers of the k clusters, recorded as c1, c2, …, ck; the k clusters of the data set X are C1, C2, …, Ck;
S101-3 Taking the initial cluster centers c1, c2, …, ck as the division reference, initially divide the data set X and calculate the membership u_ij of each object to form the membership matrix U, using:

$$u_{ij} = \frac{1}{\sum_{k=1}^{c} \left( \frac{d_{ij}}{d_{kj}} \right)^{\frac{2}{m-1}}}$$

where m ∈ [1, ∞) is the fuzziness index and

$$d_{ij} = \lVert c_i - x_j \rVert$$

is the distance from object x_j to cluster center c_i (i = 1, 2, …, c);
S101-4 Calculate the new cluster centers c_i' and re-cluster according to the membership degree of each object, using:

$$c_i' = \frac{\sum_{j=1}^{n} u_{ij}^{m}\, x_j}{\sum_{j=1}^{n} u_{ij}^{m}}$$
S101-5 Calculate the value of the dissimilarity cost function J and compare it with the set threshold ε: if J < ε, terminate the algorithm and record c_i' as the final cluster centers; otherwise repeat S101-3 and S101-4 until the condition is met. The cost function is:

$$J = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m}\, d_{ij}^{2}$$
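As a concrete illustration of steps S101-1 to S101-5, the following Python sketch implements the fuzzy C-means loop described above. The array layout (one row per daily load curve), the default fuzziness index m = 2, the random initialisation and the stopping test on the change of J are illustrative assumptions, not values fixed by the patent.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, eps=1e-5, max_iter=100, seed=0):
    """Cluster daily load curves (rows of X) into c fuzzy clusters.

    Returns the membership matrix U (c x n) and the cluster centers.
    Sketch of steps S101-1..S101-5; m, eps and the initialisation are assumptions.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # S101-2: pick c objects at random as the initial cluster centers
    centers = X[rng.choice(n, size=c, replace=False)].copy()
    prev_J = np.inf
    for _ in range(max_iter):
        # S101-3: distances d_ij from every object x_j to every center c_i
        d = np.linalg.norm(centers[:, None, :] - X[None, :, :], axis=2) + 1e-12
        # membership update: u_ij = 1 / sum_k (d_ij / d_kj)^(2/(m-1))
        ratio = (d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0))
        U = 1.0 / ratio.sum(axis=1)           # shape (c, n), columns sum to 1
        # S101-4: new cluster centers weighted by u_ij^m
        w = U ** m
        centers = (w @ X) / w.sum(axis=1, keepdims=True)
        # S101-5: dissimilarity cost J = sum_ij u_ij^m * d_ij^2
        J = np.sum(w * d ** 2)
        # the patent stops when J itself falls below eps; checking the change
        # in J is a common practical variant used here
        if abs(prev_J - J) < eps:
            break
        prev_J = J
    return U, centers

# usage (hypothetical data file): 365 historical days, 96 load points per day
# loads = np.loadtxt("daily_loads.csv", delimiter=",")
# U, centers = fuzzy_c_means(loads, c=4)
# labels = U.argmax(axis=0)   # hard similar-day cluster label per historical day
```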
s102, counting the number of the external feature groups of the cluster. Counting the number of external feature groups consisting of date types, daily average temperature, weather conditions and season types in each cluster; categorizing the single external feature group, including: comparing the number of the external feature groups in each cluster, and selecting the cluster with the largest number as the final classification of the external feature groups;
TABLE 1 Date type label description (table provided as an image in the original publication)
TABLE 2 Daily average temperature interval labels (table provided as an image in the original publication)
TABLE 3 Weather condition label description (table provided as an image in the original publication)
TABLE 4 Season type label description (table provided as an image in the original publication)
According to the counted date type, temperature interval, weather condition and season type feature groups, the cluster category in which a feature group occurs most often is selected as its final classification; when a feature group is tied for the largest count in several clusters, the category with the largest number of samples is selected as the final classification.
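To make the counting and tie-breaking rule of S102 concrete, here is a minimal sketch assuming each historical day already carries a hard cluster label from S101 and a 4-tuple external feature group (date type, temperature interval, weather condition, season type); the tuple encodings are hypothetical.

```python
from collections import Counter, defaultdict

def classify_feature_groups(cluster_labels, feature_groups):
    """Assign every external feature group to the cluster where it occurs most.

    cluster_labels : list of cluster indices, one per historical day (from S101)
    feature_groups : list of (date_type, temp_interval, weather, season) tuples,
                     one per historical day (encodings are hypothetical)
    Ties are broken in favour of the cluster with the most samples overall.
    """
    counts = defaultdict(Counter)            # feature group -> Counter of clusters
    cluster_sizes = Counter(cluster_labels)
    for label, group in zip(cluster_labels, feature_groups):
        counts[group][label] += 1
    final = {}
    for group, per_cluster in counts.items():
        # rank by occurrence count first, then by overall cluster size
        best_cluster, _ = max(per_cluster.items(),
                              key=lambda kv: (kv[1], cluster_sizes[kv[0]]))
        final[group] = best_cluster
    return final

# usage with hypothetical encodings:
# mapping = classify_feature_groups(labels,
#                                   [("workday", "20-25C", "sunny", "summer"), ...])
```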
S103, classifying according to similar days to which days to be predicted belong, selecting and constructing a training sample set;
and selecting a prediction day of the distribution transformer, selecting final classification according to the date type, the temperature interval, the weather condition and the season type of the prediction day, and selecting data meeting the conditions from the final classification samples.
The selecting the data meeting the conditions comprises the following steps: and taking the historical date of the same category as the predicted date in the final cluster as a reference, selecting training samples from the historical similar-date cluster, and constructing samples X (-21), L (-14), L (-7), L (-3), L (-2) and L (-1), wherein L is load curve data in the similar-date cluster, and (-N) is predicted N days in the same category, and if the date does not exist in the same category, replacing the date closest to the date in the same category.
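A possible way to assemble the input sample X = [L(-21), L(-14), L(-7), L(-3), L(-2), L(-1)] for one prediction day is shown here; the pandas date-indexed layout and the nearest-date fallback implementation are assumptions about data organisation, not something prescribed by the patent.

```python
import numpy as np
import pandas as pd

LAGS = [21, 14, 7, 3, 2, 1]   # days before the prediction day, as in the patent

def build_input_sample(loads, category_dates, prediction_day):
    """Concatenate the load curves L(-21), L(-14), L(-7), L(-3), L(-2), L(-1).

    loads          : DataFrame indexed by date, one row = one daily load curve
    category_dates : list of dates belonging to the same final category
    prediction_day : pd.Timestamp of the day to be predicted
    If a lagged date is missing from the category, the nearest same-category
    date replaces it, as described above.
    """
    pieces = []
    for lag in LAGS:
        wanted = prediction_day - pd.Timedelta(days=lag)
        if wanted in category_dates:
            chosen = wanted
        else:
            # fall back to the same-category date closest to the wanted one
            chosen = min(category_dates, key=lambda d: abs(d - wanted))
        pieces.append(loads.loc[chosen].to_numpy())
    return np.concatenate(pieces)            # input sample X for this prediction day

# the output sample Y is simply the load curve of the prediction day itself:
# Y = loads.loc[prediction_day].to_numpy()
```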
Constructing the training sample set comprises: taking the selected load data as the input sample X and the load curve data of the prediction day as the output sample Y, then selecting a new prediction day and repeating the selection until a complete training sample set is formed; finally, wavelet decomposition and single-branch reconstruction are applied to the training sample set to form the corresponding four load subsequence training sample sets [d1, d2, d3, a3].
The wavelet decomposition and single-branch reconstruction comprise: selecting the wavelet basis function and the number of decomposition levels. According to the trade-off among time-frequency compact support, vanishing moments and regularity, Daubechies 4 (db4) is selected as the wavelet basis function. For the decomposition scale of the wavelet transform: if the scale is too small, the detailed information of each frequency component of the load sequence cannot be obtained; if it is too large, more models are needed to predict the components, each model introduces some error into the prediction result, and the computation slows down. Considering the number of features selected for the SVM samples and the prediction efficiency, the method decomposes over 3 levels; the decomposition formula is:
$$W_f(a,b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{+\infty} f(t)\, \psi\!\left(\frac{t-b}{a}\right) dt$$

where a is the frequency-related scale factor, b is the time-related translation factor, and W_f(a, b) is the wavelet component of scale a contained in the original signal f(t) at time b. The original load sequence is decomposed into 4 subsequences denoted [d1, d2, d3, a3], where d_j are the detail coefficients of level j and a_3 are the level-3 approximation coefficients. The single-branch reconstruction satisfies:

$$A_{j-1}(t) = A_j(t) + D_j(t), \qquad j = 3, 2, 1$$

$$f(t) = A_3(t) + D_3(t) + D_2(t) + D_1(t)$$

where A_j(t) and D_j(t) denote the single-branch reconstructions of a_j and d_j respectively, and A_0(t) = f(t).
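The 3-level db4 decomposition and single-branch reconstruction into [d1, d2, d3, a3] can be sketched with the PyWavelets package; the choice of PyWavelets, its default symmetric boundary handling and the trimming of padding are assumptions of this illustration.

```python
import numpy as np
import pywt

def decompose_load(load_curve, wavelet="db4", level=3):
    """Split a load sequence into single-branch reconstructions [D1, D2, D3, A3].

    Each returned branch has the length of the input, and their sum
    recovers the original signal (f = A3 + D3 + D2 + D1).
    """
    coeffs = pywt.wavedec(load_curve, wavelet, level=level)   # [a3, d3, d2, d1]
    branches = {}
    for i, name in enumerate(["a3", "d3", "d2", "d1"]):
        # keep only one coefficient set, zero the others, then reconstruct
        only_this = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        rec = pywt.waverec(only_this, wavelet)
        branches[name] = rec[: len(load_curve)]               # trim possible padding
    return branches

# sanity check on a synthetic 96-point load curve
# curve = np.sin(np.linspace(0, 4 * np.pi, 96)) + 0.1 * np.random.randn(96)
# b = decompose_load(curve)
# assert np.allclose(curve, b["a3"] + b["d3"] + b["d2"] + b["d1"], atol=1e-8)
```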
s104, inputting the training set into training parameters of the SVM model, and solving each wavelet subsequence of the distribution transformation prediction day. The prediction of the distribution transformer load data related by the method belongs to a regression problem, and utilizes a regression function of an SVM model:
Figure BDA0002789131510000084
in the formula: k (x)iAnd x) is a kernel function, and the method adopts a Gaussian radial basis function as the kernel function; b is an offset.
Select and optimize the training parameters of the model. The SVM has only 3 training parameters, namely the sensitivity coefficient ε, the penalty factor c and the kernel parameter g. The method uses cross validation to optimize the SVM parameters: the penalty parameter c and the kernel parameter g are first varied over given ranges, the accuracy of each selected (c, g) combination is evaluated, and finally the c and g with the highest accuracy are chosen as the optimal parameters, as sketched below.
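A sketch of the cross-validated search over the penalty parameter c and the kernel parameter g for one subsequence model, using scikit-learn's SVR with an RBF kernel; the parameter grids, the 5-fold split, the MAE scoring and fitting a single output point at a time are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

def tune_svr(X_train, y_train):
    """Cross-validated search over the penalty C and the RBF kernel width gamma.

    X_train : (n_samples, n_features) wavelet-subsequence input samples
    y_train : (n_samples,) one output point of the subsequence to predict
    The grids and 5-fold CV are illustrative; epsilon is left at its default.
    """
    grid = {
        "C": 2.0 ** np.arange(-2, 9),        # penalty parameter c
        "gamma": 2.0 ** np.arange(-8, 3),    # kernel parameter g
    }
    search = GridSearchCV(SVR(kernel="rbf"), grid, cv=5,
                          scoring="neg_mean_absolute_error")
    search.fit(X_train, y_train)
    return search.best_estimator_, search.best_params_

# model, params = tune_svr(X_train, y_train)   # params holds the optimal (C, gamma)
```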
The load subsequence training sample sets are then input, and each wavelet subsequence of the distribution transformer prediction day is computed separately; superposing the four subsequence predictions gives the predicted daily load (S105).
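Finally, a sketch of S104-S105: one model per wavelet subsequence is trained with the parameters found above and the four subsequence predictions are superposed into the predicted daily load; wrapping SVR in a multi-output regressor (one SVR per load point) is an assumption about how the per-point regression could be organised.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

SUBSEQUENCES = ["d1", "d2", "d3", "a3"]

def train_subsequence_models(train_sets, params):
    """train_sets: dict name -> (X, Y) with Y of shape (n_samples, n_points).
    params: dict name -> {"C": ..., "gamma": ...} chosen by the CV search above."""
    models = {}
    for name in SUBSEQUENCES:
        X, Y = train_sets[name]
        base = SVR(kernel="rbf", C=params[name]["C"], gamma=params[name]["gamma"])
        models[name] = MultiOutputRegressor(base).fit(X, Y)   # one SVR per load point
    return models

def predict_daily_load(models, inputs):
    """inputs: dict name -> feature vector for the prediction day.
    Returns the predicted daily load as the sum of the four subsequence predictions."""
    parts = [models[name].predict(inputs[name].reshape(1, -1))[0]
             for name in SUBSEQUENCES]
    return np.sum(parts, axis=0)    # S105: superpose d1 + d2 + d3 + a3
```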
The distribution transformer load prediction method provided by the invention first groups the load data into similar days with a fuzzy C-means algorithm and then refines the similar-day classification by counting external feature groups, so the influence of multiple factors on the load curve is fully considered. By adopting a wavelet support vector machine algorithm the method inherits the advantages of both the SVM algorithm and wavelet analysis, is well suited to small sample data, and improves the prediction accuracy of the model by feeding the wavelet-decomposed load data into the SVM for training. The embodiment provided by the invention is highly general, classifies well, avoids the classification result being unduly affected by the quantization coefficient, and improves prediction accuracy for small samples.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable storage medium, and the storage medium may include: a Read Only Memory (ROM), a Random Access Memory (RAM), a magnetic or optical disk, or the like.
In addition, the distribution transformer load prediction method provided by the embodiment of the present invention has been described in detail above. Specific examples are used herein to explain the principle and implementation of the present invention, and the description of the above embodiment is only intended to help in understanding the method of the present invention and its core idea. Meanwhile, a person skilled in the art may, following the idea of the present invention, vary the specific embodiments and the application scope. In summary, the content of this specification should not be construed as limiting the present invention.

Claims (5)

1. A distribution transformer load prediction method, characterized by comprising the following steps:
clustering the distribution transformer historical load curve by adopting a fuzzy C-means algorithm to obtain a historical similar day set;
counting the number of external feature groups in each cluster;
classifying the day to be predicted according to the similar-day cluster it belongs to, and selecting and constructing a training sample set;
training the parameters of an SVM model with the training set, and solving each wavelet subsequence of the distribution transformer prediction day;
and superposing the wavelet subsequence prediction results to obtain the predicted daily load of the distribution transformer.
2. The method of claim 1, wherein clustering the historical load curves of the distribution transformer by using a fuzzy C-means algorithm comprises:
2.1 Assume the data set containing n load curve objects is X = {x1, x2, …, xn}, where each object x_i (i = 1, 2, …, n) has m characteristic attributes. The membership matrix of the data set X is defined as U = [u_ij] with dimension c × n, where c is the number of clusters and u_ij is the degree of membership of the j-th object x_j of the data set X to the i-th cluster; the higher u_ij, the more likely the object falls into that cluster. The memberships u_ij satisfy:
1) for any object j (j = 1, 2, …, n):

$$\sum_{i=1}^{c} u_{ij} = 1$$

2) 0 ≤ u_ij ≤ 1, 1 ≤ i ≤ c, 1 ≤ j ≤ n;
2.2 From the data set X = {x1, x2, …, xn}, randomly select k objects as the initial cluster centers of the k clusters, recorded as c1, c2, …, ck; the k clusters of the data set X are C1, C2, …, Ck;
2.3 Taking the initial cluster centers c1, c2, …, ck as the division reference, initially divide the data set X and calculate the membership u_ij of each object to form the membership matrix U, using:

$$u_{ij} = \frac{1}{\sum_{k=1}^{c} \left( \frac{d_{ij}}{d_{kj}} \right)^{\frac{2}{m-1}}}$$

where m ∈ [1, ∞) is the fuzziness index and

$$d_{ij} = \lVert c_i - x_j \rVert$$

is the distance from object x_j to cluster center c_i (i = 1, 2, …, c);
2.4 Calculate the new cluster centers c_i' and re-cluster according to the membership degree of each object, using:

$$c_i' = \frac{\sum_{j=1}^{n} u_{ij}^{m}\, x_j}{\sum_{j=1}^{n} u_{ij}^{m}}$$
2.5 Calculate the value of the dissimilarity cost function J and compare it with the set threshold ε: if J < ε, terminate the algorithm and record c_i' as the final cluster centers; otherwise repeat 2.3 and 2.4 until the condition is met. The cost function is:

$$J = \sum_{i=1}^{c} \sum_{j=1}^{n} u_{ij}^{m}\, d_{ij}^{2}$$
the date type, the daily average temperature, the weather condition, and the season type are labeled.
3. The method of claim 1, wherein counting the number of external feature groups in each cluster comprises:
counting the number of external feature groups consisting of date types, daily average temperature, weather conditions and season types in each cluster;
categorizing each single external feature group, namely comparing its number of occurrences in each cluster and selecting the cluster with the largest count as the final classification of that external feature group.
4. The method of claim 1, wherein the selecting and constructing the training sample set comprises:
4.1 Select a prediction day for the distribution transformer and determine the final classification according to the date type, temperature interval, weather condition and season type of the prediction day;
4.2 Select the training samples: taking the historical dates of the same category as the prediction day in the final cluster as the reference, select training samples from the historical similar-day cluster and construct the input sample X = [L(-21), L(-14), L(-7), L(-3), L(-2), L(-1)], where L is load curve data in the similar-day cluster and (-N) denotes the day N days before the prediction day within the same category; if such a date does not exist in the category, the closest date in the same category is used instead;
4.3 Construct the training sample set: take the selected load data as the input sample X and the load curve data of the prediction day as the output sample Y, then reselect a prediction day and repeat the selection step until a complete training sample set is formed;
4.4 Apply wavelet decomposition and single-branch reconstruction to the training sample set to form the corresponding four load subsequence training sample sets [d1, d2, d3, a3];
4.4.1 Wavelet decomposition and single-branch reconstruction comprise: selecting the wavelet basis function and the number of decomposition levels. According to the trade-off among time-frequency compact support, vanishing moments and regularity, Daubechies 4 (db4) is selected as the wavelet basis function. For the decomposition scale of the wavelet transform: if the scale is too small, the detailed information of each frequency component of the load sequence cannot be obtained; if it is too large, more models are needed to predict the components, each model introduces some error into the prediction result, and the computation slows down. Considering the number of features selected for the SVM samples and the prediction efficiency, the method decomposes over 3 levels; the decomposition formula is:
$$W_f(a,b) = \frac{1}{\sqrt{a}} \int_{-\infty}^{+\infty} f(t)\, \psi\!\left(\frac{t-b}{a}\right) dt$$

where a is the frequency-related scale factor, b is the time-related translation factor, and W_f(a, b) is the wavelet component of scale a contained in the original signal f(t) at time b. The original load sequence is decomposed into 4 subsequences denoted [d1, d2, d3, a3], where d_j are the detail coefficients of level j and a_3 are the level-3 approximation coefficients, computed recursively (Mallat algorithm) as:

$$a_j[k] = \sum_{l} h[l-2k]\, a_{j-1}[l]$$

$$d_j[k] = \sum_{l} g[l-2k]\, a_{j-1}[l]$$

where h and g are the low-pass and high-pass decomposition filters of the db4 wavelet and a_0 is the original load sequence.
5. The method of claim 1, wherein training the parameters of an SVM model with the training set to find each wavelet subsequence of the distribution transformer prediction day comprises:
5.1 Select the model function and the kernel function. The distribution transformer load prediction addressed by the method is a regression problem, so the regression function of the SVM model is used:

$$f(x) = \sum_{i=1}^{n} (\alpha_i - \alpha_i^{*})\, K(x_i, x) + b$$

where K(x_i, x) is the kernel function, for which the method adopts the Gaussian radial basis function, b is the bias, and α_i, α_i* are the Lagrange multipliers;
5.2 Select and optimize the training parameters of the model: the SVM has only 3 training parameters, namely the sensitivity coefficient ε, the penalty factor c and the kernel parameter g; the method uses cross validation to optimize the SVM parameters: the penalty parameter c and the kernel parameter g are first varied over given ranges, the accuracy of each selected (c, g) combination is evaluated, and finally the c and g with the highest accuracy are chosen as the optimal parameters;
5.3 Input the load subsequence training sample sets and compute each wavelet subsequence of the distribution transformer prediction day separately.
CN202011308820.9A 2020-11-20 2020-11-20 Distribution transformer load prediction method Pending CN112330052A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011308820.9A CN112330052A (en) 2020-11-20 2020-11-20 Distribution transformer load prediction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011308820.9A CN112330052A (en) 2020-11-20 2020-11-20 Distribution transformer load prediction method

Publications (1)

Publication Number Publication Date
CN112330052A true CN112330052A (en) 2021-02-05

Family

ID=74321855

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011308820.9A Pending CN112330052A (en) 2020-11-20 2020-11-20 Distribution transformer load prediction method

Country Status (1)

Country Link
CN (1) CN112330052A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113489005A (en) * 2021-07-22 2021-10-08 云南电网有限责任公司昆明供电局 Distribution transformer load estimation method and system for power distribution network load flow calculation
CN113919600A (en) * 2021-12-08 2022-01-11 国网湖北省电力有限公司经济技术研究院 Resident load ultra-short term prediction method
CN114548845A (en) * 2022-04-27 2022-05-27 北京智芯微电子科技有限公司 Distribution network management method, device and system
CN115829152A (en) * 2022-12-21 2023-03-21 杭州易龙电安科技有限公司 Power load prediction method, device and medium based on machine learning algorithm

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263823A (en) * 2019-05-29 2019-09-20 广东工业大学 A kind of short-term load forecasting method based on fuzzy clustering
CN111754029A (en) * 2020-06-08 2020-10-09 深圳供电局有限公司 Community load prediction system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263823A (en) * 2019-05-29 2019-09-20 广东工业大学 A kind of short-term load forecasting method based on fuzzy clustering
CN111754029A (en) * 2020-06-08 2020-10-09 深圳供电局有限公司 Community load prediction system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
杨浩等: "Research on power load classification based on an adaptive fuzzy C-means algorithm" (基于自适应模糊C均值算法的电力负荷分类研究), Power System Protection and Control *
詹仁俊: "Short-term load forecasting of distribution networks with a wavelet support vector machine based on K-means clustering and its application" (基于K-means聚类的小波支持向量机配电网短期负荷预测及应用), Distribution & Utilization *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113489005A (en) * 2021-07-22 2021-10-08 云南电网有限责任公司昆明供电局 Distribution transformer load estimation method and system for power distribution network load flow calculation
CN113489005B (en) * 2021-07-22 2023-07-25 云南电网有限责任公司昆明供电局 Distribution transformer load estimation method and system for power flow calculation of distribution network
CN113919600A (en) * 2021-12-08 2022-01-11 国网湖北省电力有限公司经济技术研究院 Resident load ultra-short term prediction method
CN114548845A (en) * 2022-04-27 2022-05-27 北京智芯微电子科技有限公司 Distribution network management method, device and system
CN114548845B (en) * 2022-04-27 2022-07-12 北京智芯微电子科技有限公司 Distribution network management method, device and system
CN115829152A (en) * 2022-12-21 2023-03-21 杭州易龙电安科技有限公司 Power load prediction method, device and medium based on machine learning algorithm
CN115829152B (en) * 2022-12-21 2023-07-07 杭州易龙电安科技有限公司 Power load prediction method, device and medium based on machine learning algorithm

Similar Documents

Publication Publication Date Title
CN113962364B (en) Multi-factor power load prediction method based on deep learning
CN110070145B (en) LSTM hub single-product energy consumption prediction based on incremental clustering
CN112330052A (en) Distribution transformer load prediction method
CN112561156A (en) Short-term power load prediction method based on user load mode classification
CN110782658B (en) Traffic prediction method based on LightGBM algorithm
CN110245783B (en) Short-term load prediction method based on C-means clustering fuzzy rough set
CN114792156B (en) Photovoltaic output power prediction method and system based on curve characteristic index clustering
CN111160626B (en) Power load time sequence control method based on decomposition fusion
CN112734135B (en) Power load prediction method, intelligent terminal and computer readable storage medium
CN111915092B (en) Ultra-short-term wind power prediction method based on long-short-term memory neural network
CN108960488B (en) Saturated load spatial distribution accurate prediction method based on deep learning and multi-source information fusion
CN111008726B (en) Class picture conversion method in power load prediction
CN114862032B (en) XGBoost-LSTM-based power grid load prediction method and device
CN112232561A (en) Power load probability prediction method based on constrained parallel LSTM quantile regression
CN116187835A (en) Data-driven-based method and system for estimating theoretical line loss interval of transformer area
CN114936599A (en) Base station energy consumption abnormity monitoring method and system based on wavelet decomposition and migration discrimination
CN115470962A (en) LightGBM-based enterprise confidence loss risk prediction model construction method
CN116169670A (en) Short-term non-resident load prediction method and system based on improved neural network
CN115600729A (en) Grid load prediction method considering multiple attributes
CN115640901A (en) Small sample load prediction method based on hybrid neural network and generation countermeasure
CN115660725A (en) Method for depicting multi-dimensional energy user portrait
CN116826710A (en) Peak clipping strategy recommendation method and device based on load prediction and storage medium
CN111882114A (en) Short-term traffic flow prediction model construction method and prediction method
CN113762591B (en) Short-term electric quantity prediction method and system based on GRU and multi-core SVM countermeasure learning
Yang Combination forecast of economic chaos based on improved genetic algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210205)