CN111461400A - Load data completion method based on Kmeans and T-LSTM - Google Patents
- Publication number
- CN111461400A CN111461400A CN202010128406.3A CN202010128406A CN111461400A CN 111461400 A CN111461400 A CN 111461400A CN 202010128406 A CN202010128406 A CN 202010128406A CN 111461400 A CN111461400 A CN 111461400A
- Authority
- CN
- China
- Prior art keywords
- data
- load
- day
- load data
- lstm
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/04—Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/10—Complex mathematical operations
- G06F17/18—Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/10—Pre-processing; Data cleansing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/06—Energy or water supply
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention discloses a load data completion method based on Kmeans and T-LSTM, which relates to data completion methods.
Description
Technical Field
The invention relates to a data completion method, and in particular to a load data completion method based on Kmeans and T-LSTM.
Background
Against the current backdrop, the rapid development of information technology and diversified data acquisition channels have caused the data volume of organizations in many industries to grow dramatically; for example, the power load data held by national grids is already extremely large and still growing rapidly. Experience shows that such data often contains much usable content. If the information hidden in the data can be analyzed effectively and completely, its potential value extracted, and upper-layer applications built on it, this is very meaningful.
However, most theoretical innovations, developments and concrete implementations in the data mining field assume an ideal, complete data set, whereas load data collected by real terminals is missing and incomplete for various reasons such as terminal damage and communication failure. Incomplete load data can distort or invalidate the results of data mining, or even lead to wrong conclusions. Therefore, completing missing data is an important, non-negligible link in the data mining process.
Current data completion methods include linear completion, interpolation completion, and the like. The idea of the linear completion algorithm is to take the missing value as the average of the data at the moment before and the moment after the missing point; the method is simple, but it deviates greatly from the real value and often fails to achieve the expected effect. Moreover, many completion algorithms do not classify historical load data, so the model is affected by sudden changes in the load data and can produce excessive errors. In addition, a time-series LSTM (Long Short-Term Memory) network has a good completion effect when the time intervals are continuous and regular, but in practice missing data occur at random, so data completion with a plain LSTM network cannot meet the requirements.
Disclosure of Invention
The technical problem and task of the invention are to improve upon the prior technical schemes by providing a load data completion method based on Kmeans and T-LSTM, so as to achieve accurate data completion.
A load data completion method based on Kmeans and T-LSTM comprises the following steps:
1) constructing a data model;
101) acquiring load data in batches;
102) randomly digging out continuous points in the load data as load data to be supplemented;
103) performing Kmeans clustering on the load data;
104) obtaining an optimal K classification mode through Kmeans clustering, dividing the total sample into K classes according to the K classification mode, wherein each class corresponds to different load intervals, and obtaining load intervals of the K classifications;
105) calculating the average value of the load, and carrying out normalization processing on the load data;
106) determining the load interval according to the load average, and inputting the normalized load data into the T-LSTM neural network corresponding to that load interval for training, so as to obtain the data model corresponding to the load interval;
2) regularly taking the load data of the day of the data to be supplemented;
3) calculating the average value of the load data of the current day;
4) obtaining a corresponding data model according to the average value;
5) and inputting the load data to be supplemented into the corresponding data model, and calculating to obtain the supplemented complete load data.
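The normalization in step 105) is not pinned down in the text; min-max scaling to [0, 1] is the usual choice for load curves and is sketched below (function names and sample values are illustrative, not from the patent):

```python
import numpy as np

def minmax_normalize(x):
    """Scale a load curve to [0, 1]; also return the scale so that
    model outputs can be mapped back to real load units."""
    lo, hi = float(np.min(x)), float(np.max(x))
    return (x - lo) / (hi - lo), (lo, hi)

def denormalize(y, lo, hi):
    # inverse transform: map [0, 1] values back to load units
    return y * (hi - lo) + lo

# illustrative load values for one day (not from the patent's data set)
load = np.array([320.0, 410.0, 505.0, 380.0])
norm, (lo, hi) = minmax_normalize(load)
recovered = denormalize(norm, lo, hi)
```

The stored `(lo, hi)` pair lets step 5)'s completed output be converted back into real load values.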
As a preferable technical means: when the data model is constructed:
in step 101), the acquired load data comprises the load data of a certain day for a certain unit, together with the load data of the day before and of the seventh day before;
in step 102), randomly digging out continuous points in the load data of a certain day as load data to be supplemented;
in step 105), the load average of that day is calculated, and the load data of that day, of the day before, and of the seventh day before are normalized.
As a preferable technical means: in the step 2), load data of the day before and the seventh day before the data to be supplemented are obtained in addition to the load data of the day before the data to be supplemented;
in step 5), besides the load data to be supplemented, the normalized load data of the day before and of the seventh day before are input into the corresponding data model; the data model completes the data according to the load data of the current day, the day before, and the seventh day before.
As a preferable technical means: and 104) obtaining a K value by using an elbow method when Kmeans clustering is carried out.
As a preferable technical means: when the data model is constructed in the step 1), a verification step is finally included, the data with the loss is input into the corresponding data model after being normalized, and the historical information at the moment is supplemented, the historical data comprise the historical data of yesterday and seven days ago, a complete sequence is finally obtained, then the complete sequence is compared with the real data to obtain an error, and when the error is converged, the training is finished, and a final data model is obtained and stored.
The beneficial effect of this technical scheme is that the collected public-transformer load data are clustered with the Kmeans method, so that load data with similar characteristics are grouped into one class and the interference of data with different characteristics is eliminated; data of the same class are then input into the T-LSTM neural network. Because the design of T-LSTM takes the missing pattern of the load data into account (some missing points are consecutive, others are not) and Δt distinguishes these cases well, the neural network learns the interval information and can reflect the real load values of the missing data more accurately.
Drawings
FIG. 1 is a flow chart of the present invention.
FIG. 2 is a graph of the sum of squared clustering errors versus k in accordance with the present invention.
FIG. 3 is a diagram of the LSTM network structure in accordance with the present invention.
FIG. 4 is a diagram of the T-LSTM structure of the present invention.
FIG. 5 is a data model training diagram of the present invention.
FIG. 6 is a test flow diagram of the present invention.
Detailed Description
The technical scheme of the invention is further explained in detail by combining the drawings in the specification.
As shown in fig. 1, the present invention comprises the steps of:
1) constructing a data model;
101) acquiring in batches the load data of a certain unit for a certain day, the day before, and the seventh day before;
102) randomly digging out continuous points in the load data as load data to be supplemented;
103) performing Kmeans clustering on the load data;
104) obtaining an optimal K classification mode through Kmeans clustering, dividing the total sample into K classes according to the K classification mode, wherein each class corresponds to different load intervals, and obtaining load intervals of the K classifications;
105) calculating the load average of that day, and normalizing the load data of that day, of the day before, and of the seventh day before;
106) determining the load interval according to the load average, and inputting the normalized load data into the T-LSTM neural network corresponding to that load interval for training, so as to obtain the data model corresponding to the load interval;
2) regularly taking the load data of the day of the data to be supplemented, of the day before, and of the seventh day before;
3) calculating the average value of the load data of the current day;
4) obtaining a corresponding data model according to the average value;
5) and inputting the normalized load data to be supplemented, the load data of the day before, and the load data of the seventh day before into the corresponding data model, and calculating to obtain the supplemented complete load data.
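Steps 2)-5) amount to a dispatch by mean load. A minimal sketch, assuming the K load intervals and trained models are already available (all names and boundary values below are hypothetical):

```python
import numpy as np

def pick_model(day_load, intervals, models):
    """Route a day's partially missing load curve to the data model
    whose load interval contains the day's average load."""
    mean = float(np.nanmean(day_load))     # NaN marks the missing points
    for (lo, hi), model in zip(intervals, models):
        if lo <= mean < hi:
            return model
    return models[-1]                      # fallback: last interval is open-ended

# illustrative intervals as the Kmeans step might produce (not real boundaries)
intervals = [(0.0, 200.0), (200.0, 800.0), (800.0, float("inf"))]
models = ["model_low", "model_mid", "model_high"]
day = np.array([150.0, np.nan, 160.0, 155.0])   # one day with a missing point
chosen = pick_model(day, intervals, models)
```

The chosen model then receives the normalized current-day, previous-day and seventh-day-before curves as in step 5).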
The following will be further explained with respect to some steps:
Kmeans clustering: the K value is obtained with the elbow method, because the curvature at the elbow is largest and the clustering effect there is best.
This technical scheme uses the elbow method to determine the cluster number k. The core idea of the elbow method is as follows: when k is smaller than the true number of clusters, increasing k greatly increases the cohesion of each cluster, so the within-cluster sum of squared errors over all samples drops sharply; once k reaches the true number of clusters, the return on cohesion from further increasing k falls off quickly, so the drop in the sum of squared errors shrinks rapidly and then flattens as k keeps growing. The plot of the clustering sum of squared errors against k is thus elbow-shaped, and the k at the elbow (the point of highest curvature) is the true number of clusters in the data; this characteristic is used to determine the K value.
Because the power supply characteristics of different public transformers differ, the daily load curve reflects each transformer's characteristics and the absolute load values differ greatly. The scheme therefore classifies the data by cluster analysis, eliminating interference between samples with different power supply characteristics: the total samples are divided into several classes by Kmeans clustering, and each class serves as the training sample set of one data completion network. The specific procedure: the 96 daily load values of 4,000 public transformers of the Jinhua headquarters are taken as sample features and input into the Kmeans clustering model, and the relation between the clustering sum of squared errors (the sum of squared differences between each sample's load values and those of its centroid) and k is plotted as shown in FIG. 2. It is observed that the curve falls rapidly before k = 3 and flattens from 3 onward, so the number of Kmeans clusters is set to 3 (the point of highest curvature).
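The elbow procedure described above can be reproduced with a minimal K-means on synthetic data; the three load bands below stand in for the real clusters, and the deterministic initialization is an assumption made for reproducibility, not the patent's method:

```python
import numpy as np

def kmeans(X, k, n_iter=50):
    """Minimal K-means with deterministic init: centroids start at samples
    evenly spaced by daily mean load. Returns (labels, sse)."""
    order = np.argsort(X.mean(axis=1))
    centroids = X[order[np.linspace(0, len(X) - 1, k).astype(int)]]
    for _ in range(n_iter):
        # assign each sample to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute centroids; keep the old one if a cluster goes empty
        centroids = np.stack([X[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    sse = float(((X - centroids[labels]) ** 2).sum())
    return labels, sse

# synthetic daily profiles (96 points) in three load bands -- illustrative only
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(mu, 5.0, size=(40, 96)) for mu in (100, 500, 1000)])

sse_curve = [kmeans(X, k)[1] for k in range(1, 7)]
drops = -np.diff(sse_curve)
# the elbow is where the SSE drop is largest relative to the next drop
elbow_k = int(np.argmax(drops[:-1] / np.maximum(drops[1:], 1e-12))) + 2
```

On real load data the curve is noisier, but the same "largest relative drop" reading of FIG. 2 gives k = 3.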
T-LSTM (a variant of the long short-term memory network): the T-LSTM neural network is used because of the uncertainty of missing load data (multiple consecutive points may be missing), and T-LSTM handles this kind of missing-data completion well.
LSTM was originally proposed by Hochreiter et al. and later improved by Graves; it is an improved version of the recurrent neural network, proposed to address the gradient explosion and long-term dependence problems of the native RNN. As shown in FIG. 3, the main work of LSTM is to modify the internal structure of the RNN so as to control how long information is remembered by adding "gates"; for example, the "forget gate" filters out some information so that longer-term information can be "remembered".
The formula is as follows:
$g_t = \tanh(W_g x_t + U_g h_{t-1} + b_g)$
$i_t = \sigma(W_i x_t + U_i h_{t-1} + b_i)$
$f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)$
$o_t = \sigma(W_o x_t + U_o h_{t-1} + b_o)$
$c_t = f_t \cdot c_{t-1} + i_t \cdot g_t$
$h_t = o_t \cdot \tanh(c_t)$
wherein $h_t, c_t \in \mathbb{R}^H$, $H$ is the hidden layer size, $\sigma(\cdot)$ is the sigmoid function, and $i$, $f$, $o$, $g$ denote the input gate, the forget gate, the output gate, and the candidate cell state, respectively.
$\{W_g, U_g, b_g\}$, $\{W_i, U_i, b_i\}$, $\{W_f, U_f, b_f\}$, $\{W_o, U_o, b_o\}$ are the network parameters of the respective parts. More specifically, the input gate $i$ adjusts the degree to which new input data is fed into the unit, the forget gate $f$ adjusts the degree to which history is forgotten, and the output gate $o$ determines the weights of the different parts when computing the output.
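The six gate equations can be exercised directly with untrained weights; the sketch below implements one LSTM step exactly as written (the sizes and the toy input sequence are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, P):
    """One step of the LSTM gate equations."""
    g = np.tanh(P["Wg"] @ x + P["Ug"] @ h_prev + P["bg"])  # candidate state g_t
    i = sigmoid(P["Wi"] @ x + P["Ui"] @ h_prev + P["bi"])  # input gate i_t
    f = sigmoid(P["Wf"] @ x + P["Uf"] @ h_prev + P["bf"])  # forget gate f_t
    o = sigmoid(P["Wo"] @ x + P["Uo"] @ h_prev + P["bo"])  # output gate o_t
    c = f * c_prev + i * g                                 # cell state c_t
    h = o * np.tanh(c)                                     # hidden state h_t
    return h, c

# toy sizes: scalar load input, hidden size 4; weights random and untrained
rng = np.random.default_rng(0)
H, D = 4, 1
P = {m + g: rng.normal(0.0, 0.1, size=(H, D) if m == "W" else (H, H) if m == "U" else H)
     for g in "gifo" for m in ("W", "U", "b")}
h, c = np.zeros(H), np.zeros(H)
for x in (0.2, 0.5, 0.3):                 # a short normalized load sequence
    h, c = lstm_step(np.array([x]), h, c, P)
```

Since $h_t = o_t \cdot \tanh(c_t)$ with $o_t \in (0, 1)$, every component of the hidden state stays inside $(-1, 1)$.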
However, for data with missing points the input is discontinuous and the time intervals are irregular, which the LSTM network cannot handle well; this technical scheme therefore adopts the T-LSTM network, which takes the time interval into account. As shown in FIG. 4, Δt is added at the input layer and the other parts are unchanged, so that the network learns the time-interval information.
The improvement of T-LSTM over LSTM is as follows:
$g(\Delta t) = 1 / \log(e + \Delta t)$
$h_t = o_t \cdot \tanh(c_t)$
where $\Delta t$ is the time interval of the current input. In contrast to LSTM, T-LSTM considers not only the specific value of the current input but also its time interval, which solves the problem of inconsistent intervals in a time series with missing points: each T-LSTM unit receives the previous cell state $c_{t-1}$, the previous hidden state $h_{t-1}$, the current input value $x_t$ and the time interval $\Delta t$, obtains this unit's cell state $c_t$ and hidden state $h_t$, and passes them on to the next T-LSTM unit.
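The description gives only the decay function g(Δt) = 1/log(e + Δt). The sketch below pairs it with the cell-state adjustment of Baytas et al.'s T-LSTM, which is an assumption about the unstated details; `W_d` and `b_d` are hypothetical parameters:

```python
import numpy as np

def time_decay(dt):
    """The monotone decay g(Δt) = 1 / log(e + Δt) from the description."""
    return 1.0 / np.log(np.e + dt)

def adjust_cell_state(c_prev, dt, W_d, b_d):
    """Sketch of the T-LSTM memory adjustment (after Baytas et al.):
    extract a learned short-term component of the previous cell state,
    discount it by g(Δt), and recombine with the long-term remainder."""
    c_short = np.tanh(W_d @ c_prev + b_d)   # short-term memory component
    c_long = c_prev - c_short               # long-term remainder, untouched
    return c_long + time_decay(dt) * c_short

rng = np.random.default_rng(0)
H = 4
W_d, b_d = rng.normal(0.0, 0.1, (H, H)), np.zeros(H)  # hypothetical parameters
c_prev = rng.normal(0.0, 1.0, H)
c_small_gap = adjust_cell_state(c_prev, 0.0, W_d, b_d)   # Δt = 0: no discount
c_large_gap = adjust_cell_state(c_prev, 10.0, W_d, b_d)  # larger gap, more decay
```

Because g(0) = 1, a zero gap leaves the cell state unchanged, while larger gaps progressively forget the short-term component.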
Computing the average of the load data to be supplemented: this is used to determine which class of load interval the data to be supplemented belongs to.
Training a data model:
As shown in FIG. 5, in order to improve accuracy, a final verification step is included when the data model is constructed in step 1): the data with missing points is normalized and input into the corresponding data model, supplemented with the historical information of that moment (the historical data of the previous day and of seven days before); a complete sequence is finally obtained and compared with the real data to obtain the error; when the error converges, training ends and the final data model is obtained and stored.
The data model training is described below, taking the Jinhua headquarters data as an example:
1. public variable load data of 5 thousand stations of the Jinhua Ben Ministry from 2018, 11 months to 2019, 5 months and 8 months in total are prepared.
2. Process the training data set: mainly dig out consecutive missing points as the data to be supplemented, and compute the load average of the day to be supplemented.
3. Inputting the processed training data into Kmeans for clustering to obtain K classes.
4. Normalize each of the K classes of data together with the load data of the previous day and of the seventh day before, and input them into the T-LSTM network for encoding to obtain the temporal context.
5. Input the obtained temporal context into the LSTM decoder and compare the output with the real data to obtain the error.
6. If the error does not converge, continue training.
7. When the error converges, training ends; the K models are obtained and stored.
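The stopping rule in steps 5-7 (train until the error converges) can be sketched independently of the network itself; `run_epoch` stands in for one training epoch returning the current error, and all names and thresholds are illustrative, not the patent's training code:

```python
def train_until_converged(run_epoch, tol=1e-4, patience=3, max_epochs=1000):
    """Keep training while the error still improves; stop once the
    improvement stays below tol for `patience` consecutive epochs."""
    prev, stalled = float("inf"), 0
    for epoch in range(max_epochs):
        err = run_epoch(epoch)
        # count epochs whose improvement over the previous error is negligible
        stalled = stalled + 1 if prev - err < tol else 0
        prev = err
        if stalled >= patience:
            return epoch, err
    return max_epochs - 1, prev

# stand-in error curve decaying toward 0.1 (a real run would train the T-LSTM)
epoch, err = train_until_converged(lambda e: 0.1 + 0.9 * 0.5 ** e)
```

With K clusters, the same loop runs once per cluster, yielding the K stored models.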
With the K models obtained, the data completion flow is described using the Jinhua headquarters data as an example:
the data set was derived from data from 221 days since 11 months of Kinghua, with a total of 174 users, and 96 load points per day. We manually dig out about 1% of the data of the points (approximating the true data loss rate) and all 5 consecutive points are missing, which is closer to the loss of the true case.
The method comprises the following specific steps:
1. data of 221 days from 11 months of 2018 of Jinhua Ben, which includes 174 public transformation users and 96 load data per day, were prepared.
2. Collect in batches the data to be supplemented and the load data of the previous day and of the seventh day before.
3. Manually dig out 5 consecutive points to be supplemented for verification, and compute the average of that day's data.
4. Judge which class of load interval the load average belongs to.
5. Normalize the data.
6. Input the normalized data with its missing values, together with the historical information of that moment (the historical data of the previous day and of the seventh day before), into the trained model of the corresponding class, and finally obtain the supplemented load data.
The mean absolute error and mean absolute percentage error of the test data are computed between the supplemented load data and the original data. The results are shown in Table 1.
The results of this method are shown on the left, and the results of the linear model (missing values obtained by averaging the preceding and the following point) on the right, where mae is the mean absolute error and mape is the mean absolute percentage error. It can be seen that this method performs better than the linear model, with a percentage error of about 10% when load values are large.
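The two reported metrics are standard; a sketch with illustrative numbers (not the patent's test data):

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(y_true - y_pred)))

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0)

# illustrative completed-vs-real values, not the patent's results
y_true = np.array([100.0, 110.0, 120.0])
y_pred = np.array([90.0, 121.0, 120.0])
err_mae, err_mape = mae(y_true, y_pred), mape(y_true, y_pred)
```

Note that mape is undefined at zero true load, one reason it is reported here only for the larger-load cases.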
Table 1: test results
The load data completion method based on Kmeans and T-LSTM shown in FIGS. 1-6 above is a specific embodiment of the present invention, embodying its substantive features and advantages; equivalent modifications of shape, structure and the like, made according to practical use requirements under the teaching of the present invention, all fall within the protection scope of the present invention.
Claims (5)
1. A load data completion method based on Kmeans and T-LSTM, characterized by comprising the following steps:
1) constructing a data model;
101) acquiring load data in batches;
102) randomly digging out continuous points in the load data as load data to be supplemented;
103) performing Kmeans clustering on the load data;
104) obtaining an optimal K classification mode through Kmeans clustering, dividing the total sample into K classes according to the K classification mode, wherein each class corresponds to different load intervals, and obtaining load intervals of the K classifications;
105) calculating the average value of the load, and carrying out normalization processing on the load data;
106) determining the load interval according to the load average, and inputting the normalized load data into the T-LSTM neural network corresponding to that load interval for training, so as to obtain the data model corresponding to the load interval;
2) regularly taking the load data of the day of the data to be supplemented;
3) calculating the average value of the load data of the current day;
4) obtaining a corresponding data model according to the average value;
5) and inputting the load data to be supplemented into the corresponding data model, and calculating to obtain the supplemented complete load data.
2. The load data completion method based on Kmeans and T-LSTM according to claim 1, characterized in that:
when the data model is constructed:
in step 101), the acquired load data comprises the load data of a certain day for a certain unit, together with the load data of the day before and of the seventh day before;
in step 102), randomly digging out continuous points in the load data of a certain day as load data to be supplemented;
in step 105), the load average of that day is calculated, and the load data of that day, of the day before, and of the seventh day before are normalized.
3. The load data completion method based on Kmeans and T-LSTM according to claim 2, characterized in that in step 2), in addition to the load data of the day of the data to be supplemented, the load data of the day before and of the seventh day before are obtained;
in step 5), besides the load data to be supplemented, the normalized load data of the day before and of the seventh day before are input into the corresponding data model; the data model completes the data according to the load data of the current day, the day before, and the seventh day before.
4. The load data completion method based on Kmeans and T-LSTM according to claim 3, characterized in that in step 104), the K value is obtained with the elbow method when performing Kmeans clustering.
5. The load data completion method based on Kmeans and T-LSTM according to claim 2, characterized in that when the data model is constructed in step 1), a final verification step is included: the data with missing points is normalized and input into the corresponding data model, supplemented with the historical information of that moment (the historical data of the previous day and of seven days before); a complete sequence is finally obtained and compared with the real data to obtain the error; when the error converges, training ends and the final data model is obtained and stored.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010128406.3A CN111461400B (en) | 2020-02-28 | 2020-02-28 | Kmeans and T-LSTM-based load data completion method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010128406.3A CN111461400B (en) | 2020-02-28 | 2020-02-28 | Kmeans and T-LSTM-based load data completion method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111461400A true CN111461400A (en) | 2020-07-28 |
CN111461400B CN111461400B (en) | 2023-06-23 |
Family
ID=71682448
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010128406.3A Active CN111461400B (en) | 2020-02-28 | 2020-02-28 | Kmeans and T-LSTM-based load data completion method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111461400B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107833153A (en) * | 2017-12-06 | 2018-03-23 | 广州供电局有限公司 | A kind of network load missing data complementing method based on k means clusters |
CN109598381A (en) * | 2018-12-05 | 2019-04-09 | 武汉理工大学 | A kind of Short-time Traffic Flow Forecasting Methods based on state frequency Memory Neural Networks |
CN109754113A (en) * | 2018-11-29 | 2019-05-14 | 南京邮电大学 | Load forecasting method based on dynamic time warping Yu length time memory |
US20190143517A1 (en) * | 2017-11-14 | 2019-05-16 | Arizona Board Of Regents On Behalf Of Arizona State University | Systems and methods for collision-free trajectory planning in human-robot interaction through hand movement prediction from vision |
CN109934375A (en) * | 2018-11-27 | 2019-06-25 | 电子科技大学中山学院 | Power load prediction method |
CN110245801A (en) * | 2019-06-19 | 2019-09-17 | 中国电力科学研究院有限公司 | A kind of Methods of electric load forecasting and system based on combination mining model |
CN110334726A (en) * | 2019-04-24 | 2019-10-15 | 华北电力大学 | A kind of identification of the electric load abnormal data based on Density Clustering and LSTM and restorative procedure |
CN110674999A (en) * | 2019-10-08 | 2020-01-10 | 国网河南省电力公司电力科学研究院 | Cell load prediction method based on improved clustering and long-short term memory deep learning |
- 2020-02-28: CN application CN202010128406.3A granted as patent CN111461400B (status: active)
Non-Patent Citations (2)
Title |
---|
LUNTIAN MOU: "T-LSTM: A Long Short-Term Memory Neural Network Enhanced by Temporal Information for Traffic Flow Prediction", IEEE ACCESS * |
XU Fangfang (许芳芳): "Location Prediction Model Based on the ST-LSTM Network", 计算机工程 (Computer Engineering) *
Also Published As
Publication number | Publication date |
---|---|
CN111461400B (en) | 2023-06-23 |
Similar Documents
Publication | Title |
---|---|
CN111723738B (en) | Coal rock chitin group microscopic image classification method and system based on transfer learning |
CN110020682B (en) | Attention mechanism relation comparison network model method based on small sample learning |
CN108764471B (en) | Neural network cross-layer pruning method based on feature redundancy analysis |
CN110084610B (en) | Network transaction fraud detection system based on twin neural network |
CN111950656B (en) | Image recognition model generation method and device, computer equipment and storage medium |
CN113593611B (en) | Voice classification network training method and device, computing equipment and storage medium |
WO2020215560A1 (en) | Auto-encoding neural network processing method and apparatus, and computer device and storage medium |
CN112183742B (en) | Neural network hybrid quantization method based on progressive quantization and Hessian information |
CN110020712B (en) | Optimized particle swarm BP network prediction method and system based on clustering |
CN110264407B (en) | Image super-resolution model training and reconstruction method, device, equipment and storage medium |
CN111652264B (en) | Negative migration sample screening method based on maximum mean difference |
CN111062444A (en) | Credit risk prediction method, system, terminal and storage medium |
CN109492816B (en) | Coal and gas outburst dynamic prediction method based on hybrid intelligence |
CN116542701A (en) | Carbon price prediction method and system based on CNN-LSTM combination model |
CN109214444B (en) | Game anti-addiction determination system and method based on twin neural network and GMM |
CN113095484A (en) | Stock price prediction method based on LSTM neural network |
CN115423008A (en) | Method, system and medium for cleaning operation data of power grid equipment |
CN111368648A (en) | Radar radiation source individual identification method and device, electronic equipment and storage medium thereof |
CN114626606A (en) | MI-BILSTM prediction method considering characteristic importance value fluctuation |
CN111783688B (en) | Remote sensing image scene classification method based on convolutional neural network |
CN110288002B (en) | Image classification method based on sparse orthogonal neural network |
CN111461400A (en) | Load data completion method based on Kmeans and T-LSTM |
CN116415989A (en) | Gigabit potential customer prediction method, apparatus, computer equipment and storage medium |
CN110837853A (en) | Rapid classification model construction method |
WO2023273171A1 (en) | Image processing method and apparatus, device, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||