CN113485986B - Electric power data restoration method - Google Patents
Electric power data restoration method
- Publication number: CN113485986B
- Application number: CN202110717117.1A
- Authority: CN (China)
- Prior art keywords: power data, data, gate, output, missing
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06F16/215 — Improving data quality; data cleansing, e.g. de-duplication, removing invalid entries or correcting typographical errors
- G06N3/044 — Recurrent networks, e.g. Hopfield networks
- G06N3/045 — Combinations of networks
- G06N3/08 — Learning methods
Abstract
The invention discloses a power data restoration method. Power data in a historical power data set are classified with an SOM neural network; influence factors whose Pearson correlation coefficient with a power data type meets a correlation threshold are selected; the influence factors of missing data are input into a trained LSTM neural network to obtain the power data type of the missing data; and the missing data are then repaired by different methods according to that type. The method accounts for the complex nonlinearity of power data, exploits the strong learning ability of neural networks on nonlinear problems to realize restoration of the power data, and effectively improves classification efficiency and accuracy.
Description
Technical Field
The invention relates to a power data restoration method and belongs to the technical field of power data detection and restoration.
Background
As digital technology continues to evolve, a large amount of power data is generated in the power system. However, this data is often partially lost due to external interference, transmission errors, equipment abnormalities, network delays, and the like, which affects the accuracy and timeliness of data processing in the power system.
Existing restoration methods mostly adopt traditional machine learning techniques. Faced with today's increasingly complex grid structure — in particular new energy generation systems, electric vehicle access, and demand response mechanisms — traditional machine learning methods cannot cope with the restoration of highly random power data.
To ensure optimized and stable operation of the intelligent power system, complete and correct power data is required, so that adverse effects on the safe and stable operation of the power system are avoided. Those skilled in the art therefore urgently need a way to repair power system data with higher accuracy.
Disclosure of Invention
Purpose: to overcome the defects in the prior art, the invention provides a power data restoration method.
Technical scheme: to solve the above technical problems, the invention adopts the following technical scheme:
A power data restoration method, comprising the following steps:
Step S1: acquire a historical power data set, and classify the power data in the historical power data set with an SOM neural network to obtain power data types.
Step S2: perform correlation analysis between the power data types and candidate influence factors using the Pearson correlation coefficient, obtain the influence factors for which each power data type meets the correlation threshold, and take the influence factors corresponding to a power data type as its characteristic values.
Step S3: take the power data types and the corresponding characteristic values as training samples, and train an LSTM neural network to obtain a trained LSTM neural network.
Step S4: input the influence factors of the missing data into the trained LSTM neural network to obtain the power data type of the missing data.
Step S5: repair the data by different methods according to the power data type of the missing data.
As a preferable scheme, the SOM neural network consists of an input layer and an output layer that are fully connected. When classifying input power data, the neurons of the output layer cooperate and compete for the opportunity to respond to the input, and the winning neuron becomes the output neuron. Through weight updating, the weights of the winning neuron and its neighborhood are adjusted correspondingly, so that each output neuron gradually learns a specific category of power data; the output layer thereby learns to classify the input power data. The weight values usually range over [0, 1].
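The cooperative-competitive updating described above can be sketched as a minimal SOM from scratch in NumPy. This is an illustrative sketch, not the patent's implementation; the 4×4 grid size, learning rate, neighborhood width, and exponential decay schedules are all assumed choices:

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=50, lr0=0.5, sigma0=1.5, seed=0):
    """Train a tiny self-organizing map; returns the weight grid."""
    rng = np.random.default_rng(seed)
    n_rows, n_cols = grid
    dim = data.shape[1]
    # Weights start in [0, 1], matching the range noted in the text.
    w = rng.random((n_rows, n_cols, dim))
    coords = np.stack(np.meshgrid(np.arange(n_rows), np.arange(n_cols),
                                  indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * np.exp(-t / epochs)        # learning-rate decay
        sigma = sigma0 * np.exp(-t / epochs)  # neighborhood shrink
        for x in data:
            # Competition: the best-matching unit wins the response.
            d = np.linalg.norm(w - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Cooperation: neighbors of the winner are also pulled toward x.
            g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=-1)
                       / (2 * sigma ** 2))
            w += lr * g[..., None] * (x - w)
    return w

def som_classify(w, x):
    """Return the grid index of the winning output neuron for sample x."""
    d = np.linalg.norm(w - x, axis=-1)
    return np.unravel_index(np.argmin(d), d.shape)
```

After training, samples from well-separated groups of power data land on different output neurons, which is what the classification step relies on.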
As a preferred scheme, the Pearson correlation coefficient is calculated as follows:
ρ_(X,Y) = Σ_(i=1)^n (X_i − X̄)(Y_i − Ȳ) / [ √(Σ_(i=1)^n (X_i − X̄)²) · √(Σ_(i=1)^n (Y_i − Ȳ)²) ]
Wherein: ρ_(X,Y) is the correlation coefficient, ranging over [−1, 1]; X and Y are continuous variables; n is the number of samples; X̄ and Ȳ are the means of the variables; ρ_(X,Y) = 0 indicates that the variables are uncorrelated; ρ_(X,Y) > 0 indicates a positive correlation between the variables; ρ_(X,Y) < 0 indicates a negative correlation between the variables. The larger |ρ_(X,Y)| is, the stronger the correlation between the variables, and vice versa.
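The correlation screening of step S2 can be illustrated with a short sketch. The correlation threshold of 0.6 and the factor names are assumptions for illustration; the patent does not fix a specific threshold value:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient; lies in [-1, 1]."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

def select_factors(target, factors, threshold=0.6):
    """Keep the influence factors whose |rho| with the target meets the threshold."""
    return {name: pearson(target, vals)
            for name, vals in factors.items()
            if abs(pearson(target, vals)) >= threshold}
```

The surviving factors then serve as the characteristic values fed to the LSTM in step S3.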
Preferably, the LSTM neural network is constructed as follows:
The LSTM neural network comprises three gate structures used to predict the power data type: a forget gate, an input gate, and an output gate.
(1) Forget gate. It screens the input data and calculates how much of the previous state is retained. After processing by the sigmoid layer it outputs a value in [0, 1]; the larger the value, the more is retained, and vice versa. The forget gate is calculated as:
f_t = σ(W_xf·x_t + W_hf·h_(t−1) + b_f)
Wherein: f_t is the output of the forget gate; W_xf and W_hf are the network parameters to be learned by the forget gate; b_f is the bias of the forget gate; σ is the sigmoid function.
(2) Input gate. It updates the cell state and combines two parts of information: one part, selected by a sigmoid function, determines which data need to be stored; the other part is the new candidate information C′_t generated by a tanh function from the current input x_t. The two parts are combined to produce the new memory state, calculated as:
i_t = σ(W_xi·x_t + W_hi·h_(t−1) + b_i)
C′_t = tanh(W_xC·x_t + W_hC·h_(t−1) + b_C)
C_t = f_t × C_(t−1) + i_t × C′_t
Wherein: i_t is the output of the input gate; W_xi and W_hi are the network parameters to be learned by the input gate; b_i is the bias of the input gate; C′_t is the candidate state produced by the tanh function; W_xC and W_hC are the network parameters of the cell state; b_C is the bias of the cell state; C_t is the cell state.
(3) Output gate. It calculates how much of the information x_t is output at the current step. The cell state is passed through a tanh function and multiplied by the output of a sigmoid function to obtain the final hidden unit h_t, calculated as:
o_t = σ(W_xo·x_t + W_ho·h_(t−1) + b_o)
h_t = o_t × tanh(C_t)
Wherein: o_t is the output of the output gate; h_t is the output hidden unit; W_xo and W_ho are the network parameters to be learned by the output gate; b_o is the bias of the output gate.
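The three gate equations can be checked against a direct NumPy transcription of one LSTM cell step. This is a sketch for illustration only; the parameter dictionary `p` and its key names are assumptions, and a real implementation would learn these weights by backpropagation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM step following the gate equations above.

    p holds the weight matrices W_x*, W_h* and biases b_* for the
    forget (f), input (i), candidate (C) and output (o) gates.
    """
    f_t = sigmoid(p["W_xf"] @ x_t + p["W_hf"] @ h_prev + p["b_f"])    # forget gate
    i_t = sigmoid(p["W_xi"] @ x_t + p["W_hi"] @ h_prev + p["b_i"])    # input gate
    c_hat = np.tanh(p["W_xC"] @ x_t + p["W_hC"] @ h_prev + p["b_C"])  # candidate state
    c_t = f_t * c_prev + i_t * c_hat                                  # cell state update
    o_t = sigmoid(p["W_xo"] @ x_t + p["W_ho"] @ h_prev + p["b_o"])    # output gate
    h_t = o_t * np.tanh(c_t)                                          # hidden output
    return h_t, c_t
```

Because the output gate multiplies a sigmoid (in (0, 1)) by a tanh (in (−1, 1)), every component of h_t is bounded in magnitude by 1.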
Preferably, repairing the data by different methods according to the power data type of the missing data comprises the following steps:
When the power data type of the missing data is power (electric energy) data, a layered mean filling method is adopted: the power data are layered by the same calendar month across years, and the mean of each layer replaces the missing values in that layer.
When the power data type of the missing data is load data, a maximum-expectation (EM) algorithm is adopted to perform maximum likelihood estimation of the missing data; the estimation is iterated continuously, and the final output value replaces the missing value.
When the power data type of the missing data is current data, voltage data, or frequency data, a mean filling method is adopted: the missing value is replaced by the mean of the preceding and following observations, or by the mean of the non-missing data.
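The three type-dependent repair strategies of step S5 can be sketched as follows. These are illustrative simplifications: in particular, `em_fill` reduces the maximum-expectation step to a univariate Gaussian mean model, which is far simpler than a full EM treatment of load data:

```python
import numpy as np

def stratified_month_fill(values, months):
    """Layered mean filling: replace NaNs with the mean of the same calendar month."""
    v = np.asarray(values, float)
    m = np.asarray(months)
    out = v.copy()
    for month in np.unique(m):
        mask = m == month
        out[mask & np.isnan(v)] = np.nanmean(v[mask])
    return out

def neighbor_mean_fill(values):
    """Mean filling: replace each NaN with the mean of its nearest observed neighbors."""
    v = np.asarray(values, float)
    out = v.copy()
    observed = np.flatnonzero(~np.isnan(v))
    for i in np.flatnonzero(np.isnan(v)):
        left = observed[observed < i]
        right = observed[observed > i]
        neighbors = [v[left[-1]]] if left.size else []
        neighbors += [v[right[0]]] if right.size else []
        out[i] = np.mean(neighbors) if neighbors else np.nanmean(v)
    return out

def em_fill(values, n_iter=20):
    """EM-style fill under a univariate Gaussian model (a toy stand-in for
    the maximum-expectation step described for load data)."""
    v = np.asarray(values, float)
    miss = np.isnan(v)
    filled = np.where(miss, np.nanmean(v), v)
    for _ in range(n_iter):
        mu = filled.mean()   # M-step: re-estimate the mean
        filled[miss] = mu    # E-step: expected value of the missing entries
    return filled
```

In practice the choice among the three fillers is driven by the power data type that the trained LSTM assigns to the missing record in step S4.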
Beneficial effects: the power data restoration method provided by the invention accounts for the complex nonlinearity of power data, exploits the strong learning ability of neural networks on nonlinear problems to realize restoration of the power data, and effectively improves classification efficiency and accuracy.
Drawings
FIG. 1 is a schematic flow chart of the method of the present invention.
FIG. 2 is a graph showing performance accuracy of a classification model SOM-LSTM neural network according to an embodiment of the present invention.
FIG. 3 is a graph showing performance accuracy of a classification model SOM-LSTM neural network according to another embodiment of the present invention.
Detailed Description
The invention will be further described with reference to specific examples.
The power data restoration method is described in detail below with reference to the drawings and specific embodiments; the advantages and features of the invention will become more apparent from this description. It should be noted that the drawings are in simplified form and not to precise scale, and serve only to aid in describing the embodiments conveniently and clearly. The structures, proportions, and sizes shown in the drawings are for illustration only and should not be construed as limiting the invention; any modification, change of proportion, or adjustment of size that does not affect the effects and objectives achievable by the invention falls within the scope of the technical content disclosed herein.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Referring to fig. 1, the method for repairing electric power data provided in this embodiment includes the following steps:
Step S1: acquire a historical power data set, and classify the power data in the historical power data set with an SOM neural network to obtain power data types.
Step S2: perform correlation analysis between the power data types and candidate influence factors using the Pearson correlation coefficient, obtain the influence factors for which each power data type meets the correlation threshold, and take the influence factors corresponding to a power data type as its characteristic values.
Step S3: take the power data types and the corresponding characteristic values as training samples, and train an LSTM neural network to obtain a trained LSTM neural network.
Step S4: input the influence factors of the missing data into the trained LSTM neural network to obtain the power data type of the missing data.
Step S5: repair the data by different methods according to the power data type of the missing data.
As shown in fig. 2 and 3, to verify the effectiveness of the proposed SOM-LSTM-based power data restoration method, the experiments herein were run on a computer configured with an Intel(R) Core(TM) i5-8300H CPU @ 2.30 GHz and an NVIDIA GTX 1050Ti GPU with 4 GB of video memory, and simulation calculations were performed on the MATLAB R2019a platform.
The household power data of a certain area in China in August 2018 and the temperature data acquired by the corresponding weather stations are selected, with a data acquisition interval of 15 minutes. The power data restoration results are analyzed with the mean absolute error (MAE) and root mean square error (RMSE) evaluation indexes.
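The two evaluation indexes can be written directly from their definitions:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error: mean of |y_i - y_hat_i|."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs(y_true - y_pred)))

def rmse(y_true, y_pred):
    """Root mean square error: sqrt of the mean of (y_i - y_hat_i)^2."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```

RMSE penalizes large individual repair errors more heavily than MAE, which is why the two indexes are reported together.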
To analyze the accuracy of the proposed SOM-LSTM model, the repair results of the SOM-LSTM model for continuously missing power data on typical working days and non-working days are plotted against those of the extreme learning machine (ELM) model and the LSTM model; the results are shown in fig. 2 and 3.
As can be seen from fig. 2 and 3, when repairing continuously missing user voltage data on typical working days and non-working days, the LSTM neural network has the largest error, and its accuracy decreases significantly as the amount of missing data increases, indicating that the fitting and repair capability of that neural network alone is insufficient. Compared with the LSTM and ELM models, the SOM-LSTM neural network, which predicts and repairs data classified by the SOM neural network, effectively alleviates the low accuracy of repairing continuously missing data and still ensures a high repair accuracy when more data are missing.
While the present invention has been described in detail through the foregoing description of the preferred embodiment, it should be understood that the foregoing description is not to be considered as limiting the invention. Many modifications and substitutions of the present invention will become apparent to those of ordinary skill in the art upon reading the foregoing. Accordingly, the scope of the invention should be limited only by the attached claims.
Claims (7)
1. A power data restoration method, characterized by comprising the following steps:
step S1: acquiring a historical power data set, and classifying the power data in the historical power data set with an SOM neural network to obtain power data types;
step S2: performing correlation analysis between the power data types and influence factors using the Pearson correlation coefficient, obtaining the influence factors for which a power data type meets the correlation threshold, and taking the influence factors corresponding to the power data type as characteristic values;
step S3: taking the power data types and the corresponding characteristic values as training samples, and training an LSTM neural network to obtain a trained LSTM neural network;
step S4: inputting the influence factors of the missing data into the trained LSTM neural network to obtain the power data type of the missing data;
step S5: repairing the data by different methods according to the power data type of the missing data.
2. The power data restoration method according to claim 1, wherein: the SOM neural network consists of an input layer and an output layer that are fully connected; when classifying input power data, the neurons of the output layer cooperate and compete for the opportunity to respond to the input power data, whereby the winning output neuron is obtained; through weight updating, the weights of the winning neuron and its neighborhood are adjusted correspondingly, so that each output neuron learns a specific category of power data and the output layer learns to classify the input power data.
3. The power data restoration method according to claim 2, wherein: the weight values range over [0, 1].
4. The power data restoration method according to claim 1, wherein: the Pearson correlation coefficient is calculated as follows:
ρ_(X,Y) = Σ_(i=1)^n (X_i − X̄)(Y_i − Ȳ) / [ √(Σ_(i=1)^n (X_i − X̄)²) · √(Σ_(i=1)^n (Y_i − Ȳ)²) ]
Wherein: ρ_(X,Y) is the correlation coefficient, ranging over [−1, 1]; X and Y are continuous variables; n is the number of samples; X̄ and Ȳ are the means of the variables; ρ_(X,Y) = 0 indicates that the variables are uncorrelated; ρ_(X,Y) > 0 indicates a positive correlation between the variables; ρ_(X,Y) < 0 indicates a negative correlation between the variables; the larger |ρ_(X,Y)| is, the stronger the correlation between the variables, and vice versa.
5. The power data restoration method according to claim 1, wherein: the LSTM neural network comprises three gate structures for predicting the power data type: a forget gate, an input gate, and an output gate.
6. The power data restoration method according to claim 5, wherein: the forget gate is calculated as:
f_t = σ(W_xf·x_t + W_hf·h_(t−1) + b_f)
Wherein: f_t is the output of the forget gate; W_xf and W_hf are the network parameters to be learned by the forget gate; b_f is the bias of the forget gate; σ is the sigmoid function;
the input gate is calculated as:
i_t = σ(W_xi·x_t + W_hi·h_(t−1) + b_i)
C′_t = tanh(W_xC·x_t + W_hC·h_(t−1) + b_C)
C_t = f_t × C_(t−1) + i_t × C′_t
Wherein: i_t is the output of the input gate; W_xi and W_hi are the network parameters to be learned by the input gate; b_i is the bias of the input gate; C′_t is the candidate state produced by the tanh function; W_xC and W_hC are the network parameters of the cell state; b_C is the bias of the cell state; C_t is the cell state;
the output gate is calculated as:
o_t = σ(W_xo·x_t + W_ho·h_(t−1) + b_o)
h_t = o_t × tanh(C_t)
Wherein: o_t is the output of the output gate; h_t is the output hidden unit; W_xo and W_ho are the network parameters to be learned by the output gate; b_o is the bias of the output gate.
7. The power data restoration method according to claim 1, wherein repairing the data by different methods according to the power data type of the missing data comprises the following steps:
when the power data type of the missing data is power (electric energy) data, a layered mean filling method is adopted: the power data are layered by the same calendar month across years, and the mean of each layer replaces the missing values in that layer;
when the power data type of the missing data is load data, a maximum-expectation (EM) algorithm is adopted to perform maximum likelihood estimation of the missing data; the estimation is iterated continuously, and the final output value replaces the missing value;
when the power data type of the missing data is current data, voltage data, or frequency data, a mean filling method is adopted: the missing value is replaced by the mean of the preceding and following observations, or by the mean of the non-missing data.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110717117.1A CN113485986B (en) | 2021-06-25 | 2021-06-25 | Electric power data restoration method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113485986A CN113485986A (en) | 2021-10-08 |
CN113485986B true CN113485986B (en) | 2024-08-02 |
Family
ID=77936302
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110717117.1A Active CN113485986B (en) | 2021-06-25 | 2021-06-25 | Electric power data restoration method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113485986B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2015158198A1 (en) * | 2014-04-17 | 2015-10-22 | 北京泰乐德信息技术有限公司 | Fault recognition method and system based on neural network self-learning |
CN110097920A (en) * | 2019-04-10 | 2019-08-06 | 大连理工大学 | A kind of metabolism group shortage of data value fill method based on neighbour's stability |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3514908B1 (en) * | 2018-01-22 | 2022-02-09 | Hitachi Energy Switzerland AG | Methods and devices for condition classification of power network assets |
CN109091867B (en) * | 2018-07-26 | 2023-04-07 | 深圳市腾讯网络信息技术有限公司 | Operation control method, device, equipment and storage medium |
CN109820525A (en) * | 2019-01-23 | 2019-05-31 | 五邑大学 | A kind of driving fatigue recognition methods based on CNN-LSTM deep learning model |
CN110083699B (en) * | 2019-03-18 | 2021-01-12 | 中国科学院自动化研究所 | News popularity prediction model training method based on deep neural network |
CN110334726A (en) * | 2019-04-24 | 2019-10-15 | 华北电力大学 | A kind of identification of the electric load abnormal data based on Density Clustering and LSTM and restorative procedure |
CN111597080A (en) * | 2020-05-22 | 2020-08-28 | 广东省生态环境技术研究所 | Method for repairing underground water level missing data based on ground statistics and neural network |
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant