CN110119816A - Load characteristic self-learning method for non-intrusive power monitoring - Google Patents
Load characteristic self-learning method for non-intrusive power monitoring
- Publication number: CN110119816A
- Application number: CN201910303389.XA
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G01R31/00 — Arrangements for testing electric properties; arrangements for locating electric faults; arrangements for electrical testing characterised by what is being tested not provided for elsewhere
- G06N20/00 — Machine learning
- G06N3/045 — Neural networks; combinations of networks
- G06N3/084 — Learning methods; backpropagation, e.g. using gradient descent
Abstract
The present invention provides a load characteristic self-learning method for non-intrusive power monitoring, comprising: obtaining the load data sequence recorded after a load event occurs as the input sample; adding random noise to the sample; determining the number of input-layer neurons of a denoising autoencoder from the length of the data sequence and generating the input and output layers; determining the number of hidden-layer neurons and generating the hidden layer; setting the training error limit of the denoising autoencoder; initializing the inter-layer mapping parameters; computing the reconstruction error of the output sequence with respect to the input sequence from the mapping parameters; and comparing the reconstruction error with the training error limit: if the error is below the limit, the hidden-layer node values are extracted as the abstract features of the load event; if it exceeds the limit, the mapping parameters between the input and hidden layers and between the hidden and output layers are updated by gradient descent. The invention achieves compressed sensing of the data sequence and thereby learns abstract features that explain the load-event data curve globally.
Description
Technical field
The invention belongs to the technical field of power systems and relates to a load characteristic self-learning method suitable for non-intrusive power monitoring.
Background art
Non-intrusive load monitoring (NILM) technology comprises four basic parts: 1) data acquisition and preprocessing; 2) event detection; 3) feature extraction; 4) load identification. Together these form a non-intrusive load monitoring system, whose principle is shown in Figure 1. When the system operates, the data acquisition and preprocessing module first collects and computes the aggregate load data (active power, reactive power, voltage, current, etc.) and passes them to the event detection module. The event detection module detects at which moments a load event (a load being switched on or off) has occurred. After a load event occurs, the feature extraction module extracts the load-event features (including steady-state and transient features) based on the event detection result. Finally, the load identification module classifies and identifies the load event with a classification algorithm using the extracted features. The feature extraction module plays an important role in NILM: only when correct and effective load features are extracted can the classification algorithm use them to identify loads.
Current research on load features focuses mainly on the steady-state and transient physical features observed after a load switching event, including active power, reactive power, current, voltage and their increments, current-voltage trajectories, and higher-harmonic features. These feature quantities all have a clear physical meaning, must be specified manually before feature extraction, and are then computed from the electrical data collected by the acquisition module. Such features are usually computed from local data points: the power peak, for example, uses only a single data point after the load event occurs, and the active-power increment uses only the two data points before and after a given moment. Local data points explain the corresponding load-event data curve to a certain extent and can reflect its rough characteristics, but they still lack global explanatory power over the curve.
Summary of the invention
To solve the above problems, the present invention proposes a load characteristic self-learning method suitable for non-intrusive power monitoring. Instead of manually specifying which physical features the feature extraction module should extract, the method autonomously learns abstract features that reflect the essential characteristics of a load event. The data source for feature learning is the load data sequence corresponding to the load-event moment calibrated by the switching-event detection module.
To achieve the above object, the invention provides the following technical solution:
A load characteristic self-learning method suitable for non-intrusive power monitoring, comprising the following steps:
Step 1: obtain the load data sequence recorded after the load event occurs as the input sample;
Step 2: add random noise to the input sample;
Step 3: determine the number of input-layer neurons of the denoising autoencoder from the length of the data sequence, and generate the input and output layers;
Step 4: determine the number of hidden-layer neurons of the denoising autoencoder, and generate the hidden layer;
Step 5: set the training error limit of the denoising autoencoder;
Step 6: initialize the mapping parameters between the input and hidden layers and between the hidden and output layers of the denoising autoencoder; the parameters comprise weights and biases;
Step 7: compute the reconstruction error of the output sequence with respect to the input sequence from the input data sequence and the inter-layer mapping parameters;
Step 8: judge whether the reconstruction error is below the training error limit; if so, go to step 10; otherwise go to step 9;
Step 9: update the mapping parameters between the input and hidden layers and between the hidden and output layers by gradient descent;
Step 10: extract the hidden-layer node values as the abstract features of the load event.
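The ten steps above can be sketched end-to-end in NumPy. This is an illustrative sketch only: the layer sizes, noise level, learning rate, sigmoid activation, and the mean-squared form of the reconstruction error are demonstration assumptions, not values fixed by the invention.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Step 1: load data sequence after a load event (synthetic stand-in here).
l = 8                                   # sequence length = input/output layer size
X = rng.random(l)

# Step 2: corrupt the input with random noise.
X_noisy = X + 0.05 * rng.standard_normal(l)

# Steps 3-4: layer sizes; hidden layer m < l for compression.
m = 3

# Step 5: training error limit.
error_limit = 1e-3

# Step 6: initialize mapping parameters (weights and biases).
W  = 0.1 * rng.standard_normal((m, l)); b  = np.zeros(m)
W2 = 0.1 * rng.standard_normal((l, m)); b2 = np.zeros(l)

lr = 0.5
for _ in range(20000):                  # steps 7-9 loop
    Y = sigmoid(W @ X_noisy + b)        # encode, eq. (1)
    Z = sigmoid(W2 @ Y + b2)            # decode, eq. (2)
    J = np.mean((X - Z) ** 2)           # reconstruction error vs ORIGINAL X
    if J < error_limit:                 # step 8: converged
        break
    # Step 9: gradient-descent update of all mapping parameters.
    dZ = (2.0 / l) * (Z - X) * Z * (1 - Z)
    dY = (W2.T @ dZ) * Y * (1 - Y)
    W2 -= lr * np.outer(dZ, Y); b2 -= lr * dZ
    W  -= lr * np.outer(dY, X_noisy); b -= lr * dY

# Step 10: hidden-layer node values are the learned abstract feature.
abstract_feature = sigmoid(W @ X_noisy + b)
print(abstract_feature.shape)
```

Because the hidden layer is smaller than the input layer, the resulting feature vector is a compressed description of the whole event sequence rather than a hand-picked local quantity.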
Further, in the step 3, the output layer has the same structure as the input layer, and the number of output-layer neurons equals that of the input layer.
Further, in the step 4, the number of hidden-layer neurons is smaller than the number of neurons specified for the input layer and the output layer.
Further, in the step 6, the mapping function between the input layer and the hidden layer is defined as:
Y = f_θ(X') = S(WX' + b)  (1)
where S(·) in formula (1) is the activation function of the denoising autoencoder, and θ is the encoding parameter, composed of the weight W and the bias b;
the mapping function between the hidden layer and the output layer is defined as:
Z = f_θ'(Y) = S(W'Y + b')  (2)
where θ' in formula (2) is the decoding parameter, composed of the weight W' and the bias b'.
Further, in the step 7, the reconstruction error is computed as:
J(X, Z) = (1/l) Σ_{i=1}^{l} (x_i - z_i)²  (3)
where l is the number of input-layer neurons of the autoencoder, x_i is the i-th element of the original input sequence X, and z_i is the i-th element of the output sequence Z.
Compared with the prior art, the invention has the following advantages and beneficial effects:
The invention uses a denoising autoencoder model to encode and then decode the input load data sequence, thereby achieving compressed sensing of the data sequence and learning abstract features. In the method of the invention, the data source for feature self-learning is the load data sequence corresponding to the load-event moment calibrated by the switching-event module, so the load-event data curve is explained globally.
Description of the drawings
Fig. 1 is a schematic diagram of a non-intrusive load monitoring system.
Fig. 2 is the structure of an autoencoder.
Fig. 3 is the flow chart of the load characteristic self-learning method suitable for non-intrusive power monitoring.
Specific embodiments
The technical solution provided by the invention is described in detail below with reference to specific embodiments. It should be understood that the following embodiments only illustrate the invention and do not limit its scope.
The invention is realized with a denoising autoencoder model. An autoencoder is a special neural network whose output equals its input: the model adjusts its parameters through training so that, after feature encoding and then decoding, the original input signal is restored as closely as possible. The values obtained by the feature-encoding transformation are the abstract features representing the input signal. The general structure of an autoencoder is shown in Figure 2.
The time series corresponding to the different load switching events serve as the inputs of the autoencoder. Take a particular load switching event as an example and assume its sample size is k, so the sample set is x = {x^(1), x^(2), ..., x^(k)}, where any sample x^(i) is a time series of length l, i.e., x^(i) is an l-dimensional vector. The number of input-layer neurons of the autoencoder is therefore designed to be l, and the number of neurons of the intermediate hidden layer is designed to be m. Since the autoencoder optimizes the reconstruction error of the input data by back-propagation, making the target output y^(i) → x^(i) forces the neural network to learn a compressed representation of the input data, i.e., it must reconstruct x^(i) from the m-dimensional hidden-neuron activity vector α^(i) ∈ R^m. If the samples in the sample set were completely random, for example if each input x^(i) were an independent, identically distributed Gaussian random variable uncorrelated with the other inputs, this learning process would be very difficult; but if the input sample data imply some specific structure, the algorithm can discover the correlations among the input samples. After the network is trained, the hidden-layer activity vector α^(i) corresponding to each input sample x^(i) is equivalent to the abstract feature vector learned after dimensionality reduction. If some random noise is added to the input of the autoencoder, the autoencoder acquires the ability to extract abstract features from disturbed input data, and its robustness is enhanced.
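The shape relationships in this paragraph can be illustrated as follows. The sample count k, sequence length l, hidden size m, the random weights, and the sigmoid activation are all illustrative assumptions; a trained encoder would use learned weights.

```python
import numpy as np

rng = np.random.default_rng(1)
k, l, m = 5, 8, 3                      # sample count, sequence length, hidden size
samples = rng.random((k, l))           # x^(1)..x^(k), each an l-dimensional time series

# Hypothetical (untrained) encoder parameters, for shape illustration only.
W, b = 0.1 * rng.standard_normal((m, l)), np.zeros(m)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Hidden activity vector alpha^(i) in R^m for each sample.
alpha = sigmoid(samples @ W.T + b)
print(alpha.shape)                     # one m-dimensional feature vector per event
```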
The flow of the load characteristic self-learning method suitable for non-intrusive power monitoring proposed by the invention is shown in Figure 3 and comprises the following steps:
Step 1: obtain the load data sequence recorded after the load event occurs as the input sample.
Taking active-power data as an example, the event detection algorithm has already calibrated the start moment of the load event and the corresponding steady-state and transient processes; the active-power data sequence from the start moment to the steady-state process serves as the input sample of the autoencoder.
Step 2: add random noise to the input sample.
The purpose of this step is to turn the input sample X into a noisy sample X', so as to simulate the influence that randomly occurring disturbances may have on the feature learning ability of the autoencoder. If the autoencoder can keep a very small reconstruction error for the input sequence even in the presence of noise, the robustness of its feature learning ability is considered enhanced. An autoencoder whose input is artificially corrupted with noise is called a denoising autoencoder.
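A minimal sketch of this corruption step follows. Gaussian noise with an assumed standard deviation is used here; the invention does not fix the noise distribution or magnitude.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.random(8)                       # clean load data sequence (stand-in)
X_noisy = X + 0.05 * rng.standard_normal(X.shape)
print(np.allclose(X, X_noisy))          # False: the sample is now corrupted
```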
Step 3: determine the number of input-layer neurons of the denoising autoencoder from the length of the data sequence, and generate the input and output layers.
Let the sequence length of the noise-corrupted input sample X' be l; the number of input-layer neurons is then also set to l, i.e., there is a one-to-one mapping between the input sample sequence and the input-layer neurons. According to the principle of autoencoder feature learning, the reconstruction error of the input data sequence should be reduced as much as possible, so the output layer keeps the same structure as the input layer, i.e., the number of output-layer neurons equals that of the input layer.
Step 4: determine the number of hidden-layer neurons of the denoising autoencoder and generate the hidden layer.
Step 3 has fixed the number of neurons of the input and output layers at l. The hidden-layer neuron number m should be chosen so that m < l, which ensures that the high-dimensional input vector is compressed into a lower-dimensional abstract feature vector, realizing compressive extraction of the data features.
Step 5: set the training error limit of the denoising autoencoder.
The training error limit can be regarded as the upper bound of the acceptable reconstruction error.
Step 6: initialize the mapping parameters between the input and hidden layers and between the hidden and output layers of the denoising autoencoder; the parameters comprise weights and biases.
The mapping function between the input layer and the hidden layer can be defined as:
Y = f_θ(X') = S(WX' + b)  (1)
where S(·) in formula (1) is the activation function of the denoising autoencoder, and θ is the encoding parameter, composed of the weight W and the bias b;
the mapping function between the hidden layer and the output layer can be defined as:
Z = f_θ'(Y) = S(W'Y + b')  (2)
where θ' in formula (2) is the decoding parameter, composed of the weight W' and the bias b'.
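Formulas (1) and (2) transcribe directly into code. The sigmoid activation below is an assumption, since the text leaves S(·) unspecified, and the weights are random placeholders for the initialized parameters of step 6.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def encode(X_noisy, W, b):          # eq. (1): Y = S(W X' + b)
    return sigmoid(W @ X_noisy + b)

def decode(Y, W2, b2):              # eq. (2): Z = S(W' Y + b')
    return sigmoid(W2 @ Y + b2)

rng = np.random.default_rng(3)
l, m = 8, 3
Y = encode(rng.random(l), 0.1 * rng.standard_normal((m, l)), np.zeros(m))
Z = decode(Y, 0.1 * rng.standard_normal((l, m)), np.zeros(l))
print(Y.shape, Z.shape)             # the output layer mirrors the input layer
```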
Step 7: compute the reconstruction error of the output sequence with respect to the input sequence from the input data sequence and the inter-layer mapping parameters.
The reconstruction error is computed as:
J(X, Z) = (1/l) Σ_{i=1}^{l} (x_i - z_i)²  (3)
Note that in formula (3) the output sequence Z is computed from the noise-corrupted input sequence X', but the error is still computed against the original input sequence X.
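The key detail of step 7 is that Z is produced from the noisy X' while the error compares Z with the original X. The mean-squared form used below is an assumed reading of formula (3).

```python
import numpy as np

def reconstruction_error(X, Z):
    # l is the number of input-layer neurons, i.e. the sequence length.
    l = X.size
    return np.sum((X - Z) ** 2) / l

X = np.array([0.2, 0.4, 0.6])
Z = np.array([0.2, 0.4, 0.6])
print(reconstruction_error(X, Z))   # 0.0 for a perfect reconstruction
```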
Step 8: judge whether the reconstruction error is below the training error limit.
If the reconstruction error is below the training error limit, the denoising autoencoder model is considered to have learned abstract features that explain the input data well; go to step 10.
If the reconstruction error exceeds the training error limit, the model is considered not yet to have learned such features; go to step 9.
Step 9: update the mapping parameters between the input and hidden layers and between the hidden and output layers by gradient descent.
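Step 9 in isolation can be sketched as repeated gradient-descent updates of all four mapping parameters. The sigmoid activation, mean-squared error, learning rate, and layer sizes are demonstration assumptions carried over from the equations above, not the invention's prescribed values.

```python
import numpy as np

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
rng = np.random.default_rng(4)
l, m, lr = 8, 3, 0.5
X = rng.random(l)
X_noisy = X + 0.05 * rng.standard_normal(l)
W, b = 0.1 * rng.standard_normal((m, l)), np.zeros(m)
W2, b2 = 0.1 * rng.standard_normal((l, m)), np.zeros(l)

def loss():
    Z = sigmoid(W2 @ sigmoid(W @ X_noisy + b) + b2)
    return np.mean((X - Z) ** 2)        # error against the ORIGINAL X

J_before = loss()
for _ in range(100):
    Y = sigmoid(W @ X_noisy + b)
    Z = sigmoid(W2 @ Y + b2)
    dZ = (2.0 / l) * (Z - X) * Z * (1 - Z)   # backprop through eq. (2)
    dY = (W2.T @ dZ) * Y * (1 - Y)           # backprop through eq. (1)
    W2 -= lr * np.outer(dZ, Y);  b2 -= lr * dZ
    W  -= lr * np.outer(dY, X_noisy);  b -= lr * dY
print(loss() < J_before)                     # error shrinks toward the limit
```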
Step 10: extract the hidden-layer node values as the abstract features of the load event.
The technical means disclosed by the invention are not limited to those disclosed in the above embodiments and also include technical solutions composed of any combination of the above technical features. It should be pointed out that, for those skilled in the art, various improvements and modifications can be made without departing from the principle of the invention, and such improvements and modifications are also regarded as falling within the protection scope of the invention.
Claims (5)
1. A load characteristic self-learning method suitable for non-intrusive power monitoring, characterized by comprising the following steps:
Step 1: obtain the load data sequence recorded after the load event occurs as the input sample;
Step 2: add random noise to the input sample;
Step 3: determine the number of input-layer neurons of the denoising autoencoder from the length of the data sequence, and generate the input and output layers;
Step 4: determine the number of hidden-layer neurons of the denoising autoencoder, and generate the hidden layer;
Step 5: set the training error limit of the denoising autoencoder;
Step 6: initialize the mapping parameters between the input and hidden layers and between the hidden and output layers of the denoising autoencoder; the parameters comprise weights and biases;
Step 7: compute the reconstruction error of the output sequence with respect to the input sequence from the input data sequence and the inter-layer mapping parameters;
Step 8: judge whether the reconstruction error is below the training error limit; if so, go to step 10; if the reconstruction error exceeds the limit, go to step 9;
Step 9: update the mapping parameters between the input and hidden layers and between the hidden and output layers by gradient descent;
Step 10: extract the hidden-layer node values as the abstract features of the load event.
2. The load characteristic self-learning method suitable for non-intrusive power monitoring according to claim 1, characterized in that: in the step 3, the output layer has the same structure as the input layer, and the number of output-layer neurons equals that of the input layer.
3. The load characteristic self-learning method suitable for non-intrusive power monitoring according to claim 1, characterized in that: in the step 4, the number of hidden-layer neurons is smaller than the number of neurons specified for the input layer and the output layer.
4. The load characteristic self-learning method suitable for non-intrusive power monitoring according to claim 1, characterized in that: in the step 6, the mapping function between the input layer and the hidden layer is defined as:
Y = f_θ(X') = S(WX' + b)  (1)
where S(·) in formula (1) is the activation function of the denoising autoencoder, and θ is the encoding parameter, composed of the weight W and the bias b;
the mapping function between the hidden layer and the output layer is defined as:
Z = f_θ'(Y) = S(W'Y + b')  (2)
where θ' in formula (2) is the decoding parameter, composed of the weight W' and the bias b'.
5. The load characteristic self-learning method suitable for non-intrusive power monitoring according to claim 1, characterized in that: in the step 7, the reconstruction error is computed as:
J(X, Z) = (1/l) Σ_{i=1}^{l} (x_i - z_i)²  (3)
where l is the number of input-layer neurons of the autoencoder.
Priority application (1)
- CN201910303389.XA, filed 2019-04-15: Load characteristic self-learning method for non-intrusive power monitoring
Publication (1)
- CN110119816A, published 2019-08-13
Family ID: 67521012
Patent citations (2)
- CN107330517A (2017-11-07), North China Electric Power University: S_Kohonen-based non-intrusive residential load identification method
- CN108960488A (2018-12-07), State Grid Shandong Electric Power Company Economic and Technological Research Institute: Accurate prediction method for the spatial distribution of saturated load based on deep learning and multi-source information fusion
Non-patent citations (1)
- Li Peng: "Automatic music annotation algorithm based on denoising autoencoder feature learning", Journal of East China University of Science and Technology
Cited by (4)
- CN111325234A (2020-06-23), Hangzhou Tuoshen Technology: Method for screening key features in non-intrusive load identification
- CN114069853A (2022-02-18, granted as CN114069853B 2024-04-02), Tianjin University: Online compression and reconstruction method for multi-energy load data based on segmented symbolic representation
- CN114910742A (2022-08-16, granted as CN114910742B 2024-05-28), Hunan Tenghe Smart Energy Technology: Single-phase ground-fault monitoring method and system, electronic device and storage medium
- CN115201615A (2022-10-18, granted as CN115201615B 2022-12-20), Zhejiang Lab: Non-intrusive load monitoring method and device based on a physics-constrained neural network
Similar documents
- CN110119816A: Load characteristic self-learning method for non-intrusive power monitoring
- Negi et al.: Event detection and its signal characterization in PMU data stream
- Zhang et al.: Fault diagnosis of power grid based on variational mode decomposition and convolutional neural network
- Saini et al.: Detection and classification of power quality disturbances in wind-grid integrated system using fast time-time transform and small residual-extreme learning machine
- CN112396087B: Method and device for analyzing power consumption data of elderly people living alone based on smart meters
- Liu et al.: PV generation forecasting with missing input data: a super-resolution perception approach
- CN110555515A: Short-term wind speed prediction method based on EEMD and LSTM
- CN105572501A: Power quality disturbance identification method based on SST transform and LS-SVM
- Wang et al.: Synchrophasor data compression under disturbance conditions via cross-entropy-based singular value decomposition
- CN108830411A: Wind power forecasting method based on data processing
- CN109993346A: Microgrid voltage safety evaluation method based on chaotic time series and neural network
- CN117992741B: CVT error state evaluation method and system based on wide-area phasor measurement data
- Chen et al.: Day-ahead forecasting of non-stationary electric power demand in commercial buildings: hybrid support vector regression based
- Wang et al.: Adaptive data recovery model for PMU data based on SDAE in transient stability assessment
- Wei et al.: Short-term forecasting for wind speed based on wavelet decomposition and LMBP neural network
- Hong et al.: Deep-belief-networks based fault classification in power distribution networks
- Zhu et al.: Wind speed short-term prediction based on empirical wavelet transform, recurrent neural network and error correction
- CN112505452A: Wide-area system broadband oscillation monitoring method
- Wei et al.: Deep belief network based faulty feeder detection of single-phase ground fault
- CN111193254A: Residential daily electricity load prediction method and device
- Zhuang et al.: Data completion for power load analysis considering the low-rank property
- CN115983347A: Non-intrusive load decomposition method, device and storage medium
- Wang et al.: Stockwell-transform and random-forest based double-terminal fault diagnosis method for offshore wind farm transmission line
- Zhou et al.: Wind power prediction based on random forests
- Reaz et al.: VHDL modeling for classification of power quality disturbance employing wavelet transform, artificial neural network and fuzzy logic
Legal events
- PB01: Publication
- TA01: Transfer of patent application right; effective date of registration 2019-08-02; applicants after transfer: SOUTHEAST University and STATE GRID JIANGSU ELECTRIC POWER Co., Ltd. SUZHOU BRANCH (No. 2 Sipailou, Xuanwu District, Nanjing, Jiangsu Province, 211189); applicant before: Southeast University
- SE01: Entry into force of request for substantive examination
- RJ01: Rejection of invention patent application after publication (application publication date: 2019-08-13)