CN113902102A - Non-invasive load decomposition method based on seq2seq - Google Patents
Info
- Publication number
- CN113902102A (application number CN202111215244.8A)
- Authority
- CN
- China
- Prior art keywords
- model
- seq2seq
- power
- lstm
- load
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/044—Recurrent networks, e.g. Hopfield networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q50/00—Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
- G06Q50/06—Energy or water supply
Abstract
The invention provides a non-invasive load decomposition method based on seq2seq, which comprises the following steps: the first step: designing a seq2seq model; the second step: feature extraction, in which Conv1D performs convolution and pooling on the power sequence at a one-dimensional scale and extracts power features by means of a plurality of convolution kernels sharing the same weights; the third step: load identification based on LSTM; the fourth step: seq2seqBCL load decomposition. Aiming at the problem that existing non-invasive load decomposition methods achieve low decomposition accuracy under low-frequency sampling conditions (1 Hz or below), the invention provides a non-invasive load decomposition algorithm, seq2seqBCL (seq2seq Based on CNN and LSTM), which combines a convolutional neural network (CNN) with a long short-term memory network (LSTM). The deep learning model takes the power time series as the input of the network and performs feature extraction through the CNN. In consideration of the temporal nature of the power data, an LSTM layer is added for appliance identification; compared with the seq2seq model in NILMTK, the number of network layers is reduced and the network structure is simplified.
Description
Technical Field
The invention belongs to the field of non-invasive load detection, and relates to a non-invasive load decomposition method based on seq2seq.
Background
The development of non-intrusive load monitoring (NILM) can be roughly divided into three stages: a proposal stage, a machine learning stage, and a deep learning stage. Non-intrusive load monitoring was first proposed by Professor Hart in the 1980s, and in 1992 Hart presented the first non-intrusive load monitoring system. It was not until 2008 that scholars proposed an integer programming-based method. In 2011, Kolter et al. used the FHMM model for non-intrusive load decomposition and, through tests on the REDD data set, achieved the best monitoring performance at the time. The Non-Intrusive Load Monitoring Toolkit (NILMTK), released in 2014, is an open-source tool specifically designed to compare energy disaggregation algorithms in a reproducible manner; it was the first study to compare multiple decomposition methods across multiple publicly available data sets. In 2015, deep learning models were first applied to the NILM field: instead of performing load decomposition according to the conventional four steps of data processing, event detection, feature extraction, and load identification, researchers let the model learn features by itself, omitting the event detection and feature extraction steps. Mauch et al. proposed a two-layer bidirectional long short-term memory (LSTM) recurrent architecture as well as a scheme combining an HMM with a deep neural network (DNN) for load decomposition; both improve on the conventional FHMM, but because the data used to train the models was insufficient, the generalization capability of the algorithms was not fully verified. In 2019, scholars proposed a convolutional neural network (CNN) based architecture that treats both inputs and outputs as data sequences while taking the previous state of the appliance into account to better estimate its current state. Furthermore, to better capture the correlation of the energy signal, the model gives the CNN a recursive property.
By adopting a multichannel CNN structure and adding extra variables related to power consumption (current, reactive power, and apparent power) on top of it, the overall performance, noise resistance, and convergence time of the system are improved. NILM load identification algorithms place high demands on data-set quality: researchers using ENERTAK found that when the sampling frequency of a data set falls below 1–3 Hz, NILM performance is greatly affected and decomposition accuracy is low.
Disclosure of Invention
1. The technical problem to be solved is as follows:
When applying deep learning models, existing non-intrusive load monitoring research focuses on only one point of the feature–time relationship; it does not simultaneously consider feature extraction and the time-series nature of the power data, and its decomposition accuracy on low-frequency data is low.
2. The technical scheme is as follows:
In order to solve the above problems, the present invention provides a seq2seq-based non-intrusive load decomposition method, comprising the following steps: the first step: designing a seq2seq model; the second step: feature extraction, in which Conv1D performs convolution and pooling on the power sequence at a one-dimensional scale and extracts power features by means of a plurality of convolution kernels sharing the same weights; the third step: load identification based on LSTM; the fourth step: seq2seqBCL load decomposition.
The method for designing the seq2seq model in the first step is as follows: first, the total household power is input into a one-dimensional convolutional neural network (Conv1D) for feature self-extraction; the extracted distributed power features are stored in a fully connected layer of fixed length, and the features, integrated into a sample space, are output through an activation function to the next layer, which performs load identification on the appliances.
In the second step, the convolution operation is given by the following formula:

Xi = f(Wi ⊛ Xi-1 + bi)

where Xi represents the input vector of the i-th layer; f represents the activation function, whose introduction gives the model nonlinear processing capability and enhances its expressive power; ⊛ represents the convolution operation; Wi represents the weight matrix of the i-th layer's convolution kernel; and bi represents the bias value of the weight matrix in the i-th layer's convolution kernel. The distributed features are further pooled and mapped to the fully connected layer to obtain the final feature vector; the pooling operation is:

Xi = Maxpooling(Xi-1)

where Xi represents the pooled vector, Xi-1 represents the vector before pooling, and Maxpooling denotes the max-pooling operation.
In the third step, the LSTM has two transfer states: the cell state Ct and the hidden state Ht. The calculation formula for each state is as follows:

Ct = Zf ⊙ Ct-1 + Zi ⊙ Z

Ht = Zo ⊙ tanh(Ct)

Yt = σ(W′Ht)

where Xt represents the total power vector input at time t; Yt represents the appliance-identification vector output by the load at time t; Ht and Ht-1 represent the hidden state at time t and at the previous time, respectively; Ct and Ct-1 represent the cell state at time t and at the previous time, respectively; Zf, Zi and Zo are the three gates, and Z is the new candidate vector.

The mathematical formulas for Zf, Zi, Zo and Z are:

Zf = σ(Wf ⊙ [Xt, Ht-1] + bf)

Zi = σ(Wi ⊙ [Xt, Ht-1] + bi)

Zo = σ(Wo ⊙ [Xt, Ht-1] + bo)

Z = tanh(W ⊙ [Xt, Ht-1] + b)

where Wf, Wi, Wo and W represent weight matrices; ⊙ applied to a weight matrix and the concatenation denotes matrix multiplication, while in the state equations for Ct and Ht above it denotes the element-wise product; [Xt, Ht-1] represents the matrix formed by splicing Xt and Ht-1; σ and tanh represent activation functions; and bf, bi, bo and b represent bias values.
In the fourth step, the specific steps of seq2seqBCL load decomposition are as follows. Data preparation: divide the master-meter data and the sub-meter data by appliance, and split a training set and a test set for each appliance. Model training: train the seq2seqBCL model on the prepared data, and save the trained model for load identification and prediction. Model application: for a given appliance, input the total power into the trained seq2seqBCL model to obtain the identification result.

The beneficial effects are:
Aiming at the problem that existing non-invasive load decomposition methods achieve low decomposition accuracy under low-frequency sampling conditions (1 Hz and below), the invention provides a non-invasive load decomposition algorithm, seq2seqBCL (seq2seq Based on CNN and LSTM), which combines a convolutional neural network (CNN) with a long short-term memory network (LSTM). The deep learning model takes the power time series as the input of the network and performs feature extraction through the CNN. In consideration of the temporal nature of the power data, an LSTM layer is added for appliance identification; compared with the seq2seq model in NILMTK, the number of network layers is reduced and the network structure is simplified.
Detailed Description
The present invention will be described in detail below.
The invention provides a non-invasive load decomposition method based on seq2seq, comprising the following steps: the first step: designing a seq2seq model; the second step: feature extraction, in which Conv1D performs convolution and pooling on the power sequence at a one-dimensional scale and extracts power features by means of a plurality of convolution kernels sharing the same weights; the third step: load identification based on LSTM; the fourth step: seq2seqBCL load decomposition.
The method specifically comprises the following steps. The first step: designing the seq2seq model. A sequence-to-sequence non-intrusive load decomposition algorithm based on CNN and LSTM (seq2seq Based on CNN and LSTM, seq2seqBCL) is designed. First, the total household power is input into a one-dimensional convolutional neural network (Conv1D) for feature self-extraction; the extracted distributed power features are stored in a fully connected (Dense) layer of fixed length, and the features, integrated into a sample space, are output through an activation function to the next layer, enhancing the nonlinear expressive capability of the algorithm. The next layer uses an LSTM to perform load identification on the appliances, mining the valuable information hidden in the temporal relations of the power data.
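As a shape-level illustration of that layer order (a hedged sketch, not the patented implementation), the following NumPy fragment pushes one total-power window through a Conv1D-style filter, max pooling, a fixed-length Dense layer, and a simplified recurrent pass that stands in for the LSTM layer; the kernel length, hidden width, and random weights are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(power_window):
    """Layer order of the seq2seqBCL sketch: Conv1D -> max pooling ->
    Dense (fixed-length features) -> recurrent pass -> per-step output.
    All weights are random placeholders; only the ordering follows the text."""
    x = np.asarray(power_window, dtype=float)
    k = 3                                                # assumed kernel length
    w_conv = rng.standard_normal(k) * 0.1
    conv = np.maximum([x[i:i + k] @ w_conv for i in range(len(x) - k + 1)], 0.0)
    pooled = conv[: len(conv) // 2 * 2].reshape(-1, 2).max(axis=1)
    w_dense = rng.standard_normal((8, len(pooled))) * 0.1
    feats = np.tanh(w_dense @ pooled)                    # fixed-length feature vector
    h = np.zeros(8)                                      # simplified stand-in for the LSTM state
    for f in feats:
        h = np.tanh(0.5 * h + f)
    w_out = rng.standard_normal((len(x), 8)) * 0.1
    return w_out @ h                                     # per-timestep appliance estimate

y = forward(np.linspace(100.0, 300.0, 12))
print(y.shape)
```

A real implementation would learn these weights by backpropagation; the fragment only demonstrates how one window of aggregate power flows through the four stages.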
The second step: Conv1D performs convolution and pooling on the power sequence at a one-dimensional scale, and power features are extracted by means of a plurality of convolution kernels sharing the same weights. Using Conv1D avoids traditional manual feature extraction, and the structure is highly robust.
The household power-consumption data is preprocessed to obtain the input vector, a convolution operation is performed on the input vector with the convolution kernels, and the distributed features of the power data are then obtained through an activation function. The convolution operation is shown in equation (1):

Xi = f(Wi ⊛ Xi-1 + bi)    (1)

where Xi represents the input vector of the i-th layer; f represents the activation function, whose introduction gives the model nonlinear processing capability and enhances its expressive power; ⊛ represents the convolution operation; Wi represents the weight matrix of the i-th layer's convolution kernel; and bi represents the bias value of the weight matrix in the i-th layer's convolution kernel. The distributed features are further pooled and mapped to the fully connected layer to obtain the final feature vector; the pooling operation is:

Xi = Maxpooling(Xi-1)

where Xi represents the pooled vector, Xi-1 represents the vector before pooling, and Maxpooling denotes the max-pooling operation.
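The convolution and max-pooling operations above can be sketched in a few lines of NumPy; the kernel weights, bias, and toy power readings below are illustrative assumptions, not parameters from the patent.

```python
import numpy as np

def conv1d(x, w, b):
    """Valid 1-D convolution of the power sequence x with kernel w plus bias b,
    followed by a ReLU as the activation function f."""
    k = len(w)
    y = np.array([np.dot(x[i:i + k], w) + b for i in range(len(x) - k + 1)])
    return np.maximum(y, 0.0)

def maxpool1d(x, size=2):
    """Non-overlapping max pooling: Xi = Maxpooling(Xi-1)."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

# Toy aggregate-power readings in watts; this kernel averages adjacent samples.
power = np.array([100.0, 120.0, 115.0, 300.0, 310.0, 305.0, 110.0, 105.0])
features = conv1d(power, w=np.array([0.5, 0.5]), b=-100.0)  # seven feature values
pooled = maxpool1d(features, size=2)                        # three pooled maxima
print(features, pooled)
```

A trained Conv1D layer would apply many such kernels in parallel, each sliding over the sequence with shared weights, which is what makes the step equivalent to the shared-kernel feature extraction described above.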
The third step: load identification based on the LSTM. When load decomposition is performed on a power time series, the data before and after a given time are strongly correlated. In order to mine the valuable information hidden in this correlation, the LSTM is adopted for load identification, which also mitigates the gradient vanishing and gradient explosion problems that arise when training on long power sequences.

In contrast to an ordinary RNN, the LSTM has two transfer states: the cell state Ct and the hidden state Ht. Ct changes slowly during transfer, while Ht can differ greatly between layer nodes; this is the key to overcoming gradient vanishing and explosion when training on long power sequences.
Xt represents the total power vector input at time t; Yt represents the appliance-identification vector output by the load at time t; Ht and Ht-1 represent the hidden state at time t and at the previous time, respectively; Ct and Ct-1 represent the cell state at time t and at the previous time, respectively. At time t, the calculation formula for each state is as follows:

Ct = Zf ⊙ Ct-1 + Zi ⊙ Z

Ht = Zo ⊙ tanh(Ct)

Yt = σ(W′Ht)
Zf, Zi and Zo are the three gates and Z is the new candidate vector. The mathematical formulas are as follows:

Zf = σ(Wf ⊙ [Xt, Ht-1] + bf)

Zi = σ(Wi ⊙ [Xt, Ht-1] + bi)

Zo = σ(Wo ⊙ [Xt, Ht-1] + bo)

Z = tanh(W ⊙ [Xt, Ht-1] + b)

where Wf, Wi, Wo and W represent weight matrices; ⊙ applied to a weight matrix and the concatenation denotes matrix multiplication, while in the state equations for Ct and Ht it denotes the element-wise product; [Xt, Ht-1] represents the matrix formed by splicing Xt and Ht-1; σ and tanh represent activation functions; and bf, bi, bo and b represent bias values.
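The gate and state equations translate almost line-for-line into NumPy. The sketch below uses small random weight matrices and a toy power sequence purely for illustration; note that in the state updates ⊙ is the element-wise (Hadamard) product, while a weight matrix applied to the concatenation [Xt, Ht-1] is an ordinary matrix-vector product.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step following the gate equations in the text."""
    z = np.concatenate([x_t, h_prev])      # concatenation [Xt, Ht-1]
    zf = sigmoid(W["f"] @ z + b["f"])      # forget gate Zf
    zi = sigmoid(W["i"] @ z + b["i"])      # input gate Zi
    zo = sigmoid(W["o"] @ z + b["o"])      # output gate Zo
    zc = np.tanh(W["c"] @ z + b["c"])      # candidate vector Z
    c_t = zf * c_prev + zi * zc            # Ct = Zf (.) Ct-1 + Zi (.) Z
    h_t = zo * np.tanh(c_t)                # Ht = Zo (.) tanh(Ct)
    return h_t, c_t

rng = np.random.default_rng(0)
nx, nh = 1, 4                              # 1-d power input, 4 hidden units (assumed)
W = {k: rng.standard_normal((nh, nx + nh)) * 0.1 for k in "fioc"}
b = {k: np.zeros(nh) for k in "fioc"}
h, c = np.zeros(nh), np.zeros(nh)
for p in [100.0, 300.0, 110.0]:            # toy total-power readings Xt
    h, c = lstm_step(np.array([p]), h, c, W, b)
print(h.shape)
```

The output Yt = σ(W′Ht) would be one further affine map of h; the slowly changing cell state c is what lets the network carry information across long power sequences.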
The fourth step: seq2seqBCL load decomposition, with the following specific steps. (1) Data preparation: divide the master-meter data and the sub-meter data by appliance, and split a training set and a test set for each appliance. (2) Model training: train the seq2seqBCL model on the prepared data, and save the trained model for load identification and prediction. (3) Model application: for a given appliance, input the total power into the trained seq2seqBCL model to obtain the identification result.
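A minimal sketch of the data-preparation step, assuming the mains series is sliced into fixed-length windows paired with one appliance's sub-meter windows (the window length and the 80/20 chronological split are assumptions, not values from the patent):

```python
import numpy as np

def make_windows(mains, appliance, window=4):
    """Pair each fixed-length window of aggregate (mains) power with the
    target appliance's sub-metered window, forming one seq2seq sample."""
    n = len(mains) - window + 1
    X = np.stack([mains[i:i + window] for i in range(n)])
    Y = np.stack([appliance[i:i + window] for i in range(n)])
    return X, Y

def train_test_split(X, Y, train_frac=0.8):
    """Chronological split; shuffling would leak future samples into training."""
    cut = int(len(X) * train_frac)
    return (X[:cut], Y[:cut]), (X[cut:], Y[cut:])

mains = np.arange(10, dtype=float) * 50.0               # toy master-meter readings
fridge = np.where(np.arange(10) % 2 == 0, 80.0, 0.0)    # toy sub-meter readings
X, Y = make_windows(mains, fridge, window=4)
(train_X, train_Y), (test_X, test_Y) = train_test_split(X, Y)
print(X.shape, train_X.shape, test_X.shape)
```

After training on such pairs, the application step simply feeds new mains windows through the saved model to recover the appliance's share of the total power.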
Claims (6)
1. A seq2seq-based non-intrusive load decomposition method, comprising the following steps: the first step: designing a seq2seq model; the second step: feature extraction, in which Conv1D performs convolution and pooling on the power sequence at a one-dimensional scale and extracts power features by means of a plurality of convolution kernels sharing the same weights; the third step: load identification based on LSTM; the fourth step: seq2seqBCL load decomposition.
2. The method of claim 1, wherein the method for designing the seq2seq model in the first step comprises: first inputting the total household power into a one-dimensional convolutional neural network (Conv1D) for feature self-extraction, storing the extracted distributed power features in a fully connected layer of fixed length, and outputting the features, integrated into a sample space, through an activation function to the next layer, which performs load identification on the appliances.
3. The method of claim 1, wherein in the second step the convolution operation is represented by the following formula:

Xi = f(Wi ⊛ Xi-1 + bi)

where Xi represents the input vector of the i-th layer; f represents the activation function, whose introduction gives the model nonlinear processing capability and enhances its expressive power; ⊛ represents the convolution operation; Wi represents the weight matrix of the i-th layer's convolution kernel; and bi represents the bias value of the weight matrix in the i-th layer's convolution kernel; the distributed features are further pooled and mapped to the fully connected layer to obtain the final feature vector, the pooling operation being:

Xi = Maxpooling(Xi-1)

where Xi represents the pooled vector, Xi-1 represents the vector before pooling, and Maxpooling denotes the max-pooling operation.
4. The method of claim 1, wherein in the third step the LSTM has two transfer states, the cell state Ct and the hidden state Ht, and the calculation formula for each state is as follows:

Ct = Zf ⊙ Ct-1 + Zi ⊙ Z

Ht = Zo ⊙ tanh(Ct)

Yt = σ(W′Ht)

where Xt represents the total power vector input at time t; Yt represents the appliance-identification vector output by the load at time t; Ht and Ht-1 represent the hidden state at time t and at the previous time, respectively; Ct and Ct-1 represent the cell state at time t and at the previous time, respectively; Zf, Zi and Zo are the three gates; and Z is the new candidate vector.
5. The method of claim 4, wherein the mathematical formulas for Zf, Zi, Zo and Z are:

Zf = σ(Wf ⊙ [Xt, Ht-1] + bf)

Zi = σ(Wi ⊙ [Xt, Ht-1] + bi)

Zo = σ(Wo ⊙ [Xt, Ht-1] + bo)

Z = tanh(W ⊙ [Xt, Ht-1] + b)

where Wf, Wi, Wo and W represent weight matrices; ⊙ represents the multiplication of two matrices; [Xt, Ht-1] represents the matrix formed by splicing Xt and Ht-1; σ and tanh represent activation functions; and bf, bi, bo and b represent bias values.
6. The method of any one of claims 1 to 5, wherein in the fourth step the specific steps of seq2seqBCL load decomposition are: data preparation: dividing the master-meter data and the sub-meter data by appliance, and splitting a training set and a test set for each appliance; model training: training the seq2seqBCL model on the prepared data, and saving the trained model for load identification and prediction; model application: for a given appliance, inputting the total power into the trained seq2seqBCL model to obtain the identification result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111215244.8A CN113902102A (en) | 2021-10-19 | 2021-10-19 | Non-invasive load decomposition method based on seq2seq |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111215244.8A CN113902102A (en) | 2021-10-19 | 2021-10-19 | Non-invasive load decomposition method based on seq2seq |
Publications (1)
Publication Number | Publication Date |
---|---|
CN113902102A true CN113902102A (en) | 2022-01-07 |
Family
ID=79192746
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111215244.8A Withdrawn CN113902102A (en) | 2021-10-19 | 2021-10-19 | Non-invasive load decomposition method based on seq2seq |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113902102A (en) |
-
2021
- 2021-10-19 CN CN202111215244.8A patent/CN113902102A/en not_active Withdrawn
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114745299A (en) * | 2022-03-16 | 2022-07-12 | 南京工程学院 | Non-invasive load monitoring method based on sequence delay reconstruction CSP convolutional neural network |
CN114745299B (en) * | 2022-03-16 | 2023-06-13 | 南京工程学院 | Non-invasive load monitoring method based on sequence delay reconstruction CSP convolutional neural network |
CN115130830A (en) * | 2022-06-08 | 2022-09-30 | 山东科技大学 | Non-intrusive load decomposition method based on cascade width learning and sparrow algorithm |
CN115130830B (en) * | 2022-06-08 | 2024-05-14 | 山东科技大学 | Non-invasive load decomposition method based on cascade width learning and sparrow algorithm |
CN115330553A (en) * | 2022-06-22 | 2022-11-11 | 四川大学 | Equipment feature based multi-layer optimal selection non-intrusive load decomposition method |
CN115330553B (en) * | 2022-06-22 | 2023-04-11 | 四川大学 | Non-invasive load decomposition method based on equipment characteristic multi-layer optimization |
CN116644320A (en) * | 2023-07-27 | 2023-08-25 | 浙江大有实业有限公司配电工程分公司 | Building migration non-invasive load monitoring method based on seq2seq |
CN116644320B (en) * | 2023-07-27 | 2023-11-07 | 浙江大有实业有限公司配电工程分公司 | Building migration non-invasive load monitoring method based on seq2seq |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113902102A (en) | Non-invasive load decomposition method based on seq2seq | |
CN109685314B (en) | Non-intrusive load decomposition method and system based on long-term and short-term memory network | |
CN111368904B (en) | Electrical equipment identification method based on electric power fingerprint | |
CN110119854A (en) | Voltage-stablizer water level prediction method based on cost-sensitive LSTM Recognition with Recurrent Neural Network | |
CN112115648B (en) | Transformer top layer oil temperature prediction method based on improved deep learning method | |
CN108647809A (en) | A kind of exhaust enthalpy of turbine real-time computing technique based on least square method supporting vector machine | |
CN113177357B (en) | Transient stability assessment method for power system | |
CN114006370B (en) | Power system transient stability analysis and evaluation method and system | |
CN113988215B (en) | Power distribution network metering cabinet state detection method and system | |
CN115758246A (en) | Non-invasive load identification method based on EMD and AlexNet | |
CN111123894A (en) | Chemical process fault diagnosis method based on combination of LSTM and MLP | |
CN115358437A (en) | Power supply load prediction method based on convolutional neural network | |
CN110674791B (en) | Forced oscillation layered positioning method based on multi-stage transfer learning | |
CN117151770A (en) | Attention mechanism-based LSTM carbon price prediction method and system | |
CN116843012A (en) | Time sequence prediction method integrating personalized context and time domain dynamic characteristics | |
CN109459609B (en) | Distributed power supply frequency detection method based on artificial neural network | |
CN112183848B (en) | Power load probability prediction method based on DWT-SVQR integration | |
Du et al. | Gearbox fault diagnosis method based on improved MobileNetV3 and transfer learning | |
CN115238951A (en) | Power load prediction method and device | |
CN112016684A (en) | Electric power terminal fingerprint identification method of deep parallel flexible transmission network | |
Xueping et al. | Fraud Prediction of Credit Card Customers Based on Xgboost Model and Multi-Layer Perception Model | |
CN109934156A (en) | A kind of user experience evaluation method and system based on ELMAN neural network | |
Ma et al. | Research on Line Loss Prediction Method Based on Improved DBN Model | |
CN112133941B (en) | Rapid fault diagnosis method for locomotive proton exchange membrane fuel cell system | |
Xie et al. | Data-driven based method for power system time-varying composite load modeling |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
Application publication date: 20220107 |