CN113657149A - Electric energy quality analysis and identification method based on deep learning - Google Patents

Electric energy quality analysis and identification method based on deep learning

Info

Publication number
CN113657149A
Authority
CN
China
Prior art keywords
output
node
training
layer
lstm model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110763356.0A
Other languages
Chinese (zh)
Inventor
王倩
梁雪
朱龙辉
李宁
李贺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology filed Critical Xian University of Technology
Priority to CN202110763356.0A priority Critical patent/CN113657149A/en
Publication of CN113657149A publication Critical patent/CN113657149A/en
Pending legal-status Critical Current

Classifications

    • G06F 2218/12 Classification; Matching (pattern recognition adapted for signal processing)
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/24 Classification techniques
    • G06N 3/044 Recurrent networks, e.g. Hopfield networks
    • G06N 3/049 Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • G06N 3/084 Backpropagation, e.g. using gradient descent
    • G06Q 10/06395 Quality analysis or management
    • G06Q 50/06 Energy or water supply
    • Y02P 90/82 Energy audits or management systems therefor

Abstract

The invention discloses a power quality analysis and identification method based on deep learning, which comprises: collecting power signals to be detected, randomly dividing them into a training sample set and a test sample set, inputting the training sample set into a long short-term memory network (LSTM) model for training to obtain a trained LSTM model, and inputting the test sample set into the trained LSTM model to test the power quality disturbance classification. The method uses the long short-term memory network as the model for power signal classification and trains it with a Softmax function and the back-propagation algorithm, so that it converges quickly, avoids feature extraction by human intervention, and classifies power quality signals directly, which reduces errors, improves identification accuracy, and makes the method highly practical.

Description

Electric energy quality analysis and identification method based on deep learning
Technical Field
The invention belongs to the field of power quality analysis and identification in electric power systems, and relates to a power quality analysis and identification method based on deep learning.
Background
With the continuous development of society and the economy, modern power systems are becoming ever more complex and diverse, and power quality problems are becoming increasingly prominent under the influence of impact, fluctuating and nonlinear loads. For example, some nonlinear devices inject various interference signals into the power system during use; these interference signals easily cause serious consequences such as equipment overheating, motor stalling, protection malfunction and metering inaccuracy, leading to serious economic losses and social impact and many adverse effects on the normal operation of the power grid.
Traditional power quality analysis and identification generally solves the classification problem in two steps: extracting features from the power signal, and classifying according to the extracted features.
Current methods for feature extraction include digital signal processing methods such as the Fourier transform, the short-time Fourier transform, the wavelet transform and the S transform. New techniques such as the Hilbert-Huang transform and ensemble empirical mode decomposition have also emerged in recent years. The Fourier transform is the most basic and most common method for extracting features of the power signal, but because it is a global transform it cannot determine the specific position of a power disturbance. The short-time Fourier transform selects a window function in the time domain and analyzes the signal by moving the window, but the time window has to be chosen manually and its size strongly affects the quality of feature extraction. When the wavelet transform is used for feature extraction, the most appropriate wavelet basis has to be chosen by hand; if an inappropriate wavelet basis is selected, the efficiency of feature extraction drops greatly. In other words, a certain amount of human intervention is needed in the extraction process, which inevitably makes feature extraction incomplete, affects the subsequent classification process and has a large influence on the classification result. The Hilbert-Huang transform is not suitable for wideband signals, while ensemble empirical mode decomposition increases the running time of the algorithm in order to reduce the errors caused by white noise.
In the classification stage, current methods include expert systems, artificial neural networks, decision trees, convolutional neural networks and the like. Expert systems rely on human experience, which is objectively limited; their efficiency is limited and feature extraction is difficult when they are used for classification. Artificial neural networks may become trapped in local extrema when trained on small samples, which leads to unsuccessful training and longer training times. Decision trees easily overfit during training. Convolutional neural networks are mainly applied in the field of image recognition, so in power quality disturbance recognition previous research has converted the power signal into a two-dimensional image in some way and fed it into the network for learning.
The above shows that the existing power quality analysis and identification methods all require the power signal to be pre-processed and a certain amount of human intervention to extract features, which inevitably makes feature extraction incomplete and affects the subsequent classification process.
Disclosure of Invention
The invention aims to provide a power quality analysis and identification method based on deep learning, which solves the problems that existing power quality analysis and identification methods need manual intervention to extract features, have large errors, and cannot directly process time-series power signals.
The technical scheme adopted by the invention is a power quality analysis and identification method based on deep learning, which comprises: collecting power signals to be detected, randomly dividing them into a training sample set and a test sample set, inputting the training sample set into a long short-term memory network (LSTM) model for training to obtain a trained LSTM model, and inputting the test sample set into the trained LSTM model to test the power quality disturbance classification.
The method specifically comprises the following steps:
Step 1, collecting power signals to be detected over a number of time periods, randomly dividing the collected signals into a training sample set and a test sample set, concatenating the signals in the training sample set in time order into one long signal, namely the training set signal, and concatenating the signals in the test sample set in time order into one long signal, namely the test set signal;
Step 2, constructing a long short-term memory network (LSTM) model, the LSTM model comprising an input layer, a hidden layer and an output layer;
Step 3, training the LSTM model: selecting sampling points from the training set signal and inputting them into the LSTM model for training, correcting the weights of the matrices in the LSTM model, updating the parameters in the LSTM model using the difference between the sample output value and the target value, and obtaining the trained LSTM model when the number of training iterations reaches a preset value;
Step 4, inputting the test set signal into the trained LSTM model for testing, the LSTM model outputting the power quality disturbance classification result.
The step 3 comprises the following steps:
Step 3.1, initializing the hidden layer and the state layer in the constructed LSTM model to random numbers;
Step 3.2, selecting sampling points (X, Y_label) from the training set signal, inputting them into the input layer of the LSTM model, passing them through the hidden layer for calculation, and outputting to the output layer to obtain Y_out;
Step 3.3, processing the output-layer results with a Softmax function to handle the multi-neuron output: after the outputs of the neurons are obtained, the Softmax classifier maps them to the interval (0, 1) and assigns a probability value to each output class, and when the output node is finally selected, the node with the largest probability is taken as the prediction target;
Step 3.4, comparing Y_label and Y_out to obtain the error term, inputting the error term into the back-propagation algorithm for back-propagation training to optimize the LSTM model, producing an output at each time step, calculating the error E_d between the sample output value and the target value, and updating the weights of the neural network with this error to obtain the trained LSTM model.
In step 3.4, in the back-propagation algorithm, the model is divided into an input layer, a hidden layer and an output layer, and the error term of output-layer node j is:
δ_j = y_j(1 - y_j)(t_j - y_j)
where δ_j is the error term of output-layer node j, y_j is the output value of output-layer node j, and t_j is the target value of output-layer node j;
the error term of hidden-layer node i is:
δ_i = a_i(1 - a_i) Σ_{j∈output} w_{ji} δ_j
where a_i is the output value of hidden-layer node i, w_{ji} is the weight from hidden-layer node i to output-layer node j, δ_j is the error term of output-layer node j, and δ_i is the error term of hidden-layer node i;
after the node error terms are obtained, the weights are updated:
w_{ji} ← w_{ji} + η δ_j x_{ji}
where w_{ji} is the weight from hidden-layer node i to output-layer node j, η is the learning rate, δ_j is the error term of output-layer node j, x_{ji} is the output of hidden-layer node i and also the input of output-layer node j, and the bias term of the input is always 1.
In step 3.4, when the neural network is trained, the error E_d between the sample output value and the target value is used, through the back-propagation algorithm combined with the gradient descent method, to update the parameters of the neural network:
E_d = (1/2) Σ_{i∈output} (t_i - y_i)²
where E_d is the error of sample d, y_i is the output value of node i, and t_i is the target value of node i;
after the sample error is obtained, the weights are updated by gradient descent:
w_{ji} ← w_{ji} - η ∂E_d/∂w_{ji}
where w_{ji} is the weight from node i to node j in the next layer, and η is the learning rate;
using the chain rule, the weight correction of the neural network is obtained as:
∂E_d/∂w_{ji} = (∂E_d/∂net_j)(∂net_j/∂w_{ji}) = -δ_j x_{ji}
so that the update is equivalent to w_{ji} ← w_{ji} + η δ_j x_{ji}, consistent with the rule above. Here net_j is the weighted input of node j; after the error terms of all the nodes have been calculated, the weights of the whole network are updated.
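As a worked illustration of these update rules (not code belonging to the patent), the following sketch applies the error terms and the weight update to a single sigmoid output layer; the layer sizes, inputs and learning rate are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(0)
eta = 0.1                        # learning rate (assumed value)
a = rng.random(4)                # outputs of 4 hidden-layer nodes, i.e. x_ji
W = rng.random((3, 4))           # w_ji: weight from hidden node i to output node j
t = np.array([1.0, 0.0, 0.0])    # target values t_j

net = W @ a                      # net_j: weighted input of output node j
y = 1.0 / (1.0 + np.exp(-net))   # y_j: sigmoid outputs

delta_out = y * (1.0 - y) * (t - y)               # delta_j = y_j(1 - y_j)(t_j - y_j)
delta_hidden = a * (1.0 - a) * (W.T @ delta_out)  # delta_i, used to update earlier weights

W += eta * np.outer(delta_out, a)                 # w_ji <- w_ji + eta * delta_j * x_ji
E_d = 0.5 * np.sum((t - y) ** 2)                  # sample error used to monitor training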
In step 1, an instrument transformer or a Hall sensor is used to collect the power signal to be detected, and the label of the power signal is output together with the signal.
The method has the advantages that the long short-term memory network is used as the model for power signal classification and is trained with a Softmax function and the back-propagation algorithm, so that it converges quickly, avoids feature extraction by human intervention, classifies power quality signals directly, reduces errors, improves identification accuracy, has a wide application range, and is highly practical.
Drawings
FIG. 1 is a schematic flow chart of a deep learning-based power quality analysis and identification method according to the present invention;
FIG. 2 is a flow chart of electrical energy signal acquisition in an embodiment of the present invention;
FIG. 3 is a schematic diagram of the long short-term memory network unrolled in time according to the present invention;
FIG. 4 is a diagram of a long short-term memory network cell according to the present invention;
FIG. 5 is a flow chart of the training of the long short-term memory network according to the present invention;
FIG. 6 is a schematic diagram of the cost function value and the signal types after the first training in the embodiment of the present invention;
FIG. 7 is a diagram illustrating the test result for the 10th input power signal in the embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
The power quality analysis and identification method based on deep learning of the invention comprises the following steps:
Step 1, collecting power signals
To verify the accuracy of the deep learning-based power quality analysis and identification method of the invention, the power signals collected in this embodiment are simulated power signals. Six different power quality disturbance models are established according to the power quality definitions, disturbance classifications and standards of IEEE Std 1159-2019 and related documents, and Python is used to simulate and output the power signals; as shown in Fig. 2, disturbance signals such as voltage sag, voltage interruption, voltage impulse, voltage fluctuation and harmonics are randomly generated. Each generated signal is 0.22 seconds long, which is sufficient to describe power disturbance signals of longer duration, and the label of each power signal is output together with the signal.
The simulated power signals are as follows:
Normal voltage signal: v_t = sin(ωx)
where x is the sampling time point, ω is the fundamental angular frequency, ω = 2πf, and f is the AC voltage frequency, which is 50 Hz in China.
Voltage swell signal: v_t = {1 + a[u(x - t_1) - u(x - t_2)]} sin(ωx)
where the constraints on the amplitude a and on t_2 - t_1 are given as an image in the original document; t_1 is the start time point, t_2 is the end time point, the time points are in milliseconds, and u is the unit step function;
Voltage sag signal: v_t = {1 - a[u(x - t_1) - u(x - t_2)]} sin(ωx)
where the constraints on the amplitude a and on t_2 - t_1 are given as an image in the original document; t_1 is the start time point, t_2 is the end time point, the time points are in milliseconds, and u is the unit step function;
Voltage interruption signal: v_t = {1 - a[u(x - t_1) - u(x - t_2)]} sin(ωx)
where the constraints on the amplitude a and on t_2 - t_1 are given as an image in the original document; t_1 is the start time point, t_2 is the end time point, the time points are in milliseconds, and u is the unit step function;
Transient impulse signal: v_t = sin(ωx) + a[u(x - t_1) - u(x - t_2)]
where the constraints on the amplitude a and on t_2 - t_1 are given as an image in the original document; t_1 is the start time point, t_2 is the end time point, the time points are in milliseconds, and u is the unit step function;
Transient oscillation signal: the formula for v_t and the constraints on its parameters are given as images in the original document; t_1 is the start time point, t_2 is the end time point, the time points are in milliseconds, and u is the unit step function;
Harmonic signal: the formula for v_t and the constraints on its parameters are given as images in the original document; t_1 is the start time point, t_2 is the end time point, the time points are in milliseconds, and u is the unit step function;
The collected power signals are then randomly divided into a training sample set and a testing sample set.
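As an illustration of how such labelled signals can be simulated in Python, the sketch below generates a normal signal and a voltage sag. The sampling rate, the amplitude a and the disturbance window t_1 to t_2 (written here in seconds) are values assumed for the example, since the exact parameter ranges are given only as images in the original document.

import numpy as np

F = 50.0          # fundamental frequency in China, Hz
FS = 3200.0       # sampling rate, assumed value
DURATION = 0.22   # signal length in seconds, as in the embodiment

x = np.arange(0.0, DURATION, 1.0 / FS)   # sampling time points
omega = 2.0 * np.pi * F

def u(v):
    # Unit step function.
    return (v >= 0).astype(float)

def normal_signal():
    return np.sin(omega * x)

def voltage_sag(a=0.5, t1=0.06, t2=0.14):
    # v_t = {1 - a[u(x - t1) - u(x - t2)]} sin(omega x); a, t1, t2 are assumed values.
    return (1.0 - a * (u(x - t1) - u(x - t2))) * np.sin(omega * x)

# Assumed label convention for this example: 0 = normal, 1 = voltage sag.
signals = [normal_signal(), voltage_sag()]
labels = [0, 1]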
and 2, constructing a long-short term memory network (LSTM) model, wherein the whole network structure comprises an input layer, a hidden layer and an output layer, the input layer is responsible for directly receiving the sampled data, each layer of neurons in the hidden layer is completely connected with the next layer of neurons, the neurons belonging to the layer are not connected with each other, the input of each layer is determined by the output of the previous layer and a certain weight, the output is obtained, and the size of the output layer is determined by the task to be completed.
In the method, an input electric energy signal needs to be classified into one of seven categories, namely six disturbance signals and normal signals;
as shown in fig. 3, there are a total of three inputs in the long-short term memory network: cell state, hidden layer and input data. As shown in fig. 4, LSTM uses a forget gate, an input gate to control the content of cell state c. Forgetting the door to determine the state c of the cell at the previous momentt-1At the current moment ctHow many to keep, the input gate determines how many network inputs x are currently availabletSave to current cell state ctAnd the output gate controls the output value. The calculation of the forgetting gate, the input gate and the output gate are respectively shown as the following formula in sequence:
f_t = σ(W_f · [h_{t-1}, x_t] + b_f)
i_t = σ(W_i · [h_{t-1}, x_t] + b_i)
o_t = σ(W_o · [h_{t-1}, x_t] + b_o)
where σ is the sigmoid function; f_t is the result of the forget gate for the current state, i_t is the result of the input gate for the current state, and o_t is the result of the output gate for the current state; W_f is the weight matrix of the forget gate, W_i is the weight matrix of the input gate, and W_o is the weight matrix of the output gate; b_f is the bias term of the forget gate, b_i is the bias term of the input gate, and b_o is the bias term of the output gate; x_t is the current input and h_t is the current output value.
In the whole long short-term memory network, the cell state and the output of the hidden layer are given by the following formulas:
c'_t = tanh(W_c · [h_{t-1}, x_t] + b_c)
c_t = f_t ∘ c_{t-1} + i_t ∘ c'_t
h_t = o_t ∘ tanh(c_t)
where c'_t is the candidate cell state input at time t, b_c is the bias term of the input cell state, W_c is the weight matrix of the input cell state, the operator ∘ denotes element-wise multiplication, h_t is the current output value, and c_t is the current cell state.
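The gate and state formulas above can be written directly as one forward step. The following sketch is illustrative only; the hidden size of 32 follows the embodiment, while the weight initialization and the input value are arbitrary assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    # W and b hold the weight matrices and bias terms of the f/i/o gates and the cell input.
    z = np.concatenate([h_prev, x_t])        # [h_{t-1}, x_t]
    f_t = sigmoid(W["f"] @ z + b["f"])       # forget gate
    i_t = sigmoid(W["i"] @ z + b["i"])       # input gate
    o_t = sigmoid(W["o"] @ z + b["o"])       # output gate
    c_tilde = np.tanh(W["c"] @ z + b["c"])   # candidate cell state c'_t
    c_t = f_t * c_prev + i_t * c_tilde       # element-wise products
    h_t = o_t * np.tanh(c_t)                 # current output value
    return h_t, c_t

hidden, inputs = 32, 1
rng = np.random.default_rng(0)
W = {k: 0.1 * rng.standard_normal((hidden, hidden + inputs)) for k in "fioc"}
b = {k: np.zeros(hidden) for k in "fioc"}
h, c = np.zeros(hidden), np.zeros(hidden)
h, c = lstm_step(np.array([0.5]), h, c, W, b)   # one time step of the power signal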
Step 3, training the constructed long-short term memory network LSTM model, referring to FIG. 5, specifically comprising the following steps:
Step 3.1, initializing the hidden layer and the state layer of the long short-term memory network to random numbers, so as to ensure the learning effect of the network and avoid premature saturation;
Step 3.2, selecting sampling points (X, Y_label) from the training sample set collected in step 1, inputting them into the input layer of the long short-term memory network, passing them through two hidden layers of 32 neurons each for calculation, and outputting to the output layer to obtain Y_out.
Step 3.3, using the Softmax function as the algorithm for processing the output-layer results and handling the multi-neuron output. After the outputs of the neurons are obtained, the Softmax classifier maps them to the interval (0, 1), and when the output node is finally selected, the node with the largest probability (that is, the node with the largest value) is taken as the prediction target. The final classification result is expressed as probabilities, which represent the likelihood that the input signal belongs to each category and also the confidence of the corresponding category. To determine which type a disturbance signal belongs to, the model selects the category with the highest probability; the recognition rate of Softmax is highest when the conditional probability of the class to which each sample belongs is largest, which makes the trained model more accurate;
the expression of the Softmax function is:
Figure BDA0003149836810000101
where P (y ═ j | z) ∈ (0,1), ΣjP (y ═ j | z) ═ 0,1, and j ═ 7, which correspond to the six power quality disturbance phenomena and normal power quality signals defined by the present algorithm.
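As a small numerical illustration of this mapping (the output values z below are made up for the example), the Softmax step and the choice of the maximum-probability node can be written as:

import numpy as np

def softmax(z):
    # Map the 7 output-layer values to probabilities in (0, 1) that sum to 1.
    e = np.exp(z - np.max(z))   # subtracting the maximum improves numerical stability
    return e / np.sum(e)

z = np.array([0.2, 2.9, 0.1, -0.5, 0.3, 0.0, 0.4])   # example outputs for the 7 classes
p = softmax(z)
predicted_class = int(np.argmax(p))   # the node with the largest probability is the prediction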
Step 3.4, comparing Y_label and Y_out to obtain the errors, performing back-propagation training and optimizing the model; as shown in Fig. 6, an output is produced at each time step, the errors of the output values are calculated, and the weights of the matrices in the network are corrected during the back-propagation training;
in the back propagation algorithm, the model is divided into an input layer, a hidden layer and an output layer. The error term for output layer node j is:
δj=yj(1-yj)(tj-yj)
wherein, deltajError term, y, for output layer node jjIs the output value of the output layer node j, tjIs the target value of the output layer node j.
The error term of the hidden layer node i is:
δi=ai(1-ai)∑k∈outputwjiδj
wherein, aiFor hiding the output value of layer node i, wjiFor the weight of the hidden layer node i to the output layer node j, δjError term, δ, for output layer node jiTo hide the error term of layer node i.
And calculating error terms of all hidden layer nodes by using the error terms of the output layer nodes.
After the node error term is obtained, updating the weight:
wji←wji+ηδjxji
wherein node i is hiddenLayer node, node j being the output layer node, wjiIs the weight from node i to node j, η is the learning rate, δjIs the error term, x, of node jjiIs the output of node i and is the input of node j, and the bias term of the input is always 1.
When the neural network is trained, the error E_d between the sample output value and the target value is used, through the back-propagation algorithm combined with the gradient descent method, to update the parameters of the neural network:
E_d = (1/2) Σ_{i∈output} (t_i - y_i)²
where E_d is the error of sample d, y_i is the output value of node i, and t_i is the target value of node i; the factor 1/2 is a constant set for convenience in the subsequent calculations.
After the sample error is calculated, the weights are updated by the gradient descent method:
w_{ji} ← w_{ji} - η ∂E_d/∂w_{ji}
Using the chain rule, the weight correction of the neural network is obtained as:
∂E_d/∂w_{ji} = (∂E_d/∂net_j)(∂net_j/∂w_{ji}) = -δ_j x_{ji}
where net_j is the weighted input of node j. After the error terms of all the nodes have been calculated, the weights of the whole network are updated.
Step 3.5, changing the input from normal power signals to power disturbance signals, with disturbance signals accounting for a larger proportion than normal signals, and repeating steps 3.1, 3.2, 3.3 and 3.4, so that the model learns the duration of the power disturbance signals more quickly and more fully.
Step 4, inputting the test sample set into the trained LSTM model for testing; the result is shown in Fig. 6. The accuracy is recorded as the number of correct model predictions divided by the total number of samples. In addition, a power signal containing a large number of disturbances is input to further test the model; the test result, shown in Fig. 7, exhibits high accuracy and a low loss function, indicating that the trained model classifies power signals well.
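A minimal sketch of this accuracy calculation (the probabilities and labels below are placeholders, not the results shown in Figs. 6 and 7):

import numpy as np

# probs: Softmax outputs of the trained model on the test set, shape (n_samples, 7)
# y_true: true class labels of the test samples
probs = np.array([[0.90, 0.02, 0.02, 0.02, 0.02, 0.01, 0.01],
                  [0.10, 0.70, 0.05, 0.05, 0.05, 0.03, 0.02]])
y_true = np.array([0, 1])

y_pred = np.argmax(probs, axis=1)        # predicted class of each sample
accuracy = np.mean(y_pred == y_true)     # correct predictions divided by total samples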

Claims (6)

1. A power quality analysis and identification method based on deep learning is characterized by comprising the steps of collecting power signals to be detected, randomly dividing the power signals to be detected into a training sample set and a testing sample set, inputting the training sample set into a long-short term memory network (LSTM) model for training to obtain a trained LSTM model, and inputting the testing sample set into the trained LSTM model for testing power quality disturbance classification conditions.
2. The deep learning-based electric energy quality analysis and identification method according to claim 1, characterized by comprising the following steps:
Step 1, collecting power signals to be detected over a number of time periods, randomly dividing the collected signals into a training sample set and a test sample set, concatenating the signals in the training sample set in time order into one long signal, namely the training set signal, and concatenating the signals in the test sample set in time order into one long signal, namely the test set signal;
Step 2, constructing a long short-term memory network (LSTM) model, the LSTM model comprising an input layer, a hidden layer and an output layer;
Step 3, training the LSTM model: selecting sampling points from the training set signal and inputting them into the LSTM model for training, correcting the weights of the matrices in the LSTM model, updating the parameters in the LSTM model using the difference between the sample output value and the target value, and obtaining the trained LSTM model when the number of training iterations reaches a preset value;
Step 4, inputting the test set signal into the trained LSTM model for testing, the LSTM model outputting the power quality disturbance classification result.
3. The deep learning-based power quality analysis and identification method according to claim 2, wherein the step 3 comprises the following steps:
Step 3.1, initializing the hidden layer and the state layer in the constructed LSTM model to random numbers;
Step 3.2, selecting sampling points (X, Y_label) from the training set signal, inputting them into the input layer of the LSTM model, passing them through the hidden layer for calculation, and outputting to the output layer to obtain Y_out;
Step 3.3, processing the output-layer results with a Softmax function to handle the multi-neuron output: after the outputs of the neurons are obtained, the Softmax classifier maps them to the interval (0, 1) and assigns a probability value to each output class, and when the output node is finally selected, the node with the largest probability is taken as the prediction target;
Step 3.4, comparing Y_label and Y_out to obtain the error term, inputting the error term into the back-propagation algorithm for back-propagation training to optimize the LSTM model, producing an output at each time step, calculating the error E_d between the sample output value and the target value, and updating the weights of the neural network with this error to obtain the trained LSTM model.
4. The deep learning-based power quality analysis and identification method according to claim 3, wherein in step 3.4, in the back-propagation algorithm, the model is divided into an input layer, a hidden layer and an output layer, and the error term of output-layer node j is:
δ_j = y_j(1 - y_j)(t_j - y_j)
where δ_j is the error term of output-layer node j, y_j is the output value of output-layer node j, and t_j is the target value of output-layer node j;
the error term of hidden-layer node i is:
δ_i = a_i(1 - a_i) Σ_{j∈output} w_{ji} δ_j
where a_i is the output value of hidden-layer node i, w_{ji} is the weight from hidden-layer node i to output-layer node j, δ_j is the error term of output-layer node j, and δ_i is the error term of hidden-layer node i;
after the node error terms are obtained, the weights are updated:
w_{ji} ← w_{ji} + η δ_j x_{ji}
where w_{ji} is the weight from hidden-layer node i to output-layer node j, η is the learning rate, δ_j is the error term of output-layer node j, x_{ji} is the output of hidden-layer node i and also the input of output-layer node j, and the bias term of the input is always 1.
5. The deep learning-based power quality analysis and identification method according to claim 4, wherein in step 3.4, when the neural network is trained, the error E_d between the sample output value and the target value is used, through the back-propagation algorithm combined with the gradient descent method, to update the parameters of the neural network:
E_d = (1/2) Σ_{i∈output} (t_i - y_i)²
where E_d is the error of sample d, y_i is the output value of node i, and t_i is the target value of node i;
after the sample error is obtained, the weights are updated by the gradient descent method:
w_{ji} ← w_{ji} - η ∂E_d/∂w_{ji}
where w_{ji} is the weight from node i to node j in the next layer, and η is the learning rate;
using the chain rule, the weight correction of the neural network is obtained as:
∂E_d/∂w_{ji} = (∂E_d/∂net_j)(∂net_j/∂w_{ji}) = -δ_j x_{ji}
where net_j is the weighted input of node j; after the error terms of all the nodes have been calculated, the weights of the whole network are updated.
6. The deep learning-based power quality analysis and identification method according to claim 2, wherein in step 1, an instrument transformer or a Hall sensor is used to collect the power signal to be detected, and the label of the power signal is output together with the signal.
CN202110763356.0A 2021-07-06 2021-07-06 Electric energy quality analysis and identification method based on deep learning Pending CN113657149A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110763356.0A CN113657149A (en) 2021-07-06 2021-07-06 Electric energy quality analysis and identification method based on deep learning

Publications (1)

Publication Number Publication Date
CN113657149A true CN113657149A (en) 2021-11-16

Family

ID=78477157

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110763356.0A Pending CN113657149A (en) 2021-07-06 2021-07-06 Electric energy quality analysis and identification method based on deep learning

Country Status (1)

Country Link
CN (1) CN113657149A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108830487A (en) * 2018-06-21 2018-11-16 王芊霖 Methods of electric load forecasting based on long neural network in short-term
CN109614885A (en) * 2018-11-21 2019-04-12 齐鲁工业大学 A kind of EEG signals Fast Classification recognition methods based on LSTM
CN110222953A (en) * 2018-12-29 2019-09-10 北京理工大学 A kind of power quality hybrid perturbation analysis method based on deep learning
EP3832553A1 (en) * 2019-10-12 2021-06-09 United Microelectronics Center Co., Ltd. Method for identifying energy of micro-energy device on basis of bp neural network
CN111079906A (en) * 2019-12-30 2020-04-28 燕山大学 Cement product specific surface area prediction method and system based on long-time and short-time memory network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TANG SAI; HE XINGXI; ZHANG JIAYUE; YIN AIJUN: "Bearing Fault Identification Based on Long Short-Term Memory Network", Chinese Journal of Automotive Engineering, no. 04, 31 August 2018 (2018-08-31) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116187248A (en) * 2023-03-13 2023-05-30 华能新能源股份有限公司河北分公司 Relay protection fixed value analysis and verification method and system based on big data
CN116187248B (en) * 2023-03-13 2023-08-25 华能新能源股份有限公司河北分公司 Relay protection fixed value analysis and verification method and system based on big data

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination