CN109596942A - Voltage sag cause identification method based on a deep belief network - Google Patents

Voltage sag cause identification method based on a deep belief network

Info

Publication number
CN109596942A
CN109596942A (application number CN201811488320.0A)
Authority
CN
China
Prior art keywords
training
network
voltage
voltage sag
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811488320.0A
Other languages
Chinese (zh)
Inventor
王红
郑智聪
齐林海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
North China Electric Power University
Original Assignee
North China Electric Power University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by North China Electric Power University filed Critical North China Electric Power University
Priority to CN201811488320.0A priority Critical patent/CN109596942A/en
Publication of CN109596942A publication Critical patent/CN109596942A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01R MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00 Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R31/08 Locating faults in cables, transmission lines, or networks
    • G01R31/081 Locating faults in cables, transmission lines, or networks according to type of conductors
    • G01R31/086 Locating faults in cables, transmission lines, or networks according to type of conductors in power transmission or distribution networks, i.e. with interconnected conductors
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01R MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00 Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere
    • G01R31/08 Locating faults in cables, transmission lines, or networks
    • G01R31/088 Aspects of digital computing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Supply And Distribution Of Alternating Current (AREA)

Abstract

A voltage sag cause identification method based on a deep belief network, belonging to the technical field of power quality analysis. The method comprises: preprocessing voltage sag waveform records and their sag-cause labels; constructing restricted Boltzmann machines; performing unsupervised pre-training of the restricted Boltzmann machines; building a deep belief network from the trained restricted Boltzmann machines; adding a softmax layer; training the whole network under supervision; verifying the accuracy of the generated model; and judging the class probabilities output by the model so as to automatically identify the voltage sag cause corresponding to the input data. The invention iteratively trains a deep neural network with historical sag waveform records and their sag-cause labels; feeding the waveform records that may occur at a monitoring point into the trained model yields the corresponding voltage sag cause. The invention is a significant supplement to existing power quality monitoring systems and is of great practical value.

Description

Voltage sag cause identification method based on a deep belief network
Technical field
The present invention relates to a voltage sag cause identification method based on a deep belief network, belonging to the technical field of power quality analysis methods.
Background art
With the rapid development of industry and the information society, the requirements of all sectors on power quality keep rising. Voltage sags are among the power quality disturbance events in power systems that are hardest to avoid and most likely to cause economic losses to sensitive industrial users. With the wide use of sensitive equipment such as adjustable-speed drives and precision control devices, the losses caused by voltage sags have become comparable to those caused by outages. In recent years, experts at home and abroad have carried out extensive research on voltage sag feature extraction, cause classification, cause location, monitoring and sensitive-equipment tolerance, and have obtained many results. Building on this existing work, this method focuses on voltage sag cause identification. Correctly analysing and identifying the cause of a voltage sag not only helps to formulate targeted mitigation schemes and thus effectively reduce its impact, but also helps to clarify the respective responsibilities of the supplier and the consumer in a sag incident, serving as important evidence for the reasonable settlement of related disputes.
The fundamental cause of a voltage sag is electrical: an increase in the share of the voltage dropped across the system impedance. The disturbances that cause voltage sags mainly comprise single causes such as system short-circuit faults, the starting of large induction motors and transformer switching, together with composite sag causes formed by their mutual interaction, and the signals produced by different sag causes have clearly different characteristics. Therefore, if the historical waveform records of voltage sags and their sag-cause labels can be analysed and the variation rules hidden in them mined, the cause of a voltage sag event can be identified accurately.
At present, research on voltage sag cause identification mainly targets single causes, and the identification process comprises two steps: feature extraction and pattern recognition. Feature extraction transforms and reconstructs the sag signal using time-domain, frequency-domain or transform-domain signal processing or mathematical statistics and extracts effective features from it; common methods include the fast Fourier transform, the wavelet transform, the Hilbert-Huang transform and the S-transform. Pattern recognition uses machine-learning classification algorithms to design a classifier that determines which sag-cause class a disturbance sample belongs to; the main methods include principal component analysis for dimensionality reduction, support vector machines, neural networks and genetic algorithms.
In real grids, identifying composite sag causes formed by the interaction of several single causes has become a major difficulty. Research on composite sag cause classification at home and abroad is still limited, and most of it handles the composite problem with methods designed for classifying single causes. Chen Li et al. used an improved S-transform to identify composite sag causes; although it performs better than the standard S-transform in sag cause classification, the method only covers sags produced by some composite causes, and its generalisation ability is insufficient.
Meanwhile the complication with grid equipment and the compartmentalization with power mode, when being difficult again to complexity in electric system Between sequence data establish accurate and general mathematical statistical model, the information of characteristic extraction procedure lose and disaggregated model it is excessive Complexity is but also the defect of existing method is increasingly prominent.Under big data era background, the method based on data-driven is then by blueness It looks at, the characteristic for not depending on physical model effectively compensates for the deficiency based on mathematical model.Deep learning is that one kind is based on To the method that data carry out representative learning, the highly effective algorithm that feature learning and layered characteristic extract can overcome artificial acquisition special The problem of sign.By the actual motion of many years, power quality intelligent information system has accumulated which a large amount of power quality voltage Temporarily drop logout, this be deep learning in power quality using providing good data supporting.
Unlike the way traditional algorithms extract features, this invention introduces the deep belief network from deep learning to classify and identify voltage sag causes. Restricted Boltzmann machines are constructed to automatically extract the intrinsic abstract feature parameters of the waveform of each sag cause, the softmax function is used for multi-label classification, a loss function is defined as the network objective to guide the learning process, and the constructed network is trained iteratively with historical sag waveform records carrying sag-cause labels. The resulting model can automatically identify the corresponding sag cause from the waveform record of a sag event, while also overcoming the information loss of manual feature selection, the excessive complexity of classification models, and the slow convergence and tendency to fall into local optima of traditional neural network training.
Summary of the invention
The purpose of the present invention is to propose a voltage sag cause identification method based on a deep belief network which, by mining the correspondence between the waveform records of historical voltage sag events and their sag causes, accurately identifies the cause of a voltage sag event.
First, the voltage sag waveform records of the power quality monitoring point and their sag-cause labels are preprocessed. The preprocessing first samples the sag waveform records continuously, then normalises the sampled voltage values and vectorises the sag-cause labels, and finally divides the preprocessed sag data and the corresponding sag-cause labels into a training set and a test set.
Then, restricted Boltzmann machines are constructed and pre-trained without supervision on the training set. A restricted Boltzmann machine is a two-layer neural network in which the nodes of adjacent layers are fully interconnected and the nodes within a layer are not connected; the first layer is called the visible or input layer and the second layer the hidden or output layer. The unsupervised pre-training first takes the processed continuously sampled voltage values of the training set as the input of the first restricted Boltzmann machine and pre-trains it without supervision; the hidden-layer output of the first restricted Boltzmann machine is then used to pre-train the next restricted Boltzmann machine, and so on. In this unsupervised pre-training, the restricted Boltzmann machine introduces an energy function, and the training objective is to drive the energy function of the network to its minimum, i.e. the most stable state of the network. During training, the input data are first placed on the visible layer of the restricted Boltzmann machine, and the conditional probabilities of the visible and hidden layers are derived from the energy function; the activation probabilities of the hidden and visible units are then derived, giving the solution of the network parameters; finally, in the course of minimising the energy function, the restricted Boltzmann machine iteratively updates the weights and biases of the network so as to fit the input data with maximum probability, thereby automatically extracting the abstract features of the sag waveform record and outputting them at the hidden layer.
Next, a deep belief network is built from the restricted Boltzmann machine models produced by the training. The deep belief network is formed by stacking several conventional restricted Boltzmann machines in series: the hidden-layer output of one restricted Boltzmann machine serves as the visible-layer input of the next, and so on.
Then, a softmax layer is added after the output layer of the deep belief network, the whole network is trained with the training set under supervision, and the generated model is saved. The softmax layer maps the output of the network into the interval (0, 1), giving the probability that the input data belong to each voltage sag cause class. The supervised training iteratively trains the whole network with the processed continuous voltage values and the corresponding sag-cause labels; during training, the cross entropy between the softmax output and the corresponding label is used as the loss function, and the network parameters are updated iteratively by back-propagation so as to minimise the loss function.
Finally, the accuracy of the model is verified with the test set, and the model is used to automatically identify the sag cause corresponding to an input sag waveform record. For the accuracy verification, the processed continuously sampled voltage values of the test set are fed into the model produced by the iterative training, and the probability value of each class output by the softmax layer of the model is judged: when the probability of a class exceeds 0.5, the input test sample is deemed to belong to that sag-cause class. The judged single sag class or composite sag class is then matched against the sag-cause label of the input data, from which the recognition accuracy of the model is obtained. The model produced by training can thus automatically identify the corresponding sag cause from the waveform record of a sag event.
Feasibility analysis: first, the method meets practical needs. Many provinces and cities in China have already established power quality monitoring networks and essentially realised real-time monitoring of power quality; further controlling and mitigating power quality disturbances, especially voltage sags, is a current focus of the power quality field. Second, the data support exists: the monitoring devices already in service have accumulated a large number of voltage sag events and records of their causes, which provides reliable data for a data-driven sag cause identification method. Third, the technology is feasible: against the background of the big-data era, deep learning is one of the most closely followed research topics, many mature algorithms have been proposed, and deep learning algorithms have been applied successfully to classification and recognition models in fields such as image engineering, natural language processing and speech recognition.
Beneficial effects of the present invention: a voltage sag cause identification method based on restricted Boltzmann machines is provided. The method iteratively trains the constructed deep neural network with historical sag waveform records and their sag-cause labels; feeding the waveform record of a sag event that may occur at a power quality monitoring point into the trained model yields the corresponding voltage sag cause. This facilitates the timely formulation of power quality disturbance mitigation schemes, can effectively reduce economic losses, is a significant supplement to existing power quality monitoring systems, and is of great practical value.
Detailed description of the invention
Fig. 1 is a structural schematic diagram of the voltage sag cause identification method based on a deep belief network.
Fig. 2 is a structural schematic diagram of a restricted Boltzmann machine.
Fig. 3 is a structural schematic diagram of the deep belief network.
Fig. 4 is a flow chart of the present invention.
Specific embodiment
The present invention will be further described with reference to the accompanying drawing.
Fig. 1 is a structural schematic diagram of the voltage sag cause identification method based on a deep belief network; Fig. 2 is a structural schematic diagram of a restricted Boltzmann machine; Fig. 3 is a structural schematic diagram of the deep belief network; Fig. 4 is a flow chart of the present invention. As shown in Figs. 1 to 4, the voltage sag cause identification method based on a deep belief network comprises the following steps:
Step 1: preprocess the voltage sag waveform records of the power quality monitoring point and their sag-cause labels;
The preprocessing of this patent first samples the sag waveform records continuously, then normalises the sampled voltage values and vectorises the sag-cause labels, and finally divides the preprocessed sag data and the corresponding sag-cause labels into a training set and a test set;
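As an illustration of this preprocessing step, the following Python sketch (a simplified example rather than the patented implementation; it assumes numpy and scikit-learn are available, that each record has already been resampled to a fixed number of points, and that the sag-cause labels are integer class indices) normalises the sampled voltage values, vectorises the labels by one-hot encoding and splits the data:

```python
import numpy as np
from sklearn.model_selection import train_test_split

def preprocess(waveforms, labels, num_classes=7, test_ratio=0.25):
    """waveforms: (n_samples, n_points) continuously sampled voltage values;
    labels: (n_samples,) integer sag-cause indices in 0..num_classes-1."""
    x = np.asarray(waveforms, dtype=np.float32)
    # normalise each record (min-max normalisation to the [0, 1] interval)
    x_min = x.min(axis=1, keepdims=True)
    x_max = x.max(axis=1, keepdims=True)
    x = (x - x_min) / (x_max - x_min + 1e-8)
    # vectorise the sag-cause labels (one-hot encoding)
    y = np.eye(num_classes, dtype=np.float32)[np.asarray(labels)]
    # divide into training set and test set, keeping the class proportions
    return train_test_split(x, y, test_size=test_ratio, stratify=labels)

# x_train, x_test, y_train, y_test = preprocess(waveforms, labels)
```

With 200 samples per class and test_ratio=0.25 this reproduces the 150/50 split used in Embodiment 1.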
Step 2: construct the restricted Boltzmann machine network;
The restricted Boltzmann machine is an undirected probabilistic graphical model composed of a visible layer v and a hidden layer h; data enter at the visible layer, and the values of the hidden layer are obtained by computation. As shown in Fig. 2, the visible layer and the hidden layer each consist of several mutually unconnected neural units, and the units of the two layers are interconnected through weights w. The weight between the i-th visible unit and the j-th hidden unit of the restricted Boltzmann machine network is denoted w_ij, the visible-layer bias parameter is b = (b_1, b_2, ..., b_m), and the hidden-layer bias parameter is c = (c_1, c_2, ..., c_n);
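For illustration, the structure described above can be written as a small numpy class (an assumed, minimal sketch of a standard binary restricted Boltzmann machine; the sigmoid form of the conditional probabilities is the usual choice and is not taken verbatim from the patent):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Restricted Boltzmann machine: visible layer v (m units), hidden layer h (n units),
    weights w (m x n), visible bias b = (b_1, ..., b_m), hidden bias c = (c_1, ..., c_n)."""
    def __init__(self, m, n, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(0.0, 0.01, size=(m, n))  # w_ij couples visible unit i and hidden unit j
        self.b = np.zeros(m)
        self.c = np.zeros(n)

    def p_h_given_v(self, v):
        # conditional probability that each hidden unit is activated, given the visible layer
        return sigmoid(v @ self.w + self.c)

    def p_v_given_h(self, h):
        # conditional probability that each visible unit is activated, given the hidden layer
        return sigmoid(h @ self.w.T + self.b)
```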
Step 3: perform unsupervised pre-training of the network built in step 2 with the training set;
The unsupervised pre-training first takes the continuously sampled voltage values of the training set processed in step 1 as the input of the first restricted Boltzmann machine built in step 2 and pre-trains it without supervision; the hidden-layer output of the first restricted Boltzmann machine is then used to continue the unsupervised pre-training of the next restricted Boltzmann machine built in step 2, and so on. The restricted Boltzmann machine introduces an energy function, and the training objective is to drive the energy function of the network to its minimum, i.e. the most stable state of the network. During training, the input data are first placed on the visible layer of the restricted Boltzmann machine, the conditional probabilities of the visible and hidden layers are derived from the energy function, and the activation probabilities of the hidden and visible units are then derived, giving the solution of the network biases. Finally, the weights w and biases b are updated according to the correlation difference between the hidden units and the visible units; the update formulas are:
w_ij <- w_ij + eps * (v_i^(0) h_j^(0) - v_i^(1) h_j^(1))
b_i <- b_i + eps * (v_i^(0) - v_i^(1))
c_j <- c_j + eps * (h_j^(0) - h_j^(1))
where eps is the learning rate and the superscripts denote the sampling step: v^(0) and h^(0) are the first samples of the visible and hidden layers, and v^(1) and h^(1) are their reconstructions. After repeated updates a suitable w is obtained, so that the hidden-layer output corresponding to a visible-layer input is determined;
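The update rule above corresponds to one step of contrastive divergence; a hedged numpy sketch (reusing the illustrative RBM class from step 2, with eps written as lr; a simplified example of the standard CD-1 procedure rather than the exact patented training routine) is:

```python
import numpy as np

def cd1_update(rbm, v0, lr=0.01):
    """One contrastive-divergence step on a batch v0 of visible vectors."""
    h0 = rbm.p_h_given_v(v0)                                   # first sampling of the hidden layer
    h0_sample = (h0 > np.random.rand(*h0.shape)).astype(float)
    v1 = rbm.p_v_given_h(h0_sample)                            # reconstruction of the visible layer
    h1 = rbm.p_h_given_v(v1)                                   # hidden activation of the reconstruction
    batch = v0.shape[0]
    rbm.w += lr * (v0.T @ h0 - v1.T @ h1) / batch              # w_ij += eps (v_i(0) h_j(0) - v_i(1) h_j(1))
    rbm.b += lr * (v0 - v1).mean(axis=0)                       # visible-bias update
    rbm.c += lr * (h0 - h1).mean(axis=0)                       # hidden-bias update
    return rbm.p_h_given_v(v0)                                 # hidden output fed to the next RBM
```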
Step 4: build the deep belief network from the restricted Boltzmann machine models produced by the training of step 3;
The deep belief network is formed by stacking several conventional restricted Boltzmann machines in series; as shown in Fig. 3, the hidden-layer output of one restricted Boltzmann machine is used as the visible-layer input of the next, and so on;
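Greedy layer-wise stacking as described here can be sketched as follows (continuing the illustrative RBM and cd1_update helpers above; the layer sizes are the example values of Embodiment 1 and the epoch count is an arbitrary assumption):

```python
def pretrain_dbn(x_train, layer_sizes=(200, 120, 40, 3), epochs=10, lr=0.01):
    """Pre-train a stack of RBMs: the hidden output of each machine
    becomes the visible input of the next."""
    rbms, data = [], x_train
    for m, n in zip(layer_sizes[:-1], layer_sizes[1:]):
        rbm = RBM(m, n)
        for _ in range(epochs):
            cd1_update(rbm, data, lr)
        data = rbm.p_h_given_v(data)   # feed the hidden output to the next layer
        rbms.append(rbm)
    return rbms
```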
Step 5: add a softmax layer after the output layer of the deep belief network;
The softmax layer calls the softmax function for multi-label classification. Assuming there are K classes and S_i denotes the output of the i-th unit, the softmax layer computes:
softmax(S_i) = exp(S_i) / sum_{k=1}^{K} exp(S_k)
The softmax layer maps the output of the network into the interval (0, 1), giving the probability that the input data belong to each voltage sag cause class. The overall network after the softmax layer is added is shown in Fig. 1;
Step 6: train the whole network under supervision with the training set and save the model;
The supervised training is as follows: the network built in step 5 is trained iteratively with the continuous voltage values processed in step 1 and the corresponding sag-cause labels; during training, the cross entropy between the softmax output and the corresponding label is used as the loss function c. Let y_i denote the true class label of unit i; c is computed as:
c = - sum_{i=1}^{K} y_i log(softmax(S_i))
The network parameters are updated iteratively by back-propagation so as to minimise the network loss function;
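The loss of this fine-tuning stage can be written, for illustration, as the following numpy sketch (a minimal example of the softmax and cross-entropy computations defined above; the back-propagation update itself is left to a framework such as TensorFlow, as in Embodiment 1):

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max(axis=-1, keepdims=True))   # numerically stable softmax
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(y_true, s):
    """c = -sum_i y_i log(softmax(S_i)), averaged over the batch."""
    p = softmax(s)
    return -np.mean(np.sum(y_true * np.log(p + 1e-12), axis=-1))
```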
Step 7: verify the accuracy of the model generated in step 6 with the test set;
The accuracy verification is as follows: the continuously sampled voltage values of the test set processed in step 1 are fed into the model generated by the iterative training of step 6, and the probability value of each class output by the softmax layer of the model is judged; when the probability of a class exceeds 0.5, the input test sample is deemed to belong to that sag-cause class. Finally, the judged single sag class or composite sag class is matched against the sag-cause label of the input data, from which the recognition accuracy of the model is obtained;
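A sketch of this verification rule (probabilities above 0.5 mark the judged sag-cause classes, which are then matched against the labels; the function name and the exact-match convention for composite causes are assumptions of the example):

```python
import numpy as np

def verify_accuracy(probabilities, y_true, threshold=0.5):
    """probabilities: softmax outputs of the model on the test set;
    y_true: one-hot (or multi-hot) sag-cause labels."""
    predicted = probabilities > threshold               # classes whose probability exceeds 0.5
    expected = y_true > 0.5
    matches = np.all(predicted == expected, axis=-1)    # judged class set must match the label
    return matches.mean()                               # recognition accuracy of the model
```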
Step 8: input a sag waveform record, and the model automatically identifies the corresponding voltage sag cause.
Embodiment 1
The sample sag data of this embodiment are 1400 voltage sag records stored in a certain power quality monitoring platform between 2012 and 2016, comprising three single sag causes, namely single-phase earth fault C1, starting of a large induction motor C2 and transformer switching C3, and four composite sag causes, namely multi-stage voltage sags caused by short-circuit faults C4, the combination of single-phase earth fault and large induction motor starting C5, the combination of single-phase earth fault and transformer switching C6, and the combination of large induction motor starting and transformer switching C7, with 200 groups of recorded waveform samples for each cause. By building a deep neural network and training it iteratively, the abstract feature parameters of the waveform signals corresponding to the different sag causes can be learned and a model generated. Given several groups of sag waveform records, the model outputs the corresponding voltage sag causes.
The implementation steps based on these samples are as follows:
1. Preprocess the samples: to match the data input interface of the network, the number of sampling points of each group of sample data is set to 200, and the sample voltage values are normalised. The sag-cause labels are vectorised at the same time; finally, 50 samples of each class are taken as the test set and the remaining 150 samples form the training set;
2. Build the conventional restricted Boltzmann machine network on the deep learning framework TensorFlow: considering factors such as the number of samples, the training time and the degree of model fitting, the number of hidden layers is set to 3, and every layer of the constructed network uses the non-saturating nonlinear ReLU activation function to improve the expressive power of the network. Batch normalisation is added after the output layer of each restricted Boltzmann machine to accelerate the learning of the network. The numbers of hidden neurons of the three restricted Boltzmann machines are initially 100, 50 and 3; after experimental adjustment, the numbers of neurons of the first two hidden layers are set to 120 and 40 respectively (a hedged Keras sketch of this fine-tuning network is given after step 8 below);
3. Pre-train the restricted Boltzmann machines without supervision on the training set: the preprocessed continuous sample voltage values of the training set are first used as the input of the first restricted Boltzmann machine, which is pre-trained without supervision; the hidden-layer output of the first restricted Boltzmann machine is then used to continue the unsupervised pre-training of the next restricted Boltzmann machine, and so on. During the unsupervised pre-training of the restricted Boltzmann machines, the hidden-layer dropout rate is set to 0.5 and the learning rate to 0.01 in the experiment. The input data of the training set are first placed on the visible layer, the conditional probabilities of the visible and hidden layers are derived from the energy function, the activation probabilities of the hidden and visible units are then derived, and the biases are solved. Finally, the weights and biases of the network are updated according to the correlation difference between the hidden units and the visible units; after repeated updates the energy function reaches its minimum, i.e. the most stable state of the network, so that the hidden-layer output corresponding to a visible-layer input is determined;
4. Build the deep belief network: the deep belief network is formed by stacking the three restricted Boltzmann machines in series, the hidden-layer output of one restricted Boltzmann machine serving as the visible-layer input of the next, and so on;
5. Add the softmax layer: behind the output layer of the network, the softmax function maps the hidden-layer output of the last restricted Boltzmann machine into the interval (0, 1) to obtain the probability of each class, so that multi-label classification can be performed;
6. Train the whole network under supervision with the training set and save the model: the training is divided into forward propagation and back-propagation. The training set is fed in, the cross entropy between the softmax output and the corresponding label is used as the loss function, and the network parameters are updated iteratively by back-propagation so as to minimise the network loss function. During fine-tuning, the output-layer learning rate is 0.09; as the number of iterations increases, the loss of the network keeps falling and stabilises, and the accuracy of the generated model rises gradually and approaches 98%;
7. Verify the accuracy of the generated model with the test set: the continuous voltage values of the test set are fed into the trained model, and the class probabilities output by the softmax layer of the model are judged; when the probability of a class exceeds 0.5, the input test sample is deemed to belong to that sag-cause class. Finally, the judged class or class combination is matched against the corresponding sag-cause label to test the recognition accuracy of the model. The resulting recognition accuracy for each sag-cause class is:
Single-phase earth fault C1: 99%;
Starting of a large induction motor C2: 97%;
Transformer switching C3: 98%;
Multi-stage voltage sag caused by a short-circuit fault C4: 98%;
Combination of single-phase earth fault and large induction motor starting C5: 96%;
Combination of single-phase earth fault and transformer switching C6: 97%;
Combination of large induction motor starting and transformer switching C7: 97%;
8. Input a sag waveform record; the model automatically identifies and outputs the corresponding voltage sag cause.
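For reference, a minimal TensorFlow 2.x Keras sketch of the fine-tuning network of this embodiment is given below (hidden sizes 120, 40 and 3, ReLU, batch normalisation, output-layer learning rate 0.09 and the seven-class softmax output are taken from the text above; the separate greedy RBM pre-training of step 3, which would initialise these weights, is not shown, so this is an approximation rather than the exact patented pipeline):

```python
import tensorflow as tf

def build_finetune_network(input_points=200, num_classes=7):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(input_points,)),
        tf.keras.layers.Dense(120, activation="relu"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(40, activation="relu"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(3, activation="relu"),
        tf.keras.layers.BatchNormalization(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),  # sag-cause probabilities
    ])
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.09),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_finetune_network()
# model.fit(x_train, y_train, epochs=50, validation_data=(x_test, y_test))
```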
The present invention provides a voltage sag cause identification method based on a deep belief network. The method uses a deep belief network formed by connecting restricted Boltzmann machines from deep learning in series, iteratively trains the constructed deep neural network with historical sag waveform records and their sag-cause labels, and can autonomously learn the abstract feature parameters and generate a model. Feeding the waveform record of a sag event that may occur at a power quality monitoring point into the trained model yields the corresponding voltage sag cause. The method facilitates the timely formulation of power quality disturbance mitigation schemes, can effectively reduce economic losses, is a significant supplement to existing intelligent power quality monitoring systems, and is of great practical value.
Those skilled in the art can derive many variations of the present invention from the above description. Therefore, the specific implementations in the embodiments shall not be construed as limiting the invention; the scope of protection of the present invention shall be the scope defined by the appended claims.

Claims (9)

1. A voltage sag cause identification method based on a deep belief network, characterised in that the method comprises the following steps:
Step 1: preprocess the voltage sag waveform records and their sag-cause labels;
Step 2: construct the restricted Boltzmann machine network;
Step 3: perform unsupervised pre-training of the network built in step 2 with the training set;
Step 4: build the deep belief network from the restricted Boltzmann machine models produced by the training of step 3;
Step 5: add a softmax layer after the output layer of the deep belief network;
Step 6: train the whole network under supervision with the training set and save the generated model;
Step 7: verify the accuracy of the model generated in step 6 with the test set;
Step 8: input a sag waveform record, and the model automatically identifies the corresponding voltage sag cause.
2. The voltage sag cause identification method based on a deep belief network according to claim 1, characterised in that the preprocessing of step 1 first samples the sag waveform records continuously, then normalises the sampled voltage values and vectorises the sag-cause labels, and finally divides the preprocessed sag data and the corresponding sag-cause labels into a training set and a test set.
3. The voltage sag cause identification method based on a deep belief network according to claim 1, characterised in that the restricted Boltzmann machine of step 2 is a two-layer neural network in which the nodes of adjacent layers are fully interconnected and the nodes within a layer are not connected, the first layer being called the visible or input layer and the second layer the hidden or output layer.
4. The voltage sag cause identification method based on a deep belief network according to claim 1, characterised in that the unsupervised pre-training of step 3 is as follows: the continuously sampled voltage values of the training set processed in step 1 are used as the input of the first restricted Boltzmann machine built in step 2, which is pre-trained without supervision; the hidden-layer output of the first restricted Boltzmann machine is then used to perform unsupervised pre-training of the next restricted Boltzmann machine built in step 2, and so on.
5. The voltage sag cause identification method based on a deep belief network according to claim 4, characterised in that the unsupervised pre-training of the restricted Boltzmann machine is as follows: the restricted Boltzmann machine introduces an energy function, and the training objective is to drive the energy function of the network to its minimum, i.e. the stable state of the network; during training, the input data are first placed on the visible layer of the restricted Boltzmann machine, and the conditional probabilities of the visible and hidden layers are derived from the energy function; the activation probabilities of the hidden and visible units are then derived, giving the solution of the network parameters; finally, the restricted Boltzmann machine iteratively updates the weights and biases of the network through the back-propagation algorithm, fits the input data with maximum probability and minimises the energy function, thereby automatically extracting the abstract features of the voltage sag waveform record and outputting them from the hidden layer.
6. The voltage sag cause identification method based on a deep belief network according to claim 1, characterised in that the deep belief network of step 4 is formed by stacking several conventional restricted Boltzmann machines in series, the hidden-layer output of one restricted Boltzmann machine being used as the visible-layer input of the next, and so on.
7. The voltage sag cause identification method based on a restricted Boltzmann machine according to claim 1, characterised in that the softmax layer of step 5 calls the softmax function of deep learning to map the output of the network into the interval (0, 1), so as to obtain the probability that the input data belong to each voltage sag cause class.
8. The voltage sag cause identification method based on a deep belief network according to claim 1, characterised in that the supervised training of step 6 is as follows: the network built in step 5 is trained iteratively with the continuous voltage values of the training set preprocessed in step 1 and the corresponding voltage sag cause labels; during training, the cross entropy between the softmax output and the label corresponding to the input data is used as the loss function, and the parameters of the network are updated iteratively by the back-propagation algorithm of deep learning, so as to minimise the network loss function.
9. The voltage sag cause identification method based on a deep belief network according to claim 1, characterised in that the accuracy verification of step 7 is as follows: the test set data processed in step 1 are fed into the model trained in step 6, and the probability value of each voltage sag cause class output by the softmax layer of the model is judged; when the probability of a class exceeds 0.5, the input test sample is deemed to belong to that sag-cause class; finally, the judged single sag class or composite sag class is matched against the sag-cause label of the input data, from which the recognition accuracy of the model is obtained.
CN201811488320.0A 2018-12-06 2018-12-06 Voltage sag cause identification method based on a deep belief network Pending CN109596942A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811488320.0A CN109596942A (en) 2018-12-06 2018-12-06 A kind of voltage sag reason recognition methods based on depth confidence network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811488320.0A CN109596942A (en) 2018-12-06 2018-12-06 A kind of voltage sag reason recognition methods based on depth confidence network

Publications (1)

Publication Number Publication Date
CN109596942A true CN109596942A (en) 2019-04-09

Family

ID=65962184

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811488320.0A Pending CN109596942A (en) 2018-12-06 2018-12-06 A kind of voltage sag reason recognition methods based on depth confidence network

Country Status (1)

Country Link
CN (1) CN109596942A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110245584A (en) * 2019-05-28 2019-09-17 全球能源互联网研究院有限公司 A kind of voltage sag reason recognition methods and system
CN110989363A (en) * 2019-12-27 2020-04-10 广东电网有限责任公司电力科学研究院 Electric energy quality control method and device based on deep learning
CN112766702A (en) * 2021-01-13 2021-05-07 广东能源集团科学技术研究院有限公司 Distributed power station fault analysis method and system based on deep belief network
CN112766060A (en) * 2020-12-31 2021-05-07 郑州地铁集团有限公司 Identification method and identification device for voltage sag occurrence reasons and electronic equipment
CN113009279A (en) * 2021-03-05 2021-06-22 四川大川云能科技有限公司 Neo4 j-based power distribution network voltage sag fault positioning and visualization system thereof
CN113378880A (en) * 2021-05-08 2021-09-10 国网浙江省电力有限公司嘉兴供电公司 5G-based power grid voltage sag event detection method and system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2544125A1 (en) * 2011-07-04 2013-01-09 Sabirmedical, S.L. Methods and systems for non-invasive measurement of blood pressure
CN107330519A (en) * 2017-06-26 2017-11-07 西北工业大学 Fault Locating Method based on deep neural network
CN108089099A (en) * 2017-12-18 2018-05-29 广东电网有限责任公司佛山供电局 The diagnostic method of distribution network failure based on depth confidence network
CN108170994A (en) * 2018-01-29 2018-06-15 河海大学 A kind of oil-immersed electric reactor method for diagnosing faults based on two-way depth network
CN108171200A (en) * 2018-01-12 2018-06-15 西安电子科技大学 SAR image sorting technique based on SAR image statistical distribution and DBN
CN108846410A (en) * 2018-05-02 2018-11-20 湘潭大学 Power Quality Disturbance Classification Method based on sparse autocoding deep neural network

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2544125A1 (en) * 2011-07-04 2013-01-09 Sabirmedical, S.L. Methods and systems for non-invasive measurement of blood pressure
CN107330519A (en) * 2017-06-26 2017-11-07 西北工业大学 Fault Locating Method based on deep neural network
CN108089099A (en) * 2017-12-18 2018-05-29 广东电网有限责任公司佛山供电局 The diagnostic method of distribution network failure based on depth confidence network
CN108171200A (en) * 2018-01-12 2018-06-15 西安电子科技大学 SAR image sorting technique based on SAR image statistical distribution and DBN
CN108170994A (en) * 2018-01-29 2018-06-15 河海大学 A kind of oil-immersed electric reactor method for diagnosing faults based on two-way depth network
CN108846410A (en) * 2018-05-02 2018-11-20 湘潭大学 Power Quality Disturbance Classification Method based on sparse autocoding deep neural network

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110245584A (en) * 2019-05-28 2019-09-17 全球能源互联网研究院有限公司 A kind of voltage sag reason recognition methods and system
CN110989363A (en) * 2019-12-27 2020-04-10 广东电网有限责任公司电力科学研究院 Electric energy quality control method and device based on deep learning
CN110989363B (en) * 2019-12-27 2022-01-25 广东电网有限责任公司电力科学研究院 Electric energy quality control method and device based on deep learning
CN112766060A (en) * 2020-12-31 2021-05-07 郑州地铁集团有限公司 Identification method and identification device for voltage sag occurrence reasons and electronic equipment
CN112766702A (en) * 2021-01-13 2021-05-07 广东能源集团科学技术研究院有限公司 Distributed power station fault analysis method and system based on deep belief network
CN113009279A (en) * 2021-03-05 2021-06-22 四川大川云能科技有限公司 Neo4 j-based power distribution network voltage sag fault positioning and visualization system thereof
CN113009279B (en) * 2021-03-05 2024-03-22 四川大川云能科技有限公司 Neo4 j-based power distribution network voltage sag fault positioning and visualization system
CN113378880A (en) * 2021-05-08 2021-09-10 国网浙江省电力有限公司嘉兴供电公司 5G-based power grid voltage sag event detection method and system

Similar Documents

Publication Publication Date Title
CN109596942A (en) Voltage sag cause identification method based on a deep belief network
CN109635928B (en) Voltage sag reason identification method based on deep learning model fusion
CN109829236A (en) A kind of Compressor Fault Diagnosis method based on XGBoost feature extraction
CN112838946B (en) Method for constructing intelligent sensing and early warning model based on communication network faults
CN108256482A (en) A kind of face age estimation method that Distributed learning is carried out based on convolutional neural networks
CN116245033B (en) Artificial intelligent driven power system analysis method and intelligent software platform
CN109034054A (en) Harmonic wave multi-tag classification method based on LSTM
CN109766853A (en) Voltage Sag Disturbance classification method based on LSTM
CN112131783A (en) Power distribution station area big data-based household transformer topology relation identification method
CN107817404A (en) A kind of Portable metering automatization terminal trouble-shooter and its diagnostic method
CN109784276A (en) A kind of voltage dip feature extraction based on DBN and temporarily drop source discrimination method
CN112200694A (en) Dominant instability mode identification model construction and application method based on graph neural network
CN114006370B (en) Power system transient stability analysis and evaluation method and system
CN112507720B (en) Causal semantic relation transfer-based graph convolution network root cause identification method
CN106203531A (en) A kind of transmission line fault sorting technique based on the sparse autocoder of convolution
CN110321555A (en) A kind of power network signal classification method based on Recognition with Recurrent Neural Network model
CN111880044A (en) Online fault positioning method for power distribution network with distributed power supply
CN110059737A (en) Distribution transformer connection relationship discrimination method based on integrated deep neural network
CN114021433A (en) Construction method and application of dominant instability mode recognition model of power system
CN109239527A (en) Distribution network failure recognition methods based on depth confidence network
Peng et al. A very short term wind power prediction approach based on Multilayer Restricted Boltzmann Machine
Thomas et al. Neural architecture search algorithm to optimize deep transformer model for fault detection in electrical power distribution systems
CN114519293A (en) Cable body fault identification method based on hand sample machine learning model
CN113705695A (en) Power distribution network fault data identification method based on convolutional neural network
Vohra et al. End-to-end learning with multiple modalities for system-optimised renewables nowcasting

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20190409