CN108154223A - Distribution network operating-condition recording classification method based on network topology and long time-sequence information - Google Patents


Info

Publication number
CN108154223A
CN108154223A (application CN201711404581.5A)
Authority
CN
China
Prior art keywords
hyperparameter
pattern
neural network
deep neural
input
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201711404581.5A
Other languages
Chinese (zh)
Other versions
CN108154223B (en)
Inventor
姚蔷
张建良
戴义波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BEIJING INHAND NETWORK TECHNOLOGY Co Ltd
Original Assignee
BEIJING INHAND NETWORK TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BEIJING INHAND NETWORK TECHNOLOGY Co Ltd filed Critical BEIJING INHAND NETWORK TECHNOLOGY Co Ltd
Priority to CN201711404581.5A priority Critical patent/CN108154223B/en
Publication of CN108154223A publication Critical patent/CN108154223A/en
Application granted granted Critical
Publication of CN108154223B publication Critical patent/CN108154223B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01RMEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R31/00Arrangements for testing electric properties; Arrangements for locating electric faults; Arrangements for electrical testing characterised by what is being tested not provided for elsewhere

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Supply And Distribution Of Alternating Current (AREA)

Abstract

The invention discloses a distribution network operating-condition classification method based on network topology and long time-sequence information. The method comprises: aligning the acquisition time points of the operating-condition recordings of the individual distribution network monitoring points by wide-area time synchronization, intercepting the common section of the waveforms, and then splicing the sections in order according to the network topology of the different monitoring points; constructing a multi-data-block-input deep neural network framework comprising a convolutional layer region and long short-term memory (LSTM) units, and training multi-data-block-input deep neural networks on an operating-condition recording classification data set to obtain the optimal multi-data-block-input deep neural network operating-condition classifier model; and inputting the multiple effective waveform sections into the optimal multi-data-block-input deep neural network operating-condition classifier model to obtain the operating-condition type of the recording.

Description

Distribution network operating-condition recording classification method based on network topology and long time-sequence information
Technical field
The present invention relates to the technical field of electric power, and more particularly to a distribution network operating-condition recording classification method based on network topology and long time-sequence information.
Background technology
The distribution network is an important component of the electric power system. With the rapid development of the smart grid and the large-scale, uncertain access of distributed power sources, distribution network fault information has become more complex, and accurate, rapid fault analysis has become more difficult. To ensure highly intelligent operation of the distribution network, feeder operating data must be monitored in real time, abnormal conditions warned of promptly, and faults discovered and handled quickly; the identification of abnormal feeder operating conditions is a key function of the intelligent distribution network. Traditional distribution network operating-condition classification has always used simulation data, which is too idealized and too simple to handle real conditions. In recent years, with the appearance of distribution-line monitoring systems, current and voltage data from actual distribution network operation have been collected, and traditional feature-extraction methods combined with machine-learning methods have begun to be used to classify operating conditions. For example, CN103136587A discloses a distribution network operating-condition classification method combining traditional wavelet-packet feature extraction of simulation data with a support vector machine; CN103245881A discloses a distribution network fault analysis method and device based on power-flow distribution characteristics; and CN107340456A discloses a distribution network operating-condition intelligent identification method based on multi-feature analysis. The operating-condition classification methods of the prior art all use only short-term data from a single detection device; such recording data lose the distribution network topology and the long time-sequence information, making it difficult to identify operating conditions with complex waveforms. Moreover, the prior art uses manually extracted features rather than end-to-end learning and training, which further limits the upper bound of its recognition accuracy.
Summary of the invention
The first technical problem to be solved by the present invention is to perform operating-condition classification on recording data that incorporate the distribution network topology and long time-sequence information.
Another technical problem to be solved by the present invention is to realize, by way of transfer learning, the training of a large-capacity, complex model.
To solve the above technical problems, the present invention provides a distribution network operating-condition classification method based on network topology and long time-sequence information, comprising: splicing the waveforms of the operating-condition recordings of multiple distribution network monitoring points; preprocessing the spliced waveform to obtain multiple effective waveform sections; constructing a multi-data-block-input deep neural network framework comprising a convolutional layer region and long short-term memory (LSTM) units, and, from it, a single-data-block-input deep neural network framework with the same convolutional layer region; generating multiple single-data-block-input deep neural networks with a hyperparameter generator and training each on the operating-condition recording classification data set to obtain the optimal single-data-block-input deep neural network model, then extracting the structure and parameters of the convolutional layer region from that optimal model; initializing the multi-data-block-input deep neural network framework with the extracted convolutional layer structure and parameters, generating multiple multi-data-block-input deep neural networks with the hyperparameter generator, and training each on the operating-condition recording classification data set to obtain the optimal multi-data-block-input deep neural network operating-condition classifier model; and inputting the multiple effective waveform sections into the optimal multi-data-block-input deep neural network operating-condition classifier model to obtain the operating-condition type of the recording.
In one embodiment, the waveform splicing refers to aligning the acquisition time points of the operating-condition recordings of the individual distribution network monitoring points by wide-area time synchronization, intercepting the common section of the waveforms, and then splicing the sections in order according to the network topology relationships of the different monitoring points.
In one embodiment, the convolutional layer region comprises convolution blocks, and the structure of a convolution block may be a stack of two convolutional layers, or multi-channel with each channel composed of a stack of two convolutional layers, or multi-channel with each channel comprising one to three convolutional layers.
In one embodiment, residual connections are provided between the convolution blocks in the convolutional layer region; a residual connection takes the sum of the input and the output of a convolution block and passes the sum to the next convolution block as its input.
In one embodiment, the operating-condition recording classification data set comprises a training data set, a validation data set and a test data set, each of which comprises condition data for at least one of short circuit, earth fault, power outage, power restoration, heavy-load switch-in, heavy-load switch-out and lightning strike.
According to another aspect of the present invention, a method for hyperparameter machine training of a deep neural network operating-condition classifier framework is also provided, the method comprising: A. constructing, from a multi-data-block-input deep neural network framework comprising a convolutional layer region and LSTM units, a single-data-block-input deep neural network framework;
B. inputting the single-data-block-input deep neural network framework into a hyperparameter random generator;
C. generating a hyperparameter combination model pool with the hyperparameter random generator;
D. testing each hyperparameter combination model in the pool with the test data set; if a model passes the test, ending its training and placing it in the trained hyperparameter combination model pool; if it does not pass, optimizing it with the training data set and testing it again after optimization, until it passes the test;
E. verifying each model in the trained hyperparameter combination model pool with the validation data set; the model with the best validation result is the optimal hyperparameter combination model, and its convolutional layer structure and parameters are extracted;
F. initializing the multi-data-block-input deep neural network framework with the extracted convolutional layer structure and parameters, and inputting it into the hyperparameter random generator;
G. generating a hyperparameter combination model pool with the hyperparameter random generator;
H. testing each hyperparameter combination model in the pool with the test data set; if a model passes the test, ending its training and placing it in the trained hyperparameter combination model pool; if it does not pass, optimizing it with the training data set and testing it again after optimization, until it passes the test;
I. verifying each model in the trained hyperparameter combination model pool with the validation data set; the model with the best validation result is the optimal hyperparameter combination model.
In one embodiment, the training data set, validation data set and test data set each comprise condition data for at least one of short circuit, earth fault, power outage, power restoration, heavy-load switch-in, heavy-load switch-out and lightning strike.
In one embodiment, the training data set, validation data set and test data set cover the 7 operating conditions of short circuit, earth fault, power outage, power restoration, heavy-load switch-in, heavy-load switch-out and lightning strike, with 5000 data items chosen per condition; the training data set takes 4200 items per condition, and the test data set and validation data set each take 400 items per condition.
In one embodiment, the optimal hyperparameter combination model comprises at least the number of convolution blocks forming the optimal deep neural network operating-condition classifier model, the number of channels inside each convolution block, the width, length and number of the convolution kernels, and the number of neurons of the fully connected layers.
In one embodiment, the optimization method used when optimizing a hyperparameter combination model is batch Adam backpropagation.
The waveform preprocessing, the deep neural network classifier framework and the hyperparameter machine training of the present invention are described in further detail below.
<Waveform splicing and preprocessing>:
Fig. 1 is a flow diagram of the distribution network operating-condition recording classification method based on network topology and long time-sequence information of the present invention. The waveform splicing refers to aligning the acquisition time points of the data of each monitoring point by wide-area time synchronization, intercepting the common section of the waveforms, and then splicing the sections in order according to the network topology of the different monitoring points to form complete synchronized data for each monitoring point.
Splicing in order means that, centred on the recording data of the monitoring point currently to be judged, the recordings of the front-end and rear-end monitoring points of that monitoring point are spliced before and after it according to the network topology structure, and the recording data of the other monitoring points are arranged near the front-end or rear-end monitoring point according to their degree of waveform abnormality and the network topology structure.
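The ordering rule described above can be condensed into a short sketch. This is a minimal illustration, not the patent's implementation: the function name, the (anomaly_score, samples) representation of the remaining monitoring points, and the far-to-near placement by anomaly score are assumptions made for the example.

```python
# Illustrative sketch of topology-ordered splicing (names are assumptions):
# the recording of the monitoring point under judgment sits in the centre,
# its front-end neighbour is spliced before it and its rear-end neighbour
# after it; remaining points are placed nearer the centre the more
# abnormal their waveform is.

def splice_recordings(center, front, rear, other_front=(), other_rear=()):
    """Concatenate aligned common-section waveforms in topology order.

    other_front / other_rear: (anomaly_score, samples) pairs for the
    remaining monitoring points on each side of the centre point.
    """
    # low-anomaly points sit far from the centre on the front side ...
    front_parts = [s for _, s in sorted(other_front)]
    # ... and high-anomaly points sit near the centre on the rear side
    rear_parts = [s for _, s in sorted(other_rear, reverse=True)]
    spliced = []
    for segment in front_parts + [list(front), list(center), list(rear)] + rear_parts:
        spliced.extend(segment)
    return spliced
```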
The waveform preprocessing comprises two steps. First, multiple effective waveform sections are intercepted from the spliced waveform. Second, the intercepted sections are down-sampled or interpolated so that the waveform data are converted to the expected frequency.
In the first-step waveform interception, an effective waveform section is defined as an abnormal section of the current or electric field that contains frequency components other than the power frequency. Multiple effective waveform sections can be obtained by this interception, and their number matches the number of abnormal sections; however, to reduce the complexity of the required neural network structure, the number of effective waveform sections can also be preset, for example to 5 in the present invention. Three methods may be used in the present invention to extract the required waveform sections: the second-order difference method, the sliding-window Fourier transform method and the wavelet transform method.
In the second-order difference method, let N(t) = {n1, n2, …, nk} be the original waveform time series. The first difference of the waveform is N'(t) = {n2−n1, n3−n2, …, nk−nk−1}, and the second difference of the waveform is N''(t) = {n3−2n2+n1, n4−2n3+n2, …, nk−2nk−1+nk−2}.
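A minimal sketch of this difference computation follows. The threshold-based mutation-point helper is an illustrative assumption: the text only says that large second-difference magnitudes mark abrupt waveform changes, without fixing a threshold.

```python
def second_difference(signal):
    """First difference N'(t) and second difference N''(t) of a sample list."""
    d1 = [signal[i + 1] - signal[i] for i in range(len(signal) - 1)]
    d2 = [d1[i + 1] - d1[i] for i in range(len(d1) - 1)]
    return d1, d2

def mutation_points(signal, threshold):
    """Indices k where |n(k+2) - 2*n(k+1) + n(k)| exceeds the threshold."""
    _, d2 = second_difference(signal)
    return [k for k, v in enumerate(d2) if abs(v) > threshold]
```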
The sliding-window Fourier transform moves a window lengthwise along the entire waveform and performs a discrete Fourier transform on the data inside the window at each position, x(i) = Σt w(t)·e^(−j2πit/M), where x(i) is the coefficient of frequency bin i, w(t) the data in the window and M the window length. Using the Fourier energy entropy, the degree of disorder of the energy distribution over the different frequency ranges in each waveform window can be detected. Define the energy of frequency bin i in the window as Ei = |x(i)|², and E = Σ Ei as the total signal energy in the window. The window Fourier energy entropy FEE is then defined as FEE = −Σi pi·ln(pi), where pi = Ei/E.
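The window entropy above can be sketched in a few lines of pure Python. A naive O(M²) DFT is used instead of an FFT for brevity, and the function names are assumptions; an all-zero window is assigned entropy 0 to keep the sketch total.

```python
import cmath
import math

def window_fourier_energy_entropy(window):
    """FEE = -sum(p_i * ln p_i), p_i = E_i / E, E_i = |x(i)|^2, where x(i)
    is the DFT coefficient of the windowed data at frequency bin i."""
    m = len(window)
    coeffs = [sum(window[t] * cmath.exp(-2j * math.pi * i * t / m)
                  for t in range(m)) for i in range(m)]
    energies = [abs(c) ** 2 for c in coeffs]
    total = sum(energies)
    if total == 0:                       # silent window: no disorder to measure
        return 0.0
    probs = [e / total for e in energies if e > 0]
    return -sum(p * math.log(p) for p in probs)

def sliding_fee(signal, win, step):
    """One entropy value per window position along the signal."""
    return [window_fourier_energy_entropy(signal[s:s + win])
            for s in range(0, len(signal) - win + 1, step)]
```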
In the wavelet transform method, let Di(k) be the i-th frequency band coefficients obtained by a J-level wavelet decomposition and reconstruction of the signal. The wavelet energy entropy detects the degree of disorder of the signal energy distributed over the different frequency ranges in each period, so as to extract the abnormal sections. Define the signal power spectrum on scale i at time k as Ei(k) = |Di(k)|², and Ei = Σk Ei(k) as the energy over all times on scale i. The wavelet energy entropy WEE is then defined as WEE = −Σi pi·ln(pi), where pi = Ei/E and E = Σi Ei is approximately the total energy of the signal.
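The text does not name a particular wavelet, so the sketch below uses the simple orthonormal Haar wavelet as a stand-in: a J-level decomposition yields J detail bands plus one approximation band, and the band energies feed the same entropy formula as above.

```python
import math

def haar_dwt_levels(signal, levels):
    """J-level Haar wavelet decomposition: returns the detail coefficient
    lists D_1..D_J plus the final approximation (coarsest band).
    Assumes len(signal) is divisible by 2**levels; any odd tail is dropped."""
    bands, approx = [], list(signal)
    for _ in range(levels):
        details = [(approx[2 * k] - approx[2 * k + 1]) / math.sqrt(2)
                   for k in range(len(approx) // 2)]
        approx = [(approx[2 * k] + approx[2 * k + 1]) / math.sqrt(2)
                  for k in range(len(approx) // 2)]
        bands.append(details)
    bands.append(approx)
    return bands

def wavelet_energy_entropy(signal, levels):
    """WEE = -sum(p_i * ln p_i), p_i = E_i / E, E_i the energy of band i."""
    bands = haar_dwt_levels(signal, levels)
    band_energy = [sum(c * c for c in b) for b in bands]
    total = sum(band_energy)  # orthonormal Haar preserves the signal energy
    probs = [e / total for e in band_energy if e > 0]
    return -sum(p * math.log(p) for p in probs)
```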
Among the above three extraction methods: the absolute value of the second difference of the waveform identifies waveform mutation points well, and its computational cost is small, saving computing resources, so it can be used when computing resources are limited; however, it cannot measure how rich the waveform is in different frequency components. The window Fourier energy entropy excludes the power-frequency component well and captures the disorder of the energy in the other frequency ranges, but the window size must be fixed and cannot adapt flexibly to the detail information at each time; since its fast algorithm has moderate computational cost, it can be used when precision and computational cost must be balanced. The wavelet energy entropy is more precise than the window Fourier energy entropy on the sections to be detected, but its computational cost is also larger, so it can be used when high accuracy is required.
The down-sampling or interpolation of the intercepted waveform sections in the present invention may use cubic spline interpolation to convert the waveform frequency to 700 Hz.
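Resampling to a fixed rate such as 700 Hz can be sketched as below. Note that this sketch substitutes linear interpolation for the cubic spline interpolation named in the text, purely to keep the example short; the index mapping between the source and target rates is the same in both cases.

```python
def resample_linear(samples, src_hz, dst_hz):
    """Resample a waveform from src_hz to dst_hz.

    Linear interpolation stands in for the cubic spline of the text;
    works for both down-sampling and up-sampling.
    """
    duration = (len(samples) - 1) / src_hz
    n_out = int(duration * dst_hz) + 1
    out = []
    for i in range(n_out):
        t = i / dst_hz * src_hz          # fractional source index
        k = min(int(t), len(samples) - 2)
        frac = t - k
        out.append(samples[k] * (1 - frac) + samples[k + 1] * frac)
    return out
```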
<Deep neural network classifier>:
Fig. 2 is a schematic diagram of the deep neural network classifier framework of the present invention. The input end of the classifier takes multiple data blocks in time order; each data block is input into an identical convolutional layer region and then into an LSTM cell with shared weights. The LSTM cells are connected to each other in time order, and their output finally goes to a fully connected layer region, after which the operating-condition type is output by a softmax output layer. Fig. 3 is a schematic diagram of the convolutional layer region structure of the present invention; the convolutional layer region comprises an input convolutional layer, convolution blocks and an average pooling layer. The convolution operations in the convolutional layers of the present invention use methods commonly known in the art, but the convolution kernels and related parameters used in the convolution operations are determined by obtaining the optimal deep neural network model through the model training method of the present invention.
Figs. 4a to 4c show concrete convolution block structures of the present invention. Fig. 4a shows a two-layer convolutional structure composed of a stack of two convolutional layers. Fig. 4b shows a multi-channel structure in which each channel is composed of a stack of two convolutional layers. Fig. 4c shows another multi-channel structure in which each channel is composed of one to three convolutional layers. The related parameters of the convolution kernels in these convolution blocks, the number of channels, and the number of convolutional layers per channel are determined by obtaining the optimal deep neural network model according to the model training method.
A residual connection can also be added between the input and the output of a convolution block in the present invention: the input of each convolution block and the output of that block are summed and taken as the block's output value. Thus F(x) + x = H(x), where F(·) is the convolution block function, H(·) is the input of the next module and x is the output of the previous module. Since F(x) = H(x) − x, adding the residual term x facilitates the training of the parameters of F(·).
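The identity H(x) = F(x) + x can be illustrated with a toy stand-in for the convolution block. The sketch assumes the block preserves the input length, which the element-wise summation requires; the helper name is an assumption.

```python
def residual_connect(x, conv_block):
    """Return H(x) = F(x) + x: the element-wise sum of a block's input
    and its output, which becomes the input of the next block."""
    fx = conv_block(x)
    if len(fx) != len(x):
        raise ValueError("residual connection requires matching lengths")
    return [a + b for a, b in zip(fx, x)]
```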
<Optimal model training based on transfer learning>:
As can be seen from Fig. 2, the deep neural network classifier of the present invention is complex and its capacity is huge. If the model were trained with randomly initialized parameters, the training process would easily become trapped at a local optimum during optimization, making it difficult to obtain a good result. The present invention therefore adopts transfer learning to train the optimal model: as shown in Fig. 5, a single-data-block-input deep neural network is first used as the model framework for model training, and the structure and parameters of the convolutional layer region are extracted from the optimal deep neural network model obtained. The convolutional layer structure and parameters are then input as fixed hyperparameters into the deep neural network classifier framework of the present invention shown in Fig. 2, and that framework is trained as shown in Fig. 6. Because the structure and parameters of the convolutional layer region of the model are fixed during this training, the training process avoids being trapped at a local optimum, while the computation consumed by training is reduced and the training speed improved. Specifically, the training steps of the deep neural network model of the present invention are as follows:
A. constructing, from a multi-data-block-input deep neural network framework comprising a convolutional layer region and LSTM units, a single-data-block-input deep neural network framework;
B. inputting the single-data-block-input deep neural network framework into a hyperparameter random generator;
C. generating a hyperparameter combination model pool with the hyperparameter random generator;
D. testing each hyperparameter combination model in the pool with the test data set; if a model passes the test, ending its training and placing it in the trained hyperparameter combination model pool; if it does not pass, optimizing it with the training data set and testing it again after optimization, until it passes the test;
E. verifying each model in the trained hyperparameter combination model pool with the validation data set; the model with the best validation result is the optimal hyperparameter combination model, and its convolutional layer structure and parameters are extracted;
F. initializing the multi-data-block-input deep neural network framework with the extracted convolutional layer structure and parameters, and inputting it into the hyperparameter random generator;
G. generating a hyperparameter combination model pool with the hyperparameter random generator;
H. testing each hyperparameter combination model in the pool with the test data set; if a model passes the test, ending its training and placing it in the trained hyperparameter combination model pool; if it does not pass, optimizing it with the training data set and testing it again after optimization, until it passes the test;
I. verifying each model in the trained hyperparameter combination model pool with the validation data set; the model with the best validation result is the optimal hyperparameter combination model.
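Steps A–I can be condensed into the following mock search loop. Everything here is a deliberately simplified stand-in: the "models" are plain dictionaries and the train/test/validate callables are supplied by the caller, whereas the invention trains deep networks with batch Adam backpropagation; the pool size and round budget are illustrative assumptions.

```python
import random

def search_optimal_model(make_model, hyper_space, train, test_passes,
                         val_score, pool_size=5, max_rounds=100, seed=0):
    """Sample a pool of hyperparameter combinations (steps C/G), optimise
    each until its test passes (steps D/H), and keep the combination with
    the best validation score (steps E/I)."""
    rng = random.Random(seed)
    pool = [{name: rng.choice(values) for name, values in hyper_space.items()}
            for _ in range(pool_size)]
    trained = []
    for hp in pool:
        model = make_model(hp)
        for _ in range(max_rounds):
            if test_passes(model):       # test-set check
                trained.append((hp, model))
                break
            train(model)                 # one optimisation round
    # validation-set selection over the trained pool
    return max(trained, key=lambda pair: val_score(pair[1]))
```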
Compared with the prior art, one or more embodiments of the present invention can have the following advantages:
1. The present invention splices the waveform data acquired by each detection device of the distribution network in a definite order, so that whole-network information enters the recording data and the operating-condition classification results of the present invention embody the topology information of the distribution network.
2. The present invention adds a recurrent neural network to the deep neural network structure, realizing the processing of long time-series signals.
3. The present invention uses transfer learning to train the complete deep neural network classifier in parts, solving the problem that a model of complex structure and large capacity is difficult to train.
Other features and advantages of the present invention will be set forth in the following description, and in part will become apparent from the description or be understood by practising the present invention. The objects and other advantages of the present invention can be realized and obtained by the structures particularly pointed out in the description, the claims and the accompanying drawings.
Description of the drawings
The accompanying drawings are provided for a further understanding of the present invention and constitute a part of the description; together with the embodiments of the present invention they serve to explain the present invention and are not to be construed as limiting it. In the drawings:
Fig. 1 is a flow diagram of the distribution network operating-condition recording classification method based on network topology and long time-sequence information of the present invention;
Fig. 2 is a schematic diagram of the deep neural network framework of the present invention;
Fig. 3 is a schematic diagram of the convolutional layer region structure of the present invention;
Figs. 4a-4c are schematic diagrams of the convolution block structures of the present invention;
Fig. 5 is a training flow diagram of the single-data-block-input deep neural network model of the present invention;
Fig. 6 is a training flow diagram of the multi-data-block-input deep neural network model of the present invention;
Fig. 7 is a schematic diagram of the optimized deep neural network model of one embodiment of the present invention.
Specific embodiment
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings.
Fig. 7 is a schematic diagram of an optimized deep neural network model according to an embodiment of the present invention. The method is explained below with reference to Fig. 7.
In the present embodiment, the acquisition time points of the data of each monitoring point are first aligned by wide-area time synchronization and the common section of the waveforms is intercepted. Then, centred on the data of the monitoring point currently to be judged, the recordings of the front-end and rear-end monitoring points of that monitoring point are arranged and spliced in order, from far to near, according to the network topology structure, so as to form complete synchronized operating-condition recording data for each monitoring point.
The synchronized operating-condition recording data of each monitoring point are then processed with the second-order difference method to intercept 5 effective waveform sections, after which cubic spline interpolation converts the waveform frequency to 700 Hz.
The single-data-block-input deep neural network is optimally trained as shown in Fig. 5. Its framework comprises a convolutional layer region and a fully connected layer region; the convolutional layer region comprises an input convolutional layer, convolution blocks and an average pooling layer. The fully connected layer region contains multiple fully connected layers; except that the number of neurons of the last fully connected layer is set equal to the number of operating-condition types, the neuron counts of the other fully connected layers and the number of fully connected layers are hyperparameters to be optimized by training.
The training, validation and test data sets used for training the single-data-block-input deep neural network model in this embodiment cover 7 kinds of condition data in total: short circuit, earth fault, power outage, power restoration, heavy-load switch-in, heavy-load switch-out and lightning strike, with 5000 data items per condition, 35000 items in total. The training data set uses 4200 items per condition, and the test and validation data sets each use 400 items per condition. The optimization method in the training flow is batch Adam backpropagation; training stops when the accuracy on the test data set exceeds 98% or more than 10000 rounds have been trained, otherwise optimization continues. Among the multiple hyperparameter combination models, the one with the highest validation data set accuracy is the optimal hyperparameter combination model.
The structure and parameters of the convolutional layer region are extracted from this optimal hyperparameter-combination model. In the extracted structure, the input convolutional layer has convolution kernels of width × length 6 × 5, 8 in number. Convolutional block I is a single-channel stack of two convolutional layers: the kernels of the first convolutional layer are 1 × 3, 8 in number, and the kernels of the second convolutional layer are 1 × 3, 16 in number. Convolutional block II is a convolutional layer with three channels. Channel a is a two-layer stack whose first-layer kernels are 1 × 2, 16 in number, and whose second-layer kernels are 1 × 3, 32 in number. Channel b is a two-layer stack whose first-layer kernels are 1 × 3, 32 in number, and whose second-layer kernels are 1 × 3, 32 in number. Channel c is a three-layer stack whose first-layer kernels are 1 × 3, 16 in number, whose second-layer kernels are 1 × 4, 16 in number, and whose third-layer kernels are 1 × 3, 32 in number. The outputs of the three channels of convolutional block II are summed and fed into convolutional block III. Convolutional block III is a convolutional layer with 8 channels, each composed of a two-layer stack whose first-layer kernels are 1 × 3, 32 in number, and whose second-layer kernels are 1 × 3, 64 in number. Residual connections are provided between convolutional blocks I, II, and III: the output of the input convolutional layer and the output of block I are summed and fed into block II; the output of block I and the output of block II are summed and fed into block III; and the output of block II and the output of block III are summed and fed into the average pooling layer. These residual connections strengthen the training of the parameters of convolutional blocks I, II, and III.
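The residual wiring just described can be sketched as follows. Each convolutional block is replaced by a hypothetical shape-preserving placeholder function (not the patent's actual layers), so that only the sum-and-forward connections between blocks are shown:

```python
import numpy as np

# Sketch of the residual wiring between the convolutional blocks. Each
# block is a hypothetical placeholder that preserves shape, so only the
# sum-and-forward connections of the embodiment are shown.
def input_conv(x): return x * 2.0
def block_I(x):    return x + 1.0
def block_II(x):   return x * 0.5
def block_III(x):  return x - 1.0

def forward(x):
    c0 = input_conv(x)
    c1 = block_I(c0)
    c2 = block_II(c0 + c1)   # input conv output + block I output -> block II
    c3 = block_III(c1 + c2)  # block I output + block II output -> block III
    return np.mean(c2 + c3)  # block II + block III -> average pooling layer

print(forward(np.ones((1, 6, 5))))  # → 7.0
```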
The structure and parameters of the convolutional layer region above are loaded into the multi-data-block input deep neural network framework, at which point they become preset (fixed) parameters of the convolutional layer region in that framework. The multi-data-block input deep neural network is then trained again, using the same training, validation, and test datasets as before. In the optimal deep neural network model obtained by this training, the LSTM cell has 16 features, and the fully connected region has two layers: the first fully connected layer has 12 neurons, and the second has 7 neurons, equal to the number of working-condition classes.
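The two-stage transfer described here, in which the convolutional layer region keeps the stage-one parameters while only the LSTM and fully connected layers are trained, can be sketched abstractly. The dict layout below is an illustrative assumption, not the patent's actual data structure:

```python
# Sketch of the second training stage: the convolutional layer region is
# loaded with the stage-one parameters and frozen (preset), while the
# LSTM (16 features) and the fully connected layers (12 and 7 neurons)
# stay trainable. The dict layout is illustrative only.
model = {
    "conv_region": {"params": "loaded from stage one", "trainable": False},
    "lstm":        {"features": 16, "trainable": True},
    "fc1":         {"neurons": 12,  "trainable": True},
    "fc2":         {"neurons": 7,   "trainable": True},  # one per working-condition class
}

def trainable_layers(model):
    """Names of the layers the stage-two optimizer may update."""
    return [name for name, layer in model.items() if layer["trainable"]]

print(trainable_layers(model))  # → ['lstm', 'fc1', 'fc2']
```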
Five groups of preprocessed working-condition recordings are input into this optimal deep neural network model to obtain the working-condition classification result, which incorporates network topology and long time-sequence information.
The above is only a specific embodiment of the present invention, and the protection scope of the present invention is not limited thereto. Any modification or replacement readily conceivable by those skilled in the art within the technical scope of the present invention shall fall within the protection scope of the present invention.

Claims (10)

  1. A power distribution network working-condition classification method based on network topology and long time-sequence information, characterized in that the method comprises:
    performing waveform splicing on the working-condition recordings of multiple distribution network monitoring points, and preprocessing the spliced waveform to obtain multiple effective waveform regions;
    constructing a multi-data-block input deep neural network framework comprising a convolutional layer region and a long short-term memory (LSTM) network unit, and constructing, from this framework, a single-data-block input deep neural network framework having the same convolutional layer region;
    generating multiple single-data-block input deep neural network models with a hyperparameter generator, training each of them on a working-condition recording classification dataset to obtain the optimal single-data-block input deep neural network model, and extracting the structure and parameters of the convolutional layer region from the optimal single-data-block input deep neural network model;
    initializing the multi-data-block input deep neural network framework with the extracted structure and parameters of the convolutional layer region, generating multiple multi-data-block input deep neural network models with the hyperparameter generator, and training each of them on the working-condition recording classification dataset to obtain the optimal multi-data-block input deep neural network working-condition classifier model;
    inputting the multiple effective waveform regions into the optimal multi-data-block input deep neural network working-condition classifier model to obtain the working-condition type.
  2. The power distribution network working-condition classification method according to claim 1, characterized in that the waveform splicing refers to aligning the acquisition time points of the working-condition recordings of the distribution network monitoring points by wide-area time synchronization, intercepting the common section of the waveforms, and then splicing them in order according to the network topology of the different monitoring points.
  3. The power distribution network working-condition classification method according to claim 1, characterized in that the convolutional layer region comprises convolutional blocks, and the structure of a convolutional block may be a stack of two convolutional layers, or multiple channels each formed by a stack of two convolutional layers, or multiple channels each comprising 1 to 3 convolutional layers.
  4. The power distribution network working-condition classification method according to claim 3, characterized in that residual connections are provided between the convolutional blocks of the convolutional layer region, a residual connection meaning that the input and output of a convolutional block are summed and the sum is passed as input to the next convolutional block.
  5. The power distribution network working-condition classification method according to claim 1, characterized in that the working-condition recording classification dataset comprises a training dataset, a validation dataset, and a test dataset, each containing working-condition data of at least one of short circuit, grounding, power outage, power restoration, large-load switching-in, large-load switching-out, and lightning strike.
  6. A machine learning method for a multi-data-block input deep neural network working-condition classifier framework, characterized in that the method comprises:
    A. constructing a single-data-block input deep neural network framework from the multi-data-block input deep neural network framework comprising a convolutional layer region and a long short-term memory network unit;
    B. inputting the single-data-block input deep neural network framework into a random hyperparameter generator;
    C. generating a pool of hyperparameter-combination models with the random hyperparameter generator;
    D. testing each hyperparameter-combination model in the pool with the test dataset; if the test passes, ending training and placing the hyperparameter-combination model into the pool of trained hyperparameter-combination models; if the test does not pass, optimizing the hyperparameter-combination model with the training dataset and testing again after optimization, until the model passes the test;
    E. verifying each model in the pool of trained hyperparameter-combination models with the validation dataset; the hyperparameter-combination model that passes verification is the optimal hyperparameter-combination model, and the convolutional layer structure and parameters of the optimal hyperparameter-combination model are extracted;
    F. initializing the multi-data-block input deep neural network framework with the extracted convolutional layer structure and parameters, and inputting it into the random hyperparameter generator;
    G. generating a pool of hyperparameter-combination models with the random hyperparameter generator;
    H. testing each hyperparameter-combination model in the pool with the test dataset; if the test passes, ending training and placing the hyperparameter-combination model into the pool of trained hyperparameter-combination models; if the test does not pass, optimizing the hyperparameter-combination model with the training dataset and testing again after optimization, until the model passes the test;
    I. verifying each model in the pool of trained hyperparameter-combination models with the validation dataset; the hyperparameter-combination model that passes verification is the optimal hyperparameter-combination model.
  7. The machine learning method according to claim 6, characterized in that the training dataset, validation dataset, and test dataset each contain working-condition data of at least one of short circuit, grounding, power outage, power restoration, large-load switching-in, large-load switching-out, and lightning strike.
  8. The machine learning method according to claim 7, characterized in that the working conditions in the training, validation, and test datasets are the seven conditions of short circuit, grounding, power outage, power restoration, large-load switching-in, large-load switching-out, and lightning strike, with no fewer than 5000 samples selected per condition, of which the training dataset takes no fewer than 4200 per condition and the test dataset and validation dataset each take no fewer than 400 per condition.
  9. The machine learning method according to claim 6, characterized in that the extracted structure and parameters of the convolutional layer region comprise at least the number of convolutional blocks, the number of channels within each convolutional block, and the length, width, and number of the convolution kernels.
  10. A power distribution network working-condition classification device, characterized in that the device classifies power distribution network working-condition recordings using the power distribution network working-condition classification method according to any one of claims 1 to 5.
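The two-stage random hyperparameter search of claim 6 (steps C–E, repeated as G–I) can be sketched as follows. The hyperparameter ranges and the validation-score function are illustrative assumptions, not values fixed by the claims, and the training of step D/H is elided:

```python
import random

# Sketch of steps C-E (and G-I) of claim 6: draw a pool of hyperparameter
# combinations at random, train each until it passes the test criterion,
# and keep the one scoring best on the validation set. The hyperparameter
# ranges and the validation score are illustrative assumptions.

def random_hyperparams(rng):
    return {
        "lstm_features": rng.choice([8, 16, 32]),
        "fc1_neurons":   rng.choice([8, 12, 16]),
        "learning_rate": rng.choice([1e-3, 1e-4]),
    }

def search(pool_size, validation_score, seed=0):
    rng = random.Random(seed)
    pool = [random_hyperparams(rng) for _ in range(pool_size)]  # step C/G
    # Step D/H would train each combination until it passes the test set;
    # this sketch assumes every combination eventually passes.
    best = max(pool, key=validation_score)                      # step E/I
    return best, pool

best, pool = search(pool_size=10, validation_score=lambda hp: hp["lstm_features"])
print(best["lstm_features"])
```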
CN201711404581.5A 2017-12-22 2017-12-22 Power distribution network working condition wave recording classification method based on network topology and long time sequence information Active CN108154223B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711404581.5A CN108154223B (en) 2017-12-22 2017-12-22 Power distribution network working condition wave recording classification method based on network topology and long time sequence information


Publications (2)

Publication Number Publication Date
CN108154223A true CN108154223A (en) 2018-06-12
CN108154223B CN108154223B (en) 2022-04-15

Family

ID=62464984



Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103136587A (en) * 2013-03-07 2013-06-05 武汉大学 Power distribution network operating state classification recognition method based on support vector machine
US20160284347A1 (en) * 2015-03-27 2016-09-29 Google Inc. Processing audio waveforms
CN106056147A (en) * 2016-05-27 2016-10-26 大连楼兰科技股份有限公司 System and method for establishing target division remote damage assessment of different vehicle types based artificial intelligence radial basis function neural network method
CN106569089A (en) * 2016-10-11 2017-04-19 国网上海市电力公司 Power distribution line dynamic Information-based power distribution network fault monitoring method
US20170112401A1 (en) * 2015-10-27 2017-04-27 CardioLogs Technologies Automatic method to delineate or categorize an electrocardiogram
CN107144760A (en) * 2016-12-31 2017-09-08 国电南瑞科技股份有限公司 It is a kind of to support electric network fault to perceive panorama Wave record method and plateform system with analysis
CN107167702A (en) * 2017-05-04 2017-09-15 国网福建省电力有限公司 A kind of distribution feeder fault type recognition method and device
CN107340456A (en) * 2017-05-25 2017-11-10 国家电网公司 Power distribution network operating mode intelligent identification Method based on multiple features analysis
CN107358203A (en) * 2017-07-13 2017-11-17 西安电子科技大学 A kind of High Resolution SAR image classification method based on depth convolution ladder network


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
BISWARUP BHATTACHARYA et al.: "Intelligent fault analysis in electrical power grids", arXiv:1711.03026v1 *
CHRISTIAN SZEGEDY et al.: "Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning", arXiv:1602.07261v2 *
JAMES BERGSTRA et al.: "Random Search for Hyper-Parameter Optimization", Journal of Machine Learning Research *
MING ZHANG et al.: "Convolutional Neural Networks for Automatic Cognitive Radio Waveform Recognition", IEEE Access *
MOU-FA GUO et al.: "Deep-Learning-Based Earth Fault Detection Using Continuous Wavelet Transform and Convolutional Neural Network in Resonant Grounding Distribution Systems", IEEE Sensors Journal *
WANG Yanna et al.: "EEG classification of smoking craving based on convolutional neural networks", Computer Systems & Applications *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110794255A (en) * 2018-08-01 2020-02-14 北京映翰通网络技术股份有限公司 Power distribution network fault prediction method and system
CN110794254A (en) * 2018-08-01 2020-02-14 北京映翰通网络技术股份有限公司 Power distribution network fault prediction method and system based on reinforcement learning
CN110794255B (en) * 2018-08-01 2022-01-18 北京映翰通网络技术股份有限公司 Power distribution network fault prediction method and system
CN109102126A (en) * 2018-08-30 2018-12-28 燕山大学 One kind being based on depth migration learning theory line loss per unit prediction model
CN109922038A (en) * 2018-12-29 2019-06-21 中国电力科学研究院有限公司 A kind of detection method and device of the abnormal data for electric power terminal
CN111008455A (en) * 2019-11-01 2020-04-14 国网河南省电力公司电力科学研究院 Medium-term wind power scene generation method and system
CN111008455B (en) * 2019-11-01 2021-09-07 国网河南省电力公司电力科学研究院 Medium-term wind power scene generation method and system
CN112529104A (en) * 2020-12-23 2021-03-19 东软睿驰汽车技术(沈阳)有限公司 Vehicle fault prediction model generation method, fault prediction method and device
CN117368598A (en) * 2023-10-09 2024-01-09 武汉杰创博特自动化科技有限公司 Electric energy quality monitoring method and device
CN117368598B (en) * 2023-10-09 2024-04-12 武汉杰创博特自动化科技有限公司 Electric energy quality monitoring method and device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant