CN107491792A - Grid fault classification method based on feature-mapping transfer learning - Google Patents

Grid fault classification method based on feature-mapping transfer learning

Info

Publication number
CN107491792A
CN107491792A (application CN201710756382.4A)
Authority
CN
China
Prior art keywords
feature
auxiliary source
Prior art date
Legal status: Granted
Application number
CN201710756382.4A
Other languages
Chinese (zh)
Other versions
CN107491792B (en)
Inventor
张化光
刘鑫蕊
孙秋野
于晓婷
杨珺
王智良
赵鑫
吴泽群
Current Assignee
Northeastern University China
Original Assignee
Northeastern University China
Priority date
Filing date
Publication date
Application filed by Northeastern University China filed Critical Northeastern University China
Priority to CN201710756382.4A priority Critical patent/CN107491792B/en
Publication of CN107491792A publication Critical patent/CN107491792A/en
Application granted granted Critical
Publication of CN107491792B publication Critical patent/CN107491792B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Other Investigation Or Analysis Of Materials By Electrical Means (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a grid fault classification method based on feature-mapping transfer learning, comprising: 1. selecting the target-domain data and the auxiliary-source-domain data; 2. performing fault feature extraction based on the incremental wavelet singular entropy on the target-domain data and the auxiliary-source-domain data respectively, taking each incremental wavelet singular entropy as a fault feature, and thereby constructing the feature vector space of the target domain and the feature vector space of the auxiliary source domain; 3. using the feature-mapping transfer learning method to find the base vectors corresponding to the axle (pivot) features, the features specific to the auxiliary source domain, and the features specific to the target domain; 4. using the base vectors obtained for the auxiliary source domain as support vectors, while adding a similarity penalty term and the constraint conditions of the support-vector training set, so that the classifier is trained jointly to obtain the classification result. The invention can quickly and accurately find the three groups of base vectors that best characterise the fault category.

Description

Grid fault classification method based on feature-mapping transfer learning
Technical field
The invention belongs to the field of power transmission and distribution, and in particular relates to a grid fault classification method based on feature-mapping transfer learning.
Background technology
The continual expansion of grid scale and the steady increase of transmission capacity and voltage class bring enormous economic and social benefits, but at the same time a grid fault can cause ever more serious harm to the economy and to people's lives. Fast and accurate grid fault classification is the precondition for quickly restoring the power supply and an important part of accident analysis; research on fast and reliable fault classification methods is therefore of great significance for guaranteeing the security and economy of the power system.
Classification, as an important machine learning method, has been extensively researched and applied. The main approach is to train a classification model on source-domain data and then predict the type of the target-domain data with that model. To guarantee that the trained model is accurate and highly reliable, traditional classification learning must satisfy two basic assumptions: (1) the training samples used for learning and the new test samples are independent and identically distributed; (2) enough usable training samples are available to learn a good classification model. In practical applications, however, we find that these two conditions often cannot be met.
To solve the above problems of insufficient data and feature discrepancy, most machine learning approaches re-label the fault samples, but this requires many experiments and much expert knowledge, and because factors such as the operating mode of the power system and the load keep changing, it cannot be guaranteed that the collected labelled data follow the same distribution as the target-domain fault data, which reduces the reliability of the diagnosis.
The applicants have found that transfer learning, as a cross-domain, cross-task learning method, has attracted more and more attention in the machine learning community. Transfer learning is a new machine learning method that uses existing knowledge to solve problems in different but related domains. It relaxes the two basic assumptions of conventional machine learning: its aim is, when the source-domain data and the target-domain data follow different distributions, to migrate the knowledge learned from the source domain to the target domain, and thus to solve learning problems in which the target domain has only a few labelled samples or even none. When a grid fault occurs, the network topology changes and the data distribution changes with it; a transfer-learning-based method that fully exploits the knowledge of auxiliary data that differ from, but are related to, the target data can effectively improve the fault classification performance of machine learning algorithms on the grid.
Therefore, proposing a grid fault classification method based on transfer learning has both a theoretical basis and practical significance.
The content of the invention
In view of the defects of the prior art, the object of the invention is to provide a grid fault classification method based on feature-mapping transfer learning. By abstractly analysing the correlations between the features specific to the auxiliary source domain, the features specific to the target domain and the axle features, it effectively maps the data of each domain from the original high-dimensional feature space to a low-dimensional feature space in which the source-domain data and the target-domain data follow a similar distribution; the maximum of the correlation coefficient is then found by the Lagrange multiplier method, yielding the three groups of base vectors that best characterise the fault category.
To achieve these goals, the technical scheme is as follows:
A grid fault classification method based on feature-mapping transfer learning, characterised in that it comprises the following steps:
Step 1: select the target-domain data to be classified and the auxiliary-source-domain data. The target-domain data comprise at least the three-phase current data of each faulty line at each fault moment. The auxiliary-source-domain data comprise: the three-phase current data of each faulty line at the fault moment preceding each fault moment; the three-phase current data of each faulty line at the normal-operation moment preceding each fault moment; and the three-phase current data of the lines adjacent to the faulty line at each fault moment.
Step 2: perform fault feature extraction based on the incremental wavelet singular entropy on the target-domain data and the auxiliary-source-domain data respectively to extract each incremental wavelet singular entropy, take each incremental wavelet singular entropy as a fault feature, and thereby construct the feature vector space of the target domain and the feature vector space of the auxiliary source domain.
Step 3: using the feature-mapping transfer learning method, take the intersection of the auxiliary source domain and the target domain as the axle features, and find the base vectors corresponding to the axle features, the features specific to the auxiliary source domain and the features specific to the target domain by seeking extrema with the Lagrange multiplier method.
Step 4: in the fault classification process based on the support vector machine, use the base vectors of the auxiliary source domain obtained in Step 3 as support vectors; at the same time, add to the original objective function of the support vector machine a similarity penalty term for the support-vector training set of the auxiliary source domain, and add the constraint conditions of the support-vector training set to the original constraints, so that the classifier is trained jointly to obtain the classification result.
Further, Step 2 comprises:
Step 21: perform an m-layer wavelet multi-resolution decomposition of the target-domain data and the auxiliary-source-domain data respectively to obtain the wavelet transform coefficient matrix, and compute its singular value decomposition to obtain the singular value feature matrix, denoted Λ = diag(λ_1, λ_2, …, λ_n).
Step 22: construct the n-th-order incremental wavelet singular entropies of the target-domain data and the auxiliary-source-domain data respectively, where λ_i is the i-th non-zero singular value and X_i is the i-th incremental wavelet singular entropy corresponding to λ_i.
Step 23: construct a feature vector X_s1 from the n incremental wavelet singular entropies of the auxiliary-source-domain data, denoted X_s1 = [X_1, X_2, …, X_n], and let X be the norm of X_s1; the corresponding normalized feature vector X_s1* is then X_s1* = [X_1/X, X_2/X, …, X_n/X]. These vectors form the vector space of the auxiliary-source-domain data, X_s* = [X_s1*, X_s2*, …, X_sn*]; the vector space of the target-domain data, X_t* = [X_t1*, X_t2*, …, X_tn*], is formed in the same way.
Further, n = m² − 1 in the singular value feature matrix, and λ_n must satisfy the constraint λ_n/λ_1 ≥ 0.01%.
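Steps 21-23 can be sketched as follows. This is a minimal numpy sketch: the stand-in coefficient matrix replaces a real m-layer wavelet decomposition of a three-phase current signal, and the conventional -p·ln p entropy-increment form is an assumption rather than the patent's exact formula.

```python
import numpy as np

def singular_entropy_features(coeff_matrix, tol=1e-4):
    """Steps 21-23: SVD of a wavelet-coefficient matrix -> normalized feature vector."""
    # Step 21: singular values of the coefficient matrix.
    sv = np.linalg.svd(coeff_matrix, compute_uv=False)
    sv = sv[sv / sv[0] >= tol]          # keep only lambda_n with lambda_n/lambda_1 >= 0.01%
    # Step 22: incremental wavelet singular entropies (assumed -p ln p form).
    p = sv / sv.sum()
    entropy_inc = -p * np.log(p)
    # Step 23: normalize by the Euclidean norm to obtain the feature vector X*.
    return entropy_inc / np.linalg.norm(entropy_inc)

rng = np.random.default_rng(0)
C = rng.standard_normal((4, 64))        # stand-in coefficient matrix (m=3 -> 4 frequency bands)
x_star = singular_entropy_features(C)
print(x_star.shape, round(float(np.linalg.norm(x_star)), 6))  # -> (4,) 1.0
```

In this sketch the decomposition depth only determines the number of rows of the coefficient matrix; a real implementation would obtain those rows from the wavelet transform of the recorded currents.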
Further, Step 3 comprises:
Step 31: define the fault label of the known fault types in the auxiliary source domain X_s* as Y, so that a given fault type label y ∈ Y. The intersection of the auxiliary source domain X_s* and the target domain X_t* corresponds to the axle features (the domain axle features), X_∩* ∈ X_s* ∩ X_t*. Then compute the correlation coefficient between the axle feature X_∩* and Y as
I(X_∩*, Y) = Σ_y P(X_∩*, y) log[ P(X_∩*, y) / (P(X_∩*) P(y)) ]
where I(X_∩*, Y) is the correlation coefficient between X_∩* and Y, P(X_∩*, y) is the joint probability of the domain axle feature X_∩* and the fault label y, P(X_∩*) is the probability that the axle feature X_∩* appears in the auxiliary-source-domain data X_s*, and P(y) is the probability that the fault label y appears in the target-domain data X_t*. Select the axle feature with the largest correlation coefficient in each of the m layers of the wavelet multi-resolution decomposition to form the axle feature set, denoted X_∩ = {X_∩1*, X_∩2*, …, X_∩m*}.
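The correlation score of step 31 is a mutual-information computation between a discretized axle feature and the fault labels. A minimal sketch, with hypothetical feature names and toy data:

```python
import numpy as np

def mutual_information(x, y):
    """I(X;Y) for two discrete sequences, as in the step-31 correlation score."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        px = np.mean(x == xv)
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))   # joint probability P(X, y)
            py = np.mean(y == yv)                  # marginal P(y)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

# Toy pivot-feature selection: keep the candidate most informative about the label.
labels = np.array([0, 0, 1, 1, 0, 1, 0, 1])
candidates = {
    "axle_a": np.array([0, 0, 1, 1, 0, 1, 0, 1]),   # perfectly aligned with the labels
    "axle_b": np.array([0, 1, 0, 1, 0, 1, 0, 1]),   # only weakly related
}
best = max(candidates, key=lambda k: mutual_information(candidates[k], labels))
print(best)  # -> axle_a
```

Picking the highest-scoring candidate per decomposition layer, as the patent describes, is then a loop over layers applying the same `max`.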
Step 32: first form the union D_s ∪ D_t of the fault features extracted from the auxiliary-source-domain data and from the target-domain data, and construct three groups of paired sample sets for the random variables α, β, γ, where |X_∩|, |D_s| and |D_t| denote respectively the dimension of the axle features, the dimension of the fault features of the auxiliary-source-domain data and the dimension of the fault features of the target-domain data; α is the value of a sample point X_s* of the auxiliary-source-domain data on the axle feature space X_∩, β is the value of a sample point X_s* of the auxiliary-source-domain data on the feature space D_s, and γ is the value of a sample point X_t* of the target-domain data on the feature space D_t.
Then find the three groups of base vectors W_A, W_S, W_T according to the principle that the correlation coefficients between the linear combinations W_A^T A, W_S^T S and W_T^T T reach a maximum,
subject to the corresponding constraint conditions,
where C_AA = (A_S ∪ A_t)(A_S ∪ A_t)^T,
and where: W_A is the set of base vectors corresponding to the axle features; W_S is the set of base vectors corresponding to the features specific to the auxiliary source domain; W_T is the set of base vectors corresponding to the features specific to the target domain; C_SS is the covariance matrix of the fault features D_s of the auxiliary-source-domain data about the axle features; A_S is a matrix of dimension |X_∩| × n_s over α; A_t is a matrix of dimension |X_∩| × n_t over α; S is a matrix of dimension |D_s| × n_s over β; T is a matrix of dimension |D_t| × n_t over γ; C_TT is the covariance matrix of the fault features D_t of the target-domain data about the axle features; and C_AA is the covariance matrix of the union D_s ∪ D_t of the fault features of the auxiliary-source-domain and target-domain data about the axle features.
Step 33: find the base vectors corresponding to the axle features, the fault features of the auxiliary source domain and the fault features of the target domain by seeking extrema with the Lagrange multiplier method; the eigenvectors corresponding to the first m generalized eigenvalues of the resulting matrix are the required base vectors W_A, W_S, W_T.
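Seeking the extremum of a correlation coefficient under quadratic covariance constraints, as in steps 32-33, leads to a generalized eigenvalue problem. The sketch below solves a two-view CCA-style stand-in with numpy; the synthetic two-dimensional views and their covariance blocks are illustrative, not the patent's C_SS/C_TT/C_AA.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
z = rng.standard_normal(n)                        # shared latent "axle" component
X = np.vstack([z + 0.1 * rng.standard_normal(n),  # view 1: auxiliary-source features
               rng.standard_normal(n)])
Y = np.vstack([z + 0.1 * rng.standard_normal(n),  # view 2: target-domain features
               rng.standard_normal(n)])

Cxx, Cyy = X @ X.T / n, Y @ Y.T / n
Cxy = X @ Y.T / n

# Generalized eigenproblem  A w = lambda B w  (the Lagrange-multiplier extremum).
A = np.block([[np.zeros((2, 2)), Cxy], [Cxy.T, np.zeros((2, 2))]])
B = np.block([[Cxx, np.zeros((2, 2))], [np.zeros((2, 2)), Cyy]])
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(B, A))
order = np.argsort(-eigvals.real)
top_corr = eigvals.real[order][0]                 # leading canonical correlation
w = eigvecs[:, order[0]].real                     # base-vector pair (W_S/W_T analogue)
print(round(float(top_corr), 2))
```

Because both views share the latent component `z` with little noise, the leading canonical correlation is close to 1, and the corresponding eigenvector concentrates on the shared coordinate — the analogue of keeping the first m generalized eigenvectors as base vectors.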
Further, Step 4 comprises:
Step 41: in the fault classification process based on the support vector machine, first take the base vectors W_S of the auxiliary source domain obtained in Step 3 as support vectors; at the same time, add to the original objective function of the support vector machine the similarity penalty term of the support-vector training set of the auxiliary source domain, and add the constraint conditions of the support-vector training set to the original constraints. The optimization of the training sample T of the support vector machine then includes the support-vector training set V_s of the auxiliary-source-domain data,
subject to the constraint conditions,
where N_t is the number of values of i, N_s − N_t is the number of values of j, K is the number of target-domain training sets, v_j^s is the j-th support vector of the auxiliary-source-domain data, D_t is the training data corresponding to the target-domain data, ρ_j represents the distance between the j-th support vector and the training data, γ_t and γ_s are respectively the regularization coefficients of the target-domain data and the auxiliary-source-domain data, and e_i² is the quadratic term of the error function.
The problem is then optimized with the Lagrange multiplier method: to minimise the loss function between the predicted value and the true class label, the SVM function estimation expression with the auxiliary support vector set added, i.e. the improved SVM function estimation expression, is obtained.
Step 42: obtain the classification result by constructing and combining multiple two-class classifiers.
Further, Step 42 obtains the classification result with the decision binary tree method.
Compared with the prior art, the beneficial effects of the invention are:
The invention relaxes the requirements that the training and test data be identically distributed and that the target diagnostic data be sufficient, and adds auxiliary-source-domain data so that, through transfer learning, the auxiliary source domain effectively helps the target domain achieve classification. Specifically: the diagonal matrix of singular values quickly and simply characterises the time-frequency distribution of the fault signal, while the incremental wavelet singular entropy can quantitatively distinguish signals with different time-frequency distributions, quantitatively express the distribution trend of the data, and, through a statistical analysis of the information, reflect the uncertainty and complexity of the system. The invention extracts fault features based on the incremental wavelet singular entropy from the three-phase currents of the target domain and the auxiliary source domain, applying the SVD transform to the wavelet transform coefficient matrix of the fault signal; following the idea of feature-mapping transfer learning, by abstractly analysing the correlations between the features specific to the auxiliary source domain, the features specific to the target domain and the axle features, the data of each domain are effectively mapped from the original high-dimensional feature space to a low-dimensional feature space; the maximum of the correlation coefficient is then found by the Lagrange multiplier method, yielding the three groups of base vectors that best characterise the fault category; finally, the base vectors of the auxiliary source domain are used as support vectors, the penalty term gives these support vectors a certain weight, and the classifier is trained jointly with the target-domain training set, so that the base vectors with discriminative ability greatly improve the classification accuracy.
Brief description of the drawings
Fig. 1 is the flow chart of the steps of the method of the invention;
Fig. 2 is the core procedure diagram of the example of the method of the invention;
Fig. 3 is the simplified grid line model of the example of the invention;
Fig. 4 is the structural diagram of the multi-class decision binary tree of the example of the invention;
Fig. 5 is the base-vector projection result based on transfer learning of the example of the invention.
Embodiment
To make the object, technical solution and advantages of the present invention clearer, the technical scheme is described clearly and completely below with reference to the drawings of the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the invention, not all of them; all other embodiments obtained by a person of ordinary skill in the art from the embodiments of the present invention without creative work fall within the scope of protection of the invention.
A grid fault classification method based on feature-mapping transfer learning, as shown in Fig. 1 and Fig. 2, comprises the following steps:
Step 1: select the target-domain data to be classified and the auxiliary-source-domain data. The target-domain data comprise at least the three-phase current data (the magnitude and direction of the three-phase currents) of each faulty line at each fault moment. The auxiliary-source-domain data comprise: the three-phase current data of each faulty line at the fault moment preceding each fault moment; the three-phase current data of each faulty line at the normal-operation moment on the eve of each fault moment; and the three-phase current data of the lines adjacent to the faulty line at the corresponding fault moment. Counting in 24-hour periods, a fault occurring today is the current fault moment and the last fault is the previous fault moment: if a fault occurred yesterday, the three-phase current data of today's fault serve as target-domain data, while the fault data involved yesterday are included in the source-domain data.
Step 2: perform fault feature extraction based on the incremental wavelet singular entropy on the target-domain data and the auxiliary-source-domain data respectively, take each incremental wavelet singular entropy as a fault feature, and thereby construct the feature vector space of the target domain and the feature vector space of the auxiliary source domain. Further, Step 2 comprises:
Step 21: perform an m-layer wavelet multi-resolution decomposition of the target-domain data and the auxiliary-source-domain data respectively to obtain the wavelet transform coefficient matrix, and compute its singular value decomposition to obtain the singular value feature matrix (which represents the basic modal features of the wavelet transform coefficient matrix), denoted Λ = diag(λ_1, λ_2, …, λ_n).
Step 22: combine the wavelet transform, the singular value decomposition and the information entropy into the incremental wavelet singular entropy; specifically, construct the n-th-order incremental wavelet singular entropies of the target-domain data and the auxiliary-source-domain data respectively, where X_i is the incremental wavelet singular entropy of the i-th non-zero singular value λ_i.
Step 23: construct a feature vector X_s1 from the n incremental wavelet singular entropies of the auxiliary-source-domain data, denoted X_s1 = [X_1, X_2, …, X_n], and let X be the norm of X_s1; the corresponding normalized feature vector X_s1* is then X_s1* = [X_1/X, X_2/X, …, X_n/X]. These vectors form the vector space of the auxiliary-source-domain data, X_s* = [X_s1*, X_s2*, …, X_sn*]; the vector space of the target-domain data, X_t* = [X_t1*, X_t2*, …, X_tn*], is formed in the same way. Further, m is usually chosen according to the fault situation, with in general n = m² − 1, so that the number of wavelet decomposition layers can be adjusted dynamically according to the complexity of the fault; λ_n must satisfy the constraint λ_n/λ_1 ≥ 0.01%, so that the resulting singular value feature matrix expresses the fault information most concisely.
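The λ_n/λ_1 ≥ 0.01% rule for choosing the effective order n can be expressed directly; this small helper (a sketch, with an illustrative spectrum) returns the number of singular values that survive the constraint:

```python
import numpy as np

def effective_order(singular_values, ratio=1e-4):
    """Largest n such that lambda_n / lambda_1 >= ratio (the 0.01% rule)."""
    sv = np.sort(np.asarray(singular_values, dtype=float))[::-1]  # descending
    keep = sv / sv[0] >= ratio
    return int(keep.sum())

# A sharply decaying spectrum: the last two values fall below 0.01% of lambda_1.
print(effective_order([5.0, 1.0, 1e-2, 1e-5, 1e-7]))  # -> 3
```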
Step 3: using the feature-mapping transfer learning method, take the intersection of the auxiliary source domain and the target domain as the axle features, and find the base vectors corresponding to the axle features, the fault features of the auxiliary source domain and the fault features of the target domain by seeking extrema with the Lagrange multiplier method. Further, the idea of feature-mapping transfer learning is to take the intersection of the auxiliary source domain and the target domain as the axle features and to map the data of each domain from the original high-dimensional feature space to a low-dimensional feature space in which the source-domain data and the target-domain data follow a similar distribution; the problem can therefore be abstracted as analysing the correlations between the features specific to the auxiliary source domain, the features specific to the target domain and the axle features, and finding the maximum of the correlation coefficient with the Lagrange multiplier method, which yields the three groups of base vectors that best characterise the fault category. Specifically, Step 3 comprises:
Step 31: define the fault label of the known fault types in the auxiliary source domain X_s* as Y, so that a given fault type label y ∈ Y. The intersection of the auxiliary source domain X_s* and the target domain X_t* corresponds to the axle features (the domain axle features), X_∩* ∈ X_s* ∩ X_t*; then compute the correlation coefficient between the axle feature X_∩* and Y, where P(X_∩*, y) is the joint probability of the domain axle feature X_∩* and the fault label y. An axle feature with a large correlation coefficient I(X_∩*, y) has stronger discriminative power for the fault type, so the axle feature with the largest correlation coefficient in each of the m layers of the wavelet multi-resolution decomposition is selected to form the axle feature set, denoted X_∩ = {X_∩1*, X_∩2*, …, X_∩m*}.
Step 32: first form the union D_s ∪ D_t of the fault features extracted from the auxiliary-source-domain data and from the target-domain data, and construct three groups of paired sample sets for the random variables α, β, γ, where |X_∩|, |D_s| and |D_t| denote respectively the dimension of the axle features, the dimension of the fault features of the auxiliary-source-domain data and the dimension of the fault features of the target-domain data; α is the value of a sample point X_s* of the auxiliary-source-domain data on the axle feature space X_∩, β is the value of a sample point X_s* of the auxiliary-source-domain data on the feature space D_s, and γ is the value of a sample point X_t* of the target-domain data on the feature space D_t. Then find the three groups of base vectors W_A, W_S, W_T according to the principle that the correlation coefficients between the linear combinations W_A^T A, W_S^T S and W_T^T T reach a maximum,
subject to the constraint conditions,
where C_AA = (A_S ∪ A_t)(A_S ∪ A_t)^T.
Step 33: find the base vectors corresponding to the axle features, the features specific to the auxiliary source domain and the features specific to the target domain by seeking extrema with the Lagrange multiplier method. The features specific to the source domain are the features of the source domain that remain after removing the intersection of the source domain and the target domain (the axle features); likewise, the features specific to the target domain are the features of the target domain that remain after removing that intersection. The eigenvectors corresponding to the first m generalized eigenvalues of the resulting matrix are the required base vectors W_A, W_S, W_T.
Step 4: in the fault classification process based on the support vector machine, use the base vectors of the auxiliary source domain obtained in Step 3 as support vectors; at the same time, add to the original objective function of the support vector machine a similarity penalty term for the support-vector training set of the auxiliary source domain, and add the constraint conditions of the support-vector training set to the original constraints, so that the classifier is trained jointly to obtain the classification result. Further, Step 4 comprises:
Step 41: first take the base vectors W_S of the auxiliary source domain obtained in Step 3 as support vectors; at the same time, add the similarity penalty term of the support-vector training set of the auxiliary source domain to the original objective function of the support vector machine and add the constraint conditions of the support-vector training set to the original constraints. The optimization of the training sample T of the support vector machine then includes the support-vector training set V_s of the auxiliary-source-domain data,
where K is the number of target-domain training sets, v_j^s is the j-th support vector of the auxiliary-source-domain data, D_t is the training data corresponding to the target-domain data, and ρ_j represents the distance between the j-th support vector and the training data: the smaller that distance, the larger the value of ρ_j, indicating that the support vector contributes more to the classification of the target domain; γ_t and γ_s are respectively the regularization coefficients of the target-domain data and the auxiliary-source-domain data, and e_i² is the quadratic term of the error function; replacing the original slack variables with the quadratic error term simplifies the computation.
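Replacing slack variables with a quadratic error term makes the optimization an LS-SVM-style linear system. The sketch below trains such a machine with a per-sample regularization weight — one assumed way to down-weight auxiliary-source support vectors; the patent's exact penalty ρ_j and weighting scheme are not reproduced.

```python
import numpy as np

def lssvm_train(X, y, gammas):
    """LS-SVM with a per-sample regularization weight (linear kernel).

    Auxiliary-source samples can be given a smaller gamma (weaker influence);
    this weighting is an assumption standing in for the similarity penalty.
    """
    n = len(y)
    K = X @ X.T                                   # linear kernel matrix
    Omega = (y[:, None] * y[None, :]) * K
    A = np.zeros((n + 1, n + 1))                  # LS-SVM KKT system
    A[0, 1:], A[1:, 0] = y, y
    A[1:, 1:] = Omega + np.diag(1.0 / np.asarray(gammas, dtype=float))
    rhs = np.concatenate([[0.0], np.ones(n)])
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]
    return lambda x: np.sign((alpha * y) @ (X @ x) + b)

X = np.array([[2.0, 0.0], [1.5, 0.5], [-2.0, 0.0], [-1.5, -0.5]])
y = np.array([1.0, 1.0, -1.0, -1.0])
gammas = [10.0, 10.0, 10.0, 1.0]   # last sample: auxiliary, down-weighted
f = lssvm_train(X, y, gammas)
print(f(np.array([1.0, 0.0])), f(np.array([-1.0, 0.0])))  # -> 1.0 -1.0
```

The returned decision function is the sign of the estimated SVM function, matching the sgn-based expression described below.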
The problem is then optimized with the Lagrange multiplier method: to minimise the loss function between the predicted value and the true class label, the SVM function estimation expression with the auxiliary support vector set added, i.e. the improved SVM function estimation expression, is obtained,
where sgn denotes the sign function, which returns 1 if its argument is greater than 0, 0 if the argument equals 0, and −1 if the argument is less than 0.
Step 42: the multi-class grid fault classification result is obtained by constructing and combining multiple two-class classifiers. Further, Step 42 uses the decision binary tree method: all categories are first divided into two subclasses, and each subclass is divided into two subclasses again, until the final categories are separated. The faults are first divided into grounded and ungrounded; the grounded faults are divided into single-phase-to-ground (a/b/c) and two-phase-to-ground (ab/ac/bc); the ungrounded faults are divided into phase-to-phase (ab/ac/bc) and three-phase short circuit (abc); and so on, until the final classes are obtained.
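The decision-binary-tree split of step 42 can be represented as nested tuples; the binary test at each node is an oracle membership stub standing in for a trained two-class SVM:

```python
# Hierarchical split of the ten fault types: grounded vs ungrounded,
# then single-phase vs two-phase-to-ground, phase-to-phase vs three-phase.
TREE = ("grounded?",
        ("single-phase?", ["Ag", "Bg", "Cg"], ["ABg", "BCg", "ACg"]),
        ("two-phase?", ["AB", "BC", "AC"], ["ABC"]))

def leaves(node):
    """All fault labels reachable from a node."""
    if isinstance(node, list):
        return node
    return leaves(node[1]) + leaves(node[2])

def route(fault, node):
    """Follow binary decisions down to the leaf group containing `fault`
    (membership stands in for a trained two-class classifier at each node)."""
    if isinstance(node, list):
        return node
    _, left, right = node
    return route(fault, left if fault in leaves(left) else right)

print(route("BCg", TREE))  # -> ['ABg', 'BCg', 'ACg']
```

A full implementation would keep splitting the leaf groups until each leaf holds a single fault class, with one trained two-class SVM per internal node.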
The scheme of the invention is further described below with a concrete example:
As shown in Figs. 3-5, the steps above are instantiated as follows:
Parameter setting: as shown in Fig. 3, the grid model is a simplified 500 kV double-end-supplied transmission system with a total length of 200 km. The line model uses a frequency-dependent model so that the results of the transient simulation are more accurate; this model takes into account that signals of different frequencies are attenuated to different degrees during transmission. At power frequency, the positive-sequence parameters are r1 = 0.035 Ω/km, x1 = 0.424 Ω/km, b1 = 2.726 × 10⁻⁶ S/km; the zero-sequence parameters are r0 = 0.3 Ω/km, x0 = 1.143 Ω/km, b0 = 1.936 × 10⁻⁶ S/km. On this grid model, the A, B, C three-phase current data of ten fault types under different fault locations, different transition resistances and different fault-moment operating conditions were generated, 1 089 groups of samples in total for fault classification: 105 groups of Ag faults, 145 groups of Bg faults, 90 groups of Cg faults, 95 groups of ABg faults, 118 groups of BCg faults, 102 groups of ACg faults, 129 groups of AB faults, 109 groups of BC faults, 111 groups of AC faults and 85 groups of ABC faults.
Step 2: the target-domain data and the auxiliary-source-domain data are each decomposed by m-level wavelet multiresolution analysis to obtain the corresponding wavelet transform coefficient matrix, and singular value decomposition of that matrix yields the singular value matrix, denoted Λ = diag(λ1, λ2, … λn). Taking a 3-level wavelet decomposition of the auxiliary-source-domain C-phase current as an example, the singular value matrix is Λ = diag(λ1, λ2, … λ8). After SVD of the C-phase current signal under each fault type, the singular values shown in Table 1 are obtained (bold entries mark the fault data involving phase C). Table 1 shows that for all faults involving phase C the 8 singular values are relatively even, while for fault data not involving phase C they are relatively uneven.
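A minimal sketch of this front end, assuming a Haar mother wavelet (the patent does not fix the wavelet basis) and stacking the detail coefficients and the final approximation into the matrix whose singular values form Λ:

```python
import numpy as np

# Hedged sketch of step 2's front end: an m-level Haar decomposition (the
# wavelet basis is an assumption made here for brevity), the coefficients
# stacked into a matrix, and its singular values taken as
# Lambda = diag(lambda_1, ..., lambda_n).

def haar_level(x):
    """One level of the Haar DWT: returns (approximation, detail)."""
    x = x[: len(x) // 2 * 2]
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    return a, d

def wavelet_singular_values(signal, m=3):
    """m-level decomposition, coefficient matrix, then its singular values."""
    a, rows = np.asarray(signal, float), []
    for _ in range(m):
        a, d = haar_level(a)
        rows.append(d)
    rows.append(a)                                 # final approximation row
    n = min(len(r) for r in rows)                  # truncate to a rectangle
    coeff = np.vstack([r[:n] for r in rows])       # wavelet coefficient matrix
    return np.linalg.svd(coeff, compute_uv=False)  # singular values, descending

t = np.linspace(0, 0.04, 256)
lam = wavelet_singular_values(np.sin(2 * np.pi * 50 * t), m=3)
print(lam)
```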
Table 1 Singular values of each order of the C-phase current singular value matrix
Taking the single-phase-to-ground fault on phase A as an example, the incremental wavelet singular entropies X1 through X8 are computed from the singular values, giving Xs1 = [X1, X2, …, X6, X7, X8] = [2.198, 0.341, −0.345, −0.187, −0.108, −0.196, −0.084, −0.056] and, after normalization, Xs1* = [X1/X, X2/X, …, X8/X] = [0.970, 0.151, −0.152, −0.083, −0.047, −0.092, −0.003, −0.001]. Xs2* for the single-phase fault on phase B is obtained in the same way; from the singular entropies of all 10 fault types, the auxiliary-source-domain vector space Xs* = [Xs1*, Xs2*, …, Xs10*]8×10 and the target-domain vector space Xt* = [Xt1*, Xt2*, …, Xt10*]8×10 are obtained.
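Using the entropy formula of step 2 (restated in claim 2), the increments and their normalization can be sketched as below; the normalization X = sqrt(Σ Xi²) is inferred from the worked numbers above, and the sample singular values are made up for illustration.

```python
import math

# Sketch of the incremental wavelet singular entropy:
#   X_i = (lambda_i / s) * ln(lambda_i / s),  s = sqrt(sum_j lambda_j),
# followed by normalization X_i* = X_i / X with X = sqrt(sum_i X_i^2)
# (the normalization is inferred from the worked A-phase example).

def singular_entropy_features(lams):
    s = math.sqrt(sum(lams))
    X = [(l / s) * math.log(l / s) for l in lams]   # increments X_i
    norm = math.sqrt(sum(x * x for x in X))         # X = sqrt(sum X_i^2)
    return [x / norm for x in X]                    # normalized X_i*

# Made-up singular values, uneven as for a fault not involving the phase:
feats = singular_entropy_features([8.0, 2.0, 1.0, 0.5, 0.25, 0.2, 0.1, 0.05])
print(feats)
```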
Step 3: based on the feature mapping transfer learning method described above, the three groups of base vectors that best characterize the fault categories are found; for this example this mainly includes:
(1) obtain the auxiliary-source-domain vector space Xs* = [Xs1*, Xs2*, …, Xs10*]8×10 and the target-domain vector space Xt* = [Xt1*, Xt2*, …, Xt10*]8×10;
(2) from X∩* ∈ Xs* ∩ Xt*, select the m axle features with the largest correlation coefficient values to form the axle feature set, denoted X∩ = {X∩1*, X∩2*, …, X∩m*};
(3) construct the paired sample sets of the three groups of random variables α, β, γ;
(4) the eigenvectors corresponding to the first m generalized eigenvalues of the matrix above are then the required base vectors WA, WS, WT. Following (1)–(4) with 100 axle features and a projection vector dimension of 70, the resulting base vector projections are shown in Fig. 5:
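The base-vector search of (1)–(4) maximizes a three-set correlation. As a hedged illustration, the sketch below drops the axle-feature view and reduces the problem to classical two-view CCA solved as a generalized eigenproblem; this is a simplification of the patent's three-view Lagrangian, not a faithful implementation, and all data are synthetic.

```python
import numpy as np

# Simplified two-view CCA sketch: find W_S, W_T maximizing the correlation of
# the projected views, via the eigenproblem of Css^{-1} Cst Ctt^{-1} Cst^T.
# A small ridge (eps) keeps the covariance matrices invertible.

def cca_base_vectors(S, T, m=2, eps=1e-8):
    S = S - S.mean(axis=1, keepdims=True)
    T = T - T.mean(axis=1, keepdims=True)
    Css = S @ S.T + eps * np.eye(S.shape[0])
    Ctt = T @ T.T + eps * np.eye(T.shape[0])
    Cst = S @ T.T
    M = np.linalg.solve(Css, Cst) @ np.linalg.solve(Ctt, Cst.T)
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(-vals.real)[:m]      # leading m eigenvectors -> W_S
    Ws = vecs[:, order].real
    Wt = np.linalg.solve(Ctt, Cst.T @ Ws)   # paired directions -> W_T
    return Ws, Wt

rng = np.random.default_rng(0)
Z = rng.standard_normal((2, 100))                     # shared latent signal
S = np.vstack([Z, rng.standard_normal((3, 100))])     # view 1: 5 features
T = np.vstack([Z + 0.1 * rng.standard_normal((2, 100)),
               rng.standard_normal((2, 100))])        # view 2: 4 features
Ws, Wt = cca_base_vectors(S, T, m=2)
print(Ws.shape, Wt.shape)
```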
The classification results are obtained after adding the support vector training set. Specifically, taking phase C as the special phase, 1 codes a fault phase and 0 a non-fault phase; Table 2 lists part of the training samples and their coding with C as the special phase, and the remaining cases are analogous.
Table 2 Fault coding with phase C as the special phase
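The 1/0 phase coding rule stated above can be sketched as below; the concrete rows of Table 2 did not survive extraction, so this reconstructs only the stated rule, and the helper name is hypothetical.

```python
# Illustrative sketch of the Table 2 coding scheme: with C as the special
# phase, each phase is coded 1 if it is a fault phase and 0 otherwise.

PHASES = ("A", "B", "C")

def encode_fault(fault_type):
    """Map a fault label such as 'ACg' to a per-phase 0/1 code."""
    return tuple(1 if p in fault_type.upper() else 0 for p in PHASES)

print(encode_fault("Cg"))   # (0, 0, 1)
print(encode_fault("AB"))   # (1, 1, 0)
```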
The grid fault classification test results after adding the support vector training set are given in the table below. Table 3 shows that all fault classes are correctly identified once the support vector training set is added, with an average classification accuracy above 99%, a marked improvement over the case without the support vector training set.
Table 3 Statistics of the fault classification test results
Table 4 shows that, after the support vector training set is added, the SVM fault classification accuracy is essentially unaffected by fault inception time, fault location and transition resistance; analysis of the misclassified samples shows that the algorithm may misjudge only when a high-resistance fault occurs at the far end of the transmission line.
Table 4 Fault classification results under different operating conditions
To verify the adaptability of the transfer-learning-based fault classification method to changes in line parameters, the trained improved SVM model was tested on fault sample data from 3 lines with different parameters; the line parameters are listed in Table 5 and the test results for each line in Table 6. Table 6 shows that the method reaches a fault classification accuracy above 98% for every transmission line, demonstrating that it adapts well to line parameter changes. Moreover, the whole procedure from feature extraction to fault classification is fast: classifying one sample takes less than 0.2 s, meeting the diagnosis time requirement of fault diagnosis.
Table 5 Parameters of the 3 lines in the grid model
Table 6 Fault classification results for different grid lines
The foregoing is only a preferred embodiment of the present invention, but the protection scope of the present invention is not limited thereto; any equivalent substitution or modification made by a person skilled in the art within the technical scope disclosed by the invention, according to the technical scheme of the invention and its inventive concept, shall be covered by the protection scope of the present invention.

Claims (6)

1. A grid fault classification method based on feature mapping transfer learning, characterized by comprising the following steps:
Step 1: select the target-domain data to be classified and the auxiliary-source-domain data; the target-domain data comprise at least the three-phase current data of each faulty line at each fault instant; the auxiliary-source-domain data comprise: the three-phase current data of each faulty line at the fault instant preceding each fault instant, the three-phase current data of each faulty line at the normal-operation instant preceding each fault instant, and the three-phase current data of the lines adjacent to the faulty line at each fault instant;
Step 2: perform fault feature extraction based on incremental wavelet singular entropy on the target-domain data and the auxiliary-source-domain data respectively, take each incremental wavelet singular entropy as a fault feature, and then form the feature vector space of the target domain and the feature vector space of the auxiliary source domain;
Step 3: based on the feature mapping transfer learning method, take the intersection of the auxiliary source domain and the target domain as the axle features, and find the base vectors corresponding to the axle features, the auxiliary-source-domain characteristic features and the target-domain characteristic features by the Lagrange multiplier extremum method;
Step 4: in the support-vector-machine fault classification process, take the auxiliary-source-domain base vectors obtained in step 3 as support vectors; meanwhile, add to the original SVM objective function a similarity penalty term for the support vector training set corresponding to the auxiliary source domain, and add the support-vector-training-set constraint to the original constraint conditions, so that the classifier is trained jointly to obtain the corresponding classification results.
2. The grid fault classification method according to claim 1, characterized in that:
the step 2 comprises:
Step 21: perform m-level wavelet multiresolution decomposition on the target-domain data and the auxiliary-source-domain data respectively to obtain the wavelet transform coefficient matrix, and obtain the singular value matrix of the coefficient matrix by singular value decomposition, denoted Λ = diag(λ1, λ2, … λn);
Step 22: construct the n-order incremental wavelet singular entropies of the target-domain data and the auxiliary-source-domain data respectively, with the corresponding formula
$$X_i = \frac{\lambda_i}{\sqrt{\sum_{j=1}^{n}\lambda_j}}\,\ln\frac{\lambda_i}{\sqrt{\sum_{j=1}^{n}\lambda_j}}$$
where λi is the i-th order nonzero singular value and Xi is the i-th incremental wavelet singular entropy;
Step 23: construct a feature vector Xs1 from the n-order incremental wavelet singular entropies of the auxiliary-source-domain data, denoted Xs1 = [X1, X2 … Xn]; letting X = sqrt(Σ Xi²), the corresponding normalized feature vector Xs1* is expressed as Xs1* = [X1/X, X2/X, …, Xn/X], and the feature vector space of the auxiliary-source-domain data is formed as Xs* = [Xs1*, Xs2*, … Xsn*]; repeating the above steps forms the feature vector space of the target-domain data Xt* = [Xt1*, Xt2*, … Xtn*].
3. The grid fault classification method according to claim 2, characterized in that:
n = m² − 1 in the singular value matrix, with λn satisfying the constraint conditions.
4. The grid fault classification method according to claim 1, characterized in that:
the step 3 comprises:
Step 31: define the fault label set of the known fault types in the auxiliary-source-domain data Xs* as Y, so that a given fault type label y ∈ Y; the intersection of the auxiliary-source-domain data Xs* and the target-domain data Xt* corresponds to the axle features (also called field axle features), denoted X∩* ∈ Xs* ∩ Xt*; meanwhile, compute the correlation coefficient between the axle feature X∩* and Y by the following formula:
$$I(X_\cap^*, Y) = \sum_{i=1}^{n}\frac{P(X_\cap^*, y)}{P(X_\cap^*)} + \frac{P(X_\cap^*, y)}{P(y)}$$
where I(X∩*, Y) is the correlation coefficient between the axle feature X∩* and Y, P(X∩*, y) is the joint distribution probability of the field axle feature X∩* and the fault label y, P(X∩*) is the probability that the axle feature X∩* appears in the auxiliary-source-domain data Xs*, and P(y) is the probability that the fault label y appears in the target-domain data Xt*; from the m-level wavelet multiresolution decompositions, select the axle features with the largest correlation coefficient values to form the axle feature set, denoted X∩ = {X∩1*, X∩2*, …, X∩m*};
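As an illustration of the relevance score I(X∩*, Y) defined above, the sketch below estimates the probabilities from co-occurrence counts in a toy sample; treating the axle feature as a binary event is an assumption made here for simplicity, and the data are made up.

```python
from collections import Counter

# Sketch of step 31's relevance score for a candidate axle feature:
#   I = sum over fault labels y of  P(x, y)/P(x) + P(x, y)/P(y),
# with the probabilities estimated from co-occurrence counts.

def axle_feature_score(feature_hits, labels):
    """feature_hits[i] is True when the axle feature fires on sample i."""
    n = len(labels)
    p_x = sum(feature_hits) / n                       # P(x)
    joint = Counter(y for hit, y in zip(feature_hits, labels) if hit)
    p_y = Counter(labels)
    return sum((joint[y] / n) / p_x + (joint[y] / n) / (p_y[y] / n)
               for y in p_y)

hits = [True, True, False, True, False, False]
labels = ["Ag", "Ag", "Bg", "Ag", "Bg", "Bg"]
print(axle_feature_score(hits, labels))
```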
Step 32: first form the union of the fault features extracted from the auxiliary-source-domain data and the target-domain data, set the sample numbers of the three groups of random variables α, β, γ, and construct their paired sample sets; here |X∩|, |Xs* − X∩| and |Xt* − X∩| denote respectively the dimension of the axle features, the dimension of the fault features specific to the auxiliary source domain and the dimension of the fault features specific to the target domain; α denotes the value of a sample point of the auxiliary-source-domain data Xs* on the axle feature space X∩, β denotes the value of an auxiliary-source-domain sample point Xs* on the feature space Xs* − X∩, and γ denotes the value of a target-domain sample point on the feature space Xt* − X∩;
then find the three groups of base vectors WA, WS, WT by the principle that the correlation coefficient between the linear combinations reaches a maximum, i.e. based on the following formula:
$$\max\;\frac{W_A\,C_{A_S ST}\,W_S W_T}{\sqrt[3]{W_S^T C_{SS} W_S\; W_A^T C_{AA} W_A\; W_T^T C_{TT} W_T}}$$
with the corresponding constraint conditions
$$C_{AA}=(A_S\cup A_t)(A_S\cup A_t)^T$$
$$C_{SS}=SS^T\in R^{|X_S^*-X_\cap|\times|X_S^*-X_\cap|}$$
$$C_{TT}=TT^T\in R^{|X_T^*-X_\cap|\times|X_T^*-X_\cap|}$$
$$C_{A_S ST}=S^T A_S T^T$$
$$A_S=[\alpha_1,\alpha_2,\ldots,\alpha_{n_s}]\in R^{|X_\cap|\times n_s}$$
$$A_t=[\alpha_1,\alpha_2,\ldots,\alpha_{n_t}]\in R^{|X_\cap|\times n_t}$$
$$S=[\beta_1,\beta_2,\ldots,\beta_{n_s}]\in R^{|X_S^*-X_\cap|\times n_s}$$
$$T=[\gamma_1,\gamma_2,\ldots,\gamma_{n_t}]\in R^{|X_t^*-X_\cap|\times n_t}$$
where WA is the set of base vectors corresponding to the axle features; WS is the set of base vectors corresponding to the auxiliary-source-domain characteristic features; WT is the set of base vectors corresponding to the target-domain characteristic features; CSS is the covariance matrix of the fault features Ds of the auxiliary-source-domain data with the axle features removed; AS is a |X∩| × ns matrix of the α; At is a |X∩| × nt matrix of the α; S is a |Xs* − X∩| × ns matrix of the β; T is a |Xt* − X∩| × nt matrix of the γ; CTT is the covariance matrix of the fault features Dt of the target-domain data with the axle features removed; CAA is the covariance matrix, on the axle features, of the union Ds ∪ Dt of the auxiliary-source-domain fault features Ds and the target-domain fault features Dt;
Step 33: find the base vectors corresponding to the axle features, the auxiliary-source-domain characteristic features and the target-domain characteristic features by the Lagrange multiplier extremum method, i.e. based on the following formulas:
$$L(\lambda_1,\lambda_2,\lambda_3,W_A,W_S,W_T)=W_A C_{A_S ST}W_S W_T-\frac{\lambda_1}{2}\left(W_A^T C_{AA}W_A-1\right)-\frac{\lambda_2}{2}\left(W_S^T C_{SS}W_S-1\right)-\frac{\lambda_3}{2}\left(W_T^T C_{TT}W_T-1\right)$$
$$\frac{\partial L}{\partial W_A}=C_{A_S ST}W_S W_T-\lambda_1 C_{AA}W_A=0$$
$$\frac{\partial L}{\partial W_S}=C_{A_S ST}W_A W_T-\lambda_2 C_{SS}W_S=0$$
$$\frac{\partial L}{\partial W_T}=C_{A_S ST}W_A W_S-\lambda_3 C_{TT}W_T=0$$
The eigenvectors corresponding to the first m generalized eigenvalues of the foregoing matrix are then the required base vectors WA, WS, WT.
5. The grid fault classification method according to claim 1, characterized in that:
the step 4 comprises:
Step 41: in the support-vector-machine fault classification process, first take the base vectors WS corresponding to the auxiliary source domain obtained in step 3 as support vectors; meanwhile, add to the original SVM objective function the similarity penalty term of the support vector training set corresponding to the auxiliary source domain, and add the support-vector-training-set constraint to the original constraint conditions; the optimization problem over the training samples T in the support vector machine then contains the support vector training set Vs of the auxiliary-source-domain data,
where Nt is the number of the indices i and Ns − Nt the number of the indices j; k is the number of target-domain data training sets; the penalty term involves the support vector of the j-th auxiliary-source-domain sample, the training data Dt corresponding to the target domain, and the distance between that j-th support vector and the training data; γt and γs are the regularization coefficients of the target-domain data and the auxiliary-source-domain data respectively, and the quadratic term of the error function;
then optimize with the Lagrange multiplier method: to minimize the loss function between the predicted value and the true class label, add the SVM function estimation expression with the auxiliary support vector set, which is:
$$y(x)=\operatorname{sgn}\left(\sum_{i=1}^{N_t+N_s}a_i y_i\,k(x_i,x)+b\right);$$
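The decision function above can be sketched directly; the RBF kernel chosen for k is an assumption (the claim leaves the kernel unspecified), and the support vectors, multipliers and labels below are illustrative.

```python
import math

# Sketch of the decision function y(x) = sgn(sum_i a_i y_i k(x_i, x) + b)
# over the N_t target training samples plus the N_s auxiliary support
# vectors, with an RBF kernel assumed for k.

def rbf(u, v, gamma=1.0):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def svm_decision(x, support, alphas, ys, b=0.0):
    s = sum(a * y * rbf(xi, x) for a, y, xi in zip(alphas, ys, support)) + b
    return 1 if s >= 0 else -1

support = [(0.0, 0.0), (1.0, 1.0)]   # illustrative support vectors
alphas, ys = [1.0, 1.0], [+1, -1]    # illustrative multipliers and labels
print(svm_decision((0.1, 0.0), support, alphas, ys))   # +1: nearer first SV
```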
Step 42: obtain the corresponding classification results by constructing and combining multiple binary classifiers.
6. The grid fault classification method according to claim 5, characterized in that:
the step 42 obtains the corresponding classification results by the decision binary tree method.
CN201710756382.4A 2017-08-29 2017-08-29 Power grid fault classification method based on feature mapping transfer learning Active CN107491792B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710756382.4A CN107491792B (en) 2017-08-29 2017-08-29 Power grid fault classification method based on feature mapping transfer learning


Publications (2)

Publication Number Publication Date
CN107491792A true CN107491792A (en) 2017-12-19
CN107491792B CN107491792B (en) 2020-04-07

Family

ID=60650761

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710756382.4A Active CN107491792B (en) 2017-08-29 2017-08-29 Power grid fault classification method based on feature mapping transfer learning

Country Status (1)

Country Link
CN (1) CN107491792B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101672873A (en) * 2009-10-20 2010-03-17 哈尔滨工业大学 Detection method of transient harmonic signals of power system based on combination of Tsallis wavelet singular entropy and FFT computation
CN104361396A (en) * 2014-12-01 2015-02-18 中国矿业大学 Association rule transfer learning method based on Markov logic network
CN105469111A (en) * 2015-11-19 2016-04-06 浙江大学 Small sample set object classification method on basis of improved MFA and transfer learning
CN106841910A (en) * 2016-12-20 2017-06-13 国网辽宁省电力有限公司沈阳供电公司 Imitative electromagnetism algorithm is melted into the Fault Diagnosis Method for Distribution Networks of timing ambiguity Petri network


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
XINRUI LIU et al.: "A Novel Protection Scheme against Fault Resistance for", Mathematical Problems in Engineering *
LIU Xinrui et al.: "Dynamic hierarchical fault diagnosis of smart grids based on multi-source information", Journal of Northeastern University (Natural Science) *
QIN Jiangwei: "Research on transfer learning methods and their application in cross-domain data classification", China Doctoral Dissertations Full-text Database, Information Science and Technology *
ZHAO Zhi et al.: "Fault type identification of distribution networks based on self-organizing feature map networks", Techniques of Automation and Applications *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509850A (en) * 2018-02-24 2018-09-07 华南理工大学 A kind of invasion signal Recognition Algorithm based on distribution type fiber-optic system
CN108509850B (en) * 2018-02-24 2022-03-29 华南理工大学 Intrusion signal identification method based on distributed optical fiber system
CN108764915A (en) * 2018-04-26 2018-11-06 阿里巴巴集团控股有限公司 Model training method, data type recognition methods and computer equipment
CN108764915B (en) * 2018-04-26 2021-07-30 创新先进技术有限公司 Model training method, data type identification method and computer equipment
CN108805206A (en) * 2018-06-13 2018-11-13 南京工业大学 Improved L SSVM establishing method for analog circuit fault classification
CN108798641A (en) * 2018-06-19 2018-11-13 东北大学 A kind of Diagnosing The Faults of Sucker Rod Pumping System method based on subspace transfer learning
CN108798641B (en) * 2018-06-19 2021-06-11 东北大学 Rod pump pumping well fault diagnosis method based on subspace migration learning
CN108875918B (en) * 2018-08-14 2021-05-04 西安交通大学 Mechanical fault migration diagnosis method based on adaptive shared depth residual error network
CN108875918A (en) * 2018-08-14 2018-11-23 西安交通大学 It is a kind of that diagnostic method is migrated based on the mechanical breakdown for being adapted to shared depth residual error network
CN109635837A (en) * 2018-11-10 2019-04-16 天津大学 A kind of carefree fall detection system of scene based on commercial wireless Wi-Fi
WO2020168676A1 (en) * 2019-02-21 2020-08-27 烽火通信科技股份有限公司 Method for constructing network fault handling model, fault handling method and system
CN110365583B (en) * 2019-07-17 2020-05-22 南京航空航天大学 Symbol prediction method and system based on bridge domain transfer learning
CN110365583A (en) * 2019-07-17 2019-10-22 南京航空航天大学 A kind of sign prediction method and system based on bridged domain transfer learning
CN110363763B (en) * 2019-07-23 2022-03-15 上饶师范学院 Image quality evaluation method and device, electronic equipment and readable storage medium
CN110363763A (en) * 2019-07-23 2019-10-22 上饶师范学院 Image quality evaluating method, device, electronic equipment and readable storage medium storing program for executing
CN110736968A (en) * 2019-10-16 2020-01-31 清华大学 Radar abnormal state diagnosis method based on deep learning
CN110736968B (en) * 2019-10-16 2021-10-08 清华大学 Radar abnormal state diagnosis method based on deep learning
CN110726958A (en) * 2019-11-05 2020-01-24 国网江苏省电力有限公司宜兴市供电分公司 Fault diagnosis method of dry-type reactor
CN110726958B (en) * 2019-11-05 2022-06-28 国网江苏省电力有限公司宜兴市供电分公司 Fault diagnosis method of dry-type reactor
CN112036301A (en) * 2020-08-31 2020-12-04 中国矿业大学 Driving motor fault diagnosis model construction method based on intra-class feature transfer learning and multi-source information fusion
CN112036301B (en) * 2020-08-31 2021-06-22 中国矿业大学 Driving motor fault diagnosis model construction method based on intra-class feature transfer learning and multi-source information fusion
CN112255500A (en) * 2020-10-12 2021-01-22 山东翰林科技有限公司 Power distribution network weak characteristic fault identification method based on transfer learning

Also Published As

Publication number Publication date
CN107491792B (en) 2020-04-07

Similar Documents

Publication Publication Date Title
CN107491792A (en) Power grid fault classification method based on feature-mapping transfer learning
Celik et al. A robust WLAV state estimator using transformations
CN102496069B (en) Cable multimode safe operation evaluation method based on fuzzy analytic hierarchy process (FAHP)
CN103729432B (en) Method for analyzing and ranking the academic influence of subject literature in a citation database
Abur et al. Least absolute value state estimation with equality and inequality constraints
CN109873501A (en) Automatic topology identification method for low-voltage distribution networks
CN107529651A (en) Urban transit passenger flow forecasting method and device based on deep learning
CN110208643A (en) Power grid fault diagnosis method based on PMU data and fault recorder data
Wang Extension neural network for power transformer incipient fault diagnosis
CN110161343A (en) Non-intrusive real-time dynamic monitoring method for the external power-receiving device of intelligent trains
CN108414896B (en) Power grid fault diagnosis method
CN103245881A (en) Power distribution network fault analysis method and device based on power flow distribution characteristics
CN106295911B (en) Grid branch parameter estimation method based on chromatographic detection
CN110348114B (en) Imprecise fault identification method based on completeness reconstruction of power grid state information
CN105656028B (en) GIS-based visual display method for power grid stability margin
CN111881971A (en) Power transmission line fault type identification method based on deep learning LSTM model
Mínguez et al. State estimation sensitivity analysis
CN110045207A (en) Complex fault diagnosis method based on power grid architecture and multi-source data fusion
CN104967097A (en) Magnetizing inrush current identification method based on a support vector classifier
Xing et al. Data-Driven Transmission Line Fault Location with Single-Ended Measurements and Knowledge-Aware Graph Neural Network
CN107462810A (en) Fault section location method for active power distribution networks
CN104484546B (en) Method for automatically generating power flow check files for power grid planning projects
CN114358092A (en) Method and system for online diagnosis of internal insulation performance of capacitor voltage transformer
Abur et al. Educational toolbox for power system analysis
Nguyen et al. Transmission line fault type classification based on novel features and neuro-fuzzy system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant