CN115577249A - Transformer acoustic signal identification method, system and medium with multi-view feature fusion - Google Patents

Transformer acoustic signal identification method, system and medium with multi-view feature fusion

Info

Publication number
CN115577249A
CN115577249A (application CN202211158361.XA)
Authority
CN
China
Prior art keywords
feature
features
acoustic signal
sequence
transformer
Prior art date
Legal status
Pending
Application number
CN202211158361.XA
Other languages
Chinese (zh)
Inventor
Cao Hao (曹浩)
Lu Ling (卢铃)
Deng Aidong (邓艾东)
Wu Xiaowen (吴晓文)
Cai Wei (蔡炜)
Current Assignee
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Hunan Electric Power Co Ltd
State Grid Hunan Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Hunan Electric Power Co Ltd
State Grid Hunan Electric Power Co Ltd
Priority date
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Electric Power Research Institute of State Grid Hunan Electric Power Co Ltd, State Grid Hunan Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN202211158361.XA priority Critical patent/CN115577249A/en
Publication of CN115577249A publication Critical patent/CN115577249A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations

Abstract

The invention discloses a multi-view feature fusion method, system and medium for transformer acoustic signal identification. The method comprises: extracting features of the transformer acoustic signal from multiple dimensional views, including the time domain, frequency domain and nonlinear domain; generating effective features from the extracted features along the four dimensions of monotonicity, robustness, trendability and identifiability; fusing the generated effective features to obtain a fused feature vector; and classifying the fused feature vector with a machine learning model to obtain the operating state of the transformer.

Description

Transformer acoustic signal identification method, system and medium with multi-view feature fusion
Technical Field
The invention relates to the technical field of transformer acoustic signal identification, in particular to a method, a system and a medium for identifying a transformer acoustic signal with multi-view characteristic fusion.
Background
The transformer is one of the key devices of a power system, and its operating state directly affects the reliability and safety of the power grid. However, transformers operate in a harsh environment, with complex wiring structures and severe electromagnetic interference, which makes them difficult to overhaul. During long-term operation a transformer endures voltage impacts, thermal impacts and mechanical vibration impacts, so failures are frequent. Intelligent diagnosis of the transformer operating state is therefore important for maintaining grid security, reducing operation and maintenance costs, and safeguarding people's livelihoods. Acoustic signal analysis is one of the most effective methods for diagnosing the operating state of a transformer. In recent years, scholars at home and abroad have done a great deal of research on transformer state identification based on acoustic signal analysis. However, most existing methods rely on the time-domain/frequency-domain features of the signal, the analysis process depends on expert experience, and diagnosis efficiency and accuracy are low, making it difficult to meet the intelligent operation and maintenance requirements of power equipment in the big-data era.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: aiming at the problems in the prior art, the invention provides a multi-view feature fusion method, system and medium for transformer acoustic signal identification.
In order to solve the technical problems, the invention adopts the technical scheme that:
a multi-view feature fused transformer acoustic signal identification method comprises the following steps:
s101, extracting characteristics of a transformer acoustic signal from multiple dimensional visual angles including a time domain, a frequency domain and a nonlinear domain;
s102, generating effective characteristics with four dimensions of monotonicity, robustness, trend and identifiability according to the extracted characteristics;
s103, fusing according to the generated effective features to obtain fusion feature vectors;
and S104, classifying by adopting a machine learning model based on the fusion characteristic vector to obtain the running state of the transformer.
Optionally, the features of the transformer acoustic signal extracted from the time-domain dimension view in step S101 include some or all of the peak, peak-to-peak, mean, absolute mean, root-mean-square, standard deviation, skewness and kurtosis of the transformer acoustic signal; the features extracted from the frequency-domain dimension view include some or all of the average amplitude, centre frequency, frequency root-mean-square and frequency standard deviation of the transformer acoustic signal; and the features extracted from the nonlinear-domain dimension view include at least one of the information entropy and the fractal dimension of the transformer acoustic signal.
Optionally, the effective feature of the monotonicity dimension in step S102 is computed as:

$$\mathrm{Mon}(F)=\frac{\bigl|\,\mathrm{No}\{\mathrm{d}f>0\}-\mathrm{No}\{\mathrm{d}f<0\}\,\bigr|}{K-1}$$

where Mon(F) is the effective feature of the feature sequence F in the monotonicity dimension, df = f(t_{k+1}) − f(t_k) is the difference between consecutive features, K is the sequence length of the feature sequence F, and No{·} denotes the counting function.
Optionally, the effective feature of the robustness dimension in step S102 is computed as:

$$\mathrm{Rob}(F)=\frac{1}{K}\sum_{k=1}^{K}\exp\left(-\left|\frac{f(t_k)-f_T(t_k)}{f(t_k)}\right|\right)$$

where Rob(F) is the effective feature of the feature sequence F in the robustness dimension, K is the sequence length of the feature sequence F, f(t_k) is the kth feature in the feature sequence F, and f_T(t_k) is its smoothed trend value.
Optionally, the effective feature of the trendability dimension in step S102 is computed as:

$$\mathrm{Tre}(F,T)=\frac{\left|\sum_{k=1}^{K}\bigl(f(t_k)-\bar f\bigr)\bigl(t_k-\bar t\bigr)\right|}{\sqrt{\sum_{k=1}^{K}\bigl(f(t_k)-\bar f\bigr)^{2}\sum_{k=1}^{K}\bigl(t_k-\bar t\bigr)^{2}}}$$

where Tre(F, T) is the effective feature of the feature sequence F and its corresponding sampling time sequence T in the trendability dimension, f(t_k) is the kth feature in the feature sequence F, t_k is the sampling time of the kth feature f(t_k), f̄ is the average of all features in the feature sequence F, t̄ is the average of all times in the sampling time sequence T, and K is the sequence length of the feature sequence F.
Optionally, the effective feature of the identifiability dimension in step S102 is computed as:

$$\mathrm{Ide}(F,C)=\frac{\left|\sum_{k=1}^{K}\bigl(f(t_k)-\bar f\bigr)\bigl(c(t_k)-\bar c\bigr)\right|}{\sqrt{\sum_{k=1}^{K}\bigl(f(t_k)-\bar f\bigr)^{2}\sum_{k=1}^{K}\bigl(c(t_k)-\bar c\bigr)^{2}}}$$

where Ide(F, C) is the effective feature of the feature sequence F and its corresponding class label sequence C in the identifiability dimension, f(t_k) is the kth feature in the feature sequence F, f̄ is the average of all features in F, c(t_k) is the class label of the kth feature f(t_k), c̄ is the average of all class labels in C, and K is the sequence length of the feature sequence F.
Optionally, step S103 includes:
S201, inputting the generated effective features, denoting the ith effective feature as x_i ∈ R^n, i = 1, 2, …, m, where m is the number of generated effective features and n is the dimension of each effective feature;
S202, determining a kernel function and its parameters, and computing each element K_{ij} of the kernel matrix K as:

$$K_{ij}=k(x_i,x_j),\quad i,j=1,2,\ldots,m,$$

where k is the chosen kernel function and x_i, x_j are the ith and jth effective features respectively;
S203, centring the kernel matrix K according to:

$$K \leftarrow K-\mathbf{1}_m K-K\mathbf{1}_m+\mathbf{1}_m K\mathbf{1}_m,$$

where 1_m denotes the m × m matrix whose elements are all 1/m;
S204, solving for the first p eigenvalues λ_1 ≥ λ_2 ≥ … ≥ λ_p of the centred kernel matrix K and the corresponding eigenvectors α_1, α_2, …, α_p, normalised so that ⟨α_i, α_i⟩ = 1, i = 1, 2, …, p, where ⟨·,·⟩ denotes the inner product;
S205, for any test point x ∈ R^n, computing the jth component of the fused feature as:

$$\Phi_j(x)=\sum_{i=1}^{m}\alpha_{j,i}\,k(x_i,x),\quad j=1,2,\ldots,p,$$

where Φ(x) is the fused feature of the test point x, α_{j,i} is the ith component of the eigenvector α_j, and k(x_i, x) is the kernel function evaluated on (x_i, x).
Optionally, when the machine learning model is used in step S104 to classify the fused feature vector and obtain the operating state of the transformer, the machine learning model is a support vector machine. The support vector machine obtains the operating state of the transformer by selecting a kernel function k(·,·) and a penalty parameter C and solving the optimization problem:

$$\max_{\alpha}\ \sum_{i=1}^{N}\alpha_i-\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i\alpha_j y_i y_j K(x_i,x_j)\quad\text{s.t.}\quad \sum_{i=1}^{N}\alpha_i y_i=0,\ \ 0\le\alpha_i\le C,\ i=1,2,\ldots,N,$$

and the classification decision function f(x) adopted when solving the optimization problem is:

$$f(x)=\mathrm{sign}\left(\sum_{i=1}^{N}\alpha_i y_i K(x,x_i)+b\right),$$

where N is the number of training samples, α_i and α_j are the Lagrange multipliers corresponding to the feature vectors x_i and x_j, y_i and y_j are the labels of x_i and x_j, K(x_i, x_j) is the kernel function evaluated on x_i and x_j, C is the penalty parameter, sign is the sign function, K(x, x_i) is the kernel function evaluated on x and x_i, x is the fused feature vector of the test sample, and b is a bias parameter.
In addition, the invention also provides a multi-view feature fusion transformer acoustic signal identification system, comprising an interconnected microprocessor and memory, wherein the microprocessor is programmed or configured to execute the multi-view feature fusion transformer acoustic signal identification method.
Furthermore, the invention also provides a computer-readable storage medium storing a computer program that, when executed by a microprocessor, performs the multi-view feature fusion transformer acoustic signal identification method.
Compared with the prior art, the invention mainly has the following advantages: the method extracts features of the transformer acoustic signal from multiple dimensional views, including the time domain, frequency domain and nonlinear domain; generates effective features along the four dimensions of monotonicity, robustness, trendability and identifiability; fuses the effective features into a fused feature vector; and classifies the fused feature vector with a machine learning model to obtain the operating state of the transformer. Through multi-view feature extraction, selection and fusion, the method constructs a multi-dimensional feature set that effectively characterizes the transformer operating state, and accurate identification of the operating state is achieved with a parameter-optimized support vector machine.
Drawings
FIG. 1 is a schematic diagram of a basic flow of a method according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of intelligent recognition according to the method of the embodiment of the present invention.
Fig. 3 is a time domain waveform of an acoustic signal collected in an embodiment of the present invention.
Fig. 4 is a spectrum of an acoustic signal in an embodiment of the present invention.
Fig. 5 is a characteristic parameter distribution extracted in the embodiment of the present invention.
FIG. 6 is a diagram illustrating the optimization results of grid parameters of the SVM in accordance with the present invention.
Fig. 7 shows the accuracy of intelligent identification of the acoustic signal of the transformer in the embodiment of the present invention.
Detailed Description
As shown in fig. 1, the multi-view feature fusion transformer acoustic signal identification method of this embodiment comprises:
S101, extracting features of the transformer acoustic signal from multiple dimensional views, including the time domain, frequency domain and nonlinear domain;
S102, generating effective features along the four dimensions of monotonicity, robustness, trendability and identifiability from the extracted features;
S103, fusing the generated effective features to obtain a fused feature vector;
S104, classifying the fused feature vector with a machine learning model to obtain the operating state of the transformer.
Referring to fig. 2, the machine learning model in this embodiment goes through a training phase and a use phase. In the training phase, steps S101 to S103 are performed on transformer acoustic signal samples to extract fused feature vectors; the data are then labelled with the corresponding transformer operating states, and a training set and a test set are established. The machine learning model is trained (parameter-optimized) on the training set and evaluated on the test set; a trained model is obtained once the test passes (otherwise training continues). On this basis, a measured transformer acoustic signal can be acquired, steps S101 to S103 are performed to extract its fused feature vector, and the trained machine learning model finally outputs the operating state of the transformer. Although an SVM (support vector machine) recognition model is used as the machine learning model in fig. 2, other machine learning models may be used as needed to learn the same input-output mapping described herein, so the details are not repeated.
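The train/test workflow above can be sketched as follows. This is a minimal sketch assuming scikit-learn; the fused feature vectors, labels and SVM hyper-parameters are synthetic stand-ins, not the embodiment's data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hypothetical fused feature vectors with operating-state labels,
# standing in for the labelled transformer acoustic samples.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc=i, scale=0.3, size=(30, 3)) for i in range(4)])
y = np.repeat(np.arange(4), 30)

# Training phase: split into training and test sets, fit on the former.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = SVC(kernel="rbf", C=10.0).fit(X_tr, y_tr)

# Test phase: accept the model only once it performs well on the test set.
test_accuracy = model.score(X_te, y_te)
```

In the use phase, a measured signal would be passed through the same feature extraction and fusion steps before calling `model.predict`.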
The features of the transformer acoustic signal extracted from the time-domain dimension view in step S101 include some or all of the peak, peak-to-peak, mean, absolute mean, root-mean-square, standard deviation, skewness and kurtosis of the transformer acoustic signal; the features extracted from the frequency-domain dimension view include some or all of the average amplitude, centre frequency, frequency root-mean-square and frequency standard deviation of the transformer acoustic signal; and the features extracted from the nonlinear-domain dimension view include at least one of the information entropy and the fractal dimension of the transformer acoustic signal.
In this embodiment, the features of the transformer acoustic signal extracted from the time-domain dimension view comprise the peak, peak-to-peak, mean, absolute mean, root-mean-square, standard deviation, skewness and kurtosis of the signal, computed as follows. For a signal {x_i}, i = 1, 2, …, N, where N is the time-series length, the peak V_max is the maximum of the signal amplitude. It is a time-unstable parameter with large variation and is therefore commonly used to detect impact components:

$$V_{\max}=\max_i(x_i).$$

The peak-to-peak value V_{p-p} is the difference between the maximum and minimum of the signal:

$$V_{p\text{-}p}=V_{\max}-V_{\min},$$

where V_min is the minimum of the signal (the counterpart of the peak V_max):

$$V_{\min}=\min_i(x_i).$$
It can be seen that the peak-to-peak value V_{p-p} is independent of the DC component of the signal; it mainly describes the range of signal variation and can be used to monitor changes in signal strength.
The mean V_c of the signal {x_i} represents the central value around which the signal varies and is a steady-state quantity:

$$V_c=\frac{1}{N}\sum_{i=1}^{N}x_i,$$

i.e. the average of the N samples of the signal {x_i}, where N is the length of the time series. The mean V_c captures the trend of the data and the DC component of the signal; removing the mean from the data leaves the dynamic part useful for diagnosis.
The absolute mean V̄_abs is the average of the absolute values of the signal:

$$\bar V_{\mathrm{abs}}=\frac{1}{N}\sum_{i=1}^{N}|x_i|.$$

The root-mean-square value V_rms reflects the total energy of the signal; it is insensitive to early faults but has good stability and rises monotonically as a fault develops:

$$V_{\mathrm{rms}}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}x_i^{2}}.$$
The standard deviation σ represents the degree of dispersion of the signal:

$$\sigma=\sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i-V_c)^{2}}.$$

The skewness α reflects the asymmetry of the amplitude probability density function p(x) of the signal about the vertical axis; the larger the skewness, the more asymmetric the distribution:

$$\alpha=\frac{1}{N}\sum_{i=1}^{N}\left(\frac{x_i-V_c}{\sigma}\right)^{3}.$$

The kurtosis β represents the degree to which the signal deviates from a normal distribution. Like the root-mean-square and absolute mean values it grows as faults develop, but it is more sensitive to large amplitudes: raising the amplitude to the fourth power suppresses low amplitudes and highlights high ones, so kurtosis is effective for detecting faults with impact components:

$$\beta=\frac{1}{N}\sum_{i=1}^{N}\left(\frac{x_i-V_c}{\sigma}\right)^{4}.$$
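As an illustration, the time-domain features above can be computed with a short NumPy routine. The function and dictionary keys are our own naming, and the σ-normalised third and fourth moments below are one common convention for skewness and kurtosis:

```python
import numpy as np

def time_domain_features(x):
    """Time-domain features of a 1-D signal, following the definitions above.

    Skewness and kurtosis here are the sigma-normalised third and fourth
    central moments (an illustrative convention).
    """
    x = np.asarray(x, dtype=float)
    mean = x.mean()
    std = x.std()                      # population standard deviation (1/N)
    return {
        "peak": x.max(),
        "peak_to_peak": x.max() - x.min(),
        "mean": mean,
        "abs_mean": np.abs(x).mean(),
        "rms": np.sqrt(np.mean(x ** 2)),
        "std": std,
        "skewness": np.mean(((x - mean) / std) ** 3),
        "kurtosis": np.mean(((x - mean) / std) ** 4),
    }
```

For a zero-mean square-wave-like sequence such as `[1, -1, 1, -1]` the mean and skewness vanish while the peak, RMS and kurtosis are all unity.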
in this embodiment, the characteristics of the transformer acoustic signal extracted from the frequency domain dimension perspective include an average amplitude, a center frequency, a frequency root mean square, and a frequency of the transformer acoustic signalAnd standard deviation of frequency, assuming time domain signal x = { x = i I =1,2, …, N, which is converted by fourier transform into a frequency-domain signal X = { X = j J =1,2, …, N, the frequency domain features are calculated as follows:
average amplitude F 1 The expression of (a) is:
Figure BDA0003859850210000059
center frequency F 2 The expression of (c) is:
Figure BDA0003859850210000061
in the above formula, f (j) represents the frequency of the jth point in the spectrum;
frequency root mean square F 3 The expression of (a) is:
Figure BDA0003859850210000062
standard deviation of frequency F 4 The expression of (a) is:
Figure BDA0003859850210000063
in the above formula, F 2 Representing the center frequency.
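The spectral features F_1 to F_4 can be sketched with NumPy's real FFT; using the one-sided amplitude spectrum and amplitude weighting is our own concrete choice, since the text does not fix an FFT convention:

```python
import numpy as np

def frequency_domain_features(x, fs):
    """F1-F4 from the one-sided amplitude spectrum of a 1-D signal
    sampled at fs Hz (an illustrative convention)."""
    x = np.asarray(x, dtype=float)
    X = np.abs(np.fft.rfft(x))                     # amplitude spectrum X_j
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)        # frequency f(j) of each bin
    F1 = X.mean()                                  # average amplitude
    F2 = np.sum(f * X) / np.sum(X)                 # centre frequency
    F3 = np.sqrt(np.sum(f ** 2 * X) / np.sum(X))   # frequency RMS
    F4 = np.sqrt(np.sum((f - F2) ** 2 * X) / np.sum(X))  # frequency std dev
    return F1, F2, F3, F4
```

For a pure 50 Hz tone, the centre frequency F_2 and frequency RMS F_3 come out near 50 Hz, and the frequency standard deviation F_4 near zero.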
In this embodiment, the features of the transformer acoustic signal extracted from the nonlinear-domain dimension view include the information entropy and the fractal dimension of the signal, computed as follows.
The information entropy describes the uncertainty of the information. For acoustic signals, the more regular the signal, the lower the uncertainty and the smaller the information entropy; conversely, the higher the uncertainty, the larger the information entropy. For the time-domain signal x = {x_i}, i = 1, 2, …, N, the time-domain information entropy F_5 is:

$$F_5=-\sum_{i=1}^{N}p(x_i)\log p(x_i),\qquad \sum_{i=1}^{N}p(x_i)=1,$$

where p(x_i) denotes the probability of occurrence of x_i.
The fractal dimension can describe complex irregular geometric objects that classical Euclidean geometry cannot. It is defined as follows: let Y be a non-empty subset of the real plane and z a closed set of diameter ε in the real plane; if covering the set Y requires N(ε) such closed sets z, the fractal dimension D of the set Y can be defined as:

$$D=\lim_{\varepsilon\to 0}\frac{\ln N(\varepsilon)}{\ln(1/\varepsilon)}.$$
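A minimal sketch of the information-entropy feature, assuming the amplitude probabilities p(x_i) are estimated from a histogram (the estimator and bin count are our own choices; the text does not specify them):

```python
import numpy as np

def information_entropy(x, bins=16):
    """Entropy F5 = -sum p * log p of the signal's amplitude distribution,
    with p estimated by a histogram (an assumed estimator)."""
    hist, _ = np.histogram(np.asarray(x, dtype=float), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                 # by convention 0 * log 0 = 0
    return float(-np.sum(p * np.log(p)))
```

A signal spread uniformly over the 16 amplitude bins gives the maximum entropy ln 16, while a constant signal gives zero.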
the method comprises the steps that effective characteristics of the running state of the transformer can be reflected in a characteristic set selected from four dimensions of monotonicity, robustness, trend and identifiability; assume that there is a sequence of eigenvalues F = [ F (t) 1 ),f(t 2 ),…,f(t k )]Time series T = [ T = [ T ] 1 ,t 2 ,…,t k ]Wherein f (t) k ) Is t k The characteristic value of the time, k is the total length of the time series, and the class label series is C = [ C (t) 1 ),c(t 2 ),…,c(t k )]. The monotonicity is recorded as Mon (F), the robustness is recorded as Rob (F), the trend is recorded as Tre (F, T), and the identifiability is recorded as Ide (F, C), then:
in this embodiment, the calculation function expression of the effective feature of the monotonicity dimension in step S102 is:
Figure BDA0003859850210000067
in the above formula, mon (F) represents the valid feature of the feature sequence F in the monotonicity dimension, K is the sequence length of the feature sequence F, and No {. DEG } represents the counting function.
In this embodiment, the effective feature of the robustness dimension in step S102 is computed as:

$$\mathrm{Rob}(F)=\frac{1}{K}\sum_{k=1}^{K}\exp\left(-\left|\frac{f(t_k)-f_T(t_k)}{f(t_k)}\right|\right)$$

where Rob(F) is the effective feature of the feature sequence F in the robustness dimension, K is the sequence length of the feature sequence F, f(t_k) is the kth feature in the feature sequence F, and f_T(t_k) is its smoothed trend value.
In this embodiment, the effective feature of the trendability dimension in step S102 is computed as:

$$\mathrm{Tre}(F,T)=\frac{\left|\sum_{k=1}^{K}\bigl(f(t_k)-\bar f\bigr)\bigl(t_k-\bar t\bigr)\right|}{\sqrt{\sum_{k=1}^{K}\bigl(f(t_k)-\bar f\bigr)^{2}\sum_{k=1}^{K}\bigl(t_k-\bar t\bigr)^{2}}}$$

where Tre(F, T) is the effective feature of the feature sequence F and its corresponding sampling time sequence T in the trendability dimension, f(t_k) is the kth feature in the feature sequence F, t_k is the sampling time of the kth feature f(t_k), f̄ is the average of all features in the feature sequence F, t̄ is the average of all times in the sampling time sequence T, and K is the sequence length of the feature sequence F.
In this embodiment, the effective feature of the identifiability dimension in step S102 is computed as:

$$\mathrm{Ide}(F,C)=\frac{\left|\sum_{k=1}^{K}\bigl(f(t_k)-\bar f\bigr)\bigl(c(t_k)-\bar c\bigr)\right|}{\sqrt{\sum_{k=1}^{K}\bigl(f(t_k)-\bar f\bigr)^{2}\sum_{k=1}^{K}\bigl(c(t_k)-\bar c\bigr)^{2}}}$$

where Ide(F, C) is the effective feature of the feature sequence F and its corresponding class label sequence C in the identifiability dimension, f(t_k) is the kth feature in the feature sequence F, f̄ is the average of all features in F, c(t_k) is the class label of the kth feature f(t_k), c̄ is the average of all class labels in C, and K is the sequence length of the feature sequence F.
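The four selection indicators can be sketched as follows. The function names are our own, and the smoothed-trend argument to `robustness` reflects the common prognostics formulation, which is our assumption about the unreadable original formula:

```python
import numpy as np

def monotonicity(f):
    """Mon(F): net fraction of rises vs. falls in the feature sequence."""
    d = np.diff(np.asarray(f, dtype=float))
    return abs(np.sum(d > 0) - np.sum(d < 0)) / (len(f) - 1)

def robustness(f, f_trend):
    """Rob(F): closeness of the feature to its smoothed trend (assumed form)."""
    f = np.asarray(f, dtype=float)
    f_trend = np.asarray(f_trend, dtype=float)
    return float(np.mean(np.exp(-np.abs((f - f_trend) / f))))

def trendability(f, t):
    """Tre(F, T): |correlation| between the feature and its sampling times."""
    return abs(np.corrcoef(f, t)[0, 1])

def identifiability(f, c):
    """Ide(F, C): |correlation| between the feature and its class labels."""
    return abs(np.corrcoef(f, c)[0, 1])
```

All four indicators lie in [0, 1]; a strictly increasing feature sequence scores 1 on both monotonicity and trendability.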
In this embodiment, step S103 fuses the effective features using kernel principal component analysis, and step S103 includes:
S201, inputting the generated effective features, denoting the ith effective feature as x_i ∈ R^n, i = 1, 2, …, m, where m is the number of generated effective features and n is the dimension of each effective feature;
S202, determining a kernel function and its parameters, and computing each element K_{ij} of the kernel matrix K as:

$$K_{ij}=k(x_i,x_j),\quad i,j=1,2,\ldots,m,$$

where k is the chosen kernel function and x_i, x_j are the ith and jth effective features respectively;
S203, centring the kernel matrix K according to:

$$K \leftarrow K-\mathbf{1}_m K-K\mathbf{1}_m+\mathbf{1}_m K\mathbf{1}_m,$$

where 1_m denotes the m × m matrix whose elements are all 1/m;
S204, solving for the first p eigenvalues λ_1 ≥ λ_2 ≥ … ≥ λ_p of the centred kernel matrix K and the corresponding eigenvectors α_1, α_2, …, α_p, normalised so that ⟨α_i, α_i⟩ = 1, i = 1, 2, …, p, where ⟨·,·⟩ denotes the inner product;
S205, for any test point x ∈ R^n, computing the jth component of the fused feature as:

$$\Phi_j(x)=\sum_{i=1}^{m}\alpha_{j,i}\,k(x_i,x),\quad j=1,2,\ldots,p,$$

where Φ(x) is the fused feature of the test point x, α_{j,i} is the ith component of the eigenvector α_j, and k(x_i, x) is the kernel function evaluated on (x_i, x).
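Steps S201 to S205 can be sketched with NumPy as below; the RBF kernel, its width and all function names are illustrative assumptions, not fixed by the embodiment:

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    """RBF kernel k(a, b); the kernel choice and width are assumptions."""
    return np.exp(-gamma * np.sum((np.asarray(a) - np.asarray(b)) ** 2))

def kpca_fit(X, p, kernel=rbf):
    """S201-S204: build the kernel matrix, centre it, keep p leading eigenpairs."""
    m = len(X)
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])   # S202
    ones = np.full((m, m), 1.0 / m)                            # the matrix 1_m
    Kc = K - ones @ K - K @ ones + ones @ K @ ones             # S203 centring
    w, v = np.linalg.eigh(Kc)                                  # S204
    idx = np.argsort(w)[::-1][:p]          # eigenvalues in descending order
    return w[idx], v[:, idx]               # eigh gives <alpha_i, alpha_i> = 1

def kpca_project(X, alphas, x, kernel=rbf):
    """S205: fused feature of a test point x."""
    kx = np.array([kernel(xi, x) for xi in X])
    return alphas.T @ kx
```

`np.linalg.eigh` already returns unit-norm eigenvectors, matching the ⟨α_i, α_i⟩ = 1 normalisation of step S204.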
In this embodiment, when the machine learning model is used in step S104 to classify the fused feature vector and obtain the operating state of the transformer, the machine learning model is a support vector machine. The support vector machine obtains the operating state of the transformer by selecting a kernel function k(·,·) and a penalty parameter C and solving the optimization problem:

$$\max_{\alpha}\ \sum_{i=1}^{N}\alpha_i-\frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i\alpha_j y_i y_j K(x_i,x_j)\quad\text{s.t.}\quad \sum_{i=1}^{N}\alpha_i y_i=0,\ \ 0\le\alpha_i\le C,\ i=1,2,\ldots,N,$$

and the classification decision function f(x) adopted when solving the optimization problem is:

$$f(x)=\mathrm{sign}\left(\sum_{i=1}^{N}\alpha_i y_i K(x,x_i)+b\right),$$

where N is the number of training samples, α_i and α_j are the Lagrange multipliers corresponding to the feature vectors x_i and x_j, y_i and y_j are the labels of x_i and x_j, K(x_i, x_j) is the kernel function evaluated on x_i and x_j, C is the penalty parameter, sign is the sign function, K(x, x_i) is the kernel function evaluated on x and x_i, x is the fused feature vector of the test sample, and b is a bias parameter.
The support vector machine maps the input vector x from the input space into a higher-dimensional Hilbert space, typically in a manner determined by a kernel function K(x_i, x_j) = Φ(x_i)·Φ(x_j). After a suitable kernel function K(·,·) and penalty parameter C are selected, the support vector machine solves the above optimization problem, and the kernel parameter g and the penalty parameter C are tuned to their optimal values by a grid parameter optimization algorithm.
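Grid search over the penalty parameter C and the kernel parameter g (called `gamma` in scikit-learn) can be sketched as follows. This is a sketch assuming scikit-learn; the synthetic four-class data and the parameter grid are stand-ins, not the embodiment's measurements or grid:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Hypothetical fused feature vectors for four operating-state classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=i, scale=0.3, size=(20, 3)) for i in range(4)])
y = np.repeat(np.arange(4), 20)

# Grid search jointly over the penalty parameter C and kernel parameter g.
grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [1, 16, 256], "gamma": [0.01, 0.1, 1.0]},
    cv=5,
)
grid.fit(X, y)
best_params, best_score = grid.best_params_, grid.best_score_
```

`best_params` then plays the role of the optimal (C, g) pair that the embodiment reports from its grid parameter optimization.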
To verify the practical effect of the invention on intelligent identification of transformer acoustic signals, acoustic signal data of different classes were collected from the Changsha 110 kV Mawang substation (class 1), the Changsha 110 kV Hefeng substation (class 2), the Changsha 110 kV Sports New City site substation (class 3) and the London Nine-Lorentz substation (class 4). The time-domain waveforms are shown in fig. 3, the spectra in fig. 4, and the feature extraction results in fig. 5. Fig. 6 shows the result of grid-parameter optimization of the support vector machine in this embodiment, which yields the optimal penalty parameter C = 256 and the optimal kernel parameter g = 0.10882. Fig. 7 shows the transformer acoustic signal recognition result, where the blue circles are the actual labels, the red asterisks are the model recognition results, and the lower diagram is the confusion matrix of the recognition results. The recognition rate is 100% for class 1, 90% for class 4, and 70% or more for classes 2 and 3. The experimental results show that the multi-view feature fusion transformer acoustic signal identification method can identify the various operating states of the transformer well.
In summary, the method of this embodiment extracts multi-view features of the transformer acoustic signal from the time-domain, frequency-domain and nonlinear-domain dimensions; selects, from this feature set, effective features that reflect the transformer operating state along the four dimensions of monotonicity, robustness, trendability and identifiability; fuses the effective features with kernel principal component analysis; and finally learns the mapping between the fused features and the fault categories with a parameter-optimized support vector machine, realizing automatic identification of transformer acoustic signals. The method does not depend on expert experience, has high diagnostic precision, and can be applied to diagnosing the transformer operating state in complex noise environments.
In addition, the embodiment also provides a multi-view characteristic fused transformer acoustic signal identification system, which comprises a microprocessor and a memory which are connected with each other, wherein the microprocessor is programmed or configured to execute the multi-view characteristic fused transformer acoustic signal identification method. Furthermore, the present embodiment also provides a computer-readable storage medium, in which a computer program is stored, the computer program being programmed or configured by a microprocessor to execute the foregoing method for identifying transformer acoustic signals by multi-view feature fusion.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein. The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks. 
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiment; all technical solutions within the idea of the present invention belong to its protection scope. It should be noted that modifications and adaptations made by those skilled in the art without departing from the principles of the present invention should also be considered within the protection scope of the present invention.

Claims (10)

1. A multi-view feature fused transformer acoustic signal identification method, characterized by comprising the following steps:
S101, extracting features of the transformer acoustic signal from multiple dimensional views, including the time domain, the frequency domain and the nonlinear domain;
S102, selecting effective features from the extracted features according to the criteria of monotonicity, robustness, trend and identifiability;
S103, fusing the selected effective features to obtain a fused feature vector;
and S104, classifying the fused feature vector with a machine learning model to obtain the running state of the transformer.
2. The method for identifying the multi-view feature-fused transformer acoustic signal according to claim 1, wherein the features of the transformer acoustic signal extracted from the time-domain dimension view in step S101 comprise some or all of the peak value, peak-to-peak value, mean value, absolute mean value, root-mean-square value, standard deviation, skewness and kurtosis of the transformer acoustic signal; the features extracted from the frequency-domain dimension view comprise some or all of the average amplitude, center frequency, frequency root mean square and frequency standard deviation of the transformer acoustic signal; and the features extracted from the nonlinear-domain dimension view comprise at least one of the information entropy and the fractal dimension of the transformer acoustic signal.
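The quantities listed in this claim have standard definitions and can be computed directly. The sketch below assumes conventional formulations (amplitude-weighted spectral statistics for the frequency-domain features and a histogram-based entropy estimate); these choices are illustrative assumptions, not taken from the patent text.

```python
import numpy as np

def time_domain_features(x):
    mu, sd = x.mean(), x.std()
    return {
        "peak": np.abs(x).max(),
        "peak_to_peak": x.max() - x.min(),
        "mean": mu,
        "abs_mean": np.abs(x).mean(),
        "rms": np.sqrt(np.mean(x ** 2)),
        "std": sd,
        "skewness": np.mean((x - mu) ** 3) / sd ** 3,
        "kurtosis": np.mean((x - mu) ** 4) / sd ** 4,
    }

def frequency_domain_features(x, fs):
    amp = np.abs(np.fft.rfft(x)) / len(x)          # one-sided amplitude spectrum
    f = np.fft.rfftfreq(len(x), d=1.0 / fs)
    fc = (f * amp).sum() / amp.sum()               # amplitude-weighted center frequency
    return {
        "avg_amplitude": amp.mean(),
        "center_freq": fc,
        "freq_rms": np.sqrt((f ** 2 * amp).sum() / amp.sum()),
        "freq_std": np.sqrt(((f - fc) ** 2 * amp).sum() / amp.sum()),
    }

def entropy_feature(x, bins=64):
    hist, _ = np.histogram(x, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return {"info_entropy": -(p * np.log(p)).sum()}

fs = 1000
x = np.sin(2 * np.pi * 100.0 * np.arange(fs) / fs)  # pure 100 Hz tone
td = time_domain_features(x)
fd = frequency_domain_features(x, fs)
ent = entropy_feature(x)
print(round(td["rms"], 3), round(fd["center_freq"], 1))  # -> 0.707 100.0
```

For the pure tone, the RMS recovers 1/√2 and the center frequency recovers the tone frequency, which is a quick sanity check on the definitions.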
3. The method for recognizing the transformer acoustic signal with the multi-view feature fusion as claimed in claim 2, wherein the computational function expression of the effective features of the monotonicity dimension in step S102 is as follows:
$$\mathrm{Mon}(F)=\frac{1}{K-1}\Big|\,\mathrm{No}\{\mathrm{d}F>0\}-\mathrm{No}\{\mathrm{d}F<0\}\,\Big|$$
in the above formula, Mon(F) represents the effective feature of the feature sequence F in the monotonicity dimension, K is the sequence length of the feature sequence F, dF = F(t_{k+1}) − F(t_k) denotes the difference between successive features, and No{·} represents the counting function.
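Reading No{·} as a count over the signs of successive feature differences (the standard form of this monotonicity metric), a direct implementation is:

```python
import numpy as np

def monotonicity(F):
    """Mon(F): normalized difference between the number of positive and
    negative increments of the feature sequence F (length K)."""
    dF = np.diff(np.asarray(F, dtype=float))
    return abs(np.count_nonzero(dF > 0) - np.count_nonzero(dF < 0)) / (len(F) - 1)

print(monotonicity([1, 2, 3, 4]))      # strictly increasing -> 1.0
print(monotonicity([1, 2, 1, 2, 1]))   # oscillating -> 0.0
```

A strictly monotone sequence scores 1, a purely oscillating one scores 0, matching the intent of the criterion.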
4. The method for identifying the multi-view feature-fused transformer acoustic signal according to claim 3, wherein the computational function expression of the effective features of the robustness dimension in step S102 is as follows:
$$\mathrm{Rob}(F)=\frac{1}{K}\sum_{k=1}^{K}\exp\!\left(-\left|\frac{F(t_k)-F_T(t_k)}{F(t_k)}\right|\right)$$
in the above formula, Rob(F) represents the effective feature of the feature sequence F in the robustness dimension, K is the sequence length of the feature sequence F, F(t_k) is the kth feature in the feature sequence F, and F_T(t_k) is the corresponding smoothed (trend) value of F(t_k).
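The robustness metric compares each feature with a smoothed trend value. The moving-average smoother and its window size in the sketch below are assumptions made for illustration, since the claim does not fix the smoothing method:

```python
import numpy as np

def robustness(F, window=3):
    """Rob(F): mean of exp(-|(F - F_T) / F|), where F_T is a smoothed
    version of F -- here a simple moving average (assumed smoother).
    Assumes nonzero feature values."""
    F = np.asarray(F, dtype=float)
    F_T = np.convolve(F, np.ones(window) / window, mode="same")
    return float(np.mean(np.exp(-np.abs((F - F_T) / F))))

r_smooth = robustness([1, 2, 3, 4, 5])
r_jagged = robustness([1, 5, 1, 5, 1])
print(r_smooth > r_jagged)  # a smooth sequence scores higher -> True
```

Values lie in (0, 1], with sequences that hug their own trend scoring closer to 1.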
5. The method for identifying the transformer acoustic signal with the multi-view feature fusion according to claim 4, wherein the calculation function expression of the effective features of the trend dimension in step S102 is as follows:
$$\mathrm{Tre}(F,T)=\frac{\left|\sum_{k=1}^{K}\big(F(t_k)-\bar{F}\big)\big(t_k-\bar{T}\big)\right|}{\sqrt{\sum_{k=1}^{K}\big(F(t_k)-\bar{F}\big)^{2}\sum_{k=1}^{K}\big(t_k-\bar{T}\big)^{2}}}$$
in the above formula, Tre(F, T) represents the effective feature of the feature sequence F and its corresponding sampling time sequence T in the trend dimension, F(t_k) is the kth feature in the feature sequence F, t_k is the sampling time of the kth feature F(t_k), $\bar{F}$ is the average of all features in the feature sequence F, $\bar{T}$ is the average of all times in the sampling time sequence T, and K is the sequence length of the feature sequence F.
6. The method for identifying the transformer acoustic signal with the multi-view feature fusion according to claim 5, wherein the calculation function expression of the valid features of the identifiability dimension in step S102 is as follows:
$$\mathrm{Ide}(F,C)=\frac{\left|\sum_{k=1}^{K}\big(F(t_k)-\bar{F}\big)\big(c(t_k)-\bar{C}\big)\right|}{\sqrt{\sum_{k=1}^{K}\big(F(t_k)-\bar{F}\big)^{2}\sum_{k=1}^{K}\big(c(t_k)-\bar{C}\big)^{2}}}$$
in the above formula, Ide(F, C) represents the effective feature of the feature sequence F and its corresponding class-label sequence C in the identifiability dimension, F(t_k) is the kth feature in the feature sequence F, $\bar{F}$ is the average of all features in the feature sequence F, c(t_k) is the class label of the kth feature F(t_k), $\bar{C}$ is the average of all class labels in the class-label sequence C, and K is the sequence length of the feature sequence F.
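Analogously to the trend criterion, Ide(F, C) is the absolute Pearson correlation between the feature sequence and its class-label sequence:

```python
import numpy as np

def identifiability(F, C):
    """Ide(F, C): absolute Pearson correlation coefficient between the
    feature sequence F and its class-label sequence C."""
    return abs(np.corrcoef(np.asarray(F, float), np.asarray(C, float))[0, 1])

ide = identifiability([1.0, 1.1, 5.0, 5.2], [0, 0, 1, 1])
print(ide > 0.99)  # feature separates the two classes -> True
```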
7. The method for recognizing the multi-view feature fused transformer acoustic signal according to claim 1, wherein step S103 comprises:
S201, inputting the generated effective features, denoting the ith effective feature as x_i ∈ R^n, i = 1, 2, …, m, where m is the number of generated effective features and R^n is the n-dimensional real feature space;
S202, determining a kernel function and its parameters, and obtaining each (i, j)th element K_{ij} of the kernel matrix K as:
$$K_{ij}=k(x_i,x_j),\quad i,j=1,2,\ldots,m,$$
in the above formula, k is the determined kernel function, and x_i, x_j are the ith and jth effective features;
S203, centering the kernel matrix K according to the following formula:
$$\tilde{K}=K-\mathbf{1}_m K-K\mathbf{1}_m+\mathbf{1}_m K\mathbf{1}_m,$$
in the above formula, $\mathbf{1}_m$ represents the m × m matrix whose elements are all equal to 1/m, where m is the number of generated effective features;
S204, computing the first p eigenvalues λ_1 ≥ λ_2 ≥ … ≥ λ_p of the centered kernel matrix and the corresponding eigenvectors α_1, α_2, …, α_p, normalized so that ⟨α_i, α_i⟩ = 1, i = 1, 2, …, p, where ⟨·,·⟩ represents the inner product operation;
S205, for any test point x ∈ R^n, computing
$$\Phi_j(x)=\sum_{i=1}^{m}\alpha_{j,i}\,k(x_i,x),\quad j=1,2,\ldots,p,$$
where Φ(x) = (Φ_1(x), …, Φ_p(x)) is the fused feature of the test point x, α_{j,i} is the ith component of the eigenvector α_j, and k(x_i, x) is the kernel function evaluated on the pair (x_i, x).
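Steps S201 to S205 can be sketched as follows. An RBF kernel and unit-norm eigenvectors are assumed here, and the test-point kernel vector is used uncentered, following the projection formula of this claim:

```python
import numpy as np

def kpca_fuse(X, x_new, p=2, gamma=1.0):
    """Fuse effective features via kernel PCA (steps S201-S205), with an
    RBF kernel k(a, b) = exp(-gamma * ||a - b||^2) assumed."""
    X = np.asarray(X, dtype=float)
    m = X.shape[0]
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)                              # S202: kernel matrix
    one_m = np.full((m, m), 1.0 / m)
    Kc = K - one_m @ K - K @ one_m + one_m @ K @ one_m   # S203: centering
    w, V = np.linalg.eigh(Kc)                            # S204: eigenpairs
    idx = np.argsort(w)[::-1][:p]                        # top-p, unit-norm vectors
    alphas = V[:, idx]
    k_new = np.exp(-gamma * ((X - x_new) ** 2).sum(-1))  # S205: k(x_i, x)
    return k_new @ alphas                                # p fused components

X_train = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
fused = kpca_fuse(X_train, np.array([0.5, 0.5]), p=2)
print(fused.shape)
```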
8. The method for recognizing the acoustic signal of the multi-view feature-fused transformer according to claim 1, wherein in step S104 the machine learning model is a support vector machine, and the support vector machine obtains the running state of the transformer by selecting the penalty parameter C and the parameters of the kernel function K(·) and solving the optimization problem represented by the following formulas:
$$\min_{\alpha}\ \frac{1}{2}\sum_{i=1}^{N}\sum_{j=1}^{N}\alpha_i\alpha_j y_i y_j K(x_i,x_j)-\sum_{i=1}^{N}\alpha_i$$
$$\mathrm{s.t.}\quad\sum_{i=1}^{N}\alpha_i y_i=0,$$
$$0\le\alpha_i\le C,\quad i=1,2,\ldots,N;$$
and the classification decision function f(x) adopted when solving the optimization problem is:
$$f(x)=\operatorname{sign}\!\left(\sum_{i=1}^{N}\alpha_i y_i K(x,x_i)+b\right),$$
where N is the number of training samples; α_i and α_j are the Lagrange multipliers corresponding to the feature vectors x_i and x_j; y_i and y_j are the labels of x_i and x_j; K(x_i, x_j) is the kernel function value for the pair (x_i, x_j), reducing to x_i^T x_j for a linear kernel, where x_i^T is the transpose of x_i; x_i and x_j are the ith and jth effective features; C is the penalty parameter; sign is the sign function; K(x, x_i) is the kernel function value for x and x_i, where x is the fused feature vector of a test sample; and b is the bias parameter.
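The penalty parameter C and the RBF kernel parameter g (gamma) are typically tuned by an exponential grid search with cross-validation, which is how the embodiment arrives at values such as C = 256 and g ≈ 0.109. A sketch on synthetic stand-in data, assuming scikit-learn is available:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Two well-separated synthetic classes stand in for the fused feature vectors.
X = np.vstack([rng.normal(0.0, 0.3, (20, 2)), rng.normal(2.0, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

# Exponential grid over the penalty parameter C and the RBF parameter gamma.
param_grid = {"C": (2.0 ** np.arange(-2, 9)).tolist(),
              "gamma": (2.0 ** np.arange(-8, 1)).tolist()}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=3)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

The grid values and data here are purely illustrative; the patent's own parameter-optimization procedure is only described as a grid search.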
9. A multi-view feature fused transformer acoustic signal identification system, comprising a microprocessor and a memory connected to each other, wherein the microprocessor is programmed or configured to execute the multi-view feature fused transformer acoustic signal identification method of any one of claims 1 to 8.
10. A computer-readable storage medium storing a computer program, wherein the computer program is configured to be executed by a microprocessor to perform the multi-view feature fused transformer acoustic signal identification method of any one of claims 1 to 8.
CN202211158361.XA 2022-09-22 2022-09-22 Transformer acoustic signal identification method, system and medium with multi-view feature fusion Pending CN115577249A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211158361.XA CN115577249A (en) 2022-09-22 2022-09-22 Transformer acoustic signal identification method, system and medium with multi-view feature fusion


Publications (1)

Publication Number Publication Date
CN115577249A true CN115577249A (en) 2023-01-06

Family

ID=84580611

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211158361.XA Pending CN115577249A (en) 2022-09-22 2022-09-22 Transformer acoustic signal identification method, system and medium with multi-view feature fusion

Country Status (1)

Country Link
CN (1) CN115577249A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117351659A (en) * 2023-12-01 2024-01-05 四川省华地建设工程有限责任公司 Hydrogeological disaster monitoring device and monitoring method
CN117351659B (en) * 2023-12-01 2024-02-20 四川省华地建设工程有限责任公司 Hydrogeological disaster monitoring device and monitoring method

Similar Documents

Publication Publication Date Title
Liu et al. Classifying transformer winding deformation fault types and degrees using FRA based on support vector machine
CN111459700B (en) Equipment fault diagnosis method, diagnosis device, diagnosis equipment and storage medium
US5745382A (en) Neural network based system for equipment surveillance
CN109245099A (en) Power load identification method, device, equipment and readable storage medium
Haroun et al. Multiple features extraction and selection for detection and classification of stator winding faults
CN111428755A (en) Non-invasive load monitoring method
US11443137B2 (en) Method and apparatus for detecting signal features
JP5328858B2 (en) Operating status determination device, operating status determination program, operating status determination method, waveform pattern learning device, waveform pattern learning program, and waveform pattern learning method
CN111177216B (en) Association rule generation method and device for comprehensive energy consumer behavior characteristics
CN111398679B (en) Sub-synchronous oscillation identification and alarm method based on PMU (phasor measurement Unit)
CN110070102B (en) Method for establishing sequence-to-sequence model for identifying power quality disturbance type
KR102272573B1 (en) Method for nonintrusive load monitoring of energy usage data
CN111398798B (en) Circuit breaker energy storage state identification method based on vibration signal interval feature extraction
CN115577249A (en) Transformer acoustic signal identification method, system and medium with multi-view feature fusion
CN112507479A (en) Oil drilling machine health state assessment method based on manifold learning and softmax
Li et al. Intelligent fault diagnosis of aeroengine sensors using improved pattern gradient spectrum entropy
Rodrigues et al. Deep learning for power quality event detection and classification based on measured grid data
CN113987910A (en) Method and device for identifying load of residents by coupling neural network and dynamic time planning
Li et al. A novel application of intelligent algorithms in fault detection of rudder system
Petladwala et al. Canonical correlation based feature extraction with application to anomaly detection in electric appliances
US20230351158A1 (en) Apparatus, system and method for detecting anomalies in a grid
CN115238733A (en) Method for evaluating operation state of switching-on and switching-off coil of high-voltage circuit breaker and related equipment
CN112731208B (en) Low-voltage line fault and abnormity on-line monitoring method, equipment and medium
CN114371426A (en) Transformer winding mechanical state detection method based on non-negative tensor decomposition
CN116449204B (en) Fault detection method for opposed-piston magnetic force linear generator and related device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination