CN115758078A - Transformer fault detection method and device, terminal equipment and readable storage medium - Google Patents

Transformer fault detection method and device, terminal equipment and readable storage medium

Info

Publication number
CN115758078A
CN115758078A CN202211398183.8A
Authority
CN
China
Prior art keywords
transformer
voiceprint data
loss function
sample set
data sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211398183.8A
Other languages
Chinese (zh)
Inventor
柯清派
史训涛
刘通
李楷然
孙健
项恩新
喻磊
胡冉
厉冰
欧鸣宇
白浩
徐敏
谈赢杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CSG Electric Power Research Institute
Shenzhen Power Supply Bureau Co Ltd
Original Assignee
CSG Electric Power Research Institute
Shenzhen Power Supply Bureau Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CSG Electric Power Research Institute, Shenzhen Power Supply Bureau Co Ltd filed Critical CSG Electric Power Research Institute
Priority to CN202211398183.8A priority Critical patent/CN115758078A/en
Publication of CN115758078A publication Critical patent/CN115758078A/en
Pending legal-status Critical Current

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04: INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S: SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Supply And Distribution Of Alternating Current (AREA)

Abstract

The application provides a transformer fault detection method, an apparatus, a terminal device and a readable storage medium. The method comprises: obtaining voiceprint data of a transformer to be detected; inputting the voiceprint data of the transformer to be detected into a fault detection model, and outputting a fault mode of the transformer. The fault detection model is obtained by training a convolutional neural network model with a multi-source-domain transformer voiceprint data sample set and an optimization objective function; the optimization objective function includes an intrinsic distribution loss function, an adversarial training loss function and a cross-entropy loss function. Because the method trains the model on a multi-source-domain transformer voiceprint data sample set and uses the optimization objective function during training, the trained fault detection model has stronger fault recognition capability; therefore, the fault type of the transformer can be determined more accurately by the fault detection model.

Description

Transformer fault detection method and device, terminal equipment and readable storage medium
Technical Field
The application relates to the technical field of fault detection, in particular to a transformer fault detection method, a transformer fault detection device, terminal equipment and a readable storage medium.
Background
With the improvement of information processing capability, the development of computer technology and various sensors, and the continuous refinement of advanced control theories such as expert systems and fuzzy control, data-driven transformer fault detection algorithms have been studied in increasing depth. In practice, however, because training data and test data differ in distribution, the performance of data-driven transformer fault detection algorithms often degrades in real-world applications.
Although recently emerging fault diagnosis methods based on transfer learning can learn transferable features from relevant source data and adapt the diagnostic model to target data, these methods are still only applicable to target domains with a known prior data distribution. For unseen domains, the generalization capability of the transferred model cannot be guaranteed. Since the working state of a transformer changes constantly during operation, how to construct a fault diagnosis model with stronger generalization capability is important under such conditions.
Disclosure of Invention
In view of this, the present application provides a transformer fault detection method, apparatus, terminal device and readable storage medium.
In a first aspect, an embodiment of the present application provides a transformer fault detection method, where the method includes:
acquiring voiceprint data of a transformer to be detected;
inputting the voiceprint data of the transformer to be detected into a fault detection model, and outputting a fault mode of the transformer;
the fault detection model is obtained by training a convolutional neural network model with a multi-source-domain transformer voiceprint data sample set and an optimization objective function; the optimization objective function includes an intrinsic distribution loss function, an adversarial training loss function and a cross-entropy loss function.
In a second aspect, an embodiment of the present application provides a transformer fault detection apparatus, where the apparatus includes:
the voiceprint data acquisition module is used for acquiring voiceprint data of the transformer to be detected;
the fault mode output module is used for inputting the voiceprint data of the transformer to be detected into the fault detection model and outputting the fault mode of the transformer;
the fault detection model is obtained by training a convolutional neural network model with a multi-source-domain transformer voiceprint data sample set and an optimization objective function; the optimization objective function includes an intrinsic distribution loss function, an adversarial training loss function and a cross-entropy loss function.
In a third aspect, an embodiment of the present application provides a terminal device, including: a memory; one or more processors coupled with the memory; one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the transformer fault detection method provided by the first aspect above.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a program code is stored in the computer-readable storage medium, and the program code may be invoked by a processor to execute the transformer fault detection method provided in the first aspect.
According to the transformer fault detection method, apparatus, terminal device and computer-readable storage medium, voiceprint data of a transformer to be detected are first obtained; the voiceprint data of the transformer to be detected are then input into a fault detection model, and a fault mode of the transformer is output. The fault detection model is obtained by training a convolutional neural network model with a multi-source-domain transformer voiceprint data sample set and an optimization objective function; the optimization objective function includes an intrinsic distribution loss function, an adversarial training loss function and a cross-entropy loss function.
According to the transformer fault detection method provided by the embodiment of the application, a multi-source-domain transformer voiceprint data sample set is used to train the model, and the optimization objective function is used during training, so that the trained fault detection model has stronger fault recognition capability; therefore, the fault type of the transformer can be determined more accurately by the fault detection model.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show only embodiments of the present application, and that those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic view of an application scenario of a transformer fault detection method provided in an embodiment of the present application;
fig. 2 is a schematic flowchart of a transformer fault detection method according to an embodiment of the present application;
fig. 3 is a schematic diagram of a convolutional neural network model and a model training process provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of a transformer fault detection apparatus provided in an embodiment of the present application;
fig. 5 is a schematic structural diagram of a terminal device provided in an embodiment of the present application;
fig. 6 is a schematic structural diagram of a computer-readable storage medium provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below. It should be understood that the described embodiments are only a part of the embodiments of the present application, not all of them. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
For more detailed explanation of the present application, a transformer fault detection method, an apparatus, a terminal device and a computer-readable storage medium provided by the present application are described in detail below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 shows a schematic diagram of an application scenario of the transformer fault detection method provided in the embodiment of the present application. The application scenario includes a terminal device 100 provided in the embodiment of the present application. The terminal device 100 may be any of various electronic devices having a display screen (such as blocks 102, 104, 106 and 108), including but not limited to a smartphone and a computer device, where the computer device may be at least one of a desktop computer, a portable computer, a laptop computer, a tablet computer, and the like. The terminal device 100 generally stands for one of a plurality of terminal devices; this embodiment is illustrated with the terminal device 100 only. Those skilled in the art will appreciate that the number of terminal devices may be greater or fewer: there may be only a few terminal devices, or tens or hundreds of them, or more. The number and type of terminal devices are not limited in the embodiment of the present application. The terminal device 100 may be used to perform the transformer fault detection method provided in the embodiments of the present application.
In an optional implementation manner, the application scenario may include a server in addition to the terminal device 100 provided in the embodiment of the present application, where a network is provided between the server and the terminal device. The network is used to provide a medium for a communication link between the terminal device and the server. The network may include various connection types, such as wired, wireless communication links, or fiber optic cables, to name a few.
It should be understood that the numbers of terminal devices, networks and servers are merely illustrative; there may be any number of each, as required by the implementation. For example, the server may be a server cluster composed of a plurality of servers. The terminal device interacts with the server through the network to receive or send messages and the like. The server may be a server that provides various services, and may be configured to perform the steps of the transformer fault detection method provided in the embodiments of the present application. In addition, when the terminal device executes the transformer fault detection method provided in the embodiment of the present application, some steps may be executed at the terminal device and some at the server, which is not limited herein.
Based on this, the embodiment of the application provides a transformer fault detection method. Referring to fig. 2, fig. 2 is a schematic flowchart illustrating a transformer fault detection method according to an embodiment of the present application, and taking the application of the method to the terminal device in fig. 1 as an example for description, the method includes the following steps:
step S110, obtaining voiceprint data of the transformer to be detected.
The voiceprint data of the transformer to be detected refers to the voiceprint data of any transformer whose fault mode needs to be detected. The voiceprint data is generated by the voiceprint vibration of the transformer itself, which arises from the vibration of the cooling system and the vibration of the transformer body (a general name for the iron core, the windings and so on). The voiceprint vibration of the transformer surface is closely related to the clamping condition of the transformer windings and iron core, the deformation of the windings, and similar conditions. Whether the transformer windings, iron core and other parts have defects can therefore be judged from the degree of change of the voiceprint vibration signals measured at each position of the transformer, and voiceprint vibration abnormalities of the transformer can be monitored and located online.
Step S120, inputting voiceprint data of the transformer to be detected into a fault detection model, and outputting a fault mode of the transformer;
the fault detection model is obtained by training a convolutional neural network model with a multi-source-domain transformer voiceprint data sample set and an optimization objective function; the optimization objective function includes an intrinsic distribution loss function, an adversarial training loss function and a cross-entropy loss function.
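For illustration, a minimal inference sketch for steps S110 and S120 follows, assuming PyTorch; `fault_model`, `detect_fault` and the preprocessing shape are hypothetical names and assumptions, not prescribed by the application.

```python
# Minimal sketch of steps S110-S120 (PyTorch assumed).
# `fault_model` is a trained fault detection model; names are illustrative.
import torch

def detect_fault(fault_model, voiceprint_segment):
    """voiceprint_segment: 1-D tensor of raw voiceprint samples (step S110)."""
    fault_model.eval()
    with torch.no_grad():
        x = voiceprint_segment.view(1, 1, -1)   # (batch, channel, length)
        logits = fault_model(x)                 # step S120: forward pass
        return logits.argmax(dim=1).item()      # index of the predicted fault mode
```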
Specifically, model training consists of giving an input vector and a target output value, feeding the input vector into one or more network structures or functions to obtain an actual output value, computing a deviation from the target output value and the actual output value, and judging whether the deviation is within an allowable range. If it is within the allowable range, training ends and the relevant parameters are fixed; if not, parameters in the network structure or function are adjusted continually until the deviation falls within the allowable range or some termination condition is reached, after which training ends, the relevant parameters are fixed, and the trained model is obtained from the fixed parameters.
The training of the fault detection model in the embodiment of the present application is, in practice, the following: labeled transformer voiceprint data samples from the multi-source-domain transformer voiceprint data sample set are fed into the convolutional neural network as input, with the fault mode as the target output value. The hidden layers are computed and the output of each layer of units obtained; the deviation between the target classification result and the actual classification result is computed. When the deviation is outside the allowable range, the errors of the neurons in the network layers are calculated, the error gradients are obtained, and the weights are updated; the hidden layers are then computed again and the deviation re-evaluated, until the deviation is within the allowable range. Training then ends and the weights and thresholds are fixed, yielding the fault detection model. When the network parameters of the convolutional neural network are updated during training, an optimization objective function composed of an intrinsic distribution loss function, an adversarial training loss function and a cross-entropy loss function is used, so that the fault detection model produced by training is more accurate.
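As a rough illustration of this loop, a hedged sketch follows, assuming PyTorch; `total_objective` stands for the optimization objective defined later, and all names are assumptions.

```python
# Minimal sketch of the train-until-deviation-acceptable loop (PyTorch assumed).
import torch

def train(model, train_loader, total_objective, epochs=100, lr=1e-3):
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y, domain in train_loader:                 # labeled multi-source samples
            optimizer.zero_grad()
            loss = total_objective(model, x, y, domain)   # deviation from target output
            loss.backward()                               # error gradients per layer
            optimizer.step()                              # weight update
    return model
```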
It should be noted that the transformer voiceprint data sample set of the multi-source domain refers to a set formed by transformer voiceprint data samples collected under multiple different fault conditions or working condition modes of the transformer.
According to the transformer fault detection method provided by the embodiment of the application, a multi-source-domain transformer voiceprint data sample set is used to train the model, and an optimization objective function is used during training, so that the trained fault detection model has stronger fault recognition capability; therefore, the fault type of the transformer can be determined more accurately by the fault detection model.
In one embodiment, establishing a fault detection model includes:
s1: acquiring a voiceprint data sample set of the transformer; the voiceprint data sample set comprises a plurality of transformer voiceprint data samples, and the transformer voiceprint data samples comprise fault voiceprint data samples and normal voiceprint data samples.
Specifically, the transformer voiceprint data sample set may also be referred to as a voiceprint signal dataset or a source-domain dataset of the voiceprint signal. Assume that $\mathcal{D}_s$ is the source-domain dataset of the voiceprint signal. Considering that the transformer fault data may come from different operating conditions, such as different voltages or currents, $\mathcal{D}_s$ should be extended to multiple source domains, i.e. $\mathcal{D}_s = \{\mathcal{D}_s^k\}_{k=1}^{K}$, where $K$ is the number of source domains, i.e. the number of different operating conditions. The $k$-th source domain contains $n$ labeled samples $\mathcal{D}_s^k = \{(x_i^k, y_i^k)\}_{i=1}^{n}$, where $x_i^k \in \mathbb{R}^{N_{\mathrm{input}}}$ is the raw data, $N_{\mathrm{input}}$ is the sample length, $y_i^k \in \{1, 2, \dots, N_c\}$ is the failure mode of the transformer, and $N_c$ is the total number of transformer failure modes. The transformer voiceprint data sample set is thus $\mathcal{D}_s = \{\mathcal{D}_s^k\}_{k=1}^{K}$, and the transformer failure modes $\{1, 2, \dots, N_c\}$ are predetermined.
In addition, assume that $\mathcal{D}_t$ is the target-domain dataset of the voiceprint signal, which stores the voiceprint data of the transformer to be detected. Likewise, the target-domain data with $n_t$ samples is recorded as $\mathcal{D}_t = \{(x_i^t, y_i^t)\}_{i=1}^{n_t}$, where $x_i^t$ is the voiceprint sample data and $y_i^t$ is the corresponding label. The transformer failure modes are the same in the source domain and the target domain. In this embodiment, the transformer voiceprint data samples in the source domains are used to train the convolutional neural network model so as to obtain the fault detection model; the fault detection model is then applicable to analyzing the voiceprint data of the transformer to be detected in a target domain under unknown working conditions, so as to obtain the failure mode of the transformer.
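A hedged sketch of how such a multi-source sample set $\mathcal{D}_s = \{\mathcal{D}_s^k\}_{k=1}^{K}$ might be assembled is given below; the sample length, class count and random stand-in data are assumptions for illustration only.

```python
# Hypothetical construction of the multi-source-domain sample set D_s (PyTorch).
import torch
from torch.utils.data import TensorDataset, ConcatDataset

N_INPUT = 4096   # assumed sample length N_input
N_C = 5          # assumed number of failure modes N_c

def make_source_domain(x_raw, labels, domain_id):
    """x_raw: (n, N_input) voiceprint segments; labels: failure modes in {0..N_C-1}."""
    d = torch.full((len(labels),), domain_id)   # operating-condition index k
    return TensorDataset(x_raw, labels, d)

# K = 3 operating conditions with n = 128 labeled samples each (random stand-ins)
domains = [make_source_domain(torch.randn(128, N_INPUT),
                              torch.randint(0, N_C, (128,)), k)
           for k in range(3)]
D_s = ConcatDataset(domains)
```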
S2: and respectively carrying out internal distribution generalization and external generalization processing on the transformer voiceprint data sample set to construct an internal distribution loss function and an antithetical training loss function.
In one embodiment, in step S2, performing intrinsic distribution generalization on the transformer voiceprint data sample set to construct the intrinsic distribution loss function includes: performing cluster analysis on the plurality of transformer voiceprint data samples based on a metric learning algorithm and according to working conditions; constructing a triplet set from the clustered transformer voiceprint data samples; and establishing a triplet loss function from the triplet set to form the intrinsic distribution loss function.
Specifically, intrinsic distribution generalization of the transformer voiceprint data sample set proceeds by clustering and generalizing the transformer voiceprint data samples in the set with a metric learning algorithm. The clustering objective, and result, is to minimize the distance between any two transformer voiceprint data samples under the same working condition and to maximize the distance between any two samples under different working conditions, thereby increasing the discriminative power of the diagnosis model.
After clustering is finished, a triplet set is constructed from the clustering result; the triplet set contains a number of triplets, and any three samples can form a triplet. Three samples define the loss function of a triplet: an anchor $x_a^k$, a positive sample $x_p^k$ and a negative sample $x_n^k$, where $x_a^k$ and $x_p^k$ belong to the same class and $x_n^k$ belongs to a different class. The minimization process makes the distance between $x_a^k$ and $x_p^k$ smaller than that between $x_a^k$ and $x_n^k$, which can be expressed as:
$\|f(x_a^k) - f(x_p^k)\|_2^2 + \alpha < \|f(x_a^k) - f(x_n^k)\|_2^2, \quad \forall (x_a^k, x_p^k, x_n^k) \in \mathcal{T}$
where $\mathcal{T}$ is the set of all triplets, $\alpha$ is a positive margin, and $f(\cdot)$ denotes the feature extractor in the convolutional neural network model. For each anchor $x_a^k$, positive samples $x_p^k$ and negative samples $x_n^k$ are selected by traversal. The anchor is the initially chosen point; a sample of the same class as the anchor is recorded as a positive sample, and a sample of a different class as a negative sample. The total triplet loss over the data samples in the triplet set (i.e. the intrinsic distribution loss function) is then:
$\mathcal{L}_{tri} = \sum_{i=1}^{N_{tr}} \max\big(\|f(x_a^i) - f(x_p^i)\|_2^2 - \|f(x_a^i) - f(x_n^i)\|_2^2 + \alpha,\ 0\big)$
where the source dataset contains $N_{tr}$ triplets. It should be noted that, compared with a loss that only considers the pairwise distances of positive and negative samples, the margin $\alpha$ in the triplet loss avoids projecting all samples onto the same point in the feature space. In this way, minimizing the triplet loss function forces the convolutional neural network model to maximize the distance between samples of different classes, thereby generating more discriminative boundaries.
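A minimal sketch of this triplet loss, assuming PyTorch and a margin value chosen for illustration:

```python
# Triplet (intrinsic distribution) loss sketch; f_a, f_p, f_n are feature-extractor
# outputs for anchor, positive and negative samples; alpha is an assumed margin.
import torch
import torch.nn.functional as F

def triplet_loss(f_a, f_p, f_n, alpha=0.2):
    d_pos = (f_a - f_p).pow(2).sum(dim=1)        # ||f(x_a) - f(x_p)||^2
    d_neg = (f_a - f_n).pow(2).sum(dim=1)        # ||f(x_a) - f(x_n)||^2
    return F.relu(d_pos - d_neg + alpha).sum()   # summed over the N_tr triplets
```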
In one embodiment, in step S2, performing external generalization on the transformer voiceprint data sample set to construct the adversarial training loss function includes: adding variation factors to the transformer voiceprint data samples in the transformer voiceprint data sample set to form a varied transformer voiceprint data sample set; and performing min-max training of the convolutional neural network model and the discriminator with the transformer voiceprint data sample set and the varied transformer voiceprint data sample set respectively, and constructing a binary cross entropy between the predicted probability distribution and the true probability distribution so as to form the adversarial training loss function.
In one embodiment, the variation factors include Gaussian noise and amplitude drift; adding the variation factors to the transformer voiceprint data samples in the transformer voiceprint data sample set includes: adding Gaussian noise to the transformer voiceprint data samples in the transformer voiceprint data sample set and performing amplitude drift processing.
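A minimal sketch of these variation factors follows; the exact perturbation form and the factor values are assumptions, since the application only states that Gaussian noise is added and amplitude drift is applied.

```python
# Hypothetical perturbation g(x): amplitude drift plus additive Gaussian noise.
import torch

def perturb(x, alpha1=0.05, alpha2=0.01):
    """x: (n, N_input) batch; alpha1: drift factor, alpha2: noise power (kept small)."""
    drifted = x * (1.0 + alpha1 * torch.randn(x.size(0), 1))  # amplitude drift
    noise = alpha2 * torch.randn_like(x)                      # Gaussian noise
    return drifted + noise
```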
In one embodiment, performing min-max training of the convolutional neural network model and the discriminator with the transformer voiceprint data sample set and the varied transformer voiceprint data sample set respectively, and constructing the binary cross entropy between the predicted probability distribution and the true probability distribution, includes: inputting the transformer voiceprint data sample set and the varied transformer voiceprint data sample set into the feature extractor in the convolutional neural network model for feature extraction, the results being recorded as a first feature and a second feature respectively; performing a min-max operation on the first feature and the second feature with the discriminator; and constructing the binary cross entropy between the predicted probability distribution and the true probability distribution from the result of the operation.
Specifically, the external representation of the source-domain data can be generalized through an auxiliary adversarial training method, which reduces the risk of overfitting, improves the robustness of the diagnosis model, and prevents misdiagnosis caused by interference such as environmental noise or amplitude fluctuation. Small variations (i.e. the variation factors) are introduced into the original fault samples (i.e. the transformer voiceprint data samples in the sample set) to generate new fault samples (i.e. the varied transformer voiceprint data samples), which serve as a dataset for network model training; adversarial training between the original fault samples and the new fault samples strengthens the learning of robust features.
The small variations include Gaussian noise and amplitude drift.
It should be noted that a discriminator needs to be added for the adversarial training; in addition, the convolutional neural network model is provided with a feature extractor. The feature extractor has a feature extraction operator $f(\cdot; \psi)$, and the discriminator has a discrimination operator $D(\cdot; \theta_d)$. The feature extraction operator extracts features from the original fault samples and the new fault samples respectively, yielding the first feature and the second feature; the role of the discrimination operator is to minimize the difference between the two extracted features.
The specific process of the auxiliary adversarial training is as follows: min-max training is carried out between the feature extraction operator $f(\cdot; \psi)$ and the discrimination operator $D(\cdot; \theta_d)$, which can be expressed as:
$\min_{\psi} \max_{\theta_d} \mathcal{L}_{adv}\big(f(X_s; \psi),\ f(g(X_s; \alpha_1, \alpha_2); \psi)\big)$
where $\alpha_1$ and $\alpha_2$ are the amplitude-drift factor of the signal amplitude and the noise power of the added Gaussian noise, respectively, and $g(\cdot)$ denotes the perturbed version of the original samples. $\alpha_1$ and $\alpha_2$ are set to small values so as to produce only a slight change to the original fault samples. The output of $D(\cdot; \theta_d)$ enters a binary cross entropy between the predicted probability distribution and the true probability distribution, i.e.
$\mathcal{L}_{adv} = -\,\mathbb{E}\Big[\sum_{j} d_j \log D_j\Big]$
where $d_j$ denotes the labels of the two classes of samples: the label of $X_s$ is $[0, 1]$ and the label of $g(X_s)$ is $[1, 0]$; $D_j$ is the $j$-th element of the discriminator output, i.e. of $D(f(\cdot; \psi); \theta_d)$.
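A hedged sketch of this binary cross entropy, assuming PyTorch and a two-logit discriminator; the class-index encoding of the labels [0, 1] and [1, 0] is an assumption.

```python
# Binary cross entropy between original features f(X_s) and perturbed features
# f(g(X_s)); X_s samples take class 1, g(X_s) samples take class 0 (assumed mapping).
import torch
import torch.nn.functional as F

def adversarial_loss(discriminator, feat_orig, feat_pert):
    logits = torch.cat([discriminator(feat_orig), discriminator(feat_pert)])
    labels = torch.cat([torch.ones(len(feat_orig), dtype=torch.long),    # X_s
                        torch.zeros(len(feat_pert), dtype=torch.long)])  # g(X_s)
    return F.cross_entropy(logits, labels)   # two-class cross entropy
```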
The auxiliary adversarial training generalizes the external representation, which reduces the risk of overfitting, improves the robustness of the diagnosis model, and prevents misdiagnosis caused by interference such as environmental noise or amplitude fluctuation.
S3: and constructing an optimization objective function according to the intrinsic distribution loss function, the antithetical training loss function and the cross entropy loss function.
In one embodiment, the convolutional neural network model includes a feature extractor and a classifier; the cross entropy loss function is the cross entropy loss function of the classifier.
Specifically, the cross-entropy loss function of the classifier $C(\cdot; \theta_c)$ is:
$\mathcal{L}_{ce} = -\,\mathbb{E}\Big[\sum_{j=1}^{N_c} y_j \log C_j\Big]$
where $y_j$ is the one-hot encoding of the failure-mode label and $C_j$ is the $j$-th element of the vector output by the last layer of $C(\cdot; \theta_c)$.
Combining the three loss functions, namely the intrinsic distribution loss function, the adversarial training loss function and the cross-entropy loss function, the optimization objective function can be generated. The expression of the optimization objective function is:
$\mathcal{L} = \mathcal{L}_{ce} + \lambda_1 \mathcal{L}_{tri} + \lambda_2 \mathcal{L}_{adv}$
where $\lambda_1$ and $\lambda_2$ are hyperparameters that balance the three losses. The cross-entropy term is closely related to the accuracy of fault diagnosis, so the balance weights should be chosen to ensure the dominance of the cross-entropy term in the total objective function; in this embodiment, the hyperparameters $\lambda_1$ and $\lambda_2$ are both set to 0.015.
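A one-line sketch of the combined objective with the stated balance weights; the sign convention on the adversarial term assumes the gradient reversal layer described later.

```python
# Total optimization objective with lambda1 = lambda2 = 0.015, per this embodiment.
def total_objective(ce_loss, tri_loss, adv_loss, lam1=0.015, lam2=0.015):
    # The cross-entropy term dominates; the GRL (see below) realizes the
    # min-max on the adversarial term, so a plain sum suffices here (assumption).
    return ce_loss + lam1 * tri_loss + lam2 * adv_loss
```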
It should be noted that, when the convolutional neural network model is trained with the optimization objective function, the goals of the method are cross-entropy loss minimization, triplet loss minimization and adversarial loss training.
S4: and constructing a convolutional neural network model.
Specifically, the convolutional neural network model mainly comprises two parts: a feature extractor $f(\cdot; \psi): X \to Z$, which maps the original fault samples to the feature space $Z$, and a classifier $C(\cdot; \theta_c)$, which identifies failure modes from the extracted features. As shown in fig. 3, the feature extractor $f(\cdot; \psi)$ consists of 5 convolutional layers and 5 max-pooling layers; the extracted feature map is expanded into a one-dimensional matrix and stored in the feature space. The forward computation of the three optimization targets is carried out using the fault features, where the classifier $C(\cdot; \theta_c)$ and the discriminator $D(\cdot; \theta_d)$ each consist of a fully connected layer.
In addition, the training process of the fault detection model is also shown in fig. 3: the multi-source-domain voiceprint data are the original fault samples, and the multi-source-domain voiceprint data with noise are the new fault samples.
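A hedged sketch of this architecture follows, assuming PyTorch and 1-D convolutions over raw voiceprint segments; kernel sizes, channel widths and the input length are assumptions, since the application only fixes 5 convolutional layers, 5 max-pooling layers and fully connected heads.

```python
# Sketch of the fig. 3 architecture (assumed hyperparameters).
import torch.nn as nn

class FeatureExtractor(nn.Module):            # f(.; psi): X -> Z
    def __init__(self, in_ch=1, width=16):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(5):                    # 5 conv + 5 max-pool layers
            layers += [nn.Conv1d(ch, width, kernel_size=3, padding=1),
                       nn.ReLU(),
                       nn.MaxPool1d(2)]
            ch = width
        self.body = nn.Sequential(*layers)
        self.flatten = nn.Flatten()           # expand feature map to a 1-D vector

    def forward(self, x):
        return self.flatten(self.body(x))

# With an assumed input length of 4096, pooling by 2^5 leaves 128 positions.
classifier = nn.Linear(16 * 128, 5)           # C(.; theta_c): fully connected layer
discriminator = nn.Linear(16 * 128, 2)        # D(.; theta_d): fully connected layer
```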
S5: and updating the weight parameters of the convolutional neural network model until convergence by minimizing the optimization objective function so as to obtain a fault detection model.
Specifically, a gradient descent algorithm may be employed to train the convolutional neural network model. All parameters of the feature extractor, the classifier and the discriminator are optimized end to end; the optimization process is:
$(\hat{\psi}, \hat{\theta}_c) = \arg\min_{\psi, \theta_c} \mathcal{L}(\psi, \theta_c, \theta_d), \qquad \hat{\theta}_d = \arg\min_{\theta_d} \mathcal{L}_{adv}(\psi, \theta_d)$
After the parameters $\psi$ and $\theta_c$ are optimized, $\mathcal{L}_{ce}$ and $\mathcal{L}_{tri}$ are minimal. The parameters $\psi$ and $\theta_d$ are trained adversarially on $\mathcal{L}_{adv}$: with respect to $\psi$, $\mathcal{L}_{adv}$ is maximized, while with respect to $\theta_d$ it is minimized. For this purpose, a gradient reversal layer (GRL) is inserted between the feature extractor and the discriminator, which flips the sign of the gradient of $\mathcal{L}_{adv}$ during back-propagation from the discriminator to the feature extractor. Here $\psi$ denotes the parameters of the feature-extractor network and $\theta_c$ the parameters of the classifier network; $\mathcal{L}_{ce}$ is the cross-entropy loss function and $\mathcal{L}_{tri}$ is the intrinsic distribution loss function (i.e. the triplet loss function).
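A minimal sketch of such a gradient reversal layer, in the standard construction (identity forward, sign-flipped gradient backward); the scaling argument is an assumption.

```python
# Gradient reversal layer (GRL) inserted between f(.; psi) and D(.; theta_d).
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam=1.0):
        ctx.lam = lam
        return x.view_as(x)                    # identity on the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None    # flip the gradient sign

def grl(x, lam=1.0):
    return GradReverse.apply(x, lam)
```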
In addition, the parameters $\psi$, $\theta_c$ and $\theta_d$ of the model are updated with a stochastic gradient descent algorithm, and the process can be expressed as:
$\psi^{q+1} = \psi^{q} - \mu \frac{\partial \mathcal{L}}{\partial \psi}, \qquad \theta_c^{q+1} = \theta_c^{q} - \mu \frac{\partial \mathcal{L}}{\partial \theta_c}, \qquad \theta_d^{q+1} = \theta_d^{q} - \mu \frac{\partial \mathcal{L}}{\partial \theta_d} \qquad (8)$
where $\mu$ is the learning rate and $q$ denotes the $q$-th iteration update; $\theta_d$ denotes the parameters of the discriminator network. Equation (8) updates $\psi$, $\theta_c$ and $\theta_d$ by stochastic gradient descent until the total optimization objective function $\mathcal{L}$ is minimal.
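A hedged sketch of one such update step; the optimizer setup and function names are assumptions, with the GRL supplying the reversed adversarial gradient for $\psi$.

```python
# One stochastic-gradient update of psi, theta_c and theta_d, per Equation (8).
import torch

def train_step(optimizer, extractor, classifier, discriminator, batch, objective):
    optimizer.zero_grad()
    loss = objective(extractor, classifier, discriminator, batch)  # total objective
    loss.backward()     # GRL flips the adversarial gradient w.r.t. psi here
    optimizer.step()    # q-th iteration SGD update with learning rate mu
    return loss.item()

# Usage sketch: mu is the learning rate from the text.
# params = [*extractor.parameters(), *classifier.parameters(),
#           *discriminator.parameters()]
# optimizer = torch.optim.SGD(params, lr=mu)
```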
It should be understood that, although the steps in the flowchart of fig. 2 are shown in sequence as indicated by the arrows, they are not necessarily performed in that sequence. Unless explicitly stated otherwise herein, the execution of these steps is not strictly ordered, and they may be performed in other sequences. Moreover, at least some of the steps in fig. 2 may include multiple sub-steps or stages that are not necessarily performed at the same moment but may be performed at different moments, and their order of execution is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
The embodiments disclosed in the present application describe the transformer fault detection method in detail. The method disclosed in the present application can be implemented with various types of devices, so the present application also discloses a transformer fault detection apparatus corresponding to the method; specific embodiments are described in detail below.
Referring to fig. 4, a transformer fault detection apparatus disclosed in an embodiment of the present application mainly includes:
the voiceprint data acquisition module 410 is used for acquiring voiceprint data of the transformer to be detected;
the fault mode output module 420 is configured to input voiceprint data of the transformer to be detected to the fault detection model, and output a fault mode of the transformer;
the fault detection model is obtained by training a convolutional neural network model with a multi-source-domain transformer voiceprint data sample set and an optimization objective function; the optimization objective function includes an intrinsic distribution loss function, an adversarial training loss function and a cross-entropy loss function.
In one embodiment, an apparatus comprises:
the sample set acquisition module is used for acquiring a transformer voiceprint data sample set; the voiceprint data sample set comprises a plurality of transformer voiceprint data samples, and the transformer voiceprint data samples comprise fault voiceprint data samples and normal voiceprint data samples;
the loss function construction module is used for respectively performing intrinsic distribution generalization and external generalization on the transformer voiceprint data sample set so as to construct the intrinsic distribution loss function and the adversarial training loss function;
the optimization objective function construction module is used for constructing the optimization objective function according to the intrinsic distribution loss function, the adversarial training loss function and the cross-entropy loss function;
the convolution model construction module is used for constructing a convolution neural network model;
and the fault model determining module is used for updating the weight parameters of the convolutional neural network model until convergence by minimizing the optimization objective function so as to obtain a fault detection model.
In one embodiment, the loss function construction module is used for performing cluster analysis on a plurality of transformer voiceprint data samples based on a metric learning algorithm and according to working conditions; constructing a triple set according to the clustered transformer voiceprint data samples; and establishing a loss function of the triplets according to the triple set to form an intrinsic distribution loss function.
In one embodiment, the loss function construction module is configured to add variation factors to the transformer voiceprint data samples in the transformer voiceprint data sample set to form a varied transformer voiceprint data sample set; and to perform min-max training of the convolutional neural network model and the discriminator with the transformer voiceprint data sample set and the varied transformer voiceprint data sample set respectively, constructing a binary cross entropy between the predicted probability distribution and the true probability distribution so as to form the adversarial training loss function.
In one embodiment, the variation factors include Gaussian noise and amplitude drift, and the loss function construction module is configured to add Gaussian noise to the transformer voiceprint data samples in the transformer voiceprint data sample set and to perform amplitude drift processing.
In one embodiment, the loss function construction module is configured to input the transformer voiceprint data sample set and the varied transformer voiceprint data sample set into the feature extractor in the convolutional neural network model for feature extraction, the results being recorded as a first feature and a second feature respectively; to perform a min-max operation on the first feature and the second feature with the discriminator; and to construct the binary cross entropy between the predicted probability distribution and the true probability distribution from the result of the operation.
In one embodiment, the convolutional neural network model includes a feature extractor and a classifier; the cross entropy loss function is the cross entropy loss function of the classifier.
For the specific definition of the transformer fault detection device, reference may be made to the above definition of the method, which is not described herein again. The various modules in the above-described apparatus may be implemented in whole or in part by software, hardware, and combinations thereof. The modules can be embedded in a hardware form or independent of a processor in the terminal device, and can also be stored in a memory in the terminal device in a software form, so that the processor can call and execute operations corresponding to the modules.
Referring to fig. 5, fig. 5 is a block diagram illustrating a structure of a terminal device according to an embodiment of the present application. The terminal device 50 may be a computer device. The terminal device 50 in the present application may include one or more of the following components: a processor 52, a memory 54, and one or more applications, wherein the one or more applications may be stored in the memory 54 and configured to be executed by the one or more processors 52, the one or more applications configured to perform the methods described above as applied to the transformer fault detection method embodiments.
Processor 52 may include one or more processing cores. The processor 52 connects various parts within the overall terminal device 50 using various interfaces and lines, and performs various functions of the terminal device 50 and processes data by running or executing instructions, programs, code sets or instruction sets stored in the memory 54 and by calling data stored in the memory 54. Optionally, the processor 52 may be implemented in hardware in at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA) and Programmable Logic Array (PLA) form. The processor 52 may integrate one of, or a combination of, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem and the like. The CPU mainly handles the operating system, user interface, application programs and so on; the GPU is used for rendering and drawing display content; the modem is used to handle wireless communication. It is understood that the modem may also not be integrated into the processor 52 but be implemented by a separate communication chip.
The memory 54 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 54 may be used to store instructions, programs, code, code sets or instruction sets. The memory 54 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the method embodiments described above, and the like. The data storage area may store data created by the terminal device 50 during use, and the like.
Those skilled in the art will appreciate that the structure shown in fig. 5 is a block diagram of only a portion of the structure relevant to the present application, and does not constitute a limitation on the terminal device to which the present application is applied, and a particular terminal device may include more or less components than those shown in the drawings, or combine certain components, or have a different arrangement of components.
In summary, the terminal device provided in the embodiment of the present application is used to implement the corresponding transformer fault detection method in the foregoing method embodiment, and has the beneficial effects of the corresponding method embodiment, which are not described herein again.
Referring to fig. 6, a block diagram of a computer-readable storage medium according to an embodiment of the present disclosure is shown. The computer readable storage medium 60 has stored therein program code that can be invoked by a processor to perform the methods described in the above embodiments of the transformer fault detection method.
The computer-readable storage medium 60 may be an electronic memory such as a flash memory, an EEPROM (Electrically Erasable Programmable Read-Only Memory), an EPROM, a hard disk or a ROM. Optionally, the computer-readable storage medium 60 includes a non-transitory computer-readable medium. The computer-readable storage medium 60 has storage space for program code 62 that performs any of the method steps described above. The program code can be read from or written to one or more computer program products. The program code 62 may be compressed, for example, in a suitable form.
In the description herein, reference to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above are not necessarily intended to refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of transformer fault detection, the method comprising:
acquiring voiceprint data of a transformer to be detected;
inputting the voiceprint data of the transformer to be detected into a fault detection model, and outputting a fault mode of the transformer;
the fault detection model is obtained by training a convolutional neural network model with a multi-source transformer voiceprint data sample set and an optimization objective function; the optimization objective function comprises an intrinsic distribution loss function, an adversarial training loss function and a cross-entropy loss function.
2. The method of claim 1, wherein building the fault detection model comprises:
acquiring the transformer voiceprint data sample set; wherein the voiceprint data sample set comprises a plurality of transformer voiceprint data samples, the transformer voiceprint data samples comprising a fault voiceprint data sample and a normal voiceprint data sample;
respectively performing intrinsic distribution generalization and external generalization on the transformer voiceprint data sample set to construct the intrinsic distribution loss function and the adversarial training loss function;
constructing the optimization objective function according to the intrinsic distribution loss function, the adversarial training loss function and the cross-entropy loss function;
constructing a convolutional neural network model;
and updating the weight parameters of the convolutional neural network model until convergence by minimizing an optimization objective function so as to obtain the fault detection model.
3. The method of claim 2, wherein said constructing the intrinsic distribution loss function by performing the intrinsic distribution generalization on the transformer voiceprint data sample set comprises:
based on a metric learning algorithm and according to working conditions, carrying out cluster analysis on a plurality of transformer voiceprint data samples;
constructing a triplet set from the clustered transformer voiceprint data samples;
and establishing a triplet loss function from the triplet set so as to form the intrinsic distribution loss function.
4. The method of claim 2, wherein the externally generalizing the set of transformer voiceprint data samples to construct the adversarial training loss function comprises:
adding variation factors to the transformer voiceprint data samples in the transformer voiceprint data sample set to form a varied transformer voiceprint data sample set;
and performing min-max training of the convolutional neural network model and a discriminator with the transformer voiceprint data sample set and the varied transformer voiceprint data sample set respectively, and constructing a binary cross entropy between a predicted probability distribution and a true probability distribution so as to form the adversarial training loss function.
5. The method of claim 4, wherein the variation factors include Gaussian noise and amplitude drift; and adding the variation factors to the transformer voiceprint data samples in the transformer voiceprint data sample set comprises:
adding the Gaussian noise to the transformer voiceprint data samples in the transformer voiceprint data sample set, and performing amplitude drift processing.
6. The method of claim 5, wherein the performing min-max training of the convolutional neural network model and the discriminator with the transformer voiceprint data sample set and the varied transformer voiceprint data sample set respectively, to construct the binary cross entropy between the predicted probability distribution and the true probability distribution, comprises:
inputting the transformer voiceprint data sample set and the varied transformer voiceprint data sample set into a feature extractor in the convolutional neural network model for feature extraction, the results being recorded as a first feature and a second feature respectively;
performing a min-max operation on the first feature and the second feature with the discriminator;
and constructing the binary cross entropy between the predicted probability distribution and the true probability distribution from the result of the operation.
7. The method of any one of claims 1-6, wherein the convolutional neural network model comprises a feature extractor and a classifier; the cross entropy loss function is a cross entropy loss function of the classifier.
8. A transformer fault detection device, characterized in that the device comprises:
the voiceprint data acquisition module is used for acquiring voiceprint data of the transformer to be detected;
the fault mode output module is used for inputting the voiceprint data of the transformer to be detected into the fault detection model and outputting the fault mode of the transformer;
the fault detection model is obtained by training a convolutional neural network model with a multi-source transformer voiceprint data sample set and an optimization objective function; the optimization objective function includes an intrinsic distribution loss function, an adversarial training loss function and a cross-entropy loss function.
9. A terminal device, comprising:
a memory; one or more processors coupled with the memory; one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications configured to perform the method of any of claims 1-7.
10. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1 to 7.
CN202211398183.8A 2022-11-09 2022-11-09 Transformer fault detection method and device, terminal equipment and readable storage medium Pending CN115758078A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211398183.8A CN115758078A (en) 2022-11-09 2022-11-09 Transformer fault detection method and device, terminal equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211398183.8A CN115758078A (en) 2022-11-09 2022-11-09 Transformer fault detection method and device, terminal equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN115758078A (en)

Family

ID=85368541

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211398183.8A Pending CN115758078A (en) 2022-11-09 2022-11-09 Transformer fault detection method and device, terminal equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN115758078A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination