CN114722721A - Power transformer health state evaluation method based on deep coding and decoding convolutional network - Google Patents


Info

Publication number
CN114722721A
CN114722721A
Authority
CN
China
Prior art keywords
power transformer
deep
data
model
convolutional network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210431641.7A
Other languages
Chinese (zh)
Inventor
孙强
蒋硕文
张孝金
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangsu Huachen Transformer Co ltd
Xian University of Technology
Original Assignee
Jiangsu Huachen Transformer Co ltd
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangsu Huachen Transformer Co ltd, Xian University of Technology filed Critical Jiangsu Huachen Transformer Co ltd
Priority to CN202210431641.7A
Publication of CN114722721A
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 30/00 — Computer-aided design [CAD]
    • G06F 30/20 — Design optimisation, verification or simulation
    • G06F 30/27 — Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G06N 3/045 — Combinations of networks
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/08 — Learning methods
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 10/00 — Administration; Management
    • G06Q 10/04 — Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06Q — INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q 50/00 — Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q 50/06 — Energy or water supply
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 — INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S — SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S 10/00 — Systems supporting electrical power generation, transmission or distribution
    • Y04S 10/50 — Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications


Abstract

The invention provides a power transformer health state evaluation method based on a deep coding and decoding convolutional network. Multi-attribute monitoring data of the power transformer are taken as information input, and a nonlinear mapping model from monitoring data to health state is learned in an end-to-end manner through a symmetric U-Net convolutional network consisting of an encoder and a decoder. As a primarily data-driven assessment method, it helps raise the intelligence level of power transformer health state evaluation in the era of digital economic development and provides core technical support for a digital operation and maintenance system for power transformers. By automatically generating the power transformer health state evaluation learning model in an end-to-end, data-driven manner through the encoding and decoding process of the U-Net network, the method helps improve the accuracy of transformer health state evaluation and reduces the dependence on the experience of professional technicians.

Description

Power transformer health state evaluation method based on deep coding and decoding convolutional network
Technical Field
The invention belongs to the technical field of deep learning application, and particularly relates to a power transformer health state evaluation method based on a deep coding and decoding convolutional network.
Background
Power transformers are key devices in power systems; their health has a crucial impact on the reliable operation of the overall power system, which in turn affects production and living conditions in the associated area. Real-time, accurate evaluation of the health state of a power transformer, with timely discovery of hidden health dangers and elimination of faults, is therefore of great practical significance for ensuring the safe and stable operation of the power system.
In the digital era, deep learning techniques from the field of artificial intelligence can be fully exploited: the mapping from a data set consisting of various monitoring data of the power transformer to its health state is learned automatically through end-to-end deep network training, yielding a nonlinear learning model and laying a solid technical foundation for moving power transformer operation and maintenance toward the goals of comprehensive digitization and intelligence.
At present, most work on power transformer health state evaluation proceeds from a health index or rating table: an empirical weight factor must be preset for each class of parameters summarized by industry experts relying on professional experience. This is inherently subjective, and a uniform analysis result is difficult to achieve. A real-time, automatic analysis method is therefore needed that completes the health state assessment from the equipment's own condition-monitoring data and environmental-state data. Furthermore, indexes closely related to the safe and stable operation of the power transformer should be input as parameters into an intelligent health state evaluation model, with the output result serving as the basis for judging the equipment's health state grade.
Therefore, in order to solve the above technical problems, it is necessary to provide a power transformer health status evaluation method based on a deep codec convolutional network.
Disclosure of Invention
The invention aims to provide a power transformer health state evaluation method based on a deep coding and decoding convolutional network, so as to solve the problems.
In order to achieve the above object, an embodiment of the present invention provides the following technical solutions:
the method for evaluating the health state of the power transformer based on the deep coding and decoding convolutional network comprises the following steps:
s1, acquiring multi-attribute sensor monitoring time sequence data of the power transformer and the environment thereof under different health conditions to form a data set with a health state label;
s2, dividing the data into a training set and a test set, and further dividing the training set into a model training set and a model verification set;
s3, inputting the model training set into a U-Net convolution network with deep coding and decoding functions to train a transformer health state deep learning model M;
s4, inputting the model verification set data into the deep learning model M, evaluating the performance of the model, and generating the deep learning model with super-parameter updating
Figure BDA0003610898680000021
S5, inputting the test data set into a deep learning model
Figure BDA0003610898680000022
In (1), obtain the corresponding health status level Lk,k∈{R,O,Y,G}。
In step S1, the multi-attribute sensor time-series monitoring data include coil-temperature and core-temperature measurement from the optical-fiber temperature controller of the dry-type transformer, optical sudden-change monitoring at electrical connection points, partial-discharge monitoring, operating-environment temperature and humidity measurement, transformer-vibration and ground-vibration monitoring, and environmental-noise monitoring.
In step S1, the multi-sensor data collected at each time point t are regarded as one observation record of the power transformer, written as a vector d_t. For each power transformer, the data over the whole observation period T are concatenated to form data D_T = {d_1, d_2, ..., d_T} consisting of short-term and long-term observation records, and for all in-service transformers of the same model a data set Ω with health-state labels is obtained.
In step S2, the whole data set Ω is divided into two parts: 80% as the training set and 20% as the test set. The training set is subdivided again: 80% is used for deep-model training and 20% for model-performance verification, dynamically tracking the change of the training-loss and validation-loss values so as to monitor the training process and prevent overfitting of the trained model.
The U-Net convolutional network used in step S3 comprises an encoder and a decoder. The encoder applies several successive convolution and pooling operations, mainly realizing semantic feature extraction from the transformer sensing data; the decoder applies up-sampling operations, which help propagate information from low-resolution to high-resolution levels, thereby realizing a uniform local-to-global characterization of the implicit features in the different attribute data.
In the convolution operations of step S3, the convolution kernel is one-dimensional; every two convolution operations are followed by a pooling step, which is essentially a down-sampling operation. Maximum pooling is adopted with step size 1, and the post-convolution features are processed along the longitudinal direction.
In step S3, the up-sampling factor used in the present invention is 2, corresponding to the stride of the pooling operation, and the convolution kernel is the same as in the encoder. In this stage, after every two convolution operations, the high-resolution information from the encoding stage is symmetrically copied and concatenated onto the corresponding decoding result, propagating the encoder-stage features to this stage and giving a more comprehensive discriminative characterization of the features of each attribute of data sensed from the transformer.
After the feature information has been extracted, the output layer of the whole network applies the Softmax activation function, which maps the real-valued results output by the model to the interval [0, 1], representing a probability distribution. The expression of the function is:

p_j = exp(z_j) / Σ_{k=1}^{K} exp(z_k)

where z_j represents the real-valued result corresponding to the j-th state.
In step S3, in view of the performance status of the power transformer in actual operation, cross-entropy loss functions with different weights are used for the different health states, which facilitates faster convergence in deep-model training and helps improve the training performance of the model.
The expressions of the cross-entropy loss function in step S3 of the present invention are:

L = -Σ_i y_i log(p_i)

and, with a weight w_i assigned to each state,

L_w = -Σ_i w_i y_i log(p_i)

where y_i indicates the class label corresponding to state i (1 if true, 0 otherwise) and p_i indicates the predicted probability that state i holds.
Compared with the prior art, the invention has the following advantages:
the health state evaluation learning model of the power transformer is automatically generated in an end-to-end mode through the encoding and decoding processes of the U-Net network, and the data-driven health state evaluation method of the power transformer is beneficial to improving the accuracy of the health state evaluation of the transformer and reducing the dependence on the experience of professional technicians.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a power transformer health status evaluation method based on a deep coding/decoding convolutional network according to an embodiment of the present invention;
FIG. 2 is a schematic diagram illustrating a U-Net network structure of a power transformer health status evaluation method based on a deep codec convolutional network according to an embodiment of the present invention.
Detailed Description
The present invention will be described in detail below with reference to the embodiments shown in the drawings. The present invention is not limited to these embodiments; structural, methodological, or functional changes made by those skilled in the art according to these embodiments are all included in the scope of the present invention.
The invention discloses a power transformer health state evaluation method based on a deep coding and decoding convolutional network, comprising five steps. In the first step, the multi-attribute sensing data of the power transformer involve coil-temperature and core-temperature measurement from the optical-fiber temperature controller of the dry-type transformer, optical sudden-change monitoring at electrical connection points, partial-discharge monitoring, operating-environment temperature and humidity measurement, transformer-vibration and ground-vibration monitoring, and environmental-noise monitoring. All attribute data are acquired over the same observation period at the same time points, forming time-series data related to the operating condition of the power transformer; these time series are taken as input to the deep U-Net network. The time-series data collected from the multi-attribute sensors of the power transformer and its environment under different health conditions form a data set with health-state labels.
Referring to fig. 1, for each time point t the collected multi-sensor data are regarded as one observation record of the power transformer, written as a vector d_t. For each power transformer, the data over its whole observation period T are concatenated to form data D_T = {d_1, d_2, ..., d_T} consisting of short-term and long-term observation records; thus, for all in-service transformers of the same model, a data set with health-state labels is obtained. For each observation d_t, a health state label L_k, k ∈ {R, O, Y, G}, at one of four levels must be assigned by experienced professionals, forming a large-scale data set of data records paired with their corresponding health-state labels. Here R indicates that the transformer's operating capacity is seriously degraded and it faces withdrawal from service; O indicates that the operating capacity is reduced and a detailed offline inspection or overhaul is needed; Y indicates that the operating state is normal but strict monitoring and timely maintenance are required; and G indicates normal operation under normal monitoring.
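As a sketch of this data-set construction (a hypothetical illustration, not the patent's code; the function and label names are assumptions), each per-time-point sensor vector d_t can be stacked into D_T and paired with its expert-assigned label:

```python
import numpy as np

# Map the four expert-assigned health levels to class indices.
LABELS = {"R": 0, "O": 1, "Y": 2, "G": 3}  # severe / degraded / watch / normal

def build_record(sensor_readings, label):
    """sensor_readings: list of T per-time-point vectors d_t; label in {R,O,Y,G}.
    Returns D_T (shape (T, n_attributes)) and the integer class label."""
    D_T = np.stack(sensor_readings)
    return D_T, LABELS[label]

def build_dataset(transformers):
    """transformers: iterable of (list_of_readings, label) pairs -> data set Ω."""
    data, labels = zip(*(build_record(r, l) for r, l in transformers))
    return list(data), list(labels)
```

The per-time-point vector length and the observation period T are free parameters of the sketch.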
In the second step, the data are divided into a training set and a test set, and the training set is further divided into a model training set and a model verification set. The training set contains sensing data together with the corresponding health-state labels, while for the test set only the sensing data are used; its health-state labels are estimated by the U-Net-based deep learning model.
The entire data set is divided into two large parts: 80% as the training set and 20% as the test set. The training set is then subdivided: 80% is used for deep-model training and 20% for model-performance verification, dynamically tracking the change of the training-loss and validation-loss values so as to monitor the training process and prevent overfitting; training is stopped before the loss on the verification set turns from falling to rising.
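The 80/20 and 80/20 partitioning described above can be sketched as follows (a minimal illustration with assumed function names; the patent specifies only the ratios):

```python
import random

def split_dataset(indices, seed=0):
    """Split sample indices 80/20 into train/test, then split the train
    part again 80/20 into model-training and validation subsets."""
    rng = random.Random(seed)       # fixed seed for a reproducible shuffle
    idx = list(indices)
    rng.shuffle(idx)
    n_test = len(idx) // 5          # 20% held-out test set
    test, train = idx[:n_test], idx[n_test:]
    n_val = len(train) // 5         # 20% of the train part for validation
    val, fit = train[:n_val], train[n_val:]
    return fit, val, test
```

For 100 samples this yields 64 for model training, 16 for validation, and 20 for testing.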
In the third step, the model training set is input into the U-Net convolutional network with deep coding and decoding functions to train the transformer health state deep learning model M. The U-Net network comprises an encoder and a decoder. The encoder applies several successive convolution and pooling operations, mainly realizing semantic feature extraction from the transformer sensing data; the decoder applies up-sampling operations, which help propagate information from low-resolution to high-resolution levels, realizing a uniform local-to-global characterization of the implicit features in the different attribute data. In this step, in view of the performance of the power transformer in actual operation, cross-entropy loss functions with different weights are adopted for the different health states, achieving faster convergence during deep-model training and improving the model's training performance.
Referring to fig. 2, the U-Net convolutional network architecture consists of two parts: a downward contraction path that extracts semantic information step by step and can be regarded as a semantic-information encoder, and an upward expansion path that propagates semantic information step by step and can be regarded as a semantic-information decoder.
In the encoder, each step consists of two convolutional layers and one pooling layer. The convolution operation uses a one-dimensional convolution kernel; two convolution operations are followed by a pooling step, which is essentially a down-sampling operation. Maximum pooling is adopted with step size 1, and the post-convolution features are processed along the longitudinal direction.
In the decoder, each step consists of one up-sampling layer and two convolutional layers. This expansion path is symmetric to the contraction path of the encoder stage, forming the U shape that gives the U-Net network its name. The up-sampling factor adopted is 2, corresponding to the stride of the pooling operation, and the convolution kernel is the same as in the encoder.
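The down-sampling, up-sampling, and symmetric copy-and-concatenate steps can be illustrated on toy 1-D feature sequences (a hypothetical sketch, not the patent's network; a factor-2 pool is used here so that the stated factor-2 up-sampling restores the original length):

```python
def max_pool_1d(x, k=2):
    """Factor-k max pooling over a 1-D feature sequence (down-sampling)."""
    return [max(x[i:i + k]) for i in range(0, len(x) - k + 1, k)]

def upsample_1d(x, k=2):
    """Factor-k nearest-neighbour up-sampling, mirroring the pooling step."""
    return [v for v in x for _ in range(k)]

def skip_concat(encoder_feat, decoder_feat):
    """Symmetric copy-and-concatenate of the encoder's high-resolution
    features onto the decoder output (the U-Net skip connection)."""
    return [list(pair) for pair in zip(encoder_feat, decoder_feat)]
```

A real implementation would interleave learned convolutions between these steps; the sketch shows only the resolution bookkeeping that makes the encoder and decoder paths symmetric.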
After the feature information has been extracted, the output layer of the whole network applies the Softmax activation function to map the real-valued results output by the model to the interval [0, 1], representing a probability distribution:

p_j = exp(z_j) / Σ_{k=1}^{K} exp(z_k)

where z_j denotes the real-valued result corresponding to the j-th state. Considering the actual situation of power transformers, the health state is divided into four levels, so j ∈ {1, 2, 3, 4} and the total number of states is K = 4. In effect, the Softmax function uses the non-negativity of the exponential function, with the denominator in the expression as a normalization factor, to map the model outputs into the [0, 1] interval.
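The Softmax mapping above can be written directly (a plain-Python sketch; subtracting the maximum for numerical stability is a standard addition not mentioned in the text):

```python
import math

def softmax(z):
    """Map real-valued scores z_j to probabilities p_j = e^{z_j} / Σ_k e^{z_k}.
    The max score is subtracted first for numerical stability."""
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]
```

With K = 4 states, the output is a four-element probability vector summing to 1.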
In application, one wants the probability value (confidence) of the health-state level corresponding to each set of monitoring indexes, which can be simply understood as the degree of confidence that the transformer belongs to the corresponding health state.
During operation of the U-Net network, a loss function is needed as the error measure between the training model's predictions and the actual labels. Since evaluating the health state of the power transformer is in fact a classification problem, the cross-entropy loss is adopted:

L = -Σ_i y_i log(p_i)

and, with a weight w_i assigned to each state,

L_w = -Σ_i w_i y_i log(p_i)

where y_i indicates the class label corresponding to state i (1 if true, 0 otherwise) and p_i indicates the predicted probability that state i holds.
The plain loss function expresses only whether the prediction for a single state of the power transformer is true. For the four states involved in the invention, different weights are assigned using these expressions: the two states labeled O and Y have low distinguishability and high uncertainty, so they may be assigned larger weights, while the two states labeled R and G are clearly distinguishable and may be assigned smaller weights.
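A weighted cross-entropy of this kind can be sketched as follows (the specific weight values in the example are illustrative assumptions; the patent states only that O and Y receive larger weights than R and G):

```python
import math

def weighted_cross_entropy(y, p, w):
    """L_w = -Σ_i w_i · y_i · log(p_i) over the four states (R, O, Y, G);
    one-hot labels y, predicted probabilities p, per-state weights w."""
    eps = 1e-12  # guard against log(0)
    return -sum(wi * yi * math.log(pi + eps) for wi, yi, pi in zip(w, y, p))
```

For a one-hot label, only the true state's term contributes, scaled by that state's weight.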
For the optimization in model training, the Adam optimizer is adopted, with the learning rate set in the range [0.0001, 0.001] and 40-60 epochs, yielding the trained model M (one epoch being one pass of training over all training samples).
In the fourth step, the model verification set data are input into the deep learning model M, and prediction results for the various states of the verification data are obtained by adjusting different hyper-parameters, mainly the learning rate and the number of epochs. As the error between the predicted and actual states (measured by the cross-entropy loss function) gradually decreases, the prediction accuracy rises. Usually, however, as the number of epochs increases, the loss on the verification set eventually rebounds from falling to rising; at that moment training should be stopped, and the generated model is recorded as M̂. The updated hyper-parameters are then used on the test data set.
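The stop-when-validation-loss-rebounds rule can be sketched as a simple early-stopping check (a hypothetical illustration; the patience parameter is an assumption not stated in the text):

```python
def early_stop(val_losses, patience=1):
    """Return the epoch index of the best (lowest) validation loss,
    scanning until the loss has failed to improve for more than
    `patience` consecutive epochs."""
    best, best_epoch, waited = float("inf"), 0, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, best_epoch, waited = loss, epoch, 0
        else:
            waited += 1
            if waited > patience:
                break  # validation loss has rebounded: stop training
    return best_epoch
```

The model saved at the returned epoch would correspond to M̂ in the text.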
In the fifth step, the test data set is normalized and input into the deep learning model M̂ to obtain the corresponding health state level L_k, k ∈ {R, O, Y, G}.
According to the technical scheme, the invention has the following beneficial effects:
the invention takes the multi-attribute monitoring data of the power transformer as information input, learns a nonlinear mapping model from the monitoring data to the health state in an end-to-end mode through two symmetrical information processing processes of an encoder and a decoder, is a power transformer health state assessment method which is mainly driven by data, is beneficial to improving the intelligent level of the power transformer health state assessment in the digital economic development and provides a core technical support for a digital operation and maintenance system of the power transformer.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned.
Furthermore, it should be understood that although this specification is described in terms of embodiments, not every embodiment contains only a single independent technical solution. This manner of description is for clarity only; those skilled in the art should take the specification as a whole, and the technical solutions in the embodiments may be suitably combined to form other embodiments understandable to those skilled in the art.

Claims (10)

1. The method for evaluating the health state of the power transformer based on the deep coding and decoding convolutional network is characterized by comprising the following steps:
s1, acquiring time series data monitored by the power transformer and the multi-attribute sensors of the environment of the power transformer under different health conditions to form a data set with a health state label;
s2, dividing the data into a training set and a test set, and further dividing the training set into a model training set and a model verification set;
s3, inputting the model training set into a U-Net convolution network with deep coding and decoding functions to train a transformer health state deep learning model M;
s4, inputting the data of the model verification set into the deep learning model M, evaluating the performance of the model and generating the deep learning model with super-parameter updating
Figure FDA0003610898670000011
S5, inputting the test data set into the deep learning model
Figure FDA0003610898670000012
In (1), obtain the corresponding health status level Lk,k∈{R,O,Y,G}。
2. The method for assessing the health status of a power transformer based on deep codec convolutional network of claim 1, wherein the monitoring time series data of the multi-attribute sensor in step S1 includes coil temperature and core temperature measurement of the optical fiber temperature controller of the dry-type transformer, optical sudden change monitoring of the electrical connection point, partial discharge monitoring, temperature and humidity measurement of the operating environment, vibration and ground vibration monitoring of the transformer, and environmental noise monitoring.
3. The method for evaluating the health status of a power transformer based on the deep codec convolutional network as claimed in claim 2, wherein the multi-sensor data collected at each time node t in step S1 are regarded as one observation record of the power transformer, written as a vector d_t; for each power transformer, the data over the whole observation period T are concatenated to form data D_T = {d_1, d_2, ..., d_T} consisting of short-term and long-term observation records, and for all in-service transformers of the same model a data set Ω with health-state labels is obtained.
4. The method for evaluating the health status of a power transformer based on a deep codec convolutional network as claimed in claim 1, wherein the whole data set Ω is divided into two parts in the step S2: 80% of the training set is used as a training set, 20% of the training set is used as a test set, the training set is subdivided into two parts, 80% of the training set is used for deep model training, 20% of the training set is used for model performance verification and is used for dynamically tracking the change of a training loss function value and a verification loss function value, the training process is monitored, and overfitting of the training model is prevented.
5. The method for evaluating the health status of the power transformer based on the deep coding-decoding convolutional network of claim 1, wherein the U-Net convolutional network used in step S3 includes an encoder part and a decoder part, and in the encoder part, a plurality of consecutive convolution operations and pooling operations are adopted, so as to mainly realize semantic feature extraction of transformer sensing data; in the decoder part, an up-sampling operation is adopted, which is helpful for propagating the information from a low resolution level to a high resolution level, thereby realizing the uniform characterization from local to global for the implicit characteristics in different attribute data.
6. The power transformer health state evaluation method based on the deep coding-decoding convolutional network as claimed in claim 5, wherein, for step S3, the convolution kernels of the deep encoder are one-dimensional, and every two convolution operations are followed by a pooling step, which is essentially a down-sampling operation. Max pooling is used with a step size of 1, and the convolved features are processed along the vertical direction.
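A minimal sketch of the encoder step of claim 6, assuming a plain 'valid' 1-D convolution and the claim's stride-1 max pooling; the input signal and kernel values are hypothetical.

```python
import numpy as np

def conv1d(x, kernel):
    """'Valid' one-dimensional convolution, as in the encoder of claim 6."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def max_pool1d(x, size=2, stride=1):
    """Max pooling as a down-sampling step; stride 1 follows the claim text."""
    return np.array([x[i:i + size].max() for i in range(0, len(x) - size + 1, stride)])

# two successive convolutions followed by one pooling step
x = np.array([1., 3., 2., 5., 4., 6.])
kernel = np.array([0.5, 0.5])   # illustrative averaging kernel, not from the patent
h = conv1d(conv1d(x, kernel), kernel)
pooled = max_pool1d(h)
```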
7. The power transformer health state evaluation method based on the deep coding-decoding convolutional network as claimed in claim 6, wherein the decoder involved in step S3 uses an up-sampling factor of 2, corresponding to the down-sampling rate of the pooling operation, and its convolution kernels are consistent with those of the encoder. In this stage, after every two convolution operations, the high-resolution information from the encoding stage is symmetrically copied and concatenated onto the corresponding decoding result, so that the encoder-stage features are propagated to this stage and a more comprehensive discriminative characterization of the features of each attribute data sensed by the transformer is obtained.
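The decoder step of claim 7 (up-sample by 2, then copy-and-concatenate the symmetric encoder feature) can be sketched as follows; nearest-neighbour repetition is assumed as the up-sampling scheme, and all feature values are hypothetical.

```python
import numpy as np

def upsample(x, factor=2):
    """Nearest-neighbour up-sampling by the given factor (claim 7 uses 2)."""
    return np.repeat(x, factor)

decoder_feat = np.array([0.2, 0.8])          # low-resolution decoder output
encoder_feat = np.array([1., 2., 3., 4.])    # matching encoder-stage feature

up = upsample(decoder_feat)                  # length 2 -> length 4
merged = np.concatenate([encoder_feat, up])  # skip connection: copy and concatenate
```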
8. The power transformer health state evaluation method based on the deep coding-decoding convolutional network as claimed in claim 7, wherein, after the feature information is extracted, the output layer of the whole network applies the Softmax activation function to map the real-valued outputs of the model into the interval [0, 1] representing a probability distribution; the function is expressed as

Softmax(z_j) = e^{z_j} / Σ_{k=1}^{K} e^{z_k}

where z_j denotes the real-valued output corresponding to the j-th state and K is the number of health states.
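The Softmax mapping of claim 8 in a few lines of numpy; the three logit values stand for three hypothetical health states.

```python
import numpy as np

def softmax(z):
    """Softmax of claim 8: maps real-valued outputs z_j to probabilities in [0, 1]."""
    e = np.exp(z - z.max())   # subtract the max for numerical stability
    return e / e.sum()

p = softmax(np.array([2.0, 1.0, 0.1]))  # three hypothetical health states
```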
9. The power transformer health state evaluation method based on the deep coding-decoding convolutional network as claimed in claim 8, wherein in step S3, according to the performance states of the power transformer in actual operation, cross-entropy loss functions with different weights are used for the different health states, so that faster convergence is achieved during deep-model training, which helps improve the training performance of the model.
10. The power transformer health state evaluation method based on the deep coding-decoding convolutional network as claimed in claim 9, wherein the cross-entropy loss function in step S3 is expressed as

L = -Σ_{i=1}^{K} w_i · y_i · log(p_i)

where y_i is the class label corresponding to state i (1 if state i is the true state, 0 otherwise), p_i is the predicted probability that state i holds, and w_i is the weight assigned to state i.
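The weighted cross-entropy of claims 9 and 10 can be sketched as below; the one-hot label, predicted probabilities, and per-state weights are illustrative values, not from the patent.

```python
import numpy as np

def weighted_cross_entropy(y, p, w):
    """L = -sum_i w_i * y_i * log(p_i), with per-state weights w_i (claims 9-10)."""
    return -np.sum(w * y * np.log(p))

y = np.array([0., 1., 0.])     # one-hot label: true state is state 1
p = np.array([0.2, 0.7, 0.1])  # predicted probabilities (e.g. from Softmax)
w = np.array([1.0, 2.0, 1.0])  # heavier weight on a rarer health state
loss = weighted_cross_entropy(y, p, w)
```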
CN202210431641.7A 2022-04-22 2022-04-22 Power transformer health state evaluation method based on deep coding and decoding convolutional network Pending CN114722721A (en)


Publications (1)

Publication Number Publication Date
CN114722721A true CN114722721A (en) 2022-07-08


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115879345A (en) * 2022-12-14 2023-03-31 Lanzhou University of Technology Transformer health state assessment method and system based on magnetic force sound
CN115879345B (en) * 2022-12-14 2023-11-03 Lanzhou University of Technology Transformer health state assessment method and system based on magnetic force sound


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination