CN115016966A - Measurement automation system fault prediction method and device based on Transformer and storage medium - Google Patents

Measurement automation system fault prediction method and device based on Transformer and storage medium

Info

Publication number
CN115016966A
Authority
CN
China
Prior art keywords
transformer
fault prediction
data
operation data
automation system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210609165.3A
Other languages
Chinese (zh)
Inventor
蔡乾乾
孙勇
化振谦
阙华坤
危阜胜
李经儒
胡皓鹏
潘峰
张捷
李健
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Power Grid Co Ltd
Measurement Center of Guangdong Power Grid Co Ltd
Original Assignee
Guangdong Power Grid Co Ltd
Measurement Center of Guangdong Power Grid Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Power Grid Co Ltd, Measurement Center of Guangdong Power Grid Co Ltd filed Critical Guangdong Power Grid Co Ltd
Priority to CN202210609165.3A priority Critical patent/CN115016966A/en
Publication of CN115016966A publication Critical patent/CN115016966A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/07Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0703Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
    • G06F11/0751Error or fault detection not based on redundancy
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04SSYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S10/00Systems supporting electrical power generation, transmission or distribution
    • Y04S10/50Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Biophysics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses a Transformer-based fault prediction method, device and storage medium for a metering automation system. The method acquires real-time operation data of the metering automation master station system and eliminates abnormal data points from the operation data; the operation data are then input into a pyramid-type Transformer-based fault prediction model, which outputs a fault prediction result. The technical scheme of the invention realizes accurate prediction of metering automation system faults.

Description

Measurement automation system fault prediction method and device based on Transformer and storage medium
Technical Field
The invention relates to the technical field of metering system fault prediction, in particular to a method and a device for metering automation system fault prediction based on a Transformer and a storage medium.
Background
Currently, fault diagnosis techniques for the metering automation master station system fall into two main categories: knowledge-based methods and data-mining-based methods. Knowledge-based methods rely on the working experience of field operation and maintenance personnel: by analyzing the characteristics and rules of the metering automation master station's operation data and designing suitable algorithms that simulate the reasoning process of human experts, an expert knowledge base is established and fault detection is realized. Data-mining-based methods use quantitative analysis to mine the relationship between the data and faults and build a mathematical model from the extracted data characteristics to detect faults.
As the metering automation master station system continues to grow in scale, faults occur more and more frequently and the daily operation and maintenance workload keeps increasing, so relying on manual maintenance can no longer meet the requirements. Knowledge-based methods depend on manual experience, are strongly affected by subjectivity, may yield different results depending on the skill level of the operation and maintenance personnel, and are therefore difficult to use for effective fault prediction. Traditional data-mining-based methods rely on manually designed features: the workload is large, feature design requires rich experience and is difficult, and the prediction accuracy is low. In addition, such methods generalize poorly; they can only solve a single task and are difficult to migrate to other tasks.
Disclosure of Invention
The invention provides a method, a device and a storage medium for predicting the faults of a metering automation system based on a Transformer, which realize accurate prediction of the faults of the metering automation system.
An embodiment of the invention provides a measurement automation system fault prediction method based on a Transformer, which comprises the following steps:
acquiring real-time operation data of a metering automation master station system, and eliminating abnormal data points in the operation data;
and inputting the operation data information into a pyramid type Transformer-based fault prediction model, and outputting a fault prediction result by the Transformer-based fault prediction model.
Further, the operation data comprises IP information data, network equipment information data, network port information data and host equipment information data;
the failure prediction result comprises hardware equipment failure and system access failure.
Further, the Transformer-based fault prediction model is trained according to the following steps:
clustering historical operating data by adopting a K-means algorithm, and removing abnormal data in the operating data according to a clustering result to obtain a first operating data set;
carrying out data balance processing on the first operation data set by an undersampling method to obtain a second operation data set;
and training by adopting the second operation data set and the Transformer-based fault prediction model.
Further, the establishing process of the fault prediction model based on the Transformer is as follows:
establishing a Transformer model, wherein the Transformer model is formed by stacking a plurality of pairs of encoders and decoders;
cascading a plurality of the Transformer models according to a pyramid framework, and gradually reducing the data dimension of each Transformer model according to the arrangement order of the Transformer models, to obtain a pyramid-type Transformer-based fault prediction model.
Further, an Embedding layer is adopted in an encoder of the Transformer model to encode the input data into fixed-length vectors, and the fixed-length vectors are then input into a Multi-Head Attention module to calculate attention weights; the Multi-Head Attention module comprises a plurality of Self-Attention modules, and each Self-Attention module outputs a weighted feature vector.
Further, the weighted feature vector output by the Self-Attention module is calculated according to the following formula:

Y = Attention(Q, K, V) = softmax(Q·K^T / √d_k)·V

wherein Q, K and V are the query vector, key vector and value vector respectively, Attention denotes the Attention module, d_k is the length of the input vector, softmax is the activation function, and Y is the weighted feature vector.
Further, in the encoder and decoder pair, the decoder has one more Multi-Head Attention module than the encoder; the query vector of the additional Multi-Head Attention module comes from the previous output of the decoder, and the key vector and value vector of the additional Multi-Head Attention module come from the output of the encoder.
Further, the weighted feature vector output by the Self-Attention module is input to a feedforward neural network module for encoding, and encoding calculation is performed according to the following formula:
FFN(Y) = max(0, Y·W1 + b1)·W2 + b2

wherein FFN denotes the feedforward neural network, which comprises two fully connected layers; W1 and b1 denote the weight and bias of the first fully connected layer, and W2 and b2 denote the weight and bias of the second fully connected layer.
The invention provides a measuring automation system fault prediction device based on a Transformer, which comprises an operation data acquisition module and a fault prediction module;
the operation data acquisition module is used for acquiring real-time operation data of the metering automation master station system and eliminating abnormal data points in the operation data;
the fault prediction module is used for inputting the operation data information into a Transformer-based fault prediction model, and the Transformer-based fault prediction model outputs a fault prediction result.
Another embodiment of the present invention provides a readable storage medium, where the readable storage medium includes a stored computer program, and when the computer program is executed, the computer program controls a device on which the readable storage medium is located to execute the method for predicting a failure of a measurement automation system based on a Transformer according to any one of the method embodiments of the present invention.
The embodiment of the invention has the following beneficial effects:
the invention provides a method, a device and a storage medium for predicting faults of a measurement automation system based on a Transformer. Therefore, the accurate fault prediction result can be obtained by inputting the real-time acquired operation data into the pyramid type fault prediction model based on the Transformer.
Drawings
Fig. 1 is a schematic flowchart of a method for predicting a failure of a Transformer-based metering automation system according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a Transformer-based measurement automation system fault prediction apparatus according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a Transformer model of a Transformer-based measurement automation system fault prediction method according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a pyramid-type Transformer-based failure prediction model of a Transformer-based metering automation system failure prediction method according to an embodiment of the present invention.
Detailed Description
The technical solutions in the present invention will be described clearly and completely with reference to the accompanying drawings, and it is obvious that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, a method for predicting a failure of a measurement automation system based on a Transformer according to an embodiment of the present invention includes the following steps:
step S101: the method comprises the steps of obtaining real-time operation data of a metering automation master station system, and eliminating abnormal data points in the operation data. The operation data comprises IP information data, network equipment information data, network port information data and host equipment information data.
Step S102: and inputting the operation data information into a pyramid type Transformer-based fault prediction model, and outputting a fault prediction result by the Transformer-based fault prediction model. The failure prediction result comprises hardware equipment failure and system access failure, wherein the hardware equipment failure and the system access failure comprise but are not limited to hardware performance reaching a bottleneck, code bug on software, system version inconsistent with the outside, excessive external access causing system jamming, and external failure calling a function module causing data exception. And reporting the fault prediction result output by the fault prediction model based on the Transformer.
As an embodiment, the Transformer-based fault prediction model is trained according to the following steps:
clustering historical operating data by adopting a K-means algorithm, and removing abnormal data in the operating data according to a clustering result to obtain a first operating data set;
carrying out data balance processing on the first operation data set by an undersampling method to obtain a second operation data set;
and training by adopting the second operation data set and the Transformer-based fault prediction model.
Because fault data are far scarcer than normal data, directly feeding them into the neural network for training would severely bias the model and prevent correct fault prediction. Therefore, a certain number of samples are randomly drawn from the normal data by undersampling and combined with the fault data to form a new data set; the number of drawn samples is determined by the number of fault samples so that the ratio of the two classes reaches 1:1.
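A minimal sketch of this preprocessing is given below, using scikit-learn's KMeans; the number of clusters, the distance-quantile rule used to flag abnormal points, and the 0/1 label convention are assumptions for illustration rather than parameters fixed by the embodiment.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_training_set(X, y, n_clusters=8, quantile=0.99, seed=0):
    """K-means outlier removal followed by 1:1 undersampling of the normal class (y: 0 = normal, 1 = fault)."""
    rng = np.random.default_rng(seed)

    # Cluster the historical operation data and drop points far from their cluster center.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X)
    dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    keep = dist <= np.quantile(dist, quantile)          # assumed abnormal-point rule
    X1, y1 = X[keep], y[keep]                           # "first operation data set"

    # Undersample the normal class so that normal : fault = 1 : 1.
    fault_idx = np.flatnonzero(y1 == 1)
    normal_idx = rng.choice(np.flatnonzero(y1 == 0), size=len(fault_idx), replace=False)
    idx = np.concatenate([fault_idx, normal_idx])       # "second operation data set"
    return X1[idx], y1[idx]
```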
Unlike conventional CNN or RNN models, the Transformer model is built on an attention mechanism; its structure is shown in fig. 3. The Transformer reduces the distance between any two positions in the sequence to a constant instead of using the sequential structure of an RNN. As a result, the computation at time t no longer depends on the result at time t-1, so the model has better parallelism, avoids the information loss caused by sequential computation, and alleviates the long-term dependence problem.
As an embodiment, the creating process of the fault prediction model based on the Transformer is as follows:
step A01: creating a Transformer model, wherein the Transformer model is formed by stacking a plurality of pairs of encoders and decoders. Encoding input data into a vector X with a fixed length by adopting an Embedding layer in an encoder of the Transformer model, and inputting the vector X with the fixed length into a Multi-Head Attention module to calculate Attention weight; the Multi-Head-orientation module comprises a plurality of Self-orientation modules, and each Self-orientation module outputs a weighted feature vector. The Multi-Head-orientation module is composed of a plurality of Self-orientation modules, and each Self-orientation module outputs a weighted feature vector Y:
Y = Attention(Q, K, V) = softmax(Q·K^T / √d_k)·V    (1)

wherein Q, K and V are the query vector, key vector and value vector respectively, Attention denotes the Attention module, d_k is the length of the input vector, and softmax is the activation function, whose expression for an input x is:

softmax(x_i) = e^(x_i) / Σ_j e^(x_j)    (2)

First a score, score = Q·K^T, is calculated for each input vector X; to keep the gradient stable, the scores are normalized by dividing them by √d_k, where d_k is the length of the input vector X. The softmax function of formula (2) then maps each score to a weight between 0 and 1; each weight is multiplied by the corresponding value vector V and the weighted vectors are summed, which yields formula (1) and the output Y.
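A short PyTorch sketch of the scaled dot-product attention of formulas (1) and (2) is given below; the tensor shapes used in the example are assumptions for illustration only.

```python
import math
import torch

def self_attention(Q, K, V):
    """Y = softmax(Q·K^T / sqrt(d_k))·V — scaled dot-product Self-Attention."""
    d_k = Q.size(-1)                                     # length of the input vectors
    scores = Q @ K.transpose(-2, -1) / math.sqrt(d_k)    # score = Q·K^T, scaled for gradient stability
    weights = torch.softmax(scores, dim=-1)              # map each score to a weight in (0, 1)
    return weights @ V                                   # weighted sum over the value vectors

# Example: a batch of 2 sequences of length 10 with d_k = 64 (shapes assumed).
Q = K = V = torch.randn(2, 10, 64)
Y = self_attention(Q, K, V)   # -> shape (2, 10, 64)
```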
After the output Y is obtained, it is sent to a Feed-Forward Neural Network (FFN) module for further encoding. This module consists of two fully connected layers: the first layer uses a ReLU activation function and the second layer uses a linear activation function. The encoding is calculated as shown in formula (3):

FFN(Y) = max(0, Y·W1 + b1)·W2 + b2    (3)

wherein FFN denotes the feedforward neural network, which comprises two fully connected layers; W1 and b1 denote the weight and bias of the first fully connected layer, and W2 and b2 denote the weight and bias of the second fully connected layer. In the encoder-decoder pair, the decoder has one more Multi-Head Attention module than the encoder, which is also called the Encoder-Decoder Attention module. The difference from the other Attention modules is that the query vector Q of this additional Multi-Head Attention module comes from the previous output of the decoder, while its key vector K and value vector V come from the output of the encoder.
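The sketch below illustrates the feedforward module of formula (3) and shows how the Encoder-Decoder Attention can reuse the attention function from the previous sketch, with Q taken from the decoder state and K, V from the encoder output; the layer sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class FeedForward(nn.Module):
    """FFN(Y) = max(0, Y·W1 + b1)·W2 + b2 — two fully connected layers, ReLU then linear."""
    def __init__(self, d_model=64, d_hidden=256):
        super().__init__()
        self.fc1 = nn.Linear(d_model, d_hidden)   # W1, b1
        self.fc2 = nn.Linear(d_hidden, d_model)   # W2, b2

    def forward(self, y):
        return self.fc2(torch.relu(self.fc1(y)))

# Encoder-Decoder Attention: Q comes from the decoder's previous output,
# K and V come from the encoder output (self_attention is the function sketched above).
decoder_state = torch.randn(2, 10, 64)
encoder_out = torch.randn(2, 10, 64)
# context = self_attention(decoder_state, encoder_out, encoder_out)
ffn_out = FeedForward()(decoder_state)   # -> shape (2, 10, 64)
```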
Step A02: cascading the plurality of Transformer models according to a pyramid framework, and gradually reducing the data dimension of each Transformer model according to the arrangement sequence of the plurality of Transformer models to obtain a pyramid-type fault prediction model based on transformers. As shown in fig. 3, a plurality of transform models are cascaded according to a pyramid framework to construct a pyramid-type transform-based failure prediction model, where each stage is an encoder module of a transform model. The initial dimension of input data is H multiplied by W, the dimension of the input data is reduced by 4 times through the first stage, and the dimension of the data is reduced by 2 times sequentially through the following three stages, so that the features of different scales and different layers are obtained, and a better prediction effect is obtained. In addition, the architecture of gradual contraction gradually shortens the sequence length output by the Transformer along with the deepening of the network, so that a large amount of computing resources can be saved.
The invention adopts the pyramid-type Transformer-based fault prediction model for fault monitoring; it can automatically extract features from complex data, saving a large amount of manual work, and its prediction accuracy is higher than that of traditional machine learning methods. Abnormal data are eliminated with the K-means algorithm, which reduces the influence of outliers on fault monitoring, and an undersampling method is used to construct a balanced data set, which avoids the influence of excessive redundant information.
For the large-scale metering automation master station data with much redundant information, the method eliminates abnormal data points with the K-means algorithm and constructs a balanced data set by undersampling during training, which ensures sample balance and improves model performance; during prediction, only the abnormal points are eliminated. By combining the pyramid structure with the Transformer structure, global features of different scales and different levels can be obtained, realizing more accurate fault prediction.
On the basis of the above embodiment of the invention, the present invention correspondingly provides an embodiment of the apparatus, as shown in fig. 2;
the invention provides a measuring automation system fault prediction device based on a Transformer, which comprises an operation data acquisition module and a fault prediction module, wherein the operation data acquisition module is used for acquiring operation data of a measuring automation system;
the operation data acquisition module is used for acquiring real-time operation data of the metering automation master station system and eliminating abnormal data points in the operation data;
the fault prediction module is used for inputting the operation data information into a Transformer-based fault prediction model, and the Transformer-based fault prediction model outputs a fault prediction result.
For convenience and brevity of description, the apparatus embodiment of the present invention incorporates all the embodiments of the Transformer-based measurement automation system fault prediction method described above, which are not repeated here.
On the basis of the embodiment of the invention, the invention correspondingly provides an embodiment of a readable storage medium; another embodiment of the present invention provides a readable storage medium, which includes a stored computer program, and when the computer program is executed, the computer program controls a device on which the readable storage medium is located to perform the method for predicting a failure of a Transformer-based metering automation system according to any one of the method embodiments of the present invention.
Illustratively, the computer program may be partitioned into one or more modules that are stored in the memory and executed by the processor to implement the invention. The one or more modules may be a series of computer program instruction segments capable of performing specific functions, which are used for describing the execution process of the computer program in the terminal device.
The terminal device can be a desktop computer, a notebook, a palm computer, a cloud server and other computing devices. The terminal device may include, but is not limited to, a processor, a memory.
The Processor may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. The general-purpose processor may be a microprocessor or any conventional processor; the processor is the control center of the terminal device, and various interfaces and lines are used to connect the parts of the whole terminal device.
The memory may be used for storing the computer programs and/or modules, and the processor implements the various functions of the terminal device by running or executing the computer programs and/or modules stored in the memory and calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the device (such as audio data, a phonebook, etc.), and the like. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, an internal memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
Wherein, the terminal device integrated module/unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, can be stored in a computer readable storage medium (i.e. the above readable storage medium). Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like.
It should be noted that the above-described device embodiments are merely illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. In addition, in the drawings of the embodiment of the apparatus provided by the present invention, the connection relationship between the modules indicates that there is a communication connection between them, and may be specifically implemented as one or more communication buses or signal lines. One of ordinary skill in the art can understand and implement it without inventive effort.
While the foregoing is directed to the preferred embodiment of the present invention, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention.
It will be understood by those skilled in the art that all or part of the processes of the above embodiments may be implemented by hardware related to instructions of a computer program, and the computer program may be stored in a computer readable storage medium, and when executed, may include the processes of the above embodiments. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.

Claims (10)

1. A measurement automation system fault prediction method based on a Transformer is characterized by comprising the following steps:
acquiring real-time operation data of a metering automation master station system, and eliminating abnormal data points in the operation data;
and inputting the operation data information into a pyramid type Transformer-based fault prediction model, and outputting a fault prediction result by the Transformer-based fault prediction model.
2. The Transformer-based metering automation system failure prediction method of claim 1, wherein the operational data comprises IP information data, network device information data, network port information data, and host device information data;
the failure prediction result comprises hardware equipment failure and system access failure.
3. The Transformer-based metering automation system fault prediction method of claim 2, characterized in that the Transformer-based fault prediction model is trained according to the following steps:
clustering historical operating data by adopting a K-means algorithm, and removing abnormal data in the operating data according to a clustering result to obtain a first operating data set;
carrying out data balance processing on the first operation data set by an undersampling method to obtain a second operation data set;
and training by adopting the second operation data set and the Transformer-based fault prediction model.
4. The Transformer-based metering automation system fault prediction method according to claim 3, wherein the Transformer-based fault prediction model is established by the following process:
establishing a Transformer model, wherein the Transformer model is formed by stacking a plurality of pairs of encoders and decoders;
cascading a plurality of the Transformer models according to a pyramid framework, and gradually reducing the data dimension of each Transformer model according to the arrangement order of the Transformer models, to obtain a pyramid-type Transformer-based fault prediction model.
5. The Transformer-based metering automation system fault prediction method as claimed in claim 4, characterized in that an Embedding layer is adopted in an encoder of the Transformer model to encode the input data into fixed-length vectors, and the fixed-length vectors are then input into a Multi-Head Attention module to calculate attention weights; the Multi-Head Attention module comprises a plurality of Self-Attention modules, and each Self-Attention module outputs a weighted feature vector.
6. The Transformer-based metering automation system fault prediction method of claim 5, wherein the weighted feature vectors output by the Self-Attention module are calculated according to the following formula:
Y = Attention(Q, K, V) = softmax(Q·K^T / √d_k)·V

wherein Q, K and V are the query vector, key vector and value vector respectively, Attention denotes the Attention module, d_k is the length of the input vector, softmax is the activation function, and Y is the weighted feature vector.
7. The Transformer-based metering automation system failure prediction method of claim 6, wherein in the encoder and decoder pair, the decoder has one more Multi-Head Attention module than the encoder; the query vector of the additional Multi-Head Attention module comes from the previous output of the decoder, and the key vector and value vector of the additional Multi-Head Attention module come from the output of the encoder.
8. The Transformer-based metering automation system fault prediction method according to any one of claims 1 to 7, wherein the weighted feature vectors output by the Self-Attention module are input to a feedforward neural network module for encoding, and the encoding is calculated according to the following formula:
FFN(Y) = max(0, Y·W1 + b1)·W2 + b2
wherein FFN represents a feedforward neural network, and the feedforward neural network comprises two fully-connected layers; w1 and b1 denote the weight and offset of the first layer fully connected layer, and W2 and b2 denote the weight and offset of the second layer fully connected layer.
9. A measurement automation system fault prediction device based on a Transformer is characterized by comprising an operation data acquisition module and a fault prediction module;
the operation data acquisition module is used for acquiring real-time operation data of the metering automation master station system and eliminating abnormal data points in the operation data;
the fault prediction module is used for inputting the operation data information into a Transformer-based fault prediction model, and the Transformer-based fault prediction model outputs a fault prediction result.
10. A readable storage medium, comprising a stored computer program that, when executed, controls a device on which the readable storage medium resides to perform the method of any of claims 1-7 for Transformer-based metrology automation system fault prediction.
CN202210609165.3A 2022-05-31 2022-05-31 Measurement automation system fault prediction method and device based on Transformer and storage medium Pending CN115016966A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210609165.3A CN115016966A (en) 2022-05-31 2022-05-31 Measurement automation system fault prediction method and device based on Transformer and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210609165.3A CN115016966A (en) 2022-05-31 2022-05-31 Measurement automation system fault prediction method and device based on Transformer and storage medium

Publications (1)

Publication Number Publication Date
CN115016966A true CN115016966A (en) 2022-09-06

Family

ID=83071496

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210609165.3A Pending CN115016966A (en) 2022-05-31 2022-05-31 Measurement automation system fault prediction method and device based on Transformer and storage medium

Country Status (1)

Country Link
CN (1) CN115016966A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115327286A (en) * 2022-10-17 2022-11-11 国能大渡河检修安装有限公司 Transformer monitoring method and system applied to power station
CN116595421A (en) * 2023-06-10 2023-08-15 北京航空航天大学 Aircraft electric signal prediction method based on time-frequency spectrogram and converter algorithm
CN116595421B (en) * 2023-06-10 2024-04-09 北京航空航天大学 Aircraft electric signal prediction method based on time-frequency spectrogram and converter algorithm

Similar Documents

Publication Publication Date Title
CN115016966A (en) Measurement automation system fault prediction method and device based on Transformer and storage medium
CN113011085B (en) Equipment digital twin modeling method and system
CN110264270B (en) Behavior prediction method, behavior prediction device, behavior prediction equipment and storage medium
CN104933428A (en) Human face recognition method and device based on tensor description
Zhang et al. A two-stage data-driven approach to remaining useful life prediction via long short-term memory networks
US11727210B2 (en) Structured graph-to-text generation with two step fine-tuning
CN117094451B (en) Power consumption prediction method, device and terminal
US20220011760A1 (en) Model fidelity monitoring and regeneration for manufacturing process decision support
CN115952724A (en) Method, system, equipment and medium for predicting residual life of aircraft engine
CN116433223A (en) Substation equipment fault early warning method and equipment based on double-domain sparse transducer model
Li et al. Improved LSTM-based prediction method for highly variable workload and resources in clouds
Sinha Short term load forecasting using artificial neural networks
US20230090616A1 (en) Learning system, apparatus and method
CN113743650B (en) Power load prediction method, device, equipment and storage medium
CN114297008A (en) Cloud host performance prediction method and device, terminal and storage medium
Zong et al. Embedded software fault prediction based on back propagation neural network
Huang et al. The strategy of investment in the stock market using modified support vector regression model
CN116910049A (en) MDAN-based power load data missing value filling model and construction method thereof
CN116362301A (en) Model quantization method and related equipment
CN116011681A (en) Meteorological data prediction method and device, storage medium and electronic device
Koromilas et al. Mmatr: A lightweight approach for multimodal sentiment analysis based on tensor methods
CN115424725A (en) Data analysis method and device, storage medium and processor
CN115146822A (en) Photovoltaic power generation prediction method and device and terminal equipment
CN117851953B (en) Water use abnormality detection method, device, electronic apparatus, and storage medium
Li et al. ShuffleNet2MC: A method of light weight fault diagnosis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination