CN111738420B - Electromechanical equipment state data complement and prediction method based on multi-scale sampling - Google Patents


Info

Publication number
CN111738420B
Authority
CN
China
Prior art keywords
sample
network
data
fault
training
Prior art date
Legal status
Active
Application number
CN202010587623.9A
Other languages
Chinese (zh)
Other versions
CN111738420A (en)
Inventor
莫毓昌
Current Assignee
Individual
Original Assignee
Individual
Priority date
Filing date
Publication date
Application filed by Individual
Priority to CN202010587623.9A
Publication of CN111738420A
Application granted
Publication of CN111738420B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing


Abstract

The invention discloses an electromechanical equipment state data complement and prediction method based on multi-scale sampling, and relates to the technical field of data processing. The method comprises the following steps: S1, acquiring working condition data during operation of the electromechanical equipment with an intelligent sensor and constructing a working condition data set D. The method extracts a plurality of time sequences from the data set by multi-scale sampling, so that features are learned on different time scales and prediction accuracy and stability are improved through a voting strategy. Meanwhile, a generative adversarial network (GAN) is used to complete the samples of the data set: the GAN loss function is improved to raise training stability and efficiency, and completion samples are selected by a voting strategy so that low-quality generated samples are rejected.

Description

Electromechanical equipment state data complement and prediction method based on multi-scale sampling
Technical Field
The invention relates to the technical field of data processing, in particular to an electromechanical equipment state data complement and prediction method based on multi-scale sampling.
Background
Electromechanical equipment usually operates in a normal state, so few samples can be collected in the fault state and the data set easily becomes unbalanced, that is, the normal state sample data set DG is far larger than the fault state sample data set DF. The data imbalance caused by the lack of fault state samples severely affects the accuracy of equipment state prediction.
The traditional way to expand a sample data set is oversampling, but oversampling merely reuses the small amount of sample information in DF over and over and cannot automatically learn the data distribution characteristics of the samples. How to expand and complete the fault state sample data is therefore a problem to be solved.
Disclosure of Invention
(I) Technical problems to be solved
Aiming at the defects of the prior art, the invention provides an electromechanical equipment state data complement and prediction method based on multi-scale sampling, which solves the problems that traditional oversampling merely reuses the small amount of sample information in DF, cannot automatically learn the data distribution characteristics of the samples, and therefore cannot satisfactorily expand and complete the fault state sample data.
(II) technical scheme
In order to achieve the above purpose, the invention is realized by the following technical scheme: an electromechanical device state data complement and prediction method based on multi-scale sampling comprises the following steps:
S1, acquiring working condition data during operation of the electromechanical equipment with an intelligent sensor and constructing a working condition data set D, wherein the fault sample data set in D is denoted DF and the normal sample data set in D is denoted DG;
S2, extracting corresponding subsets from the fault sample data set DF with sampling methods of different interval spans to serve as fault sample expansion data sets; the specific operation is as follows:
for a fault sample data set DF = {Z(t=1), Z(t=2), …, Z(t=n)}, a subset DFm of DF is extracted with an interval span of 2^m and used as a fault sample expansion data set (a minimal sampling sketch follows the list), wherein
DF0 = DF = {Z(t=1), Z(t=2), …, Z(t=n)}
DF1 = {Z(t=2), Z(t=4), …}
DF2 = {Z(t=4), Z(t=8), …}
DFm = {Z(t=2^m), Z(t=2·2^m), …}
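The interval-span sampling above can be sketched in a few lines of Python (a minimal illustration; the helper name multiscale_subsets and the treatment of a sample as a plain list are assumptions, not part of the patent):

def multiscale_subsets(sample, m_max):
    """Extract the DF0..DFm_max views of one fault sample.

    sample holds the data points Z(t=1..n); the m-th subset keeps
    every (2**m)-th point, i.e. Z(t=2**m), Z(t=2*2**m), ...
    """
    return [sample[(2 ** m - 1)::(2 ** m)] for m in range(m_max + 1)]

# A 720-point sample yields subsets of 720, 360, 180, 90 and 45 points:
lengths = [len(s) for s in multiscale_subsets(list(range(1, 721)), 4)]
print(lengths)  # [720, 360, 180, 90, 45]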
S3, establishing a corresponding generative adversarial network (GAN) model for each fault sample expansion data set and carrying out sample completion to obtain a fault sample expansion completion data set; the specific operation is as follows:
S31, constructing the generative adversarial network model, which comprises constructing a generator network, constructing a discriminator network and constructing a loss function; the specific operation is as follows:
S311, constructing the generator network:
The generator network consists of 2 hidden layers and one output layer.
The 1 st hidden layer of the generator network contains 128 neurons, and the calculation process is as follows:
O1=ReLU(w1●x+b1);
where x is the input of the generator network; w1 is a weight matrix, b1 is a bias; o1 is a 128-dimensional output vector;
the 2 nd hidden layer of the generator network contains 64 neurons, and the calculation process is as follows:
O2=ReLU(w2●O1+b2);
wherein O1 is the output vector of the 1 st hidden layer, w2 is the weight matrix, and b2 is the bias; o2 is a 64-dimensional output vector;
the output layer of the generator network comprises the following calculation processes:
y=ReLU(w3●O2+b3);
wherein O2 is the output vector of the 2 nd hidden layer, w3 is the weight matrix, and b3 is the bias; y is the output vector with the same dimension as x;
S312, constructing the discriminator network:
The discriminator network consists of 2 hidden layers and an output layer;
the 1st hidden layer of the discriminator network contains 128 neurons, and the calculation process is as follows:
O1=ReLU(w1●x+b1);
where x is the input to the discriminator network; w1 is a weight matrix, b1 is a bias; O1 is a 128-dimensional vector;
the 2nd hidden layer of the discriminator network contains 64 neurons, and the calculation process is as follows:
O2=ReLU(w2●O1+b2);
wherein O1 is the output vector of the 1 st hidden layer, w2 is the weight matrix, and b2 is the bias; o2 is a 64-dimensional vector;
the output layer of the discriminator network comprises the following calculation processes:
y=sigmoid(w3●O2+b3);
wherein O2 is the output vector of the 2 nd hidden layer, w3 is the weight matrix, and b3 is the bias; y is a single probability output value.
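The two networks of S311-S312 can be sketched in PyTorch as follows (layer sizes follow the text; the input dimension n_features and the builder names are illustrative assumptions):

import torch.nn as nn

def make_generator(n_features):
    # 2 hidden layers (128 and 64 ReLU neurons) and a ReLU output layer
    # whose dimension equals the input dimension, as in S311.
    return nn.Sequential(
        nn.Linear(n_features, 128), nn.ReLU(),
        nn.Linear(128, 64), nn.ReLU(),
        nn.Linear(64, n_features), nn.ReLU(),
    )

def make_discriminator(n_features):
    # 2 hidden layers (128 and 64 ReLU neurons) and a single sigmoid
    # probability output, as in S312.
    return nn.Sequential(
        nn.Linear(n_features, 128), nn.ReLU(),
        nn.Linear(128, 64), nn.ReLU(),
        nn.Linear(64, 1), nn.Sigmoid(),
    )

# One GAN per fault sample expansion data set, e.g. 180-point DF2 samples:
G, D = make_generator(180), make_discriminator(180)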
S313, constructing a loss function:
firstly, the output mean value of the discriminator network over a block of real fault data samples is calculated:
E[D(x)]
Wherein x is a real fault data sample, E is a mean value, and D is a discriminator network;
the output of the discriminator network for the sample judged as the real fault sample is 1;
the output of the discriminator network for the sample judged as the false fault sample is 0;
the larger this mean value, the more effective the discriminator network;
next, the output mean value of the discriminator network over a block of false fault data samples generated by the generator is calculated:
E[D(z)]
where z is the generated false fault data sample, z=G(seed), G is the generator network, and seed is a random seed sampled from the uniform distribution; D is the discriminator network;
the output of the discriminator network for the sample judged as the real fault sample is 1;
the output of the discriminator network for the sample judged as the false fault sample is 0;
the smaller this mean value, the more effective the discriminator network; and the larger this mean value, the more effective the generator network;
a regularization term H is then constructed to suppress the gradient vanishing phenomenon during training of the generative adversarial network model:
H = γ*E[(||∇D(x̂)||2 - 1)^2];
wherein, for an arbitrarily generated false fault data sample z, x_closest is the real fault data sample closest to z in Euclidean distance, and x̂ = ε*z + (1-ε)*x_closest is a point interpolated between z and x_closest, with ε sampled uniformly from [0,1];
wherein D is the discriminator network;
wherein γ is a regularization constant, set to 10;
a discriminator network loss function LD is constructed:
LD = E[D(z)] - E[D(x)] + H;
where x is the real fault data sample, E is the mean value, and D is the discriminator network; z is the generated false fault data sample, z=G(seed), G is the generator network, and seed is a random seed sampled from the uniform distribution; H is the regularization term.
a generator network loss function LG is constructed:
LG = -E[D(z)];
where z is the generated false fault data sample, z=G(seed), G is the generator network, seed is a random seed sampled from the uniform distribution, D is the discriminator network, and E is the mean value;
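A sketch of the S313 losses follows. The regularization term H is rendered here as a gradient penalty evaluated at points interpolated between each generated sample and its nearest real sample; this interpolation is an assumption reconstructed from the definitions of z, x_closest and γ above, and all names are illustrative:

import torch

def gan_losses(D, x_real, z_fake, gamma=10.0):
    # Regularization term H: interpolate between each generated sample and
    # its nearest real sample (Euclidean), then penalize discriminator
    # gradients whose 2-norm deviates from 1 (suppresses vanishing gradients).
    z_pen = z_fake.detach()
    x_closest = x_real[torch.cdist(z_pen, x_real).argmin(dim=1)]
    eps = torch.rand(z_pen.size(0), 1)
    x_hat = (eps * z_pen + (1 - eps) * x_closest).requires_grad_(True)
    grad = torch.autograd.grad(D(x_hat).sum(), x_hat, create_graph=True)[0]
    H = gamma * ((grad.norm(2, dim=1) - 1) ** 2).mean()
    LD = D(z_fake.detach()).mean() - D(x_real).mean() + H  # discriminator loss
    LG = -D(z_fake).mean()                                 # generator loss
    return LD, LG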
S32, training the generative adversarial network model; the training method comprises the following steps:
S321, randomly sampling K seeds from the uniform distribution and generating K false fault data samples with the generator network to form a false fault data sample block Dz, where K is set to 32;
S322, randomly extracting K real fault samples from the fault sample expansion data set to form a real fault data sample block Dr, where K is set to 32;
S323, based on the loss function LD, training the generative adversarial network with Dz and Dr using a classical gradient-descent neural network training algorithm; in particular, the parameters of the generator network are kept fixed in this step and only the parameters of the discriminator network are updated;
S324, randomly sampling K seeds from the uniform distribution and generating K false fault data samples with the generator network to form a false fault data sample block Dz', where K is set to 32;
S325, based on the loss function LG, training the generative adversarial network with Dz' using a classical gradient-descent neural network training algorithm; in particular, the parameters of the discriminator network are kept fixed in this step and only the parameters of the generator network are updated;
S326, after the discriminator network and the generator network have been trained alternately for N such training periods, the generator network parameters are stored, where N is set to 10000;
S33, performing sample completion with the trained generator network to obtain the fault sample expansion completion data set; the specific operation is as follows:
S331, randomly sampling L seeds from the uniform distribution and generating L false fault data samples with the trained generator network, where L may be set to 1000;
S332, randomly extracting W real fault samples from the fault sample expansion data set and 2W normal samples from the normal sample data set, and merging them to construct a training data set TR, where W may be set to 100;
S333, respectively constructing 5 traditional classifier models, including an SVM model, a decision tree model, a naive Bayes model, a linear model and a 3-layer neural network model;
S334, training the 5 traditional classifier models with the training data set TR;
S335, screening the 1000 false fault data samples generated by the generator network in step S331 with the 5 trained traditional classifier models and rejecting the low-quality false fault data samples;
S336, merging the high-quality false fault data samples that remain after the low-quality ones are rejected into each fault sample expansion data set to obtain the fault sample expansion completion data set;
S4, constructing multi-layer neural networks: for each fault sample expansion completion data set, normal samples equal in size to the fault samples of that data set are sampled from the normal sample data set DG with the same equidistant sampling method, the two are merged to construct a training data set TR, and a multi-layer neural network model is trained with TR as the electromechanical equipment health state prediction model; the specifics are as follows:
constructing a multi-layer neural network model:
the multi-layer neural network model consists of 2 hidden layers and an output layer;
the 1 st hidden layer of the multi-layer neural network model comprises 128 neurons, and the calculation process is as follows:
O1=ReLU(w1●x+b1);
where x is the input of the multi-layer neural network model, from the training dataset TR; w1 is a weight matrix, b1 is a bias; o1 is a 128-dimensional vector;
the 2 nd hidden layer of the multi-layer neural network model contains 64 neurons, and the calculation process is as follows:
O2=ReLU(w2●O1+b2);
wherein O1 is the output vector of the 1 st hidden layer, w2 is the weight matrix, and b2 is the bias; o2 is a 64-dimensional vector;
the output layer of the multi-layer neural network model comprises the following calculation processes:
y=sigmoid(w3●O2+b3);
wherein O2 is the output vector of the 2 nd hidden layer, w3 is the weight matrix, and b3 is the bias; y is a single probability output value;
training a multi-layer neural network model based on a traditional BP algorithm by utilizing a training data set TR to obtain a neural network prediction model of the health state of the electromechanical equipment;
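Since the S4 prediction model repeats the 128/64/sigmoid topology of the discriminator, a sketch can reuse make_discriminator from above and train it with binary cross-entropy by backpropagation (the optimizer choice and hyperparameters below are assumptions):

import torch
import torch.nn as nn

def train_health_model(X, y, epochs=200, lr=1e-3):
    # X: (N, n_features) samples from TR; y: (N, 1) labels, 1=fault, 0=normal.
    model = make_discriminator(X.size(1))
    opt = torch.optim.SGD(model.parameters(), lr=lr)  # classical BP / gradient descent
    bce = nn.BCELoss()
    for _ in range(epochs):
        opt.zero_grad()
        bce(model(X), y).backward()
        opt.step()
    return model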
S5, collecting working condition parameter data of the electromechanical equipment in real time, feeding the data to the trained electromechanical equipment health state prediction models, and predicting and judging the equipment health state; the specific rules are as follows:
extracting the corresponding subsets DN0, DN1, DN2, …, DNm from the collected working condition data set DN with the different-interval-span sampling method of step S2;
predicting the input data DN0, DN1, DN2, …, DNm with the corresponding trained multi-layer neural network models, respectively;
and taking a majority vote over the prediction results as the final prediction result of the electromechanical equipment health state.
Preferably, in step S1, the operating parameters include data of current, voltage, speed, vibration and temperature during operation of the electromechanical device.
Preferably, in step S31, the constructed generative adversarial network model consists of a generator and a discriminator; both are multi-layer networks, wherein the generator is responsible for generating data with the same dimension as the real data, and the discriminator is responsible for distinguishing the real data from the generated data.
Preferably, in step S335, the rejection rule is: if a false fault data sample is classified as a normal sample by 3 or more of the models, it is considered a low-quality sample and must be rejected.
(III) Beneficial effects
The invention provides an electromechanical equipment state data complement and prediction method based on multi-scale sampling, with the following beneficial effects: the method extracts a plurality of time sequences from the data set by multi-scale sampling, so that features are learned on different time scales and prediction accuracy and stability are improved through a voting strategy; meanwhile, a generative adversarial network is used to complete the samples of the data set, the GAN loss function is improved to raise training stability and efficiency, and completion samples are selected by a voting strategy so that low-quality generated samples are rejected.
Drawings
FIG. 1 is a schematic diagram of the overall steps of the technical scheme of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Referring to fig. 1, the present invention provides a technical solution: an electromechanical device state data complement and prediction method based on multi-scale sampling comprises the following steps:
S1, acquiring working condition data during operation of the electromechanical equipment with an intelligent sensor and constructing a working condition data set D, wherein the working condition parameters include current, voltage, speed, vibration and temperature data during operation of the electromechanical equipment; the fault sample data set in D is denoted DF and the normal sample data set in D is denoted DG;
S2, extracting corresponding subsets from the fault sample data set DF with sampling methods of different interval spans to serve as fault sample expansion data sets; the specific operation is as follows:
for a fault sample data set DF = {Z(t=1), Z(t=2), …, Z(t=n)}, a subset DFm of DF is extracted with an interval span of 2^m and used as a fault sample expansion data set, wherein
DF0 = DF = {Z(t=1), Z(t=2), …, Z(t=n)}
DF1 = {Z(t=2), Z(t=4), …}
DF2 = {Z(t=4), Z(t=8), …}
DFm = {Z(t=2^m), Z(t=2·2^m), …}
S3, establishing a corresponding generative adversarial network (GAN) model for each fault sample expansion data set and carrying out sample completion to obtain a fault sample expansion completion data set; the specific operation is as follows:
S31, constructing the generative adversarial network model, which comprises constructing a generator network, constructing a discriminator network and constructing a loss function; the constructed model consists of a generator and a discriminator, both multi-layer networks, wherein the generator is responsible for generating data with the same dimension as the real data and the discriminator is responsible for distinguishing the real data from the generated data; the specific operation is as follows:
S311, constructing the generator network:
The generator network consists of 2 hidden layers and one output layer.
The 1 st hidden layer of the generator network contains 128 neurons, and the calculation process is as follows:
O1=ReLU(w1●x+b1);
where x is the input of the generator network; w1 is a weight matrix, b1 is a bias; o1 is a 128-dimensional output vector;
the 2 nd hidden layer of the generator network contains 64 neurons, and the calculation process is as follows:
O2=ReLU(w2●O1+b2);
wherein O1 is the output vector of the 1 st hidden layer, w2 is the weight matrix, and b2 is the bias; o2 is a 64-dimensional output vector;
the output layer of the generator network comprises the following calculation processes:
y=ReLU(w3●O2+b3);
wherein O2 is the output vector of the 2 nd hidden layer, w3 is the weight matrix, and b3 is the bias; y is the output vector with the same dimension as x;
S312, constructing the discriminator network:
The discriminator network consists of 2 hidden layers and an output layer;
the 1st hidden layer of the discriminator network contains 128 neurons, and the calculation process is as follows:
O1=ReLU(w1●x+b1);
where x is the input to the discriminator network; w1 is a weight matrix, b1 is a bias; O1 is a 128-dimensional vector;
the 2nd hidden layer of the discriminator network contains 64 neurons, and the calculation process is as follows:
O2=ReLU(w2●O1+b2);
wherein O1 is the output vector of the 1 st hidden layer, w2 is the weight matrix, and b2 is the bias; o2 is a 64-dimensional vector;
the output layer of the discriminator network comprises the following calculation processes:
y=sigmoid(w3●O2+b3);
wherein O2 is the output vector of the 2 nd hidden layer, w3 is the weight matrix, and b3 is the bias; y is a single probability output value.
S313, constructing a loss function:
firstly, the output mean value of the discriminator network over a block of real fault data samples is calculated:
E[D(x)]
Wherein x is a real fault data sample, E is a mean value, and D is a discriminator network;
the output of the discriminator network for the sample judged as the real fault sample is 1;
the output of the discriminator network for the sample judged as the false fault sample is 0;
the larger this mean value, the more effective the discriminator network;
next, the output mean value of the discriminator network over a block of false fault data samples generated by the generator is calculated:
E[D(z)]
where z is the generated false fault data sample, z=G(seed), G is the generator network, and seed is a random seed sampled from the uniform distribution; D is the discriminator network;
the output of the discriminator network for the sample judged as the real fault sample is 1;
the output of the discriminator network for the sample judged as the false fault sample is 0;
the smaller this mean value, the more effective the discriminator network; and the larger this mean value, the more effective the generator network;
a regularization term H is then constructed to suppress the gradient vanishing phenomenon during training of the generative adversarial network model:
H = γ*E[(||∇D(x̂)||2 - 1)^2];
wherein, for an arbitrarily generated false fault data sample z, x_closest is the real fault data sample closest to z in Euclidean distance, and x̂ = ε*z + (1-ε)*x_closest is a point interpolated between z and x_closest, with ε sampled uniformly from [0,1];
wherein D is the discriminator network;
wherein γ is a regularization constant, set to 10;
a discriminator network loss function LD is constructed:
LD = E[D(z)] - E[D(x)] + H;
where x is the real fault data sample, E is the mean value, and D is the discriminator network; z is the generated false fault data sample, z=G(seed), G is the generator network, and seed is a random seed sampled from the uniform distribution; H is the regularization term.
a generator network loss function LG is constructed:
LG = -E[D(z)];
where z is the generated false fault data sample, z=G(seed), G is the generator network, seed is a random seed sampled from the uniform distribution, D is the discriminator network, and E is the mean value;
S32, training the generative adversarial network model; the training method comprises the following steps:
S321, randomly sampling K seeds from the uniform distribution and generating K false fault data samples with the generator network to form a false fault data sample block Dz, where K is set to 32;
S322, randomly extracting K real fault samples from the fault sample expansion data set to form a real fault data sample block Dr, where K is set to 32;
S323, based on the loss function LD, training the generative adversarial network with Dz and Dr using a classical gradient-descent neural network training algorithm; in particular, the parameters of the generator network are kept fixed in this step and only the parameters of the discriminator network are updated;
S324, randomly sampling K seeds from the uniform distribution and generating K false fault data samples with the generator network to form a false fault data sample block Dz', where K is set to 32;
S325, based on the loss function LG, training the generative adversarial network with Dz' using a classical gradient-descent neural network training algorithm; in particular, the parameters of the discriminator network are kept fixed in this step and only the parameters of the generator network are updated;
S326, after the discriminator network and the generator network have been trained alternately for N such training periods, the generator network parameters are stored, where N is set to 10000 (a training-loop sketch follows);
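The alternating schedule of S321-S326 can be sketched as follows, reusing make_generator, make_discriminator and gan_losses from above (optimizers and learning rates are assumptions; K=32 and N=10000 follow the text):

import torch

def train_gan(x_fault, n_features, K=32, N=10000):
    # x_fault: (num_real, n_features) real fault samples of one expansion set.
    G, D = make_generator(n_features), make_discriminator(n_features)
    opt_d = torch.optim.SGD(D.parameters(), lr=1e-3)
    opt_g = torch.optim.SGD(G.parameters(), lr=1e-3)
    for _ in range(N):
        # S321-S323: generator fixed, update only the discriminator.
        Dz = G(torch.rand(K, n_features)).detach()   # uniform seeds
        Dr = x_fault[torch.randint(len(x_fault), (K,))]
        LD, _ = gan_losses(D, Dr, Dz)
        opt_d.zero_grad(); LD.backward(); opt_d.step()
        # S324-S325: discriminator fixed, update only the generator.
        _, LG = gan_losses(D, Dr, G(torch.rand(K, n_features)))
        opt_g.zero_grad(); LG.backward(); opt_g.step()
    return G  # S326: keep the trained generator parameters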
S33, performing sample completion with the trained generator network to obtain the fault sample expansion completion data set; the specific operation is as follows:
S331, randomly sampling L seeds from the uniform distribution and generating L false fault data samples with the trained generator network, where L may be set to 1000;
S332, randomly extracting W real fault samples from the fault sample expansion data set and 2W normal samples from the normal sample data set, and merging them to construct a training data set TR, where W may be set to 100;
S333, respectively constructing 5 traditional classifier models, including an SVM model, a decision tree model, a naive Bayes model, a linear model and a 3-layer neural network model;
S334, training the 5 traditional classifier models with the training data set TR;
S335, screening the 1000 false fault data samples generated by the generator network in step S331 with the 5 trained traditional classifier models and rejecting the low-quality false fault data samples; the rejection rule is: if a false fault data sample is classified as a normal sample by 3 or more of the models, it is considered a low-quality sample and must be rejected;
S336, merging the high-quality false fault data samples that remain after the low-quality ones are rejected into each fault sample expansion data set to obtain the fault sample expansion completion data set, as sketched below;
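The five-classifier screening of S333-S336 can be sketched with scikit-learn stand-ins (the concrete model classes, e.g. LogisticRegression for the "linear model", are assumptions):

import numpy as np
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

def screen_generated(X_tr, y_tr, X_fake):
    # y_tr: 1 = fault, 0 = normal (W real fault + 2W normal samples of TR).
    models = [SVC(), DecisionTreeClassifier(), GaussianNB(),
              LogisticRegression(max_iter=1000),
              MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000)]
    fault_votes = np.zeros(len(X_fake))
    for m in models:
        m.fit(X_tr, y_tr)
        fault_votes += (m.predict(X_fake) == 1)
    # A generated sample called "normal" by 3 or more models gets fewer than
    # 3 fault votes; it is low quality and is rejected (rule of S335).
    return X_fake[fault_votes >= 3]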
S4, constructing multi-layer neural networks: for each fault sample expansion completion data set, normal samples equal in size to the fault samples of that data set are sampled from the normal sample data set DG with the same equidistant sampling method, the two are merged to construct a training data set TR, and a multi-layer neural network model is trained with TR as the electromechanical equipment health state prediction model; the specifics are as follows:
constructing a multi-layer neural network model:
the multi-layer neural network model consists of 2 hidden layers and an output layer;
the 1 st hidden layer of the multi-layer neural network model comprises 128 neurons, and the calculation process is as follows:
O1=ReLU(w1●x+b1);
where x is the input of the multi-layer neural network model, from the training dataset TR; w1 is a weight matrix, b1 is a bias; o1 is a 128-dimensional vector;
the 2 nd hidden layer of the multi-layer neural network model contains 64 neurons, and the calculation process is as follows:
O2=ReLU(w2●O1+b2);
wherein O1 is the output vector of the 1 st hidden layer, w2 is the weight matrix, and b2 is the bias; o2 is a 64-dimensional vector;
the output layer of the multi-layer neural network model comprises the following calculation processes:
y=sigmoid(w3●O2+b3);
wherein O2 is the output vector of the 2 nd hidden layer, w3 is the weight matrix, and b3 is the bias; y is a single probability output value;
training a multi-layer neural network model based on a traditional BP algorithm by utilizing a training data set TR to obtain a neural network prediction model of the health state of the electromechanical equipment;
S5, collecting working condition parameter data of the electromechanical equipment in real time, feeding the data to the trained electromechanical equipment health state prediction models, and predicting and judging the equipment health state; the specific rules are as follows:
extracting the corresponding subsets DN0, DN1, DN2, …, DNm from the collected working condition data set DN with the different-interval-span sampling method of step S2;
predicting the input data DN0, DN1, DN2, …, DNm with the corresponding trained multi-layer neural network models, respectively;
and taking a majority vote over the prediction results as the final prediction result of the electromechanical equipment health state.
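The multi-scale voting of S5 reduces to a few lines (thresholding the sigmoid output at 0.5 is an assumption; the text only specifies majority voting):

def predict_health(models, dn_subsets, threshold=0.5):
    # models[i] was trained on the i-th expansion completion data set;
    # dn_subsets[i] is the matching subset DNi of the new condition data.
    votes = [int(m(x).item() > threshold) for m, x in zip(models, dn_subsets)]
    # Majority vote: fault (1) if most models predict fault, else normal (0).
    return 1 if sum(votes) > len(votes) / 2 else 0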
It should be noted that:
1. in step S1, during implementation, a certain working condition parameter may be selected for data acquisition according to the condition of the working site of the electromechanical device.
For example, an acceleration sensor may be installed on the housing of the electromechanical device to continuously collect vibration amplitude data during operation, yielding a working condition data set D = {Z(t=1), Z(t=2), …, Z(t=n)}, where Z(t=i) is the vibration amplitude of the electromechanical device at operating time i;
taking intelligent monitoring of 127 power transformer devices in a region to which a certain electric company belongs as an example, each data sample in the working condition data set D is acquired for 1 month (30 days) and 1 vibration amplitude data is acquired per hour, so each sample contains 30×24=720 data points. For 127 power transformer devices, 10 months of data were collected cumulatively, i.e., operating condition data set D contained 1270 data samples. Wherein 16 samples are fault samples to form a fault sample data set DF, and 1254 samples are normal samples to form a normal sample data set DG;
2. In step S2, for the fault sample data set DF = {Z(t=1), Z(t=2), …, Z(t=n)}, the subset DFm of DF is extracted with an interval span of 2^m:
DF0 = DF = {Z(t=1), Z(t=2), …, Z(t=n)}
DF1 = {Z(t=2), Z(t=4), …}
DF2 = {Z(t=4), Z(t=8), …}
DFm = {Z(t=2^m), Z(t=2·2^m), …}
Taking the intelligent monitoring of 127 power transformer devices in a region of a certain power company as an example, the 16 collected fault samples form the fault sample data set DF, each containing 30×24=720 data points. Sample data subsets are extracted from DF with sampling methods of different interval spans to construct the fault sample expansion data sets DF0, DF1, DF2, DF3 and DF4;
there are 16 samples in DF0, each sample containing 720 data points;
there are 16 samples in DF1, each sample containing 360 data points;
there are 16 samples in DF2, each sample containing 180 data points;
there are 16 samples in DF3, each sample containing 90 data points;
there are 16 samples in DF4, each sample containing 45 data points;
the fault sample extended datasets DF0, DF1, DF2, DF3, DF4 thus constructed contain a total of 16×5=80 fault samples;
Clearly, in contrast to the 1254 normal samples, the 80 fault samples are still unbalanced;
3. In step S31, taking the intelligent monitoring of 127 power transformer devices in a region of a certain power company as an example, 5 fault sample expansion data sets DF0, DF1, DF2, DF3 and DF4 are constructed, so 5 generative adversarial network models need to be established for sample completion;
each generative adversarial network model is constructed as described in step S31; the only difference is the scale of the input data, because the number of data points per fault sample differs between the expansion data sets:
the generative adversarial network corresponding to DF0 takes 720 data points as input;
the generative adversarial network corresponding to DF1 takes 360 data points as input;
the generative adversarial network corresponding to DF2 takes 180 data points as input;
the generative adversarial network corresponding to DF3 takes 90 data points as input;
the generative adversarial network corresponding to DF4 takes 45 data points as input;
4. In step S33, taking the intelligent monitoring of 127 power transformer devices in a region of a certain power company as an example, the 5 generative adversarial network models generate false fault data samples. The high-quality false fault data samples that remain after the low-quality ones are rejected are merged into the respective fault sample expansion data sets DF0, DF1, DF2, DF3 and DF4 to obtain the fault sample expansion completion data sets DFB0, DFB1, DFB2, DFB3 and DFB4;
at this point, each of the 5 fault sample expansion completion data sets DFB0, DFB1, DFB2, DFB3 and DFB4 contains about 1000 samples; obviously, the number of fault samples is now balanced against the 1254 normal samples;
5. In step S4, taking the intelligent monitoring of 127 power transformer devices in a region of a certain power company as an example, a multi-layer neural network model is constructed for each fault sample expansion completion data set DFB0, DFB1, DFB2, DFB3 and DFB4, so there are 5 multi-layer neural network models;
6. In step S5, taking the intelligent monitoring of 127 power transformer devices in a region of a certain power company as an example, the newly collected unlabeled working condition data set is DN; with the different-interval-span sampling method of step S2, the corresponding subsets DN0, DN1, DN2, DN3 and DN4 are extracted from DN. The 5 multi-layer neural network models of step S4 predict the input data DN0, DN1, DN2, DN3 and DN4, respectively, giving 5 electromechanical equipment health state predictions, and a majority vote over these 5 predictions is taken as the final electromechanical equipment health state prediction;
that is, if 3 or more prediction results are normal, the final electromechanical equipment health state prediction is normal;
and if 3 or more prediction results are faulty, the final electromechanical equipment health state prediction is faulty.
Taking the 127 power transformer devices in a region of a certain power company as an example, the device health state prediction accuracy is compared under the following three cases.
Case 1 (conventional method): the original data set without sample expansion or completion, i.e., 16 fault samples and 1254 normal samples, is divided into training and test sets at a ratio of 4:1.
A multi-layer neural network model as in step S4 (input layer of 720) is constructed and trained on the training set with a classical gradient-descent neural network training algorithm; the trained network is then tested on the test set.
The test result shows that the accuracy on the original data set without sample expansion and completion is 80.9%.
Case 2 (simplified method of this patent): the original data set is sample-expanded but not completed, i.e., the constructed fault sample expansion data sets DF0, DF1, DF2, DF3 and DF4 contain 16×5=80 fault samples in total, and the analogous normal sample expansion data sets DN0, DN1, DN2, DN3 and DN4 contain 1254×5=6270 normal samples in total.
Each expanded data set DF0+DN0, DF1+DN1, DF2+DN2, DF3+DN3 and DF4+DN4 is divided into training and test sets at a ratio of 4:1.
Five multi-layer neural network models (with input layers of 720, 360, 180, 90 and 45, respectively) are constructed and trained on the training sets with a classical gradient-descent neural network training algorithm.
The trained networks are tested on the test sets; the results of the 5 networks are combined by the majority voting method of step S5 into the final electromechanical equipment health state prediction.
The test result shows that the accuracy on the sample-expanded data set is 85.7%.
Obviously, the prediction accuracy can be improved by means of data set expansion and voting of a plurality of neural network models.
Case 3 (complete method of this patent): the original data set is sample-expanded and completed, i.e., each of the 5 fault sample expansion completion data sets DFB0, DFB1, DFB2, DFB3 and DFB4 contains about 1000 samples, and each normal sample expansion data set DN0, DN1, DN2, DN3 and DN4 contains 1254 samples.
Each expanded data set DFB0+DN0, DFB1+DN1, DFB2+DN2, DFB3+DN3 and DFB4+DN4 is divided into training and test sets at a ratio of 4:1.
Five multi-layer neural network models (with input layers of 720, 360, 180, 90 and 45, respectively) are constructed and trained on the training sets with a classical gradient-descent neural network training algorithm.
The trained networks are tested on the test sets; the results of the 5 networks are combined by the majority voting method of step S5 into the final electromechanical equipment health state prediction.
The test result shows that the accuracy on the sample-expanded and completed data set is 92.3%.
Obviously, the prediction accuracy can be further improved by means of data set expansion and completion and voting of a plurality of neural network models.
In summary, the electromechanical equipment state data complement and prediction method based on multi-scale sampling extracts a plurality of time sequences from the data set by multi-scale sampling, so that features are learned on different time scales and prediction accuracy and stability are improved through a voting strategy; meanwhile, a generative adversarial network is used to complete the samples of the data set, the GAN loss function is improved to raise training stability and efficiency, and completion samples are selected by a voting strategy so that low-quality generated samples are rejected.
It is noted that relational terms such as first and second, and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
Although embodiments of the present invention have been shown and described, it will be understood by those skilled in the art that various changes, modifications, substitutions and alterations can be made therein without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (4)

1. An electromechanical equipment state data complement and prediction method based on multi-scale sampling, characterized by comprising the following steps:
S1, acquiring working condition data during operation of the electromechanical equipment with an intelligent sensor and constructing a working condition data set D, wherein the fault sample data set in D is denoted DF and the normal sample data set in D is denoted DG;
S2, extracting corresponding subsets from the fault sample data set DF with sampling methods of different interval spans to serve as fault sample expansion data sets; the specific operation is as follows:
for a fault sample data set DF = {Z(t=1), Z(t=2), …, Z(t=n)}, a subset DFm of DF is extracted with an interval span of 2^m and used as a fault sample expansion data set, wherein
DF0 = DF = {Z(t=1), Z(t=2), …, Z(t=n)}
DF1 = {Z(t=2), Z(t=4), …}
DF2 = {Z(t=4), Z(t=8), …}
DFm = {Z(t=2^m), Z(t=2·2^m), …}
S3, establishing a corresponding generative adversarial network model for each fault sample expansion data set and carrying out sample completion to obtain a fault sample expansion completion data set; the specific operation is as follows:
S31, constructing the generative adversarial network model, which comprises constructing a generator network, constructing a discriminator network and constructing a loss function; the specific operation is as follows:
S311, constructing the generator network:
The generator network consists of 2 hidden layers and an output layer;
the 1 st hidden layer of the generator network contains 128 neurons, and the calculation process is as follows:
O1=ReLU(w1●x+b1);
where x is the input of the generator network; w1 is a weight matrix, b1 is a bias; o1 is a 128-dimensional output vector;
the 2 nd hidden layer of the generator network contains 64 neurons, and the calculation process is as follows:
O2=ReLU(w2●O1+b2);
wherein O1 is the output vector of the 1 st hidden layer, w2 is the weight matrix, and b2 is the bias; o2 is a 64-dimensional output vector;
the output layer of the generator network comprises the following calculation processes:
y=ReLU(w3●O2+b3);
wherein O2 is the output vector of the 2 nd hidden layer, w3 is the weight matrix, and b3 is the bias; y is the output vector with the same dimension as x;
S312, constructing the discriminator network:
The discriminator network consists of 2 hidden layers and an output layer;
the 1st hidden layer of the discriminator network contains 128 neurons, and the calculation process is as follows:
O1=ReLU(w1●x+b1);
where x is the input to the discriminator network; w1 is a weight matrix, b1 is a bias; O1 is a 128-dimensional vector;
the 2nd hidden layer of the discriminator network contains 64 neurons, and the calculation process is as follows:
O2=ReLU(w2●O1+b2);
wherein O1 is the output vector of the 1 st hidden layer, w2 is the weight matrix, and b2 is the bias; o2 is a 64-dimensional vector;
the output layer of the discriminator network comprises the following calculation processes:
y=sigmoid(w3●O2+b3);
wherein O2 is the output vector of the 2 nd hidden layer, w3 is the weight matrix, and b3 is the bias; y is a single probability output value;
S313, constructing the loss function:
firstly, the output mean value of the discriminator network over a block of real fault data samples is calculated:
E[D(x)]
Wherein x is a real fault data sample, E is a mean value, and D is a discriminator network;
the output of the discriminator network for the sample judged as the real fault sample is 1;
the output of the discriminator network for the sample judged as the false fault sample is 0;
the larger this mean value, the more effective the discriminator network;
next, the output mean value of the discriminator network over a block of false fault data samples generated by the generator is calculated:
E[D(z)]
wherein z is a generated false fault data sample, z=G(seed), G is the generator network, and seed is a random seed sampled from the uniform distribution; D is the discriminator network;
the output of the discriminator network for the sample judged as the real fault sample is 1;
the output of the discriminator network for the sample judged as the false fault sample is 0;
the smaller this mean value, the more effective the discriminator network; and the larger this mean value, the more effective the generator network;
a regularization term H is constructed to suppress the gradient vanishing phenomenon during training of the generative adversarial network model:
H = γ*E[(||∇D(x̂)||2 - 1)^2];
wherein, for an arbitrarily generated false fault data sample z, x_closest is the real fault data sample closest to z in Euclidean distance, and x̂ = ε*z + (1-ε)*x_closest is a point interpolated between z and x_closest, with ε sampled uniformly from [0,1];
wherein D is the discriminator network;
wherein γ is a regularization constant, set to 10;
a discriminator network loss function LD is constructed:
LD = E[D(z)] - E[D(x)] + H;
wherein x is a real fault data sample, E is the mean value, and D is the discriminator network; z is the generated false fault data sample, z=G(seed), G is the generator network, and seed is a random seed sampled from the uniform distribution; H is the regularization term;
a generator network loss function LG is constructed:
LG = -E[D(z)];
wherein z is a generated false fault data sample, z=G(seed), G is the generator network, seed is a random seed sampled from the uniform distribution, D is the discriminator network, and E is the mean value;
S32, training the generative adversarial network model; the training method comprises the following steps:
S321, randomly sampling K seeds from the uniform distribution and generating K false fault data samples with the generator network to form a false fault data sample block Dz, where K is set to 32;
S322, randomly extracting K real fault samples from the fault sample expansion data set to form a real fault data sample block Dr, where K is set to 32;
S323, based on the loss function LD, training the generative adversarial network with Dz and Dr using a classical gradient-descent neural network training algorithm; in particular, the parameters of the generator network are kept fixed in this step and only the parameters of the discriminator network are updated;
S324, randomly sampling K seeds from the uniform distribution and generating K false fault data samples with the generator network to form a false fault data sample block Dz', where K is set to 32;
S325, based on the loss function LG, training the generative adversarial network with Dz' using a classical gradient-descent neural network training algorithm; in particular, the parameters of the discriminator network are kept fixed in this step and only the parameters of the generator network are updated;
S326, after the discriminator network and the generator network have been trained alternately for N such training periods, the generator network parameters are stored, where N is set to 10000;
S33, performing sample completion with the trained generator network to obtain the fault sample expansion completion data set; the specific operation is as follows:
S331, randomly sampling L seeds from the uniform distribution and generating L false fault data samples with the trained generator network, where L may be set to 1000;
S332, randomly extracting W real fault samples from the fault sample expansion data set and 2W normal samples from the normal sample data set, and merging them to construct a training data set TR, where W may be set to 100;
S333, respectively constructing 5 traditional classifier models, including an SVM model, a decision tree model, a naive Bayes model, a linear model and a 3-layer neural network model;
S334, training the 5 traditional classifier models with the training data set TR;
S335, screening the 1000 false fault data samples generated by the generator network in step S331 with the 5 trained traditional classifier models and rejecting the low-quality false fault data samples;
S336, merging the high-quality false fault data samples that remain after the low-quality ones are rejected into each fault sample expansion data set to obtain the fault sample expansion completion data set;
S4, constructing multi-layer neural networks: for each fault sample expansion completion data set, normal samples equal in size to the fault samples of that data set are sampled from the normal sample data set DG with the same equidistant sampling method, the two are merged to construct a training data set TR, and a multi-layer neural network model is trained with TR as the electromechanical equipment health state prediction model; the specifics are as follows:
constructing a multi-layer neural network model:
the multi-layer neural network model consists of 2 hidden layers and an output layer;
the 1 st hidden layer of the multi-layer neural network model comprises 128 neurons, and the calculation process is as follows:
O1=ReLU(w1●x+b1);
where x is the input of the multi-layer neural network model, from the training dataset TR; w1 is a weight matrix, b1 is a bias; o1 is a 128-dimensional vector;
the 2 nd hidden layer of the multi-layer neural network model contains 64 neurons, and the calculation process is as follows:
O2=ReLU(w2●O1+b2);
wherein O1 is the output vector of the 1 st hidden layer, w2 is the weight matrix, and b2 is the bias; o2 is a 64-dimensional vector;
the output layer of the multi-layer neural network model comprises the following calculation processes:
y=sigmoid(w3●O2+b3);
wherein O2 is the output vector of the 2 nd hidden layer, w3 is the weight matrix, and b3 is the bias; y is a single probability output value;
training a multi-layer neural network model based on a traditional BP algorithm by utilizing a training data set TR to obtain a neural network prediction model of the health state of the electromechanical equipment;
S5, collecting working condition parameter data of the electromechanical equipment in real time, feeding the data to the trained electromechanical equipment health state prediction models, and predicting and judging the equipment health state; the specific rules are as follows:
extracting the corresponding subsets DN0, DN1, DN2, …, DNm from the collected working condition data set DN with the different-interval-span sampling method of step S2;
predicting the input data DN0, DN1, DN2, …, DNm with the corresponding trained multi-layer neural network models, respectively;
and taking a majority vote over the prediction results as the final prediction result of the electromechanical equipment health state.
2. The electromechanical equipment state data complement and prediction method based on multi-scale sampling according to claim 1, characterized in that: in step S1, the working condition parameters include current, voltage, speed, vibration and temperature data during operation of the electromechanical equipment.
3. The electromechanical equipment state data complement and prediction method based on multi-scale sampling according to claim 1, characterized in that: in step S31, the constructed generative adversarial network model consists of a generator and a discriminator; both are multi-layer networks, wherein the generator is responsible for generating data with the same dimension as the real data, and the discriminator is responsible for distinguishing the real data from the generated data.
4. The electromechanical equipment state data complement and prediction method based on multi-scale sampling according to claim 1, characterized in that: in step S335, the rejection rule is: if a false fault data sample is classified as a normal sample by 3 or more of the models, it is considered a low-quality sample and must be rejected.
CN202010587623.9A 2020-06-24 2020-06-24 Electromechanical equipment state data complement and prediction method based on multi-scale sampling Active CN111738420B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010587623.9A CN111738420B (en) 2020-06-24 2020-06-24 Electromechanical equipment state data complement and prediction method based on multi-scale sampling


Publications (2)

Publication Number Publication Date
CN111738420A CN111738420A (en) 2020-10-02
CN111738420B true CN111738420B (en) 2023-06-06

Family

ID=72650929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010587623.9A Active CN111738420B (en) 2020-06-24 2020-06-24 Electromechanical equipment state data complement and prediction method based on multi-scale sampling

Country Status (1)

Country Link
CN (1) CN111738420B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381316B (en) * 2020-11-26 2022-11-25 华侨大学 Electromechanical equipment health state prediction method based on hybrid neural network model
CN112668651A (en) * 2020-12-30 2021-04-16 中国人民解放军空军预警学院 Flight fault prediction method and device based on flight data and generative type antagonistic neural network
CN113239022B (en) * 2021-04-19 2023-04-07 浙江大学 Method and device for complementing missing data in medical diagnosis, electronic device and medium
CN113947468B (en) * 2021-12-20 2022-04-08 鲁信科技股份有限公司 Data management method and platform
CN115310562B (en) * 2022-10-08 2022-12-30 深圳先进技术研究院 Fault prediction model generation method suitable for energy storage equipment in extreme state
CN117935066B (en) * 2024-03-25 2024-05-28 中国海洋大学 Sea surface temperature complement method and system based on parallel multi-scale constraint

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019033636A1 (en) * 2017-08-16 2019-02-21 哈尔滨工业大学深圳研究生院 Method of using minimized-loss learning to classify imbalanced samples
CN110942099A (en) * 2019-11-29 2020-03-31 华侨大学 Abnormal data identification and detection method of DBSCAN based on core point reservation
CN111259808A (en) * 2020-01-17 2020-06-09 北京工业大学 Detection and identification method of traffic identification based on improved SSD algorithm

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105550744A (en) * 2015-12-06 2016-05-04 北京工业大学 Nerve network clustering method based on iteration


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Power load data recovery based on multi-scale time series modeling and estimation; Zhang Shuai et al.; Transactions of China Electrotechnical Society; full text *
Electricity theft detection data generation method based on generative adversarial networks; Wang Dewen; Yang Kaihua; Power System Technology (No. 02); full text *

Also Published As

Publication number Publication date
CN111738420A (en) 2020-10-02


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant