CN113033079A - Chemical fault diagnosis method based on unbalanced correction convolutional neural network - Google Patents

Chemical fault diagnosis method based on unbalanced correction convolutional neural network

Info

Publication number
CN113033079A
CN113033079A (application CN202110248735.6A)
Authority
CN
China
Prior art keywords
sample
fault
samples
new
failure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110248735.6A
Other languages
Chinese (zh)
Other versions
CN113033079B (en)
Inventor
辜小花
卢飞
杨光
唐德东
杨利平
李家庆
李太福
李芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongnan University Of Economics And Law
Chongqing Youyite Intelligent Technology Co Ltd
Chongqing University of Science and Technology
Original Assignee
Zhongnan University Of Economics And Law
Chongqing Youyite Intelligent Technology Co Ltd
Chongqing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongnan University Of Economics And Law, Chongqing Youyite Intelligent Technology Co Ltd, Chongqing University of Science and Technology filed Critical Zhongnan University Of Economics And Law
Priority to CN202110248735.6A priority Critical patent/CN113033079B/en
Publication of CN113033079A publication Critical patent/CN113033079A/en
Application granted granted Critical
Publication of CN113033079B publication Critical patent/CN113033079B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 Computer-aided design [CAD]
    • G06F30/20 Design optimisation, verification or simulation
    • G06F30/27 Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213 Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F2119/00 Details relating to the type or aim of the analysis or the optimisation
    • G06F2119/08 Thermal analysis or thermal optimisation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Software Systems (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Geometry (AREA)
  • Computer Hardware Design (AREA)
  • Medical Informatics (AREA)
  • Complex Calculations (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a chemical fault diagnosis method based on an imbalance-correction convolutional neural network, comprising the following steps. S1: preprocess the TE process data; S2: synthesize samples; S3: reduce the dimensionality of the data; S4: construct the CNN incremental learning network. The advantages of the invention are that the proposed II-CNN framework can synthesize samples for imbalanced data and accounts for the importance of boundary samples, making the synthesized samples more representative; on this basis, the data are reduced in dimensionality, which simplifies the complex learning process; finally, incremental learning is adopted to update the structure and parameters of the CNN network when new fault types arrive. The method is superior to existing static-model methods and shows clear robustness and reliability in chemical fault diagnosis.

Description

Chemical fault diagnosis method based on unbalanced correction convolutional neural network
Technical Field
The invention belongs to the field of chemical engineering, and particularly relates to an imbalance correction convolutional neural network incremental learning method for chemical engineering fault diagnosis.
Background
Chemical process fault diagnosis is one of the most important procedures in a process control system and is essential for ensuring the successful operation of the chemical process and improving its safety. A fault diagnosis model aims to detect abnormal states in the production process, find the root cause of a fault, assist in making reliable decisions, and eliminate system faults. Based on data collected from many sensors, the fault diagnosis model can convert historical data into process information and judge whether a fault has occurred, thereby guaranteeing the safety, efficiency and economy of a complex chemical process.
A great deal of research has been carried out on intelligent fault diagnosis methods based on machine learning and deep learning. However, most of these methods have the following drawbacks: 1) they assume that the data samples under different failure modes are balanced, but this assumption does not always hold in real chemical processes; data imbalance can prevent the classifier from learning complete class knowledge and reduces its classification accuracy, because imbalanced data cause the classifier to pay less attention to minority faults; 2) as production progresses in an actual industrial process, one or several new fault types may appear, and when new fault categories arrive these models all require a complete retraining process.
Therefore, it is necessary to provide a new and effective fault diagnosis framework for the problems of imbalanced data samples and model updating in complex chemical processes.
Disclosure of Invention
The invention aims to provide a fault diagnosis framework based on an imbalance-correction convolutional neural network for chemical fault diagnosis, so that multiple methods are fully utilized, the influence of imbalanced data samples is reduced, the network structure and parameters can be updated automatically, and the robustness of the fault diagnosis model is improved.
In order to achieve the above object, the present invention provides a chemical fault diagnosis method based on an imbalance-correction convolutional neural network, comprising the following steps:
S1: TE process data preprocessing, namely performing discrete-value and standardization processing on the data;
S2: generating and extracting information from the imbalanced data;
S3: performing data dimensionality reduction and extracting the key characteristic variables of the faults;
S4: constructing the CNN incremental learning network.
Further, the step S1 includes,
the normalization of the TE process data sample set X is calculated with the following formula:

$$\hat{x}_{ik} = \frac{x_{ik} - x_{i,\min}}{x_{i,\max} - x_{i,\min}}$$

where $x_{ik}$ is the kth sample value of the ith input variable before normalization, M denotes the number of input variables, N denotes the number of training samples, and $\hat{x}_{ik}$ is the kth sample value of the ith input variable after normalization;
$x_{i,\min} = \min\{x_{ik} \mid 1 \le k \le N\}$;
$x_{i,\max} = \max\{x_{ik} \mid 1 \le k \le N\}$.
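For illustration, a minimal NumPy sketch of this min-max normalization follows; the array shape convention (N samples by M variables) matches the notation above, and the example values are hypothetical.

```python
import numpy as np

def min_max_normalize(X):
    """Min-max normalize each input variable (column) of X to [0, 1].

    X is assumed to have shape (N, M): N training samples and M input
    variables, matching the notation of the formula above.
    """
    x_min = X.min(axis=0)                    # x_{i,min} per input variable
    x_max = X.max(axis=0)                    # x_{i,max} per input variable
    span = np.where(x_max > x_min, x_max - x_min, 1.0)  # guard constant columns
    return (X - x_min) / span

# Three samples of two hypothetical input variables.
X = np.array([[1.0, 10.0], [2.0, 30.0], [3.0, 20.0]])
print(min_max_normalize(X))
```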
further, the step S2 includes,
inputting: d represents an original sample set, k represents the number of nearest neighbor samples, and n represents the number of samples in D;
and (3) outputting: t represents a few failure mode data sets;
s21 creates a minority data set T for each minority fault type ii
S22 calculating each few sample x in DiWith each sample yjEuclidean distance between:
Figure BDA0002965089360000031
i and j respectively represent sample serial numbers;
s23 obtaining xiK neighbor set of (1);
s24 is provided with k'i(0≤k′iK) samples are less than or equal to the majority of failure modes;
s25 if k/2 is not more than k'iK is ≦ k, then xiIs an edge sample;
S26{x′1,x′2,...,x′mthe method comprises the following steps of (1) taking a boundary sample set as a start point, and taking m as the number of boundary samples;
s27 assigns a weight w to each boundary sampleiThe weight determines the frequency of application of the boundary samples in the data generation process, the weight wiThe calculation formula of (2):
Figure BDA0002965089360000032
wherein z isjNearest neighbor samples of most failure modes for x;
s28 is based on formula xnewSynthetic samples were generated with x '+ α × (x' -x), α being [0-1 ×]A random number within a range;
s29, combining the synthesized sample with the original sample to form a new few failure mode data set T';
s210, using a Tomek link to complete undersampling, and deleting partial fault samples in a Tomek link pair;
s211 results in a new few failure mode data sets T.
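A simplified Python sketch of steps S21-S211 follows, assuming binary labels (1 = minority fault mode, 0 = majority). Because the boundary-weight formula is given only as an image in the original, boundary samples are drawn uniformly here; the helper name and the synthetic data are illustrative assumptions.

```python
import numpy as np

def synthesize_minority(X, y, k=5, n_new=100, seed=0):
    """Boundary-based oversampling (S22-S29) plus Tomek-link undersampling
    (S210). Labels: 1 = minority fault mode, 0 = majority mode."""
    rng = np.random.default_rng(seed)
    minority = X[y == 1]
    # S22-S25: a minority sample is a boundary sample when at least half of
    # its k nearest neighbours (excluding itself) are majority samples.
    boundary = []
    for x in minority:
        d = np.linalg.norm(X - x, axis=1)
        nn = np.argsort(d)[1:k + 1]
        k_maj = int(np.sum(y[nn] == 0))
        if k / 2 <= k_maj <= k:
            boundary.append(x)
    boundary = np.array(boundary)          # assumed non-empty for this sketch
    # S28: x_new = x' + alpha * (x' - x); boundary samples drawn uniformly,
    # x drawn from the minority set, alpha uniform in [0, 1].
    synth = []
    for _ in range(n_new):
        xp = boundary[rng.integers(len(boundary))]
        x = minority[rng.integers(len(minority))]
        synth.append(xp + rng.random() * (xp - x))
    X_new = np.vstack([X, np.array(synth)])
    y_new = np.concatenate([y, np.ones(n_new, dtype=y.dtype)])
    # S210: delete the majority member of each Tomek-link pair (mutual
    # nearest neighbours with opposite labels).
    keep = np.ones(len(X_new), dtype=bool)
    for i in range(len(X_new)):
        d = np.linalg.norm(X_new - X_new[i], axis=1)
        d[i] = np.inf
        j = int(np.argmin(d))
        d2 = np.linalg.norm(X_new - X_new[j], axis=1)
        d2[j] = np.inf
        if int(np.argmin(d2)) == i and y_new[i] != y_new[j]:
            keep[j if y_new[j] == 0 else i] = False
    return X_new[keep], y_new[keep]

# Hypothetical usage on synthetic two-class data.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (200, 4)), rng.normal(1.5, 1.0, (20, 4))])
y = np.array([0] * 200 + [1] * 20)
X_bal, y_bal = synthesize_minority(X, y)
```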
Further, the step S3 includes,
Input: $T = \{(x_i, y_i)\}_{i=1}^{N}$ is the training data set, Ite is the number of iterations, θ is the allowable error, δ(0) is the initial learning rate, and N denotes the number of training samples.
Output: w, the feature weight vector.
S31: initialize the weight vector $w(0) = (1/I, \dots, 1/I)$, where I denotes the dimension of the samples;
S32: randomly select a sample x;
S33: set e(t−1) = 0 and δ(t) = δ(t−1)/t, where t denotes the iteration number;
S34: calculate $\alpha_i$ and $\beta_i$ with the following formula:

$$\alpha_i = \arg\min_{\alpha}\ \|W(x_i - H\alpha)\|_2^2 + \lambda\|\alpha\|_2^2$$

where H is the sample matrix and λ is the regularization factor ($\beta_i$ is obtained in the same way from the opposite-class neighbors);
S35: update e(t−1) with the following formula:
$e(t-1) = e(t-1) + (|x - x_{LH(NH)}| - |x - x_{LH(NM)}|)$;
S36: for i = 1:N, loop step S34 and step S35 until they have been executed N times;
S37: update e(t−1) with the following formula: $e(t-1) = e(t-1)/N$;
S38: calculate z(t−1) with the formula $z(t-1) = w(t-1) \odot e(t-1)$;
S39: update w(t) by gradient ascent (the update formula is given as an equation image in the original);
S310: judge whether the stopping condition is satisfied: if it is, proceed to the next step; if not, loop steps S32 to S39;
S312: for i = 1:N, loop steps S32 to S310 until they have been executed N times;
S313: obtain the weight vector w.
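The loop S31-S313 can be sketched as follows. As a simplification, the local-hyperplane points $x_{LH(NH)}$ and $x_{LH(NM)}$ are approximated here by the single nearest hit and miss neighbors (a RELIEF-style margin), so this is an illustrative sketch of the idea rather than the patent's exact algorithm.

```python
import numpy as np

def relief_like_weights(X, y, n_iter=20, delta0=0.5, seed=0):
    """Margin-based feature weighting in the spirit of S31-S313.

    Simplification: the local-hyperplane points x_LH(NH) and x_LH(NM) are
    approximated by the single nearest hit/miss neighbours, giving a
    RELIEF-style per-feature margin |x - miss| - |x - hit|.
    """
    rng = np.random.default_rng(seed)
    N, I = X.shape
    w = np.full(I, 1.0 / I)                  # S31: uniform initial weights
    for t in range(1, n_iter + 1):
        delta = delta0 / t                   # S33: decaying learning rate
        e = np.zeros(I)
        for _ in range(N):                   # S36: accumulate over N draws
            i = rng.integers(N)
            x = X[i]
            d = np.linalg.norm((X - x) * w, axis=1)
            d[i] = np.inf                    # exclude the sample itself
            hit = int(np.argmin(np.where(y == y[i], d, np.inf)))
            miss = int(np.argmin(np.where(y != y[i], d, np.inf)))
            e += np.abs(x - X[miss]) - np.abs(x - X[hit])
        e /= N                               # S37: average expected margin
        z = w * e                            # S38: z(t-1) = w ⊙ e(t-1)
        w = np.maximum(w + delta * z, 0.0)   # S39: gradient-ascent step
        w /= max(np.linalg.norm(w), 1e-12)   # keep ||w||_2 = 1
    return w

# Feature 0 separates the classes, so its weight should dominate.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 5))
X[:30, 0] += 2.0
y = np.array([0] * 30 + [1] * 30)
print(relief_like_weights(X, y))
```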
Further, the step S4 includes,
Input: x denotes the new sample, N denotes the number of training samples, and T denotes the matching-degree threshold.
Output: $W_1$ and $W_2$, the first and second matching degrees;
S41: calculate the matching degree between x and $s_i$ with the following formula:

$$W_{s} = \frac{1}{m}\sum_{i=1}^{m}\frac{\min(f(x_i), f(s_i))}{\max(f(x_i), f(s_i))}$$

S42: for i = 1:N, loop step S41 until it has been executed N times;
S43: obtain $W_1$ and $W_2$;
S44: if $(W_1 > W_2 > T) \,||\, (W_1 > T > W_2)$, then x and $s_1$ belong to the same category, and x is added to the training data set;
S45: if $(T > W_1 > W_2)$, then x is a new sample belonging to a new class: x is added to the training data set of the new category; a new layer is added to the trained CNN; the new parameters are randomly initialized; and the new layer is trained gradually.
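A compact sketch of S41-S45 follows; it assumes features normalized to [0, 1] as in step S1, and the new-class branch only signals that the CNN should be grown (the layer-adding step itself is described in the detailed description).

```python
import numpy as np

def matching_degree(x, s):
    """W = (1/m) * sum_i min(f(x_i), f(s_i)) / max(f(x_i), f(s_i)).
    Assumes features normalized to [0, 1] (step S1), so values are >= 0."""
    den = np.maximum(x, s)
    den = np.where(den == 0, 1.0, den)       # identical zero features count as 1
    return float(np.mean(np.minimum(x, s) / den))

def classify_or_grow(x, X_train, y_train, T=0.8):
    """S41-S45. Returns the matched class label, or None when x founds a
    new class (the CNN layer-adding step would then be triggered)."""
    W = np.array([matching_degree(x, s) for s in X_train])
    order = np.argsort(W)[::-1]
    W1, W2 = W[order[0]], W[order[1]]        # best and second matching degree
    if (W1 > W2 > T) or (W1 > T > W2):
        return int(y_train[order[0]])        # same class as best match s1
    if T > W1 > W2:
        return None                          # new class: grow the CNN
    return int(y_train[order[0]])            # fallback for the remaining case

X_train = np.array([[0.9, 0.1], [0.2, 0.8]])
y_train = np.array([0, 1])
print(classify_or_grow(np.array([0.85, 0.15]), X_train, y_train, T=0.6))  # -> 0
```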
Further, in the step S1,
the TE process has 5 main units, comprising a chemical reactor, a recycle compressor, a condenser, a stripper and a vapor/liquid separator; the TE simulator generates 22 different types of state data, comprising 21 standard fault types and the normal state;
the 21 fault state types of the TE process are as follows:
Fault 1: A/C feed ratio (component B constant);
Fault 2: component B (A/C ratio constant);
Fault 3: D feed temperature;
Fault 4: reactor cooling water inlet temperature;
Fault 5: condenser cooling water inlet temperature;
Fault 6: A feed loss;
Fault 7: C header pressure loss;
Fault 8: A, B, C feed composition;
Fault 9: D feed temperature;
Fault 10: C feed temperature;
Fault 11: reactor cooling water inlet temperature;
Fault 12: condenser cooling water inlet temperature;
Fault 13: reaction kinetics;
Fault 14: reactor cooling water valve;
Fault 15: condenser cooling water valve;
Faults 16-20: unknown type;
Fault 21: the valve in stream 4.
Here A, C and D denote three different gaseous reactants and B denotes an inert component; the reactants and the inert component are fed into the reactor during the TE process. Stream 4 refers to the valve position.
Further, the step S3 includes,
given training sample set
$T=\{(x_i, y_i)\}_{i=1}^{N}$, where $x_i \in R^I$ and $y_i \in Y = \{-1, +1\}$ is the class label of $x_i$; I and N are the dimension and the number of training samples, respectively, and $R^I$ represents the sample space; in a local hyperplane, $x_i$ is represented by its k nearest-neighbor samples;
let the local-hyperplane representation of $x_i$ be $WH\alpha$, where $H \in R^{I \times k}$ is the sample matrix whose columns are the k nearest-neighbor samples of $x_i$, W is a diagonal matrix whose diagonal element $w_{ii}$ represents the weight of the ith feature, and $\alpha \in R^{k}$ is the vector whose elements are the reconstruction coefficients of the nearest-neighbor samples; the optimization problem is expressed as:

[objective function given as an equation image in the original]

s.t. $\|w\|_2 = 1$, $w_j \ge 0$, $j = 1, \dots, I$

where $H_i \in R^{I \times k}$ is the matrix of the k same-class (hit) nearest neighbors of $x_i$, $M_i \in R^{I \times k}$ is the matrix of the k opposite-class (miss) nearest neighbors of $x_i$, and $\alpha_i$ and $\beta_i$ are the reconstruction coefficients of the nearest samples, from the same class ($H_i$) and from the opposite class ($M_i$), respectively; w represents the weight margin vector.
w(t) represents the weight of the weighted feature space at the tth iteration and z(t) represents the expected margin vector at the tth iteration; the objective function is:

[objective function given as an equation image in the original]

s.t. $z(t-1) = w(t-1) \odot e(t-1)$,
$\|w(t)\|_2 = 1$, $w_j(t) \ge 0$, $j = 1, \dots, I$, $t = 1, \dots, Ite$

where e(t) is the expected margin vector of the original space at the tth iteration and Ite is the maximum number of iterations; $x_{LH(NH)}$ and $x_{LH(NM)}$ denote the points on the local hyperplanes of the same-class (hit) and opposite-class (miss) nearest neighbors that represent a given sample $x_i$, and e(t) is obtained from the per-feature distances of the sample to these two points (the formula is given as an equation image in the original).
The final weight vector is obtained by maximizing the margin between a given sample and the local hyperplane. Therefore, $x_{LH(NH)}$ and $x_{LH(NM)}$ can be expressed as:

$$x_{LH(NH)} = W H_i \alpha_i, \qquad x_{LH(NM)} = W M_i \beta_i$$

where $\alpha_i \in R^{k}$ is the reconstruction coefficient vector of the same-class neighbors and $\beta_i \in R^{k}$ is the reconstruction coefficient vector of the opposite-class neighbors; $\alpha_i$ and $\beta_i$ are obtained by solving the optimization problem:

$$\alpha_i = \arg\min_{\alpha}\ \|W(x_i - H_i\alpha)\|_2^2 + \lambda\|\alpha\|_2^2, \qquad \beta_i = \arg\min_{\beta}\ \|W(x_i - M_i\beta)\|_2^2 + \lambda\|\beta\|_2^2$$

where $\|\cdot\|_2$ is the 2-norm and λ is the regularization factor; if t = 0, the feature weights are initialized as $w_j(0) = 1/I$, $j = 1, \dots, I$; at the (t−1)th iteration, $\alpha_i$ and $\beta_i$ of each sample are obtained (i = 1, ..., N), and then the feature weight factor w(t) is updated; based on the gradient-ascent method, w(t) is updated by:

[update formula given as an equation image in the original]

where δ is the learning rate, $\delta(t) = \delta/t$, $t = 1, 2, \dots, Ite$, $0 < \delta(t) \le 1$, and the gradient is calculated as:

[gradient formula given as an equation image in the original]

given a training sample set $T=\{(x_i, y_i)\}_{i=1}^{N}$, where $x_i \in R^I$ and $y_i \in Y = \{1, 2, \dots, C\}$ is the class label of $x_i$, I is the dimension of the training samples, N is the number of training samples and C represents the number of classes, the expected margin vector e(t−1) of the (t−1)th iteration is defined as:

[formula given as an equation image in the original]

where P(c) is the prior probability of class c, and $x_{LH,c}(NH)$ and $x_{LH,c}(NM)$ are the hit and miss reconstruction points of $x_i$ in class c.
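For illustration, the regularized reconstruction above has a closed-form solution; the following NumPy sketch solves for $\alpha$ under the stated assumptions (W diagonal, λ > 0). The variable names mirror the notation above and the example values are hypothetical.

```python
import numpy as np

def reconstruction_coeffs(x, H, w, lam=0.1):
    """alpha = argmin ||W(x - H a)||^2 + lam * ||a||^2, solved in closed form.

    x: (I,) sample; H: (I, k) columns are the k nearest neighbours;
    w: (I,) feature weights (diagonal of W); lam: regularization factor.
    """
    WH = H * w[:, None]                          # W H
    Wx = x * w                                   # W x
    k = H.shape[1]
    A = WH.T @ WH + lam * np.eye(k)              # normal equations
    return np.linalg.solve(A, WH.T @ Wx)

# Hypothetical 3-D sample reconstructed from 2 neighbours.
x = np.array([1.0, 0.5, 0.2])
H = np.array([[1.0, 0.8], [0.4, 0.6], [0.1, 0.3]])
w = np.array([1.0, 0.7, 0.5])
alpha = reconstruction_coeffs(x, H, w)
x_lh = w * (H @ alpha)          # point on the local hyperplane, x_LH = W H alpha
```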
Further, the step S4 includes,
the matching degree is used to measure the similarity between two samples. Let the best matching degree between the new sample x and the best-matching sample $s_1$ be $W_{s_1}$, and let the second matching degree between the new sample x and the second-matching sample $s_2$ be $W_{s_2}$. The matching degree is defined as:

$$W_{s} = \frac{1}{m}\sum_{i=1}^{m}\frac{\min(f(x_i), f(s_i))}{\max(f(x_i), f(s_i))}$$

where m is the feature dimension, and $f(x_i)$ and $f(s_i)$ are the ith features of sample x and sample s; $\min(f(x_i), f(s_i))$ and $\max(f(x_i), f(s_i))$ are the minimum and the maximum of $f(x_i)$ and $f(s_i)$, respectively. $W_s$ represents the similarity between x and s; its value lies between 0 and 1, and the closer the matching degree is to 1, the higher the similarity between the two samples.
After $W_{s_1}$ and $W_{s_2}$ are obtained, their values are compared with the matching-degree threshold T: if $W_1 > W_2 > T$ or $W_1 > T > W_2$, the new sample x and the best-matching sample $s_1$ belong to the same class; if $T > W_1$, then x belongs to a new class and becomes an initial sample of the new class, realizing inter-class incremental learning.
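As a worked example with hypothetical feature values: for $x = (0.5, 0.8)$, $s = (1.0, 0.8)$ and $m = 2$,

$$W_s = \frac{1}{2}\left(\frac{\min(0.5, 1.0)}{\max(0.5, 1.0)} + \frac{\min(0.8, 0.8)}{\max(0.8, 0.8)}\right) = \frac{1}{2}(0.5 + 1) = 0.75,$$

so with a threshold T = 0.8, a new sample whose best matching degree is 0.75 would found a new class.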
The method has the advantage that the importance of boundary samples is considered, making the synthesized samples more representative; on this basis, the data are reduced in dimensionality, which simplifies the complex learning process; finally, incremental learning is adopted to update the structure and parameters of the CNN network when new fault types arrive. The method is superior to existing static-model methods and shows clear robustness and reliability in chemical fault diagnosis.
Drawings
FIG. 1 shows a TE process block diagram;
FIG. 2 is a flow chart of a framework for chemical engineering fault diagnosis based on an imbalance correction convolutional neural network according to an embodiment of the present invention;
FIG. 3 shows a proposed II-CNN framework diagram of the present invention;
FIG. 4 is a frame diagram of the data dimension reduction algorithm proposed by the present invention;
FIG. 5 illustrates a framework diagram of the incremental hierarchical model proposed by the present invention;
FIG. 6 shows the results of the method of the invention on two minority fault types; graphs (a) and (b) are both sensitivity index curves;
FIG. 7 shows the results for fault 8 at each iteration using the method of the invention; graph (a) is the sensitivity index curve and graph (b) the g-mean curve;
FIG. 8 shows the results for fault 13 at each iteration using the method of the invention; graph (a) is the sensitivity index curve and graph (b) the g-mean curve;
FIG. 9 shows accuracy plots of experiments with 7 different methods based on the method of the invention: graph (a) compares the results for different numbers of samples per fault; graph (b) compares the results for different numbers of fault types.
Detailed Description
As shown in fig. 1, the present invention provides a chemical fault diagnosis method based on an imbalance-correction convolutional neural network, comprising the following steps:
S1: TE process data preprocessing, namely performing discrete-value and standardization processing on the data;
S2: after data preprocessing, generating and extracting valuable information from the imbalanced data;
S3: after the synthetic samples are obtained, performing data dimensionality reduction and extracting the key characteristic variables of the faults;
S4: constructing the CNN incremental learning network.
further, the step S1 includes,
the normalization of the TE process data sample set X is calculated according to the following formula:

$$\hat{x}_{ik} = \frac{x_{ik} - x_{i,\min}}{x_{i,\max} - x_{i,\min}}$$

where $x_{ik}$ is the kth sample value of the ith input variable before normalization and $\hat{x}_{ik}$ is the kth sample value of the ith input variable after normalization;
$x_{i,\min} = \min\{x_{ik} \mid 1 \le k \le N\}$;
$x_{i,\max} = \max\{x_{ik} \mid 1 \le k \le N\}$.
further, the step S2 includes,
inputting: d represents an original sample set, k represents the number of nearest neighbor samples, and n represents the number of samples in D;
and (3) outputting: t represents a few failure mode data sets;
s21 creates a minority fault for each minority fault type iData set Ti
S22 calculating each few sample x in DiWith each sample yjEuclidean distance between:
Figure BDA0002965089360000103
s23 obtaining xiK neighbor set of (1);
s24 is k'i(0≤k′iK) samples are less than or equal to the majority of failure modes;
s25 if k/2 is not more than k'iK is ≦ k, then xiIs an edge sample;
S26{x′1,x′2,...,x′mthe method comprises the following steps of (1) taking a boundary sample set as a start point, and taking m as the number of boundary samples;
s27 assigns a weight w to each boundary samplei. The weights determine the frequency of application of the boundary samples in the data generation process. w is aiThe calculation formula of (2):
Figure BDA0002965089360000111
wherein z isjNearest neighbor samples of most failure modes for x;
s28 generating synthetic sample by SMOTE according to formula xnewX '+ α × (x' -x) (α is [0-1 ]]Random numbers within a range);
s29, combining the synthesized sample with the original sample to form a new few failure mode data set T';
s210, using a Tomek link to complete undersampling, and deleting most fault samples in a Tomek link pair;
s211, obtaining a new few fault mode data set T;
further, the step S3 includes,
Input: $T = \{(x_i, y_i)\}_{i=1}^{N}$ is the training data set, Ite is the number of iterations, θ is the allowable error, and δ(0) is the initial learning rate.
Output: w, the feature weight vector.
S31: initialize the weight vector $w(0) = (1/I, \dots, 1/I)$, where I denotes the dimension of the samples;
S32: randomly select a sample x;
S33: set e(t−1) = 0 and δ(t) = δ(t−1)/t;
S34: calculate $\alpha_i$ and $\beta_i$ with the formula:

$$\alpha_i = \arg\min_{\alpha}\ \|W(x_i - H\alpha)\|_2^2 + \lambda\|\alpha\|_2^2$$

($\beta_i$ is obtained in the same way from the opposite-class neighbors);
S35: calculate e(t−1):
$e(t-1) = e(t-1) + (|x - x_{LH(NH)}| - |x - x_{LH(NM)}|)$;
S36: for i = 1:N, loop step S34 and step S35 until they have been executed N times;
S37: calculate the average value of e(t−1): $e(t-1) = e(t-1)/N$;
S38: calculate z(t−1) with the formula $z(t-1) = w(t-1) \odot e(t-1)$;
S39: update w(t) by gradient ascent (the update formula is given as an equation image in the original);
S310: judge whether the stopping condition is satisfied: if it is, proceed to the next step; if not, loop steps S32 to S39;
S312: for i = 1:N, loop steps S32 to S310 until they have been executed N times;
S313: obtain the weight vector w.
Further, the step S4 includes,
Input: x denotes the new sample, N denotes the number of training samples, and T denotes the matching-degree threshold.
Output: $W_1$ and $W_2$, the first and second matching degrees;
S41: calculate the matching degree between x and $s_i$:

$$W_{s} = \frac{1}{m}\sum_{i=1}^{m}\frac{\min(f(x_i), f(s_i))}{\max(f(x_i), f(s_i))}$$

S42: for i = 1:N, loop step S41 until it has been executed N times;
S43: obtain $W_1$ and $W_2$;
S44: if $(W_1 > W_2 > T) \,||\, (W_1 > T > W_2)$, then x and $s_1$ belong to the same category, and x is added to the training data set;
S45: if $(T > W_1 > W_2)$, then x is a new sample belonging to a new class: x is added to the training data set of the new category; a new layer is added to the trained CNN; the new parameters are randomly initialized; and the new layer is trained gradually.
Further, the step S1 includes,
the TE process has 5 main units, comprising a chemical reactor, a recycle compressor, a condenser, a stripper and a vapor/liquid separator; the variables of the TE process include 12 inputs and 41 outputs; the TE simulator generates 22 different types of state data, comprising 21 standard fault types and the normal state;
the 21 fault state types of the TE process are as follows:
Fault 1: A/C feed ratio (component B constant);
Fault 2: component B (A/C ratio constant);
Fault 3: D feed temperature;
Fault 4: reactor cooling water inlet temperature;
Fault 5: condenser cooling water inlet temperature;
Fault 6: A feed loss;
Fault 7: C header pressure loss;
Fault 8: A, B, C feed composition;
Fault 9: D feed temperature;
Fault 10: C feed temperature;
Fault 11: reactor cooling water inlet temperature;
Fault 12: condenser cooling water inlet temperature;
Fault 13: reaction kinetics;
Fault 14: reactor cooling water valve;
Fault 15: condenser cooling water valve;
Faults 16-20: unknown type;
Fault 21: the valve in stream 4.
Here A, C and D denote three different gaseous reactants and B denotes an inert component; the reactants and the inert component are fed into the reactor during the TE process. Stream 4 refers to the valve position.
Further, the step S3 includes,
given training sample set
$T=\{(x_i, y_i)\}_{i=1}^{N}$, where $x_i \in R^I$ and $y_i \in Y = \{-1, +1\}$ is the class label of $x_i$; I and N are the dimension and the number of training samples, respectively. In a local hyperplane, $x_i$ can be expressed by its k nearest-neighbor samples.
Each feature is assigned an appropriate weight: the greater the weight, the more important the feature. Each feature's weight is assigned by maximizing the expected margin. Let the local-hyperplane representation of $x_i$ be $WH\alpha$, where $H \in R^{I \times k}$ is the sample matrix whose columns are the k nearest-neighbor samples of $x_i$, W is a diagonal matrix whose diagonal element $w_{ii}$ represents the weight of the ith feature, and $\alpha \in R^{k}$ is the vector whose elements are the reconstruction coefficients of the nearest-neighbor samples. The optimization problem is expressed as:

[objective function given as an equation image in the original]

s.t. $\|w\|_2 = 1$, $w_j \ge 0$, $j = 1, \dots, I$

where $H_i \in R^{I \times k}$ is the matrix of the k same-class (hit) nearest neighbors of $x_i$, $M_i \in R^{I \times k}$ is the matrix of the k opposite-class (miss) nearest neighbors of $x_i$, and $\alpha_i$ and $\beta_i$ are the reconstruction coefficients of the nearest samples, from the same class ($H_i$) and from the opposite class ($M_i$), respectively; w represents the weight margin vector.
w(t) and z(t) represent the weight of the weighted feature space and the expected margin vector at the tth iteration, respectively. The objective function is:

[objective function given as an equation image in the original]

s.t. $z(t-1) = w(t-1) \odot e(t-1)$,
$\|w(t)\|_2 = 1$, $w_j(t) \ge 0$, $j = 1, \dots, I$, $t = 1, \dots, Ite$

where e(t) is the expected margin vector of the original space at the tth iteration and Ite is the maximum number of iterations. $x_{LH(NH)}$ and $x_{LH(NM)}$ denote the points on the local hyperplanes of the same-class (hit) and opposite-class (miss) nearest neighbors that represent a given sample $x_i$; e(t) is obtained from the per-feature distances of the sample to these two points (the formula is given as an equation image in the original). The final weight vector may be obtained by maximizing the margin between a given sample and the local hyperplane. Therefore, $x_{LH(NH)}$ and $x_{LH(NM)}$ can be expressed as:

$$x_{LH(NH)} = W H_i \alpha_i, \qquad x_{LH(NM)} = W M_i \beta_i$$

where $\alpha_i \in R^{k}$ and $\beta_i \in R^{k}$ are the reconstruction coefficient vectors of the same-class and opposite-class neighbors, respectively. $\alpha_i$ and $\beta_i$ are obtained by solving the optimization problem:

$$\alpha_i = \arg\min_{\alpha}\ \|W(x_i - H_i\alpha)\|_2^2 + \lambda\|\alpha\|_2^2, \qquad \beta_i = \arg\min_{\beta}\ \|W(x_i - M_i\beta)\|_2^2 + \lambda\|\beta\|_2^2$$

where $\|\cdot\|_2$ is the 2-norm and λ is the regularization factor. If t = 0, the feature weights are initialized as $w_j(0) = 1/I$, $j = 1, \dots, I$. At the (t−1)th iteration, $\alpha_i$ and $\beta_i$ of each sample are obtained (i = 1, ..., N); then the feature weight factor w(t) is updated. Based on the gradient-ascent method, w(t) can be updated by:

[update formula given as an equation image in the original]

where δ is the learning rate ($\delta(t) = \delta/t$, $t = 1, 2, \dots, Ite$, $0 < \delta(t) \le 1$), and the gradient is calculated as:

[gradient formula given as an equation image in the original]

Given a training sample set $T=\{(x_i, y_i)\}_{i=1}^{N}$, where $x_i \in R^I$ and $y_i \in Y = \{1, 2, \dots, C\}$ is the class label of $x_i$; I and N are the dimension and the number of training samples, respectively, and C represents the number of classes. The expected margin vector e(t−1) of the (t−1)th iteration is thus defined as:

[formula given as an equation image in the original]

where P(c) is the prior probability of class c, and $x_{LH,c}(NH)$ and $x_{LH,c}(NM)$ are the hit and miss reconstruction points of $x_i$ in class c.
Further, the step S4 includes,
the degree of match is used to measure the similarity between two samples. Setting a new sample x and a best matching sample s1The best matching degree between the two is Ws1Then the new sample x is matched with the second matched sample s2The second matching degree between is Ws2. The degree of matching is defined as:
Figure BDA0002965089360000164
wherein m is a characteristic dimension; f (x)i) And f(s)i) Is the ith feature of x and s, min (f (x)i),f(si) Max (f (x))i),f(si) Are each f (x)i) And f(s)i) Minimum and maximum values of. Due to the fact thatIn this way, the temperature of the molten steel is controlled,
Figure BDA0002965089360000165
representing the similarity of x and s.
Figure BDA0002965089360000166
The value is between 0 and 1, and the closer the matching degree is to 1, the higher the similarity between the two samples is.
In obtaining Ws1And Ws2Then, W is compareds1And Ws2And a matching degree threshold T. If W is1>W2> T or W1>T>W2New sample x and best matched sample s1Belong to the same category. If T > W1If x belongs to a new class, x becomes an initial sample of the new class, and inter-class incremental learning is realized.
To diagnose new faults, new classifications are automatically added to the existing network. The new layers inherit the topology and learned knowledge of the trained CNN, so the network can update itself to include new fault classes without a complete retraining process. These layers are not trained from scratch; instead they are trained step by step, copying the parameters of the old layers as initialization. Samples belonging to the new class are applied to the modified CNN, and the corresponding new layers are trained incrementally.
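The layer-growing step can be illustrated with a short PyTorch sketch; this is an interpretation of the prose above (copying old output weights as initialization and adding randomly initialized rows for the new class), not the patent's exact architecture.

```python
import torch
import torch.nn as nn

def grow_output_layer(old_fc: nn.Linear, n_new: int) -> nn.Linear:
    """Return an output layer extended by n_new classes.

    Rows for the old classes are copied from the trained layer, so the
    learned knowledge is inherited; only the rows for the new classes keep
    their random initialization, as the text above describes.
    """
    new_fc = nn.Linear(old_fc.in_features, old_fc.out_features + n_new)
    with torch.no_grad():
        new_fc.weight[:old_fc.out_features] = old_fc.weight
        new_fc.bias[:old_fc.out_features] = old_fc.bias
    return new_fc

# Example: extend a 10-class head to 11 classes for one new fault type;
# the new head is then trained gradually on samples of the new class.
old_head = nn.Linear(128, 10)
new_head = grow_output_layer(old_head, 1)
```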
The meaning of the English abbreviations used in the invention is as follows: II-CNN denotes the proposed incremental imbalance-correction convolutional neural network.
The method has the advantages that the importance of the boundary sample is considered, so that the synthesized sample is more representative; on the basis, the dimension reduction is carried out on the data, and the complex learning process is simplified; and finally, updating the structure and parameters of the CNN network by adopting incremental learning aiming at the arrival of a new fault type. The method is superior to the existing static model method, and has obvious robustness and reliability in chemical fault diagnosis.
Fault diagnosis is performed with the method of the invention, using TE process data as the experimental basis; the structure of the TE process is shown in FIG. 1.
(1) The TE simulator can generate 22 different types of state data, comprising 21 standard fault types and the normal state. All data sets are sampled in the basic mode of the TE process, with corresponding training and test data for each of the faults described above. To test the performance of the proposed method, the experiments are divided into two cases. The first case simulates an imbalanced data stream in a chemical process, with 6 fault types selected, in order to test the diagnostic performance of the proposed method on imbalanced fault data. The second case tests the incremental learning performance of the method: 10 fault types are selected initially and then increased to 15 fault types.
(2) Comparison with other methods
The fault types are preprocessed and their outputs are used as inputs to the CNN, sharing the same CNN structure, so that fault diagnosis performance can be compared. The DBN method in deep learning performs very well, so the DBN is used here for comparison with the invention. Several typical fault diagnosis methods (shallow models) are also compared with the invention, including the widely used back-propagation neural network (BPNN) and support vector machine (SVM). These comparisons demonstrate the fault diagnosis performance of deep learning methods, since the shallow methods lack the feature learning process of deep learning methods. Here, the SVM is the scikit-learn implementation with an RBF kernel, with the parameter γ set to 1/df, where df is the number of features of the original data. The BPNN has 5 layers (with 52, 42, 32, 22 and 10 neurons per layer, respectively). To obtain good BPNN diagnostic performance, the learning rate was set to 0.5.
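For reproducibility, a minimal sketch of these two baselines is given below; the synthetic stand-in data (shapes, class count) are assumptions for illustration, not the TE data used in the experiments.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# Stand-in data with a TE-like shape (52 variables, 10 classes); the real
# experiments use the preprocessed TE process samples instead.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 52))
y = rng.integers(0, 10, size=600)

df = X.shape[1]                            # number of features of the raw data
svm = SVC(kernel="rbf", gamma=1.0 / df)    # RBF-kernel SVM with gamma = 1/df
svm.fit(X, y)

# 5-layer BPNN with sizes 52-42-32-22-10: 52 is the input layer and the
# 10-unit output layer is fixed by the class count, so the hidden layers
# passed to scikit-learn are the middle three.
bpnn = MLPClassifier(hidden_layer_sizes=(42, 32, 22),
                     learning_rate_init=0.5, max_iter=300)
bpnn.fit(X, y)
```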
Example 1: diagnostic model experiment of unbalanced fault data
To evaluate the performance of the invention, 6 faults with a specific imbalance ratio were selected for training, where faults 8 and 13 are the minority fault types. As shown in FIG. 6, the advantage in diagnostic performance on minority failure modes can be seen: the invention provides a significant improvement in identifying minority faults, with performance higher than that of the other methods by about 6.7% and 2.9%, respectively. The invention thus proves advantageous in generating minority fault samples. It can also be seen from FIG. 6 that the invention is superior to the shallow models because it can effectively extract features from raw data and process imbalanced data in complex chemical processes. Owing to its deep architecture, the invention can effectively handle imbalanced chemical data, which involve numerous variables with highly nonlinear relationships.
FIGS. 7 and 8 show the performance of the invention in diagnosing minority faults. In this case, the imbalance ratio is increased to test the performance of the model. Initially, the number of samples of both fault 8 and fault 13 is 50, and 30 samples are added per iteration. The invention greatly improves the diagnostic performance for minority fault types: compared with the other methods, its sensitivity index and g-mean are higher by 3.7% and 1.9%, respectively. The invention tries to model the original features of minority faults as faithfully as possible and to provide the most meaningful diagnosis.
The experimental results show that, as the number of minority fault samples increases, after about 10 iterations of the proposed method the original knowledge of the minority faults is sufficient and each model can effectively extract features from the raw data. As can be seen from FIGS. 7 and 8, the invention can effectively solve the class imbalance problem.
The sensitivity indices of all fault types under the different diagnostic methods are shown in Table 1. The results show that the proposed method significantly improves the diagnostic performance for minority fault types. The invention is well suited to the chemical data imbalance problem because it learns rare fault types from the imbalanced data.
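For reference, the two evaluation indices used in Table 1 and FIGS. 6-8 (per-class sensitivity and g-mean) can be computed as in the following sketch; this is the standard definition, not code from the patent.

```python
import numpy as np

def sensitivity_and_gmean(y_true, y_pred):
    """Per-class sensitivity (recall) and their geometric mean (g-mean)."""
    classes = np.unique(y_true)
    sens = np.array([np.mean(y_pred[y_true == c] == c) for c in classes])
    gmean = float(np.prod(sens) ** (1.0 / len(sens)))
    return dict(zip(classes.tolist(), sens.tolist())), gmean

y_true = np.array([0, 0, 0, 1, 1, 2])
y_pred = np.array([0, 0, 1, 1, 1, 2])
per_class, gmean = sensitivity_and_gmean(y_true, y_pred)
print(per_class, gmean)   # {0: 0.667, 1: 1.0, 2: 1.0}, ~0.874
```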
TABLE 1
[Table 1 is provided as an image in the original publication.]
Example 2: diagnostic model experiment for increased fault types
The incremental learning capability of the invention for new samples and fault classes is described here. The invention can adaptively update itself for new faults. The number of faults is gradually increased from 10 to 15. The experimental results for the first 10 faults are shown in FIG. 9(a), which illustrates the incremental learning capability for new samples. In FIG. 9(a), the x-axis is the number of training samples per fault category and the y-axis is the accuracy of the diagnostic model on test samples. Each diagnostic model is initialized with 200 samples per fault category; then, at each step, 50 samples are added per fault category to test the incremental learning capability of the proposed method. In this case, the SVM, BPNN, DBN and CNN are fully retrained on the corresponding data sets for comparison.
When a new fault category appears, the incremental learning capability of the invention is shown in FIG. 9(b), where the x-axis is the number of fault categories and the y-axis is the accuracy of the different diagnostic methods. An initial diagnostic model is trained to diagnose 10 faults in the TE process samples; a new fault category is then added at each step to test the incremental learning capability of each diagnostic method, until all 15 fault categories have been imported into the diagnostic model. FIG. 9(b) shows that the invention performs better than the other methods, because the convolution operation can effectively extract the nonlinear characteristics of the fault trend and the fault process.
The comprehensive comparison experiments show that the II-CNN framework proposed by the invention is more effective for fault diagnosis in the chemical process than the compared traditional and deep learning methods.
Although the present invention has been disclosed above in connection with preferred embodiments shown and described in detail, it will be understood by those skilled in the art that various modifications may be made to the chemical fault diagnosis method based on the imbalance-correction convolutional neural network without departing from the spirit of the invention. Therefore, the scope of the present invention should be determined by the appended claims.

Claims (8)

1. A chemical fault diagnosis method based on an imbalance correction convolutional neural network is characterized by comprising the following steps,
S1: TE process data preprocessing, namely performing discrete-value and standardization processing on the data;
S2: generating and extracting information from the imbalanced data;
S3: performing data dimensionality reduction and extracting the key characteristic variables of the faults;
S4: constructing the CNN incremental learning network.
2. The method for diagnosing chemical faults based on the imbalance correction convolutional neural network as claimed in claim 1, wherein the step S1 includes,
the normalization of the TE process data sample set X is calculated with the following formula:

$$\hat{x}_{ik} = \frac{x_{ik} - x_{i,\min}}{x_{i,\max} - x_{i,\min}}$$

where $x_{ik}$ is the kth sample value of the ith input variable before normalization, M denotes the number of input variables, N denotes the number of training samples, and $\hat{x}_{ik}$ is the kth sample value of the ith input variable after normalization;
$x_{i,\min} = \min\{x_{ik} \mid 1 \le k \le N\}$;
$x_{i,\max} = \max\{x_{ik} \mid 1 \le k \le N\}$.
3. the method for diagnosing chemical faults based on the imbalance correction convolutional neural network as claimed in claim 1, wherein the step S2 includes,
inputting: d represents an original sample set, k represents the number of nearest neighbor samples, and n represents the number of samples in D;
and (3) outputting: t represents a few failure mode data sets;
s21 creates a minority data set T for each minority fault type ii
S22 calculating each few sample x in DiWith each sample yjEuclidean distance between:
Figure FDA0002965089350000021
i and j respectively represent sample serial numbers;
s23 obtaining xiK neighbor set of (1);
s24 is provided with k'i(0≤k′iK) samples are less than or equal to the majority of failure modes;
s25 if k/2 is not more than k'iK is ≦ k, then xiIs an edge sample;
S26{x′1,x′2,...,x′mthe method comprises the following steps of (1) taking a boundary sample set as a start point, and taking m as the number of boundary samples;
s27 assigns a weight w to each boundary sampleiThe weight determines the frequency of application of the boundary samples in the data generation process, the weight wiThe calculation formula of (2):
Figure FDA0002965089350000022
wherein z isjNearest neighbor samples of most failure modes for x;
s28 is based on formula xnewSynthetic samples were generated with x '+ α × (x' -x), α being [0-1 ×]A random number within a range;
s29, combining the synthesized sample with the original sample to form a new few failure mode data set T';
s210, using a Tomek link to complete undersampling, and deleting partial fault samples in a Tomek link pair;
s211 results in a new few failure mode data sets T.
4. The method for diagnosing chemical faults based on the imbalance correction convolutional neural network as claimed in claim 1, wherein the step S3 includes,
Input: $T = \{(x_i, y_i)\}_{i=1}^{N}$ is the training data set, Ite is the number of iterations, θ is the allowable error, δ(0) is the initial learning rate, and N denotes the number of training samples.
Output: w, the feature weight vector.
S31: initialize the weight vector $w(0) = (1/I, \dots, 1/I)$, where I denotes the dimension of the samples;
S32: randomly select a sample x;
S33: set e(t−1) = 0 and δ(t) = δ(t−1)/t, where t denotes the iteration number;
S34: calculate $\alpha_i$ and $\beta_i$ with the following formula:

$$\alpha_i = \arg\min_{\alpha}\ \|W(x_i - H\alpha)\|_2^2 + \lambda\|\alpha\|_2^2$$

where H is the sample matrix and λ is the regularization factor ($\beta_i$ is obtained in the same way from the opposite-class neighbors);
S35: update e(t−1) with the following formula:
$e(t-1) = e(t-1) + (|x - x_{LH(NH)}| - |x - x_{LH(NM)}|)$;
S36: for i = 1:N, loop step S34 and step S35 until they have been executed N times;
S37: update e(t−1) with the following formula: $e(t-1) = e(t-1)/N$;
S38: calculate z(t−1) with the formula $z(t-1) = w(t-1) \odot e(t-1)$;
S39: update w(t) by gradient ascent (the update formula is given as an equation image in the original);
S310: judge whether the stopping condition is satisfied: if it is, proceed to the next step; if not, loop steps S32 to S39;
S312: for i = 1:N, loop steps S32 to S310 until they have been executed N times;
S313: obtain the weight vector w.
5. The method of claim 1, wherein the step S4 includes,
Input: x denotes the new sample, N denotes the number of training samples, and T denotes the matching-degree threshold.
Output: $W_1$ and $W_2$, the first and second matching degrees;
S41: calculate the matching degree between x and $s_i$ with the following formula:

$$W_{s} = \frac{1}{m}\sum_{i=1}^{m}\frac{\min(f(x_i), f(s_i))}{\max(f(x_i), f(s_i))}$$

S42: for i = 1:N, loop step S41 until it has been executed N times;
S43: obtain $W_1$ and $W_2$;
S44: if $(W_1 > W_2 > T) \,||\, (W_1 > T > W_2)$, then x and $s_1$ belong to the same category, and x is added to the training data set;
S45: if $(T > W_1 > W_2)$, then x is a new sample belonging to a new class: x is added to the training data set of the new category; a new layer is added to the trained CNN; the new parameters are randomly initialized; and the new layer is trained gradually.
6. The method of claim 1, wherein in step S1,
the TE process has 5 main units, comprising a chemical reactor, a recycle compressor, a condenser, a stripper and a vapor/liquid separator; the TE simulator generates 22 different types of state data, comprising 21 standard fault types and the normal state;
the 21 fault state types of the TE process are as follows:
Fault 1: A/C feed ratio (component B constant);
Fault 2: component B (A/C ratio constant);
Fault 3: D feed temperature;
Fault 4: reactor cooling water inlet temperature;
Fault 5: condenser cooling water inlet temperature;
Fault 6: A feed loss;
Fault 7: C header pressure loss;
Fault 8: A, B, C feed composition;
Fault 9: D feed temperature;
Fault 10: C feed temperature;
Fault 11: reactor cooling water inlet temperature;
Fault 12: condenser cooling water inlet temperature;
Fault 13: reaction kinetics;
Fault 14: reactor cooling water valve;
Fault 15: condenser cooling water valve;
Faults 16-20: unknown type;
Fault 21: the valve in stream 4.
Here A, C and D denote three different gaseous reactants and B denotes an inert component; the reactants and the inert component are fed into the reactor during the TE process. Stream 4 refers to the valve position.
7. The method of claim 1, wherein the step S3 includes,
given training sample set
$T=\{(x_i, y_i)\}_{i=1}^{N}$, where $x_i \in R^I$ and $y_i \in Y = \{-1, +1\}$ is the class label of $x_i$; I and N are the dimension and the number of training samples, respectively, and $R^I$ represents the sample space; in a local hyperplane, $x_i$ is represented by its k nearest-neighbor samples;
let the local-hyperplane representation of $x_i$ be $WH\alpha$, where $H \in R^{I \times k}$ is the sample matrix whose columns are the k nearest-neighbor samples of $x_i$, W is a diagonal matrix whose diagonal element $w_{ii}$ represents the weight of the ith feature, and $\alpha \in R^{k}$ is the vector whose elements are the reconstruction coefficients of the nearest-neighbor samples; the optimization problem is expressed as:

[objective function given as an equation image in the original]

s.t. $\|w\|_2 = 1$, $w_j \ge 0$, $j = 1, \dots, I$

where $H_i \in R^{I \times k}$ is the matrix of the k same-class (hit) nearest neighbors of $x_i$, $M_i \in R^{I \times k}$ is the matrix of the k opposite-class (miss) nearest neighbors of $x_i$, and $\alpha_i$ and $\beta_i$ are the reconstruction coefficients of the nearest samples, from the same class ($H_i$) and from the opposite class ($M_i$), respectively; w represents the weight margin vector.
w(t) represents the weight of the weighted feature space at the tth iteration and z(t) represents the expected margin vector at the tth iteration; the objective function is:

[objective function given as an equation image in the original]

s.t. $z(t-1) = w(t-1) \odot e(t-1)$,
$\|w(t)\|_2 = 1$, $w_j(t) \ge 0$, $j = 1, \dots, I$, $t = 1, \dots, Ite$

where e(t) is the expected margin vector of the original space at the tth iteration and Ite is the maximum number of iterations; $x_{LH(NH)}$ and $x_{LH(NM)}$ denote the points on the local hyperplanes of the same-class (hit) and opposite-class (miss) nearest neighbors that represent a given sample $x_i$, and e(t) is obtained from the per-feature distances of the sample to these two points (the formula is given as an equation image in the original).
The final weight vector is obtained by maximizing the margin between a given sample and the local hyperplane. Therefore, $x_{LH(NH)}$ and $x_{LH(NM)}$ can be expressed as:

$$x_{LH(NH)} = W H_i \alpha_i, \qquad x_{LH(NM)} = W M_i \beta_i$$

where $\alpha_i \in R^{k}$ is the reconstruction coefficient vector of the same-class neighbors and $\beta_i \in R^{k}$ is the reconstruction coefficient vector of the opposite-class neighbors; $\alpha_i$ and $\beta_i$ are obtained by solving the optimization problem:

$$\alpha_i = \arg\min_{\alpha}\ \|W(x_i - H_i\alpha)\|_2^2 + \lambda\|\alpha\|_2^2, \qquad \beta_i = \arg\min_{\beta}\ \|W(x_i - M_i\beta)\|_2^2 + \lambda\|\beta\|_2^2$$

where $\|\cdot\|_2$ is the 2-norm and λ is the regularization factor; if t = 0, the feature weights are initialized as $w_j(0) = 1/I$, $j = 1, \dots, I$; at the (t−1)th iteration, $\alpha_i$ and $\beta_i$ of each sample are obtained (i = 1, ..., N), and then the feature weight factor w(t) is updated; based on the gradient-ascent method, w(t) is updated by:

[update formula given as an equation image in the original]

where δ is the learning rate, $\delta(t) = \delta/t$, $t = 1, 2, \dots, Ite$, $0 < \delta(t) \le 1$, and the gradient is calculated as:

[gradient formula given as an equation image in the original]

given a training sample set $T=\{(x_i, y_i)\}_{i=1}^{N}$, where $x_i \in R^I$ and $y_i \in Y = \{1, 2, \dots, C\}$ is the class label of $x_i$, I is the dimension of the training samples, N is the number of training samples and C represents the number of classes, the expected margin vector e(t−1) of the (t−1)th iteration is defined as:

[formula given as an equation image in the original]

where P(c) is the prior probability of class c, and $x_{LH,c}(NH)$ and $x_{LH,c}(NM)$ are the hit and miss reconstruction points of $x_i$ in class c.
8. The method of claim 1, wherein the step S4 includes,
the matching degree is used to measure the similarity between two samples: let the best matching degree between the new sample x and the best-matching sample $s_1$ be $W_{s_1}$, and let the second matching degree between the new sample x and the second-matching sample $s_2$ be $W_{s_2}$; the matching degree is defined as:

$$W_{s} = \frac{1}{m}\sum_{i=1}^{m}\frac{\min(f(x_i), f(s_i))}{\max(f(x_i), f(s_i))}$$

where m is the feature dimension, and $f(x_i)$ and $f(s_i)$ are the ith features of sample x and sample s; $\min(f(x_i), f(s_i))$ and $\max(f(x_i), f(s_i))$ are the minimum and the maximum of $f(x_i)$ and $f(s_i)$, respectively; $W_s$ represents the similarity between x and s and its value lies between 0 and 1;
after $W_{s_1}$ and $W_{s_2}$ are obtained, their values are compared with the matching-degree threshold T: if $W_1 > W_2 > T$ or $W_1 > T > W_2$, the new sample x and the best-matching sample $s_1$ belong to the same class; if $T > W_1$, then x belongs to a new class and becomes the initial sample of the new class.
CN202110248735.6A 2021-03-08 2021-03-08 Chemical fault diagnosis method based on unbalance correction convolutional neural network Active CN113033079B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110248735.6A CN113033079B (en) 2021-03-08 2021-03-08 Chemical fault diagnosis method based on unbalance correction convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110248735.6A CN113033079B (en) 2021-03-08 2021-03-08 Chemical fault diagnosis method based on unbalance correction convolutional neural network

Publications (2)

Publication Number Publication Date
CN113033079A true CN113033079A (en) 2021-06-25
CN113033079B CN113033079B (en) 2023-07-18

Family

ID=76466690

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110248735.6A Active CN113033079B (en) 2021-03-08 2021-03-08 Chemical fault diagnosis method based on unbalance correction convolutional neural network

Country Status (1)

Country Link
CN (1) CN113033079B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200082198A1 (en) * 2017-05-23 2020-03-12 Intel Corporation Methods and apparatus for discriminative semantic transfer and physics-inspired optimization of features in deep learning
CN107784325A (en) * 2017-10-20 2018-03-09 河北工业大学 Spiral fault diagnosis model based on the fusion of data-driven increment
CN109816044A (en) * 2019-02-11 2019-05-28 中南大学 A kind of uneven learning method based on WGAN-GP and over-sampling
CN110070060A (en) * 2019-04-26 2019-07-30 天津开发区精诺瀚海数据科技有限公司 A kind of method for diagnosing faults of bearing apparatus
CN110334580A (en) * 2019-05-04 2019-10-15 天津开发区精诺瀚海数据科技有限公司 The equipment fault classification method of changeable weight combination based on integrated increment
CN110244689A (en) * 2019-06-11 2019-09-17 哈尔滨工程大学 A kind of AUV adaptive failure diagnostic method based on identification feature learning method
CN111580506A (en) * 2020-06-03 2020-08-25 南京理工大学 Industrial process fault diagnosis method based on information fusion
CN112200104A (en) * 2020-10-15 2021-01-08 重庆科技学院 Chemical engineering fault diagnosis method based on novel Bayesian framework for enhanced principal component analysis

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
MARKO RISTIN et al., "Incremental Learning of Random Forests for Large-Scale Image Classification", IEEE Transactions on Pattern Analysis and Machine Intelligence *
WANKE YU et al., "Broad Convolutional Neural Network Based Industrial Process Fault Diagnosis With Incremental Learning Capability", IEEE Transactions on Industrial Electronics *
吴定海 et al., "A review of mechanical fault diagnosis methods based on convolutional neural networks", 机械强度 (Journal of Mechanical Strength) *
胡志新, "Research on deep-learning-based chemical fault diagnosis methods", 中国优秀硕士学位论文全文数据库 (China Master's Theses Full-text Database) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114038169A (en) * 2021-11-10 2022-02-11 英业达(重庆)有限公司 Method, device, equipment and medium for monitoring faults of production equipment
CN117407824A (en) * 2023-12-14 2024-01-16 四川蜀能电科能源技术有限公司 Health detection method, equipment and medium of power time synchronization device
CN117407824B (en) * 2023-12-14 2024-02-27 四川蜀能电科能源技术有限公司 Health detection method, equipment and medium of power time synchronization device

Also Published As

Publication number Publication date
CN113033079B (en) 2023-07-18

Similar Documents

Publication Publication Date Title
CN109146246B (en) Fault detection method based on automatic encoder and Bayesian network
CN109800875A (en) Chemical industry fault detection method based on particle group optimizing and noise reduction sparse coding machine
CN112580263A (en) Turbofan engine residual service life prediction method based on space-time feature fusion
CN113033079A (en) Chemical fault diagnosis method based on unbalanced correction convolutional neural network
CN106843195A (en) Based on the Fault Classification that the integrated semi-supervised Fei Sheer of self adaptation differentiates
CN111768000A (en) Industrial process data modeling method for online adaptive fine-tuning deep learning
CN111079926B (en) Equipment fault diagnosis method with self-adaptive learning rate based on deep learning
CN112784920B (en) Yun Bianduan coordinated rotating component reactance domain self-adaptive fault diagnosis method
CN113052218A (en) Multi-scale residual convolution and LSTM fusion performance evaluation method for industrial process
Deng et al. Semi-supervised discriminative projective dictionary pair learning and its application to industrial process
Sitawarin et al. Minimum-norm adversarial examples on KNN and KNN based models
CN115424177A (en) Twin network target tracking method based on incremental learning
CN111985825A (en) Crystal face quality evaluation method for roller mill orientation instrument
CN115345222A (en) Fault classification method based on TimeGAN model
CN112146879A (en) Rolling bearing fault intelligent diagnosis method and system
CN115905855A (en) Improved meta-learning algorithm MG-copy
CN111723857B (en) Intelligent monitoring method and system for running state of process production equipment
CN113538445A (en) Image segmentation method and system based on weighted robust FCM clustering
Bi Multi-objective programming in SVMs
CN117370826A (en) Method for extracting health state characteristics in wind turbine generator operation data
CN113688875B (en) Industrial system fault identification method and device
CN115578325A (en) Image anomaly detection method based on channel attention registration network
CN109547248A (en) Based on artificial intelligence in orbit aerocraft ad hoc network method for diagnosing faults and device
Huang et al. Label propagation dictionary learning based process monitoring method for industrial process with between-mode similarity
CN112766410A (en) Rotary kiln firing state identification method based on graph neural network feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant