CN113033079A - Chemical fault diagnosis method based on unbalanced correction convolutional neural network - Google Patents
Chemical fault diagnosis method based on unbalanced correction convolutional neural network Download PDFInfo
- Publication number
- CN113033079A (application CN202110248735.6A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F30/27 — Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
- G06F18/213 — Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/22 — Matching criteria, e.g. proximity measures
- G06N3/045 — Combinations of networks
- G06N3/08 — Learning methods
- G06F2119/08 — Thermal analysis or thermal optimisation
Abstract
The invention provides a chemical fault diagnosis method based on an imbalance-corrected convolutional neural network, comprising the following steps. S1: preprocess the TE process data; S2: synthesize samples; S3: reduce the dimensionality of the data; S4: construct the CNN incremental learning network. The advantage of the invention is that the proposed II-CNN framework can synthesize samples for imbalanced data while taking the importance of boundary samples into account, so that the synthesized samples are more representative; on this basis, dimensionality reduction of the data simplifies the otherwise complex learning process; finally, when new fault types arrive, incremental learning updates the structure and parameters of the CNN. The method outperforms existing static-model methods and exhibits notable robustness and reliability in chemical fault diagnosis.
Description
Technical Field
The invention belongs to the field of chemical engineering, and relates in particular to an imbalance-corrected convolutional neural network incremental learning method for chemical fault diagnosis.
Background
Chemical process fault diagnosis is one of the most important procedures in a process control system and is essential for keeping a chemical process running successfully and improving its safety. A fault diagnosis model aims to detect abnormal states in the production process, find the root cause of a fault, assist in making reliable decisions, and eliminate system faults. Based on data collected from many sensors, such a model converts historical data into process information and judges whether a fault has occurred, thereby guaranteeing the safety, efficiency, and economy of a complex chemical process.
A great deal of research has been carried out on intelligent fault diagnosis methods based on machine learning and deep learning. However, most of these methods share the following drawbacks: 1) they assume that the data samples under different failure modes are balanced, but this assumption rarely holds for real chemical processes; data imbalance can prevent the classifier from learning complete class knowledge and reduces its classification accuracy, because the classifier pays little attention to the minority faults; 2) as production proceeds in an actual industrial process, one or several new fault types may appear, and when new fault categories arrive, these models all require a complete retraining process.
Therefore, it is necessary to provide a new and effective fault diagnosis framework for the problems of imbalanced data samples and model updating in complex chemical processes.
Disclosure of Invention
The invention aims to provide a fault diagnosis framework for chemical engineering based on an imbalance-corrected convolutional neural network, which makes full use of several methods, reduces the influence of imbalanced data samples, automatically updates the network structure and parameters, and improves the robustness of the fault diagnosis model.
To achieve the above object, the present invention provides a chemical fault diagnosis method based on an imbalance-corrected convolutional neural network, comprising the following steps:
S1: TE process data preprocessing, i.e. outlier handling and standardization of the data;
S2: generation of synthetic samples and extraction of information from the imbalanced data;
S3: data dimensionality reduction, extracting the key characteristic variables of the faults;
S4: construction of the CNN incremental learning network.
Further, step S1 comprises:
The normalization of the TE process data sample set X is calculated with the following formula:
x'_ik = (x_ik − x_i,min) / (x_i,max − x_i,min)
where x_ik is the k-th sample value of the i-th input variable before normalization, M is the number of input variables, and N is the number of training samples;
x_i,min = min{x_ik | 1 ≤ k ≤ N};
x_i,max = max{x_ik | 1 ≤ k ≤ N}.
Further, step S2 comprises:
Input: D, the original sample set; k, the number of nearest-neighbour samples; n, the number of samples in D.
Output: T, the minority failure-mode data set.
S21: create a minority data set T_i for each minority fault type i;
S22: compute the Euclidean distance between each minority sample x_i in D and every sample y_j, where i and j denote sample indices;
S23: obtain the k-nearest-neighbour set of x_i;
S24: let k'_i (0 ≤ k'_i ≤ k) of these neighbours belong to majority failure modes;
S25: if k/2 ≤ k'_i ≤ k, then x_i is a boundary sample;
S26: {x'_1, x'_2, ..., x'_m} is the boundary sample set, where m is the number of boundary samples;
S27: assign each boundary sample a weight w_i, which determines how frequently the boundary sample is used in the data-generation process; w_i is calculated from z_j, the nearest-neighbour samples of the majority failure modes of x;
S28: generate synthetic samples according to x_new = x' + α × (x' − x), where α is a random number in the range [0, 1];
S29: merge the synthetic samples with the original samples to form a new minority failure-mode data set T';
S210: complete the undersampling with Tomek links, deleting the majority-class fault samples in each Tomek-link pair;
S211: obtain the new minority failure-mode data set T.
Further, step S3 comprises:
Input: the training data set; Ite, the number of iterations; θ, the allowable error; δ(0), the initial learning rate; N, the number of training samples.
Output: w, the feature weight vector.
S32: randomly select a sample x;
S33: set e(t−1) = 0 and δ(t) = δ(t−1)/t, where t denotes the iteration index;
S34: compute α_i and β_i from the optimization problem, where H is the sample matrix and λ is the regularization factor;
S35: update e(t−1) using the following formula:
e(t−1) = e(t−1) + (|x − x_LH(NH)| − |x − x_LH(NM)|);
S36: for i = 1, ..., N, repeat steps S34 and S35 N times;
S37: average e(t−1):
e(t−1) = e(t−1)/N;
S38: compute z(t−1);
S39: update w(t);
S310: judge whether the stopping condition is satisfied: if satisfied, proceed to the next step; otherwise, loop over steps S32 to S39;
S312: for i = 1, ..., N, repeat steps S32 to S310 N times;
S313: obtain the weight vector w.
Further, step S4 comprises:
Input: x, the new sample; N, the number of training samples; T, the matching-degree threshold.
Output: W_1 and W_2, the first and second matching degrees.
S41: compute the matching degree between x and s_i;
S42: for i = 1, ..., N, repeat step S41 N times;
S43: obtain W_1 and W_2;
S44: if (W_1 > W_2 > T) || (W_1 > T > W_2), then
x and s_1 belong to the same category, and x is added to the training data set;
S45: if (T > W_1 > W_2), then
x is a new sample belonging to a new class: add x to the training data set of the new class, add a new layer to the trained CNN, randomly initialize the new parameters, and train the new layer step by step.
Further, in step S1,
the TE process has 5 main units: a chemical reactor, a recycle compressor, a condenser, a stripping tower, and a vapor/liquid separator. The TE simulator generates 22 different types of state data, comprising 21 standard fault types and the normal state.
The 21 fault state types of the TE process include the following:
Fault 1: A/C feed ratio, component B constant;
Fault 2: B component, A/C ratio constant;
Fault 3: D feed temperature;
Fault 6: loss of A feed;
Fault 7: C header pressure loss;
Fault 8: A, B, C feed composition;
Fault 9: D feed temperature;
Fault 10: C feed temperature;
Faults 16-20: unknown type;
Fault 21: the valve in stream 4.
Here A, C, and D denote three different gaseous reactants and B denotes an inert component; the reactants and the inert component are fed into the reactor during the TE process. Stream 4 refers to the corresponding valve position.
Further, step S3 comprises:
Given a training sample set {(x_i, y_i)}, i = 1, ..., N, where y_i ∈ {−1, +1} is the class label of x_i, I and N are respectively the dimensionality and the number of training samples, and R^I denotes the sample space. On a local hyperplane, x_i is represented as WHα.
Let the local-hyperplane representation of x_i be WHα, where H ∈ R^{I×k} is the sample matrix of the k nearest-neighbour samples of x_i, W is a diagonal matrix whose diagonal element w_ii is the weight of the i-th feature, and α ∈ R^k is a vector whose elements are the reconstruction coefficients of the nearest-neighbour samples. The optimization problem is expressed as maximizing the expected margin subject to
s.t. ||w||_2 = 1, w_j ≥ 0, j = 1, ..., I
where H_i ∈ R^{I×k} is the matrix of the k homogeneous (same-class) nearest neighbours of x_i, M_i ∈ R^{I×k} is the matrix of its k heterogeneous (opposite-class) nearest neighbours, and α_i and β_i are the reconstruction coefficients of the nearest samples from the same class and from the opposite class, respectively; w denotes the weighted margin vector.
Let w(t) denote the weight of the weighted feature space at the t-th iteration and z(t) the expected margin vector at the t-th iteration. The objective function is maximized subject to
s.t. z(t−1) = w(t−1) ⊙ e(t−1)
||w(t)||_2 = 1, w_j(t) ≥ 0, j = 1, ..., I, t = 1, ..., Ite
where e(t) is the expected margin vector in the original space at the t-th iteration and Ite is the maximum number of iterations.
A point on the local hyperplane represents the nearest neighbours of a given sample x_i, and the final weight vector is obtained by maximizing the margin between a given sample and its local hyperplane. Accordingly, the hit and miss local-hyperplane points x_LH(NH) and x_LH(NM) can be expressed through α_i ∈ R^k and β_i ∈ R^k, the reconstruction-coefficient vectors of the homogeneous and heterogeneous neighbours; α_i and β_i are obtained by solving the optimization problem, in which ||·||_2 is the 2-norm and λ is the regularization factor. If t = 0, the feature weights are initialized to w_i(0) = 1/I, i = 1, ..., I. At the (t−1)-th iteration, α_i and β_i (i = 1, ..., N) are obtained for each sample; the feature weight vector w(t) is then updated by gradient ascent with learning rate δ(t) = δ/t, t = 1, 2, ..., Ite, 0 < δ(t) ≤ 1.
For the multi-class case, given a training sample set with y_i ∈ {1, 2, ..., C}, where I is the dimensionality of the training samples, N is their number, and C is the number of classes, e(t−1) at the (t−1)-th iteration is defined in terms of P(c), the prior probability of class c, and the hit and miss reconstruction points of x_i in class c.
Further, step S4 comprises:
The matching degree is used to measure the similarity between two samples. Let the best matching degree between a new sample x and its best-matching sample s_1 be W_s1, and the second matching degree between x and the second-matching sample s_2 be W_s2. The matching degree is defined as
W_s = (1/m) Σ_{i=1}^{m} min(f(x_i), f(s_i)) / max(f(x_i), f(s_i))
where m is the feature dimension; f(x_i) and f(s_i) are the i-th features of sample x and sample s, and min(f(x_i), f(s_i)) and max(f(x_i), f(s_i)) are respectively their minimum and maximum. Each ratio min(f(x_i), f(s_i))/max(f(x_i), f(s_i)) expresses the similarity between x and s on the i-th feature; the matching degree takes a value between 0 and 1, and the closer it is to 1, the higher the similarity between the two samples.
After W_s1 and W_s2 are obtained, the values of W_s1 and W_s2 are compared with the matching-degree threshold T. If W_1 > W_2 > T or W_1 > T > W_2, the new sample x and the best-matching sample s_1 belong to the same class; if T > W_1, x belongs to a new class, and x becomes the initial sample of the new class, realizing inter-class incremental learning.
The advantage of the method is that the importance of boundary samples is taken into account, so that the synthesized samples are more representative; on this basis, dimensionality reduction of the data simplifies the otherwise complex learning process; finally, when new fault types arrive, incremental learning updates the structure and parameters of the CNN. The method outperforms existing static-model methods and exhibits notable robustness and reliability in chemical fault diagnosis.
Drawings
FIG. 1 shows a structural diagram of the TE process;
FIG. 2 is a flow chart of the framework for chemical fault diagnosis based on the imbalance-corrected convolutional neural network according to an embodiment of the present invention;
FIG. 3 shows the framework of the proposed II-CNN;
FIG. 4 is a framework diagram of the data dimensionality-reduction algorithm proposed by the invention;
FIG. 5 illustrates the framework of the incremental hierarchical model proposed by the invention;
FIG. 6 shows the results of the method of the invention on two minority fault types; graphs (a) and (b) are both sensitivity index curves;
FIG. 7 shows the results for the class-8 fault at each iteration using the method of the invention; graph (a) is the sensitivity index curve and graph (b) the g-mean curve;
FIG. 8 shows the results for the class-13 fault at each iteration using the method of the invention; graph (a) is the sensitivity index curve and graph (b) the g-mean curve;
FIG. 9 shows accuracy plots for 7 different methods compared experimentally against the method of the invention: graph (a) compares the results for different numbers of samples per fault; graph (b) compares the results for different fault types.
Detailed Description
As shown in fig. 2, the present invention provides a chemical fault diagnosis method based on an imbalance-corrected convolutional neural network, comprising the following steps:
S1: TE process data preprocessing, i.e. outlier handling and standardization of the data;
S2: after data preprocessing, generating synthetic samples and extracting the valuable information of the imbalanced data;
S3: after the synthetic samples are obtained, performing data dimensionality reduction and extracting the key characteristic variables of the faults;
S4: constructing the CNN incremental learning network.
Further, step S1 comprises:
The normalization of the TE process data sample set X is calculated according to the following formula:
x'_ik = (x_ik − x_i,min) / (x_i,max − x_i,min)
where x_ik is the k-th sample value of the i-th input variable before normalization;
x_i,min = min{x_ik | 1 ≤ k ≤ N};
x_i,max = max{x_ik | 1 ≤ k ≤ N}.
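As a sketch, the per-variable min–max normalization above can be written in plain Python; the function name and data layout are illustrative, not from the patent:

```python
def min_max_normalize(X):
    """Step S1: scale each input variable (row of X) to [0, 1] via
    x'_ik = (x_ik - x_i,min) / (x_i,max - x_i,min)."""
    normalized = []
    for row in X:
        x_min, x_max = min(row), max(row)
        span = x_max - x_min
        # A constant variable (x_max == x_min) is mapped to 0.0.
        normalized.append([(x - x_min) / span if span else 0.0 for x in row])
    return normalized

X = [[2.0, 4.0, 6.0],    # input variable 1, N = 3 samples
     [10.0, 10.0, 30.0]] # input variable 2
print(min_max_normalize(X))  # → [[0.0, 0.5, 1.0], [0.0, 0.0, 1.0]]
```

Each row (input variable) is scaled independently, mirroring the per-variable minima and maxima defined above.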
Further, step S2 comprises:
Input: D, the original sample set; k, the number of nearest-neighbour samples; n, the number of samples in D.
Output: T, the minority failure-mode data set.
S21: create a minority data set T_i for each minority fault type i;
S22: compute the Euclidean distance between each minority sample x_i in D and every sample y_j;
S23: obtain the k-nearest-neighbour set of x_i;
S24: let k'_i (0 ≤ k'_i ≤ k) of these neighbours belong to majority failure modes;
S25: if k/2 ≤ k'_i ≤ k, then x_i is a boundary sample;
S26: {x'_1, x'_2, ..., x'_m} is the boundary sample set, where m is the number of boundary samples;
S27: assign each boundary sample a weight w_i; the weight determines how frequently the boundary sample is used in the data-generation process, and w_i is calculated from z_j, the nearest-neighbour samples of the majority failure modes of x;
S28: generate synthetic samples by SMOTE according to x_new = x' + α × (x' − x), where α is a random number in the range [0, 1];
S29: merge the synthetic samples with the original samples to form a new minority failure-mode data set T';
S210: complete the undersampling with Tomek links, deleting the majority-class fault samples in each Tomek-link pair;
S211: obtain the new minority failure-mode data set T.
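The boundary-sample detection (S22–S25) and synthetic-sample generation (S28) can be sketched as follows. This is a minimal illustration under the assumption that samples are lists of floats; all function names are chosen here, not taken from the patent, and the weighting (S27) and Tomek-link undersampling (S210) are omitted:

```python
import math
import random

def euclidean(a, b):
    """Step S22: Euclidean distance between two samples."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def boundary_samples(minority, majority, k=5):
    """Steps S23-S25: a minority sample is a boundary sample when k'
    of its k nearest neighbours, with k/2 <= k' <= k, belong to
    majority failure modes."""
    pool = [(s, 0) for s in minority] + [(s, 1) for s in majority]
    found = []
    for x in minority:
        neighbours = sorted((p for p in pool if p[0] is not x),
                            key=lambda p: euclidean(x, p[0]))[:k]
        k_prime = sum(is_majority for _, is_majority in neighbours)
        if k / 2 <= k_prime <= k:
            found.append(x)
    return found

def synthesize(x_boundary, x, alpha=None):
    """Step S28: x_new = x' + alpha * (x' - x) with alpha in [0, 1]."""
    if alpha is None:
        alpha = random.random()
    return [xb + alpha * (xb - xi) for xb, xi in zip(x_boundary, x)]
```

For example, `synthesize([1.0, 1.0], [0.0, 0.0], alpha=0.5)` yields `[1.5, 1.5]`, a point pushed outward from the boundary sample away from `x`.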
Further, step S3 comprises:
Input: the training data set; Ite, the number of iterations; θ, the allowable error; δ(0), the initial learning rate.
Output: w, the feature weight vector.
S32: randomly select a sample x;
S33: set e(t−1) = 0 and δ(t) = δ(t−1)/t;
S34: compute α_i and β_i from the optimization problem;
S35: update e(t−1):
e(t−1) = e(t−1) + (|x − x_LH(NH)| − |x − x_LH(NM)|);
S36: for i = 1, ..., N, repeat steps S34 and S35 N times;
S37: compute the average of e(t−1):
e(t−1) = e(t−1)/N;
S38: compute z(t−1);
S39: update w(t);
S310: judge whether the stopping condition is satisfied: if satisfied, proceed to the next step; otherwise, loop over steps S32 to S39;
S312: for i = 1, ..., N, repeat steps S32 to S310 N times;
S313: obtain the weight vector w.
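A much-simplified sketch of the margin accumulation in S35–S37: here plain nearest hit/miss samples and the conventional Relief ordering (distance to the nearest miss minus distance to the nearest hit, so discriminative features receive larger values) stand in for the patent's local-hyperplane reconstruction points, so this is an assumption-laden illustration rather than the patented algorithm, and all names are invented here:

```python
import math

def nearest(x, pool):
    """Closest sample to x in pool by Euclidean distance."""
    return min(pool, key=lambda s: math.dist(x, s))

def feature_margins(samples, labels):
    """Accumulate a per-feature margin over all samples, then average
    over N (the e(t-1) = e(t-1)/N step of S37)."""
    n, dim = len(samples), len(samples[0])
    e = [0.0] * dim
    for x, y in zip(samples, labels):
        hits = [s for s, l in zip(samples, labels) if l == y and s is not x]
        misses = [s for s, l in zip(samples, labels) if l != y]
        nh, nm = nearest(x, hits), nearest(x, misses)
        for j in range(dim):
            # Relief-style margin: far from the other class, close to own class.
            e[j] += abs(x[j] - nm[j]) - abs(x[j] - nh[j])
    return [v / n for v in e]
```

On a toy set where only the first feature separates the two classes, the first margin dominates and the second stays at zero, which is the behaviour the iterative weight update then exploits.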
Further, step S4 comprises:
Input: x, the new sample; N, the number of training samples; T, the matching-degree threshold.
Output: W_1 and W_2, the first and second matching degrees.
S41: compute the matching degree between x and s_i;
S42: for i = 1, ..., N, repeat step S41 N times;
S43: obtain W_1 and W_2;
S44: if (W_1 > W_2 > T) || (W_1 > T > W_2), then
x and s_1 belong to the same category, and x is added to the training data set;
S45: if (T > W_1 > W_2), then
x is a new sample belonging to a new class: add x to the training data set of the new class, add a new layer to the trained CNN, randomly initialize the new parameters, and train the new layer step by step.
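The matching-degree computation (S41) and the decision rule (S44–S45) can be sketched as below. The averaged min/max ratio follows the definition given later in the description; strictly positive feature values are assumed here (an assumption of this sketch) so the ratio is always defined, and all names are illustrative:

```python
def matching_degree(x, s):
    """S41: average over features of min(f(x_i), f(s_i)) / max(f(x_i), f(s_i)).
    Assumes strictly positive feature values, so each ratio lies in (0, 1]."""
    ratios = [min(a, b) / max(a, b) for a, b in zip(x, s)]
    return sum(ratios) / len(ratios)

def classify_new_sample(w1, w2, threshold):
    """S44-S45: keep the sample in the best-matching class, or open a
    new class when even the best match falls below the threshold."""
    if w1 > w2 > threshold or w1 > threshold > w2:
        return "existing class"
    if threshold > w1 > w2:
        return "new class"
    return "undecided"
```

For instance, `matching_degree([1.0, 2.0, 4.0], [2.0, 2.0, 2.0])` averages the ratios 0.5, 1.0, 0.5 to 2/3; whether that joins an existing class then depends only on the threshold T.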
Further, step S1 comprises:
The TE process has 5 main units, including a chemical reactor, a recycle compressor, a condenser, a stripping tower, and a vapor/liquid separator; the variables of the TE process comprise 12 inputs and 41 outputs. The TE simulator generates 22 different types of state data, comprising 21 standard fault types and the normal state.
The 21 fault state types of the TE process include the following:
Fault 1: A/C feed ratio, component B constant;
Fault 2: B component, A/C ratio constant;
Fault 3: D feed temperature;
Fault 6: loss of A feed;
Fault 7: C header pressure loss;
Fault 8: A, B, C feed composition;
Fault 9: D feed temperature;
Fault 10: C feed temperature;
Faults 16-20: unknown type;
Fault 21: the valve in stream 4.
Here A, C, and D denote three different gaseous reactants and B denotes an inert component; the reactants and the inert component are fed into the reactor during the TE process. Stream 4 refers to the corresponding valve position.
Further, step S3 comprises:
Given a training sample set {(x_i, y_i)}, i = 1, ..., N, where y_i ∈ {−1, +1} is the class label of x_i, and I and N are respectively the dimensionality and the number of training samples. On a local hyperplane, x_i can be expressed as WHα. Each feature is assigned an appropriate weight; the greater the weight, the more important the feature.
Each feature's weight is obtained by maximizing the expected margin. Let the local-hyperplane representation of x_i be WHα, where H ∈ R^{I×k} is the sample matrix of the k nearest-neighbour samples of x_i, W is a diagonal matrix whose diagonal element w_ii represents the weight of the i-th feature, and α ∈ R^k is a vector whose elements are the reconstruction coefficients of the nearest-neighbour samples. The optimization problem is expressed as maximizing the expected margin subject to
s.t. ||w||_2 = 1, w_j ≥ 0, j = 1, ..., I
where H_i ∈ R^{I×k} is the matrix of the k homogeneous (same-class) nearest neighbours of x_i, M_i ∈ R^{I×k} is the matrix of its k heterogeneous (opposite-class) nearest neighbours, and α_i and β_i are the reconstruction coefficients of the nearest samples from the same class and from the opposite class, respectively; w denotes the weighted margin vector.
w(t) and z(t) denote, respectively, the weight of the weighted feature space and the expected margin vector at the t-th iteration. The objective function is maximized subject to
s.t. z(t−1) = w(t−1) ⊙ e(t−1)
||w(t)||_2 = 1, w_j(t) ≥ 0, j = 1, ..., I, t = 1, ..., Ite
where e(t) is the expected margin vector in the original space at the t-th iteration and Ite is the maximum number of iterations.
A point on the local hyperplane represents the nearest neighbours of a given sample x_i. The final weight vector can be obtained by maximizing the margin between a given sample and the local hyperplane. The hit and miss local-hyperplane points x_LH(NH) and x_LH(NM) can therefore be expressed through α_i ∈ R^k and β_i ∈ R^k, the reconstruction-coefficient vectors of the homogeneous and heterogeneous neighbours, respectively; α_i and β_i are obtained by solving the optimization problem, in which ||·||_2 is the 2-norm and λ is the regularization factor. If t = 0, the feature weights are initialized to w_i(0) = 1/I. At the (t−1)-th iteration, α_i and β_i (i = 1, ..., N) are obtained for each sample; the feature weight vector w(t) is then updated by gradient ascent with learning rate δ(t) = δ/t, t = 1, 2, ..., Ite, 0 < δ(t) ≤ 1.
Given a training sample set with y_i ∈ {1, 2, ..., C}, where I and N are respectively the dimensionality and the number of training samples and C is the number of classes, e(t−1) at the (t−1)-th iteration is defined in terms of P(c), the prior probability of class c, and the hit and miss reconstruction points of x_i in class c.
Further, step S4 comprises:
The matching degree is used to measure the similarity between two samples. Let the best matching degree between a new sample x and its best-matching sample s_1 be W_s1, and the second matching degree between x and the second-matching sample s_2 be W_s2. The matching degree is defined as
W_s = (1/m) Σ_{i=1}^{m} min(f(x_i), f(s_i)) / max(f(x_i), f(s_i))
where m is the feature dimension; f(x_i) and f(s_i) are the i-th features of x and s, and min(f(x_i), f(s_i)) and max(f(x_i), f(s_i)) are respectively their minimum and maximum. Each ratio min(f(x_i), f(s_i))/max(f(x_i), f(s_i)) represents the similarity of x and s on the i-th feature. The matching degree takes a value between 0 and 1, and the closer it is to 1, the higher the similarity between the two samples.
After W_s1 and W_s2 are obtained, W_s1 and W_s2 are compared with the matching-degree threshold T. If W_1 > W_2 > T or W_1 > T > W_2, the new sample x and the best-matching sample s_1 belong to the same category. If T > W_1, x belongs to a new class, and x becomes the initial sample of the new class, realizing inter-class incremental learning.
To diagnose new faults, new classes are added to the existing network automatically. The new layers inherit the topology and learned knowledge of the trained CNN, so the network can update itself to cover new fault classes without a complete retraining process. These layers are not trained from scratch; they are trained step by step, copying the parameters of the old layers as their initialization. Samples belonging to the new class are then applied to the modified CNN, and the corresponding new layers are trained incrementally.
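The layer-extension idea — inherit the trained parameters and add a randomly initialized unit for the new class — can be illustrated with a bare output weight matrix standing in for a CNN's final classification layer (a pure-Python sketch; the class and method names are invented here, not from the patent):

```python
import random

class IncrementalOutputLayer:
    """Final classification layer that can grow when a new fault
    class arrives, keeping the already-trained rows untouched."""
    def __init__(self, n_features, n_classes):
        self.weights = [[random.gauss(0.0, 0.1) for _ in range(n_features)]
                        for _ in range(n_classes)]

    def add_class(self):
        """Inherit the existing rows and append one randomly initialized
        row for the new class; only this row then needs training."""
        n_features = len(self.weights[0])
        self.weights.append([random.gauss(0.0, 0.1) for _ in range(n_features)])

    def scores(self, features):
        """One linear score per known class."""
        return [sum(w * f for w, f in zip(row, features)) for row in self.weights]

layer = IncrementalOutputLayer(n_features=8, n_classes=10)
old_rows = [row[:] for row in layer.weights]
layer.add_class()                      # a new fault type arrived
assert layer.weights[:10] == old_rows  # trained knowledge is preserved
print(len(layer.scores([0.0] * 8)))   # → 11
```

In a real CNN the same principle applies to the appended layers: old parameters are copied as initialization and only the new parts are trained, so no complete retraining is required.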
The meanings of the English abbreviations used in the invention are explained below.
II-CNN denotes the proposed incremental imbalance-corrected convolutional neural network.
The advantage of the method is that the importance of boundary samples is taken into account, so that the synthesized samples are more representative; on this basis, dimensionality reduction of the data simplifies the otherwise complex learning process; finally, when new fault types arrive, incremental learning updates the structure and parameters of the CNN. The method outperforms existing static-model methods and exhibits notable robustness and reliability in chemical fault diagnosis.
Fault diagnosis with the method of the invention takes TE process data as the experimental basis; the TE process structure is shown in FIG. 1.
(1) The TE simulator can generate 22 different types of state data, comprising 21 standard fault types and the normal state. All data sets are sampled in the base mode of the TE process, with corresponding training and test data for each of the previously described faults. To test the performance of the proposed method, the experiments are divided into two cases. The first case simulates an imbalanced data stream in a chemical process, with 6 fault types selected, in order to test the diagnostic performance of the proposed method on imbalanced fault data. The second case tests the incremental-learning performance of the method: 10 fault types are selected initially and then extended to 15 fault types.
(2) Comparison with other methods
The fault types are preprocessed and their outputs are used as inputs to the CNN, sharing the same CNN structure, so that fault diagnosis performance can be compared. The DBN performs very well among deep learning methods, so a DBN is used here for comparison with the invention. Some typical shallow fault diagnosis models are also compared with the invention, including the widely used back-propagation neural network (BPNN) and support vector machine (SVM). Against these methods, the fault diagnosis advantage of deep learning can be demonstrated, because the shallow methods skip the feature-learning process. Here, the SVM is used in scikit-learn with an RBF kernel, setting the parameter γ to 1/df, where df is the number of features of the original data. The BPNN has 5 layers (with 52, 42, 32, 22, and 10 neurons per layer, respectively). To obtain good BPNN diagnostic performance, the learning rate was set to 0.5.
Example 1: diagnostic model experiment of unbalanced fault data
To evaluate the performance of the invention, 6 faults with a specific imbalance ratio were selected for training, where faults 8 and 13 are the minority fault types. As shown in fig. 6, the diagnostic advantage on minority failure modes is evident: the invention provides a significant improvement in identifying minority faults, with performance higher than the comparison methods by about 6.7% and 2.9%, respectively. The invention thus proves advantageous in generating minority fault samples. It can be seen from fig. 6 that the invention outperforms the shallow models because it can effectively extract features from raw data and handle unbalanced data in complex chemical processes. Owing to its deep architecture, the invention can effectively handle unbalanced chemical data, which involves numerous variables with highly nonlinear relationships.
Figs. 7 and 8 show the performance of the invention in diagnosing minority faults. In this case, the imbalance ratio is increased to test the performance of the model. Initially, the number of samples for both fault types 8 and 13 is 50, and 30 samples are added per iteration. The invention greatly improves the diagnostic performance on minority fault types: its sensitivity index and g-mean value are higher than those of the comparison methods by 3.7% and 1.9%, respectively. The invention seeks to reproduce the original features of minority faults as faithfully as possible and provide the most meaningful diagnosis.
The experimental results show that, as the number of minority-fault samples increases, after about 10 iterations the method has accumulated sufficient knowledge of the minority faults, and each model can effectively extract features from the raw data. As can be seen from figs. 7 and 8, the invention effectively solves the class-imbalance problem.
The sensitivity indices of all fault types under the different diagnostic methods are shown in table 1. The results show that the proposed method significantly improves the diagnostic performance on minority fault types. The invention is well suited to the chemical data-imbalance problem because it attempts to learn rare fault types from the unbalanced data.
TABLE 1
Example 2: diagnostic model experiment for increased fault types
This example describes the incremental learning capability of the invention for new samples and new fault classes; the invention can adaptively update itself to new faults. The number of fault types is gradually increased from 10 to 15. The experimental results for the first 10 faults are shown in fig. 9(a), which illustrates the incremental learning capability for new samples. In fig. 9(a), the x-axis represents the number of training samples for each fault category and the y-axis represents the test accuracy of the diagnostic model. Each diagnostic model is initialized with 200 samples per fault category; then, at each step, 50 samples are added per fault category to test the incremental learning capability of the proposed method. For comparison, the SVM, BPNN, DBN and CNN are fully retrained on the corresponding data sets.
The incremental learning capability of the invention when a new fault category appears is shown in fig. 9(b), where the x-axis represents the number of fault categories and the y-axis represents the accuracy of the different diagnostic methods. An initial diagnostic model is trained to diagnose 10 faults on TE process samples; a new fault category is then added at each step to test the incremental learning capability of each diagnostic method, until all 15 fault categories have been imported into the diagnostic model. As can be seen from fig. 9(b), the diagnostic performance of the invention is superior to the other methods, because the convolution operation can effectively extract the nonlinear characteristics of the fault trend and the fault process.
The comprehensive comparison experiments show that the proposed II-CNN framework is more effective for fault diagnosis in chemical processes than both deep learning methods and traditional shallow methods.
Although the present invention has been disclosed in connection with the preferred embodiments thereof as shown and described in detail, it will be understood by those skilled in the art that various modifications may be made to the chemical fault diagnosis method based on the imbalance correction convolutional neural network proposed by the present invention without departing from the spirit of the present invention. Therefore, the scope of the present invention should be determined by the contents of the appended claims.
Claims (8)
1. A chemical fault diagnosis method based on an imbalance correction convolutional neural network is characterized by comprising the following steps,
S1: TE process data preprocessing, namely removing discrete (outlier) values and standardizing the data;
S2: generating and extracting information from the unbalanced data;
S3: performing data dimension reduction and extracting the key characteristic variables of faults;
S4: constructing the CNN incremental learning network.
2. The method for diagnosing chemical faults based on the imbalance correction convolutional neural network as claimed in claim 1, wherein the step S1 includes,
the normalization of the sample TE process data sample set X is calculated using the following formula:
x′_ik = (x_ik − x_i,min) / (x_i,max − x_i,min)
where x_ik is the k-th sample value of the i-th input variable before normalization, M represents the number of input variables, and N represents the number of training samples;
x_i,min = min{x_ik | 1 ≤ k ≤ N};
x_i,max = max{x_ik | 1 ≤ k ≤ N}.
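The per-variable min-max normalization of claim 2 can be sketched as follows (a minimal illustration assuming each column of X is one input variable and that x_i,max > x_i,min for every variable):

```python
# Per-variable min-max normalization: x'_ik = (x_ik - x_i,min) / (x_i,max - x_i,min).
import numpy as np

def minmax_normalize(X):
    """X: array of shape (N samples, M input variables); returns values in [0, 1].
    Assumes no variable is constant (x_i,max > x_i,min for each column)."""
    x_min = X.min(axis=0)   # x_i,min over the N training samples
    x_max = X.max(axis=0)   # x_i,max over the N training samples
    return (X - x_min) / (x_max - x_min)

X = np.array([[1.0, 10.0],
              [2.0, 30.0],
              [3.0, 20.0]])
Xn = minmax_normalize(X)    # each column scaled independently to [0, 1]
```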
3. the method for diagnosing chemical faults based on the imbalance correction convolutional neural network as claimed in claim 1, wherein the step S2 includes,
inputting: d represents an original sample set, k represents the number of nearest neighbor samples, and n represents the number of samples in D;
and (3) outputting: t represents a few failure mode data sets;
s21 creates a minority data set T for each minority fault type ii;
S22 calculating each few sample x in DiWith each sample yjEuclidean distance between:
i and j respectively represent sample serial numbers;
s23 obtaining xiK neighbor set of (1);
s24 is provided with k'i(0≤k′iK) samples are less than or equal to the majority of failure modes;
s25 if k/2 is not more than k'iK is ≦ k, then xiIs an edge sample;
S26{x′1,x′2,...,x′mthe method comprises the following steps of (1) taking a boundary sample set as a start point, and taking m as the number of boundary samples;
s27 assigns a weight w to each boundary sampleiThe weight determines the frequency of application of the boundary samples in the data generation process, the weight wiThe calculation formula of (2):
wherein z isjNearest neighbor samples of most failure modes for x;
s28 is based on formula xnewSynthetic samples were generated with x '+ α × (x' -x), α being [0-1 ×]A random number within a range;
s29, combining the synthesized sample with the original sample to form a new few failure mode data set T';
s210, using a Tomek link to complete undersampling, and deleting partial fault samples in a Tomek link pair;
s211 results in a new few failure mode data sets T.
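Steps S22 to S28 above can be sketched as follows. This is a minimal illustration under stated assumptions: Euclidean k-nearest neighbors, the boundary condition k/2 ≤ k′ ≤ k exactly as claimed, and the generation rule x_new = x′ + α(x′ − x); the boundary-sample weighting (S27) and Tomek-link cleaning (S210) are omitted, and all names are illustrative.

```python
# Sketch of boundary-sample detection and synthetic-sample generation
# (claim 3, steps S22-S28). Weighting and Tomek-link undersampling omitted.
import numpy as np

def borderline_indices(X_min, X_maj, k=5):
    """Return indices of minority samples whose k nearest neighbors
    contain between k/2 and k majority samples (boundary samples, S25)."""
    X_all = np.vstack([X_min, X_maj])
    is_maj = np.array([False] * len(X_min) + [True] * len(X_maj))
    border = []
    for i, x in enumerate(X_min):
        d = np.linalg.norm(X_all - x, axis=1)   # S22: Euclidean distances
        d[i] = np.inf                           # exclude the sample itself
        nn = np.argsort(d)[:k]                  # S23: k-nearest-neighbor set
        k_prime = is_maj[nn].sum()              # S24: majority neighbors k'
        if k / 2 <= k_prime <= k:               # S25: boundary sample
            border.append(i)
    return border

def synthesize(x, x_border, rng):
    """S28: x_new = x' + alpha * (x' - x), alpha random in [0, 1]."""
    alpha = rng.uniform(0.0, 1.0)
    return x_border + alpha * (x_border - x)

rng = np.random.default_rng(1)
X_min = rng.normal(0.0, 1.0, size=(10, 2))      # minority fault samples
X_maj = rng.normal(0.5, 1.0, size=(50, 2))      # majority fault samples
idx = borderline_indices(X_min, X_maj)
```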
4. The method for diagnosing chemical faults based on the imbalance correction convolutional neural network as claimed in claim 1, wherein the step S3 includes,
Input: the training data set, Ite the number of iterations, θ the allowable error, δ(0) the initial learning rate; N represents the number of training samples.
Output: w, the feature weight vector.
S32: randomly select a sample x;
S33: set e(t−1) = 0 and δ(t) = δ(t−1)/t, where t represents the iteration number;
S34: calculate α_i and β_i by solving the regularized reconstruction problem given in the description, where H is a sample matrix and λ is a regularization factor;
S35: update e(t−1) using the following formula:
e(t−1) = e(t−1) + (|x − x_LH(NH)| − |x − x_LH(NM)|);
S36: loop steps S34 and S35 N times, for i in the range 1:N;
S37: update e(t−1) using the following formula:
e(t−1) = e(t−1)/N;
S38: calculate z(t−1) = w(t−1) ⊙ e(t−1);
S39: update w(t) by the gradient-ascent formula given in the description;
S310: determine whether the allowable-error condition is satisfied: if satisfied, proceed to the next step; if not, repeat steps S32 to S39;
S312: loop steps S32 through S310 N times, for i in the range 1:N;
S313: obtain the weight vector w.
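The iterative feature-weighting loop above can be sketched as follows. This is a deliberately simplified illustration: the local-hyperplane points x_LH(NH) and x_LH(NM), which the patent obtains by solving a regularized reconstruction problem for α_i and β_i (S34), are replaced here by the plain nearest same-class and opposite-class samples, which is an assumption on my part; the margin accumulation (S35, S37), the Hadamard product z = w ⊙ e (S38), and the unit-norm, non-negative weight constraint are as described.

```python
# Simplified sketch of the margin-based feature weighting of claim 4
# (nearest hit/miss stands in for the local-hyperplane reconstruction).
import numpy as np

def feature_weights(X, y, n_iter=5, delta0=0.3, rng=None):
    """Iteratively estimate a non-negative, unit-norm feature weight vector."""
    if rng is None:
        rng = np.random.default_rng(0)
    N, I = X.shape
    w = np.full(I, 1.0 / np.sqrt(I))            # ||w||_2 = 1 initialization
    for t in range(1, n_iter + 1):
        delta = delta0 / t                      # S33: decaying learning rate
        e = np.zeros(I)
        for i in rng.permutation(N):
            x, label = X[i], y[i]
            same = np.flatnonzero(y == label)
            same = same[same != i]
            diff = np.flatnonzero(y != label)
            nh = same[np.argmin(np.linalg.norm(X[same] - x, axis=1))]  # nearest hit
            nm = diff[np.argmin(np.linalg.norm(X[diff] - x, axis=1))]  # nearest miss
            e += np.abs(x - X[nh]) - np.abs(x - X[nm])  # S35: per-feature margin
        e /= N                                  # S37: average over samples
        z = w * e                               # S38: z = w (Hadamard) e
        w = np.maximum(w + delta * z, 0.0)      # S39: gradient-style step, clipped
        if not np.any(w):                       # guard against an all-zero vector
            w = np.full(I, 1.0 / np.sqrt(I))
        else:
            w = w / np.linalg.norm(w)           # re-impose ||w||_2 = 1
    return w
```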
5. The method of claim 1, wherein the step S4 includes,
Input: x represents the new sample, N the number of training samples, and T the threshold.
Output: W_1 and W_2, the first and second matching degrees;
S41: calculate the matching degree between x and s_i using the formula given in the description;
S42: loop step S41 N times, for i in the range 1:N;
S43: obtain W_1 and W_2;
S44: if (W_1 > W_2 > T) || (W_1 > T > W_2), then
x and s_1 belong to the same category, and x is added to the training data set;
S45: if (T > W_1 > W_2), then
x is a new sample belonging to a new class: add x to the training data set of the new category; add a new layer to the trained CNN; randomly initialize the new parameters; and gradually train the new layers.
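The branching rule of steps S44 and S45 can be sketched as follows. This is a minimal illustration of the decision logic only; the retraining and layer-adding actions are represented by return labels, and the function name is illustrative.

```python
# Decision rule of claim 5: compare the best (W1) and second (W2) matching
# degrees against the threshold T to decide how a new sample x is handled.
def incremental_decision(W1, W2, T):
    if (W1 > W2 > T) or (W1 > T > W2):
        return "same-class"   # S44: x joins the training set of matched class s1
    if T > W1 > W2:
        return "new-class"    # S45: x seeds a new class; a new CNN layer is added
    return "undecided"        # degenerate orderings are not covered by the claim
```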
6. The method of claim 1, wherein in step S1,
the TE process has 5 main units: a chemical reactor, a recycle compressor, a condenser, a stripping column, and a vapor/liquid separator; the TE simulator generates 22 different types of state data, comprising 21 standard fault types and normal state data;
the 21 fault status types for the TE process are as follows:
Fault 1: A/C feed ratio, component B constant;
Fault 2: component B, A/C ratio constant;
Fault 3: D feed temperature;
Fault 4: reactor cooling water inlet temperature;
Fault 5: condenser cooling water inlet temperature;
Fault 6: A feed loss;
Fault 7: C header pressure loss;
Fault 8: A, B, C feed composition;
Fault 9: D feed temperature;
Fault 10: C feed temperature;
Fault 11: reactor cooling water inlet temperature;
Fault 12: condenser cooling water inlet temperature;
Fault 13: reaction kinetics;
Fault 14: reactor cooling water valve;
Fault 15: condenser cooling water valve;
Faults 16-20: unknown type;
Fault 21: the valve for stream 4.
Where A, C, and D represent three different gaseous reactants and B represents an inert component; the reactants and the inert component are fed into the reactor during the TE process, and stream 4 refers to the corresponding valve position.
7. The method of claim 1, wherein the step S3 includes,
given a training sample set {(x_i, y_i)}, i = 1, ..., N, where y_i is the label of x_i, I and N are respectively the dimension and the number of training samples, and R^I represents the sample space; in a local hyperplane, x_i is represented by a weighted reconstruction of its nearest neighbors,
x_i ≈ H α,
where H = [h_1, ..., h_k] is a sample matrix whose columns are the k nearest-neighbor samples of x_i, W is a diagonal matrix whose diagonal element w_ii represents the weight of the i-th feature, and α ∈ R^k is a vector whose elements are the reconstruction coefficients of the nearest-neighbor samples; the optimization problem is expressed as maximizing the margin between each sample and its local hyperplanes, subject to
||w||_2 = 1, w_j ≥ 0, j = 1, ..., I,
where NH(x_i) is the matrix of the k nearest homogeneous (same-class) neighbors of x_i, NM(x_i) is the matrix of the k nearest heterogeneous (opposite-class) neighbors of x_i, α_i and β_i are the reconstruction coefficients of the nearest samples from the same class and from the opposite class, respectively, and w represents the weighted-margin vector.
w(t) represents the feature weights of the weighted feature space at the t-th iteration and z(t) represents the expected margin vector at the t-th iteration; the objective function is:
max over w(t) of w(t)^T z(t−1)
s.t. z(t−1) = w(t−1) ⊙ e(t−1)
||w(t)||_2 = 1, w_j(t) ≥ 0, j = 1, ..., I, t = 1, ..., Ite
where e(t) is the expected margin vector of the original space at the t-th iteration and Ite is the maximum number of iterations; e(t) is obtained as:
e(t) = (1/N) Σ_{i=1}^{N} (|x_i − x_LH(NH)(x_i)| − |x_i − x_LH(NM)(x_i)|).
Representing a given sample x_i by a point on the local hyperplane, the final weight vector is obtained by maximizing the margin between the given sample and the local hyperplane. Therefore, x_LH(NH) and x_LH(NM) can be expressed as:
x_LH(NH)(x_i) = NH(x_i) α_i, x_LH(NM)(x_i) = NM(x_i) β_i,
where α_i is the vector of reconstruction coefficients of the homogeneous neighbors and β_i is the vector of reconstruction coefficients of the heterogeneous neighbors; α_i and β_i are obtained by solving the optimization problems:
α_i = argmin over α of ||w ⊙ (x_i − NH(x_i) α)||_2^2 + λ ||α||_2^2,
β_i = argmin over β of ||w ⊙ (x_i − NM(x_i) β)||_2^2 + λ ||β||_2^2,
where ||·||_2 is the 2-norm and λ is the regularization factor; if t = 0, the feature weights are initialized as w_j(0) = 1/√I, j = 1, ..., I; at the (t−1)-th iteration, α_i and β_i (i = 1, ..., N) are obtained for each sample, and the feature weight vector w(t) is then updated by the gradient-ascent method:
δ is the learning rate, δ(t) = δ/t, t = 1, 2, ..., Ite, 0 < δ(t) ≤ 1, and the gradient is calculated as ∂(w(t)^T z(t−1)) / ∂w(t) = z(t−1);
given the training sample set {(x_i, y_i)}, i = 1, ..., N, where y_i is the label of x_i, I is the dimension of the training samples, N is the number of training samples, and C represents the number of classes, e(t−1) for the (t−1)-th iteration is defined as:
e(t−1) = (1/N) Σ_{i=1}^{N} (|x_i − x_LH(NH)(x_i)| − |x_i − x_LH(NM)(x_i)|).
8. The method of claim 1, wherein the step S4 includes,
the matching degree is used to measure the similarity between two samples; let W_s1 be the best matching degree between the new sample x and the best-matching sample s_1, and W_s2 the second matching degree between x and the second-matching sample s_2; the matching degree is defined as:
W(x, s) = (1/m) Σ_{i=1}^{m} min(f(x_i), f(s_i)) / max(f(x_i), f(s_i)),
where m is the feature dimension; f(x_i) and f(s_i) are respectively the i-th features of sample x and sample s; min(f(x_i), f(s_i)) and max(f(x_i), f(s_i)) are respectively the minimum and the maximum of f(x_i) and f(s_i); W(x, s) denotes the similarity between x and s and takes a value between 0 and 1;
after W_s1 and W_s2 are obtained, their values are compared with the matching-degree threshold T: if W_s1 > W_s2 > T or W_s1 > T > W_s2, the new sample x and the best-matching sample s_1 belong to the same class; if T > W_s1, then x belongs to a new class and becomes the initial sample of that new class.
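The matching-degree formula itself appears as an image in the original patent; a plausible reconstruction from the surrounding text (the per-feature min/max ratio averaged over the m features, which yields a value between 0 and 1 for positive-valued features) can be sketched as follows. The exact form is an assumption, not the patent's confirmed formula.

```python
# Assumed matching degree: W(x, s) = (1/m) * sum_i min(f(x_i), f(s_i)) / max(f(x_i), f(s_i)).
# Valid as a [0, 1] similarity for positive-valued (e.g., min-max normalized) features.
import numpy as np

def matching_degree(x, s, eps=1e-12):
    lo = np.minimum(x, s)           # per-feature minimum of f(x_i) and f(s_i)
    hi = np.maximum(x, s)           # per-feature maximum of f(x_i) and f(s_i)
    return float(np.mean(lo / (hi + eps)))
```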
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110248735.6A CN113033079B (en) | 2021-03-08 | 2021-03-08 | Chemical fault diagnosis method based on unbalance correction convolutional neural network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113033079A true CN113033079A (en) | 2021-06-25 |
CN113033079B CN113033079B (en) | 2023-07-18 |
Family
ID=76466690
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110248735.6A Active CN113033079B (en) | 2021-03-08 | 2021-03-08 | Chemical fault diagnosis method based on unbalance correction convolutional neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113033079B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107784325A (en) * | 2017-10-20 | 2018-03-09 | 河北工业大学 | Spiral fault diagnosis model based on the fusion of data-driven increment |
CN109816044A (en) * | 2019-02-11 | 2019-05-28 | 中南大学 | A kind of uneven learning method based on WGAN-GP and over-sampling |
CN110070060A (en) * | 2019-04-26 | 2019-07-30 | 天津开发区精诺瀚海数据科技有限公司 | A kind of method for diagnosing faults of bearing apparatus |
CN110244689A (en) * | 2019-06-11 | 2019-09-17 | 哈尔滨工程大学 | A kind of AUV adaptive failure diagnostic method based on identification feature learning method |
CN110334580A (en) * | 2019-05-04 | 2019-10-15 | 天津开发区精诺瀚海数据科技有限公司 | The equipment fault classification method of changeable weight combination based on integrated increment |
US20200082198A1 (en) * | 2017-05-23 | 2020-03-12 | Intel Corporation | Methods and apparatus for discriminative semantic transfer and physics-inspired optimization of features in deep learning |
CN111580506A (en) * | 2020-06-03 | 2020-08-25 | 南京理工大学 | Industrial process fault diagnosis method based on information fusion |
CN112200104A (en) * | 2020-10-15 | 2021-01-08 | 重庆科技学院 | Chemical engineering fault diagnosis method based on novel Bayesian framework for enhanced principal component analysis |
Non-Patent Citations (4)
Title |
---|
MARKO RISTIN等: ""Incremental Learning of Random Forests for Large-Scale Image Classification"", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 * |
WANKE YU等: ""Broad Convolutional Neural Network Based Industrial Process Fault Diagnosis With Incremental Learning Capability"", 《IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS》 * |
吴定海等: "基于卷积神经网络的机械故障诊断方法综述", 《机械强度》 * |
胡志新: ""基于深度学习的化工故障诊断方法研究"", 《中国优秀硕士学位论文全文数据库》 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114038169A (en) * | 2021-11-10 | 2022-02-11 | 英业达(重庆)有限公司 | Method, device, equipment and medium for monitoring faults of production equipment |
CN117407824A (en) * | 2023-12-14 | 2024-01-16 | 四川蜀能电科能源技术有限公司 | Health detection method, equipment and medium of power time synchronization device |
CN117407824B (en) * | 2023-12-14 | 2024-02-27 | 四川蜀能电科能源技术有限公司 | Health detection method, equipment and medium of power time synchronization device |
Also Published As
Publication number | Publication date |
---|---|
CN113033079B (en) | 2023-07-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109146246B (en) | Fault detection method based on automatic encoder and Bayesian network | |
CN109800875A (en) | Chemical industry fault detection method based on particle group optimizing and noise reduction sparse coding machine | |
CN112580263A (en) | Turbofan engine residual service life prediction method based on space-time feature fusion | |
CN113033079A (en) | Chemical fault diagnosis method based on unbalanced correction convolutional neural network | |
CN106843195A (en) | Based on the Fault Classification that the integrated semi-supervised Fei Sheer of self adaptation differentiates | |
CN111768000A (en) | Industrial process data modeling method for online adaptive fine-tuning deep learning | |
CN111079926B (en) | Equipment fault diagnosis method with self-adaptive learning rate based on deep learning | |
CN112784920B (en) | Yun Bianduan coordinated rotating component reactance domain self-adaptive fault diagnosis method | |
CN113052218A (en) | Multi-scale residual convolution and LSTM fusion performance evaluation method for industrial process | |
Deng et al. | Semi-supervised discriminative projective dictionary pair learning and its application to industrial process | |
Sitawarin et al. | Minimum-norm adversarial examples on KNN and KNN based models | |
CN115424177A (en) | Twin network target tracking method based on incremental learning | |
CN111985825A (en) | Crystal face quality evaluation method for roller mill orientation instrument | |
CN115345222A (en) | Fault classification method based on TimeGAN model | |
CN112146879A (en) | Rolling bearing fault intelligent diagnosis method and system | |
CN115905855A (en) | Improved meta-learning algorithm MG-copy | |
CN111723857B (en) | Intelligent monitoring method and system for running state of process production equipment | |
CN113538445A (en) | Image segmentation method and system based on weighted robust FCM clustering | |
Bi | Multi-objective programming in SVMs | |
CN117370826A (en) | Method for extracting health state characteristics in wind turbine generator operation data | |
CN113688875B (en) | Industrial system fault identification method and device | |
CN115578325A (en) | Image anomaly detection method based on channel attention registration network | |
CN109547248A (en) | Based on artificial intelligence in orbit aerocraft ad hoc network method for diagnosing faults and device | |
Huang et al. | Label propagation dictionary learning based process monitoring method for industrial process with between-mode similarity | |
CN112766410A (en) | Rotary kiln firing state identification method based on graph neural network feature fusion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||