CN111665066A - Equipment fault self-adaptive upper and lower early warning boundary generation method based on convolutional neural network - Google Patents


Info

Publication number
CN111665066A
Authority
CN
China
Prior art keywords
interval
data
temp
model
bound
Prior art date
Legal status
Granted
Application number
CN202010418461.6A
Other languages
Chinese (zh)
Other versions
CN111665066B (en)
Inventor
张洁
任杰
汪俊亮
毛新华
魏成广
Current Assignee
Donghua University
Beijing Chonglee Machinery Engineering Co Ltd
Original Assignee
Donghua University
Beijing Chonglee Machinery Engineering Co Ltd
Priority date
Filing date
Publication date
Application filed by Donghua University, Beijing Chonglee Machinery Engineering Co Ltd filed Critical Donghua University
Priority to CN202010418461.6A priority Critical patent/CN111665066B/en
Publication of CN111665066A publication Critical patent/CN111665066A/en
Application granted granted Critical
Publication of CN111665066B publication Critical patent/CN111665066B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01M TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M99/00 Subject matter not provided for in other groups of this subclass
    • G01M99/005 Testing of complete machines, e.g. washing-machines or mobile phones
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent

Abstract

The invention provides a convolutional-neural-network-based method for generating adaptive upper and lower early-warning boundaries for equipment faults, applicable to fault diagnosis of a chemical fiber winding machine in the spinning process. The method comprises a convolutional neural network model for interval prediction and an upper-and-lower-bound model for adaptive interval generation and classification. Fault diagnosis is carried out on vibration signals collected from the chemical fiber winding machine during spinning, overcoming the low accuracy and susceptibility to human factors of existing fault diagnosis techniques. A cost-sensitive learning module is introduced to optimize the loss function during the iterative updating of the convolutional neural network, yielding a machine learning fault detection method whose optimization target is the misclassification cost. The invention retains good practicability under imbalanced sample conditions.

Description

Equipment fault self-adaptive upper and lower early warning boundary generation method based on convolutional neural network
Technical Field
The invention relates to a chemical fiber winding machine fault diagnosis method combining cost-sensitive learning and a classification algorithm of a convolutional neural network, and belongs to the field of fault diagnosis and machine learning.
Background
Normal operation of chemical fiber equipment is essential to the quality of chemical fiber products, the safety of chemical fiber production, and the continued availability of the equipment itself. Chemical fiber production is more complex than traditional mechanical manufacturing: solid raw material is formed into filament through high temperature, high pressure, cooling and air drying, involving melting, extrusion, spinning, bundling, winding, texturizing and other processes. The problem of fault diagnosis for chemical fiber equipment has therefore received increasing attention.
At present, research on fault diagnosis of chemical fiber equipment is scarce, so practice borrows from traditional diagnosis of mechanical components such as gears and bearings, which relies mainly on time-domain, frequency-domain and time-frequency analysis. For heavy winding-machine components with complex mechanical structures, these traditional methods struggle to diagnose faults accurately or to reflect fault trends. Machine learning data analysis has been adopted in response, but most machine learning methods use a fixed threshold when applied to fault diagnosis. Judging results against a fixed threshold fails to exploit the full capability of the model, so diagnostic accuracy is low, misjudgments are frequent, and the fault diagnosis of complex equipment is hindered.
Fault classification with convolutional neural networks in machine learning is usually designed for balanced sample distributions. In chemical fiber production, faulty equipment is a small minority, and imbalanced samples distort the result of a machine learning classification algorithm and can even cause the classification model to fail. This limits the application of machine learning methods to fault diagnosis of chemical fiber equipment.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: the existing fault diagnosis technology is not high in accuracy and is easily influenced by human factors.
In order to solve the technical problem, the technical scheme of the invention is to provide a device fault self-adaptive upper and lower early warning boundary generation method based on a convolutional neural network, which is characterized by comprising the following steps of:
step 1, collecting vibration signals in the running process of equipment with known conditions, sampling the collected signals, and obtaining sampling signals:
step 2, inputting a sampling signal into a constructed interval prediction model based on CNN, outputting an upper bound and a lower bound, defining the number of times of cyclic training, and obtaining a trained adaptive interval generation model, wherein the adaptive interval generation model trains an adaptive upper bound interval and an adaptive lower bound interval under a normal operation state according to existing historical data, wherein:
the interval prediction model based on the CNN is a CNN model with dual-channel output: it takes a time series as input and outputs an upper bound and a lower bound, and it comprises an input layer, an output layer and a hidden layer; the output layer is designed with two neural units, the first of which connects to the hidden layer and outputs the lower bound y_lower, while the second outputs the upper bound y_upper, obtained by adding a fixed width to the lower bound, namely:

y_upper = y_lower + diff    (1)

in formula (1), diff denotes the fixed width between the upper bound y_upper and the lower bound y_lower;
the adaptive interval generation model generates and optimizes an adaptive upper and lower bound interval by the following steps:
step 201, four cases in the iterative process, including:
determining the label of the original training data, where S = 0 indicates the original data is normal and S = 1 indicates the original data is abnormal;
determining the diagnostic label the model assigns to the data as s_temp, where s_temp = 0 indicates the model judges the data normal and s_temp = 1 indicates the model judges it abnormal;
two signal states in the data sequence: normal and abnormal;
defining that for normal data, the closer to the center of the interval, the better the effect, while for abnormal data, the closer to the center of the interval, the worse the effect; with the label S of the original training data and the diagnostic label s_temp of the model as coordinates, the iterative process divides into the four cases shown in Table 1 below:

            s_temp = 0                          s_temp = 1
S = 0       normal data inside the interval     normal data outside the interval
S = 1       abnormal data inside the interval   abnormal data outside the interval

TABLE 1
Step 202, calculating real-time state parameters:
the global loss is calculated using the model's diagnostic label s_temp for the data:
the diagnostic label s_temp depends on distance; the distance dis_center from a signal value to the center of the interval is defined as:

dis_center = |y - (y_lower + diff/2)|    (2)

formula (2) gives the distance from the real signal to the center of the upper-and-lower-bound interval, where y denotes the model output signal;
the distance dis_bound from a signal value to the nearest interval boundary is defined as:

dis_bound = |dis_center - diff/2|    (3)

formula (3) gives the distance from the real signal to the nearer of the two boundaries;
the diagnostic label s_temp is determined by calculating with the following formula (4):

s_temp = 0 if dis_center ≤ diff/2, and s_temp = 1 if dis_center > diff/2    (4)
step 203, defining a loss function based on cost sensitivity:
step 2031, the normal loss function of the original data is
J_normal, built from dis_center^(i), the distance from the output signal to the center of the interval, where i is the index of the current sample participating in calculation and output (the exact expression appears in the source only as an equation image);
the loss function in the case of faulty original data is J_fault, built from dis_bound^(i), the distance from the output signal to the nearest interval boundary, with i defined as above (the exact expression appears in the source only as an equation image);
for the case in the fourth quadrant of Table 1, where the original data is abnormal and the model judges it abnormal, the loss function is defined with a very small constant ε in the denominator (the exact expression appears in the source only as an equation image);
the overall loss function J(ω, b) is then given by formula (5) (equation image in the source), in which m denotes the number of loss function values obtained from all samples participating in the calculation;
step 2032, setting a misclassification cost for the model, the conditions under which a cost is incurred being shown in Table 2:
            s_temp = 0                                       s_temp = 1
S = 0       normal data, predicted normal: no cost           normal data, predicted abnormal: cost incurred
S = 1       abnormal data, predicted normal: cost incurred   abnormal data, predicted abnormal: no cost

TABLE 2
For S = 0, the misclassification cost Cost_01 incurred when the diagnostic label s_temp = 1 is defined as follows (the exact expression appears in the source only as an equation image); the imbalance rate IR is introduced;
for S ═ 1, diagnostic tag StempCost of misclassification when 0
Figure BDA0002495976980000044
The method comprises the following steps:
Figure BDA0002495976980000045
in the formula, IR represents the unbalance rate,
Figure BDA0002495976980000046
Mmajorrepresenting the number of majority classes of samples, i.e. normal data samples, MminorRepresenting the number of a few types of samples, namely the number of abnormal data samples;
Figure BDA0002495976980000047
step 2033, obtaining the loss function J (ω, b) added with the cost sensitivity as:
Figure BDA0002495976980000048
step 204, error back propagation process: updating the weight omega and the bias b according to the gradient descent idea;
and 3, carrying out fault diagnosis on equipment in an unknown state, inputting vibration signals acquired in real time into a trained adaptive interval generation model, judging whether actual output falls into an upper and lower bound interval, judging that the equipment normally operates when the actual output falls into the upper and lower bound interval, and judging that the equipment abnormally operates when the actual output falls outside the upper and lower bound interval.
Preferably, in step 202, the diagnostic label s_temp is redefined using the tanh function, transforming formula (4) into: s_temp(dis_center) = 0.5 × tanh[300 × (dis_center - diff/2)] + 0.5.
Preferably, in step 204, the parameters are updated using the following formulas:

ω_i^l(t+1) = ω_i^l(t) - α·∂J/∂ω_i^l(t)
b_i^l(t+1) = b_i^l(t) - α·∂J/∂b_i^l(t)

where ω_i^l(t) denotes the weight of node i in layer l of the neural network at the current time t, b_i^l(t) denotes the corresponding bias, α denotes the learning rate, and J denotes the loss function J(ω, b).
The invention provides a machine learning fault diagnosis method with robust adaptive interval prediction that adapts to the operating conditions of chemical fiber equipment. It can diagnose chuck faults during operation of a chemical fiber winding machine and overcomes the low accuracy and heavy reliance on manual effort of existing fault diagnosis techniques: an adaptive fault detection method based on an interval-prediction convolutional-neural-network time-series model generates the interval prediction model adaptively and diagnoses equipment faults of the chemical fiber winding machine. The fault diagnosis method offers good robustness, few system parameters, and simple operation.
Drawings
FIG. 1 is a diagram of the adaptive interval model of the convolutional neural network;
FIG. 2 shows the original piecewise definition of s_temp;
FIG. 3 shows the tanh-based improvement of s_temp.
Detailed Description
The invention will be further illustrated with reference to the following specific examples. It should be understood that these examples are for illustrative purposes only and are not intended to limit the scope of the present invention. Further, it should be understood that various changes or modifications of the present invention may be made by those skilled in the art after reading the teaching of the present invention, and such equivalents may fall within the scope of the present invention as defined in the appended claims.
The invention provides a device fault self-adaptive upper and lower early warning boundary generation method based on a convolutional neural network, which comprises the following steps of:
step 1, collecting vibration signals in the operation process of a chuck of a chemical fiber winding machine, and storing original signals into a computer by adopting an acceleration sensor and a collection card. And sampling the acquired signals by using a sliding window sampling method.
Step 2, training the model: first, input the sampled signal into the constructed CNN-based interval prediction model, output the upper and lower bounds, and run the defined 50000 training cycles to obtain the trained adaptive interval generation model; the model learns adaptive upper-bound and lower-bound intervals for the normal operating state from existing historical data. During testing, when the original data label is normal, a prediction is accurate if the point falls inside the interval and wrong if it crosses the interval boundary; when the original data label is a fault, a prediction is accurate if the point crosses the interval boundary and wrong if it falls inside the interval. Considering the imbalance of the original samples, the model design adds a misclassification cost that accounts for the imbalance ratio and the importance of each sample, raising the cost of wrong predictions, especially the case where the original data label is a fault yet the point falls inside the interval. This reduces model prediction error and improves the accuracy of fault diagnosis.
Specifically, in the above step, the CNN-based interval prediction model is a CNN model with dual-channel output, taking the time series as input and outputting the upper and lower bounds. Before input, the length of the input time series must be specified, and the data fed into the interval prediction model is drawn from the time-series signal by sliding-window sampling. The sliding-window size is reduced appropriately to lower computation cost and shorten computation time, while still meeting the requirement that the time series carry enough information to predict the interval; the sliding-window length is set to n = 200.
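Sliding-window sampling as described can be sketched as follows (window length n = 200 per the embodiment; the step size is not stated in the text, so it is left as a parameter here):

```python
def sliding_windows(signal, n=200, step=1):
    """Cut length-n subsequences out of a time-series signal for model input."""
    return [signal[i:i + n] for i in range(0, len(signal) - n + 1, step)]
```

For example, a 300-sample record with step 50 yields three windows, starting at indices 0, 50 and 100.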
As shown in fig. 1, the CNN-based interval prediction model has 11 layers in total, including an input layer, an output layer, and a hidden layer.
The input sequence length in the input layer is 200.
The first of the 9 hidden layers, i.e. layer 2 of the interval prediction model, is a convolution layer whose kernel bank consists of 32 convolution kernels of size 5 × 1.
The convolution layer is followed by a pooling layer, which pools with the same number of pooling units of length 5.
Layers 4, 5, 6 and 7 then apply 64, 128, 256 and 512 convolution kernels respectively to perform convolution operations, each kernel of size 5 × 1.
At layer 8, max pooling is performed; its output is flattened into a one-dimensional feature map at layer 9, and a fully connected neural network is appended.
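The layer sizes above can be sanity-checked with a shape walk-through. The text does not state stride or padding, so this sketch assumes stride 1 with no padding for the convolutions and non-overlapping pooling of length 5:

```python
def conv1d_len(n, k, stride=1, padding=0):
    """Output length of a 1-D convolution."""
    return (n + 2 * padding - k) // stride + 1

def pool1d_len(n, k):
    """Output length of non-overlapping pooling with unit length k."""
    return n // k

n = 200                    # input layer: sequence length 200
n = conv1d_len(n, 5)       # layer 2: 32 kernels of size 5x1 -> length 196
n = pool1d_len(n, 5)       # layer 3: pooling units of length 5 -> length 39
for _ in range(4):         # layers 4-7: 64/128/256/512 kernels of size 5x1
    n = conv1d_len(n, 5)   # -> 35, 31, 27, 23
```

Under these assumptions the feature length entering the layer-8 max pooling is 23; the true value depends on the stride and padding choices the patent leaves unspecified.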
To generate an interval, the output layer differs from a time-series prediction model, which designs only a single neuron to output a predicted value: two neural units are designed in the output layer here. The first neural unit connects to the tenth layer and outputs the lower bound y_lower; the second outputs the upper bound y_upper, obtained by adding a fixed width to the lower bound, namely:

y_upper = y_lower + diff    (1)

In formula (1), diff denotes the fixed width between the upper bound y_upper and the lower bound y_lower.
The linear rectification unit of the interval prediction model uses the ReLU activation function, which is simple to compute and converges quickly. The ReLU function is defined as formula (2):

ReLU(z) = z for z > 0, and ReLU(z) = 0 for z ≤ 0    (2)

In formula (2), z is the value before activation; because the ReLU function maps all negative values to 0 while keeping positive values unchanged, the hidden units are sparsely activated.
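Formula (2) in code form:

```python
def relu(z):
    """ReLU activation: negative pre-activations become 0, positive values pass
    through unchanged, which is what produces the sparse activation noted above."""
    return z if z > 0 else 0.0
```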
The adaptive interval generation module: two control boundaries are obtained continuously from the uninterrupted time-series input, and the model is trained at the same time, so that the two control boundaries are optimized and high detection accuracy is maintained. The adaptive control boundaries are generated and optimized in four steps:
step 201, four cases in the iterative process, including:
determining the label of the original training data, where S = 0 indicates the original data is normal and S = 1 indicates the original data is abnormal;
determining the diagnostic label the model assigns to the data as s_temp, where s_temp = 0 indicates the model judges the data normal and s_temp = 1 indicates the model judges it abnormal;
two signal states in the data sequence: normal and abnormal;
defining that for normal data, the closer to the center of the interval, the better the effect, while for abnormal data, the closer to the center of the interval, the worse the effect; with the label S of the original training data and the diagnostic label s_temp of the model as coordinates, the iterative process divides into the four cases shown in Table 1 below:

            s_temp = 0                          s_temp = 1
S = 0       normal data inside the interval     normal data outside the interval
S = 1       abnormal data inside the interval   abnormal data outside the interval

TABLE 1
Step 202, calculating real-time state parameters:
to facilitate calculating the loss under the different conditions, the global loss is computed using the model's diagnostic label s_temp for the data:
the diagnostic label s_temp depends on distance; the distance dis_center from a signal value to the center of the interval is defined as:

dis_center = |y - (y_lower + diff/2)|    (3)

formula (3) gives the distance from the real signal to the center of the upper-and-lower-bound interval, where y denotes the model output signal;
the distance dis_bound from a signal value to the nearest interval boundary is defined as:

dis_bound = |dis_center - diff/2|    (4)

formula (4) gives the distance from the real signal to the nearer of the two boundaries;
the diagnostic label s_temp is determined by calculating with formula (5):

s_temp = 0 if dis_center ≤ diff/2, and s_temp = 1 if dis_center > diff/2    (5)

Because the piecewise function is non-differentiable, s_temp is redefined with the tanh function, as shown in formula (6):

s_temp(dis_center) = 0.5 × tanh[300 × (dis_center - diff/2)] + 0.5    (6)

The differentiability of the redefined formula (6) is illustrated in FIGS. 2 to 3.
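The distance and label definitions of step 202, together with the tanh smoothing of formula (6), can be sketched in Python (function names are mine; the absolute-value reading of dis_center is an inference from its comparison against diff/2 in formulas (5) and (6)):

```python
import math

def dis_center(y, y_lower, diff):
    """Distance from the model output y to the centre of the interval (formula 3)."""
    return abs(y - (y_lower + diff / 2))

def dis_bound(y, y_lower, diff):
    """Distance from y to the nearest interval boundary (formula 4)."""
    return abs(dis_center(y, y_lower, diff) - diff / 2)

def s_temp(y, y_lower, diff):
    """Piecewise diagnostic label (formula 5): 0 inside the interval, 1 outside."""
    return 0 if dis_center(y, y_lower, diff) <= diff / 2 else 1

def s_temp_smooth(d_center, diff):
    """Differentiable surrogate (formula 6): close to 0 inside, close to 1 outside."""
    return 0.5 * math.tanh(300 * (d_center - diff / 2)) + 0.5
```

For the interval [0, 2] (y_lower = 0, diff = 2), a point at y = 1 sits at the centre (s_temp = 0) while y = 3 lies outside (s_temp = 1); the smooth label agrees with the piecewise one at both points.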
Step 203, defining a loss function based on cost sensitivity:
step 2031, the normal loss function of the original data is
J_normal, built from dis_center^(i), the distance from the output signal to the center of the interval, where i is the index of the current sample participating in calculation and output (the exact expression appears in the source only as an equation image);
the loss function in the case of faulty original data is J_fault, built from dis_bound^(i), the distance from the output signal to the nearest interval boundary, with i defined as above (the exact expression appears in the source only as an equation image);
for the case in the fourth quadrant of Table 1, where the original data is abnormal and the model judges it abnormal, the loss function is defined with a very small constant ε in the denominator; since the denominator cannot be 0, ε is set to 0.0001 (the exact expression appears in the source only as an equation image);
the overall loss function J(ω, b) is then given by formula (7) (equation image in the source), in which m denotes the number of loss function values calculated from all samples.
In the actual training process, when the original data is normal but the model judges it abnormal (S = 0 but s_temp = 1), or the original data is abnormal but the model judges it normal (S = 1 but s_temp = 0), the classification accuracy of the model suffers; the latter case in particular can let product-forming problems escape detection in the chemical fiber production process. A misclassification cost therefore needs to be set for the model, as follows:
step 2032, setting a misclassification cost for the model, the conditions under which a cost is incurred being shown in Table 2:
            s_temp = 0                                       s_temp = 1
S = 0       normal data, predicted normal: no cost           normal data, predicted abnormal: cost incurred
S = 1       abnormal data, predicted normal: cost incurred   abnormal data, predicted abnormal: no cost

TABLE 2
For S = 0, the misclassification cost Cost_01 incurred when the diagnostic label s_temp = 1 is set larger the farther the signal lies from the center of the interval, since such points carry more important information (the exact expression appears in the source only as an equation image); the imbalance rate IR is introduced;
for S ═ 1, diagnostic tag StempCost of misclassification when 0
Figure BDA0002495976980000095
The method comprises the following steps:
Figure BDA0002495976980000096
in the formula, because the unbalance condition of the original data is considered in the setting process, the unbalance rate IR is introduced,
Figure BDA0002495976980000097
Mmajorrepresenting the number of majority classes of samples, i.e. normal data samples, MminorRepresenting the number of a few types of samples, namely the number of abnormal data samples;
Figure BDA0002495976980000098
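A minimal sketch of the imbalance rate and the four-case cost logic of Table 2 (the exact cost expressions are images in the source, so the magnitudes below are placeholder assumptions; only the structure, zero cost on correct diagnoses and an IR-scaled cost on missed faults, follows the text):

```python
def imbalance_rate(labels):
    """IR = M_major / M_minor: majority class is normal (S = 0),
    minority class is abnormal (S = 1)."""
    m_minor = sum(1 for s in labels if s == 1)   # abnormal samples
    m_major = len(labels) - m_minor              # normal samples
    if m_minor == 0:
        raise ValueError("at least one abnormal sample is required")
    return m_major / m_minor

def miss_cost(s, s_temp, ir, base=1.0):
    """Table 2 logic: correct diagnoses cost nothing; a false alarm (S=0,
    s_temp=1) carries a base cost; a missed fault (S=1, s_temp=0) is scaled
    by IR so the rare abnormal class weighs more."""
    if s == s_temp:
        return 0.0           # quadrants 1 and 4: prediction correct, no cost
    if s == 0:
        return base          # normal data predicted abnormal
    return base * ir         # abnormal data predicted normal
```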
step 2033, obtaining the loss function J (ω, b) added with the cost sensitivity as:
(the exact expression appears in the source only as an equation image)
step 204, error back propagation process: updating the weight omega and the bias b by adopting the following formula according to the idea of gradient descent:
ω_i^l(t+1) = ω_i^l(t) - α·∂J/∂ω_i^l(t)
b_i^l(t+1) = b_i^l(t) - α·∂J/∂b_i^l(t)

where ω_i^l(t) denotes the weight of node i in layer l of the neural network at the current time t, b_i^l(t) denotes the corresponding bias, α denotes the learning rate, and J denotes the loss function J(ω, b).
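The update of step 204 for a single weight/bias pair, sketched generically (the gradients are assumed to come from backpropagation; the function name is mine):

```python
def gd_step(w, b, grad_w, grad_b, alpha):
    """One gradient-descent update: w <- w - alpha*dJ/dw, b <- b - alpha*dJ/db."""
    return w - alpha * grad_w, b - alpha * grad_b
```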
Step 3, testing the model: carry out fault diagnosis on equipment in an unknown state, input the signals acquired in real time by the vibration acceleration sensor and acquisition card into the trained model, and judge whether the actual output falls inside the upper-and-lower-bound interval; when it falls inside the interval, the equipment is judged to operate normally, and when it falls outside the interval, the equipment is judged to operate abnormally.
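Step 3 reduces to an interval-membership check on each real-time sample; a minimal sketch (names are mine, and the bounds are assumed to come from the trained model):

```python
def diagnose(y_actual, y_lower, y_upper):
    """Normal iff the actual output falls inside the upper-and-lower-bound interval."""
    return "normal" if y_lower <= y_actual <= y_upper else "abnormal"

def diagnose_sequence(samples, bounds):
    """Diagnose each real-time sample against its adaptive (lower, upper) bounds."""
    return [diagnose(y, lo, hi) for y, (lo, hi) in zip(samples, bounds)]
```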

Claims (3)

1. A self-adaptive upper and lower early warning boundary generation method for equipment faults based on a convolutional neural network is characterized by comprising the following steps:
step 1, collecting vibration signals in the running process of equipment with known conditions, sampling the collected signals, and obtaining sampling signals:
step 2, inputting a sampling signal into a constructed interval prediction model based on CNN, outputting an upper bound and a lower bound, defining the number of times of cyclic training, and obtaining a trained adaptive interval generation model, wherein the adaptive interval generation model trains an adaptive upper bound interval and an adaptive lower bound interval under a normal operation state according to existing historical data, wherein:
the interval prediction model based on the CNN is a CNN model with dual-channel output: it takes a time series as input and outputs an upper bound and a lower bound, and it comprises an input layer, an output layer and a hidden layer; the output layer is designed with two neural units, the first of which connects to the hidden layer and outputs the lower bound y_lower, while the second outputs the upper bound y_upper, obtained by adding a fixed width to the lower bound, namely:

y_upper = y_lower + diff    (1)

in formula (1), diff denotes the fixed width between the upper bound y_upper and the lower bound y_lower;
the adaptive interval generation model generates and optimizes an adaptive upper and lower bound interval by the following steps:
step 201, four cases in the iterative process, including:
determining the label of the original training data, where S = 0 indicates the original data is normal and S = 1 indicates the original data is abnormal;
determining the diagnostic label the model assigns to the data as s_temp, where s_temp = 0 indicates the model judges the data normal and s_temp = 1 indicates the model judges it abnormal;
two signal states in the data sequence: normal and abnormal;
defining that for normal data, the closer to the center of the interval, the better the effect, while for abnormal data, the closer to the center of the interval, the worse the effect; with the label S of the original training data and the diagnostic label s_temp of the model as coordinates, the iterative process divides into the four cases shown in Table 1 below:

            s_temp = 0                          s_temp = 1
S = 0       normal data inside the interval     normal data outside the interval
S = 1       abnormal data inside the interval   abnormal data outside the interval

TABLE 1
Step 202, calculating real-time state parameters:
the global loss is calculated using the model's diagnostic label s_temp for the data:
the diagnostic label s_temp depends on distance; the distance dis_center from a signal value to the center of the interval is defined as:

dis_center = |y - (y_lower + diff/2)|    (2)

formula (2) gives the distance from the real signal to the center of the upper-and-lower-bound interval, where y denotes the model output signal;
the distance dis_bound from a signal value to the nearest interval boundary is defined as:

dis_bound = |dis_center - diff/2|    (3)

formula (3) gives the distance from the real signal to the nearer of the two boundaries;
the diagnostic label s_temp is determined by calculating with the following formula (4):

s_temp = 0 if dis_center ≤ diff/2, and s_temp = 1 if dis_center > diff/2    (4)
step 203, defining a loss function based on cost sensitivity:
step 2031, the normal loss function of the original data is
J_normal, built from dis_center^(i), the distance from the output signal to the center of the interval, where i is the index of the current sample participating in calculation and output (the exact expression appears in the source only as an equation image);
the loss function in the case of faulty original data is J_fault, built from dis_bound^(i), the distance from the output signal to the nearest interval boundary, with i defined as above (the exact expression appears in the source only as an equation image);
for the case in the fourth quadrant of Table 1, where the original data is abnormal and the model judges it abnormal, the loss function is defined with a very small constant ε in the denominator (the exact expression appears in the source only as an equation image);
the overall loss function J(ω, b) is then given by formula (5) (equation image in the source), in which m denotes the number of loss function values obtained from all samples participating in the calculation;
step 2032, setting a misclassification cost for the model, the conditions under which a cost is incurred being shown in Table 2:
stemp=0 stemp=1 S=0 normal data, predicted normal, no cost Normal data, predictive anomaly, at cost S=1 Abnormal data, predicted Normal, costed Abnormal data, predicted abnormality, no cost
TABLE 2
For S = 0, the misclassification cost when the diagnostic label s_temp = 1,
Figure FDA0002495976970000032
is:
Figure FDA0002495976970000033
where
Figure FDA0002495976970000034
introduces the imbalance rate IR;
for S = 1, the misclassification cost when the diagnostic label s_temp = 0,
Figure FDA0002495976970000035
is:
Figure FDA0002495976970000036
where IR denotes the imbalance rate,
Figure FDA0002495976970000037
and M_major denotes the number of majority-class samples (normal data samples) while M_minor denotes the number of minority-class samples (abnormal data samples);
Figure FDA0002495976970000038
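The imbalance rate can be computed directly from the training labels. A small sketch, assuming (as the definitions of M_major and M_minor above suggest) IR = M_major / M_minor; the function name imbalance_rate is illustrative:

```python
def imbalance_rate(states):
    """Imbalance rate IR, taken here as the ratio of majority-class
    (normal, S=0) to minority-class (abnormal, S=1) sample counts."""
    m_major = sum(1 for s in states if s == 0)  # normal data samples
    m_minor = sum(1 for s in states if s == 1)  # abnormal data samples
    return m_major / m_minor
```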
step 2033, obtaining the cost-sensitive loss function J(ω, b) as:
Figure FDA0002495976970000039
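The case structure of step 203 can be summarized in code. The patent's concrete loss expressions are given only as equation images, so the terms below are illustrative assumptions (a squared pull toward the interval center for normal data; a reciprocal penalty on the boundary distance for fault data, with a small constant eps guarding against division by zero; a missed-fault cost weighted by IR). Only the normal/fault and misclassification-cost branching mirrors Tables 1 and 2:

```python
def sample_loss(s, s_temp, dis_center, dis_bound, ir, eps=1e-6):
    """Per-sample cost-sensitive loss; the base terms are illustrative
    stand-ins for the patent's equation images."""
    if s == 0:
        # true state normal: pull the output toward the interval center
        base = dis_center ** 2
        miss = 1.0 if s_temp == 1 else 0.0  # normal data predicted abnormal
    else:
        # true state fault: penalize outputs that keep the signal near the
        # boundary; eps is a very small constant avoiding division by zero
        base = 1.0 / (dis_bound ** 2 + eps)
        miss = ir if s_temp == 0 else 0.0  # missed fault, weighted by IR
    return base + miss

def total_loss(samples, ir):
    """Overall loss J: the mean of the m per-sample loss values."""
    return sum(sample_loss(*rec, ir) for rec in samples) / len(samples)
```

Weighting the missed-fault branch by IR compensates for the scarcity of abnormal samples: with heavily imbalanced data, a single missed fault contributes roughly as much to J as IR correctly handled normal samples.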
step 204, error back-propagation: updating the weight ω and the bias b according to the gradient-descent principle;
step 3, carrying out fault diagnosis on equipment in an unknown state: the vibration signal acquired in real time is input into the trained adaptive interval generation model, and the model judges whether the actual output falls within the upper and lower bound interval; if the actual output falls within the interval, the equipment is judged to be operating normally; if it falls outside the interval, the equipment is judged to be operating abnormally.
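The step-3 decision reduces to an interval-membership test on the real-time signal; a minimal sketch (the function name diagnose and the inclusive boundary treatment are illustrative assumptions):

```python
def diagnose(y_actual, y_lower, y_upper):
    """Step 3 decision rule: normal if the real-time signal falls inside
    the adaptively generated upper/lower early-warning interval."""
    return "normal" if y_lower <= y_actual <= y_upper else "abnormal"
```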
2. The convolutional neural network-based equipment fault adaptive upper and lower early warning bound generation method as claimed in claim 1, wherein in step 202 the tanh function is used to redefine the diagnostic label s_temp, transforming equation (4) into: s_temp(dis_center) = 0.5 × tanh[300 × (dis_center − diff/2)] + 0.5.
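Numerically, the tanh redefinition in claim 2 behaves as a steep, differentiable approximation of the 0/1 step in equation (4): the label stays near 0 inside the interval and near 1 outside it. A quick check:

```python
import math

def s_temp_tanh(dis_center, diff):
    """Smooth diagnostic label from claim 2:
    0.5 * tanh[300 * (dis_center - diff/2)] + 0.5.
    The steep slope (factor 300) makes it closely track the hard threshold."""
    return 0.5 * math.tanh(300.0 * (dis_center - diff / 2)) + 0.5
```

Unlike the hard step, this label has a nonzero gradient near dis_center = diff/2, which keeps it usable in the gradient-based training of step 204.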
3. The convolutional neural network-based equipment fault adaptive upper and lower early warning bound generation method as claimed in claim 1, wherein in step 204 the parameters are updated using the following formulas:
Figure FDA0002495976970000041
Figure FDA0002495976970000042
where
Figure FDA0002495976970000043
denotes the weight of node i in layer l of the neural network at the current time t,
Figure FDA0002495976970000044
denotes the bias of node i in layer l of the neural network at the current time t, α denotes the learning rate, and J denotes the loss function J(ω, b).
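The update formulas of claim 3 are standard gradient descent; a one-step numeric sketch, where grad_omega and grad_b stand in for the back-propagated derivatives ∂J/∂ω and ∂J/∂b:

```python
def gradient_step(omega, b, grad_omega, grad_b, alpha):
    """One gradient-descent update (claim 3): each parameter moves
    against its loss gradient, scaled by the learning rate alpha."""
    omega_new = omega - alpha * grad_omega
    b_new = b - alpha * grad_b
    return omega_new, b_new
```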
CN202010418461.6A 2020-05-18 2020-05-18 Equipment fault self-adaptive upper and lower early warning boundary generation method based on convolutional neural network Active CN111665066B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010418461.6A CN111665066B (en) 2020-05-18 2020-05-18 Equipment fault self-adaptive upper and lower early warning boundary generation method based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN111665066A true CN111665066A (en) 2020-09-15
CN111665066B CN111665066B (en) 2021-06-11

Family

ID=72383899

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010418461.6A Active CN111665066B (en) 2020-05-18 2020-05-18 Equipment fault self-adaptive upper and lower early warning boundary generation method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN111665066B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170169315A1 (en) * 2015-12-15 2017-06-15 Sighthound, Inc. Deeply learned convolutional neural networks (cnns) for object localization and classification
CN108648161A (en) * 2018-05-16 2018-10-12 江苏科技大学 The binocular vision obstacle detection system and method for asymmetric nuclear convolutional neural networks
CN109447153A (en) * 2018-10-29 2019-03-08 四川大学 Divergence-excitation self-encoding encoder and its classification method for lack of balance data classification
CN109636026A (en) * 2018-12-07 2019-04-16 东华大学 A kind of wafer yield prediction technique based on deep learning model
CN109635677A (en) * 2018-11-23 2019-04-16 华南理工大学 Combined failure diagnostic method and device based on multi-tag classification convolutional neural networks
CN109948478A (en) * 2019-03-06 2019-06-28 中国科学院自动化研究所 The face identification method of extensive lack of balance data neural network based, system
CN110110905A (en) * 2019-04-17 2019-08-09 华电国际电力股份有限公司十里泉发电厂 A kind of electrical equipment fault based on CNN judges method for early warning, terminal and readable storage medium storing program for executing
CN110210381A (en) * 2019-05-30 2019-09-06 盐城工学院 A kind of adaptive one-dimensional convolutional neural networks intelligent failure diagnosis method of domain separation
CN110361176A (en) * 2019-06-05 2019-10-22 华南理工大学 A kind of intelligent failure diagnosis method for sharing neural network based on multitask feature
WO2020048119A1 (en) * 2018-09-04 2020-03-12 Boe Technology Group Co., Ltd. Method and apparatus for training a convolutional neural network to detect defects
CN110991295A (en) * 2019-11-26 2020-04-10 电子科技大学 Self-adaptive fault diagnosis method based on one-dimensional convolutional neural network

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
PENGFEI LIANG et al.: "Compound Fault Diagnosis of Gearboxes via Multi-label Convolutional Neural Network and Wavelet Transform", Computers in Industry *
DAI Lu et al.: "Non-equivalent point cloud segmentation method based on convolutional neural networks", Journal of Donghua University (Natural Science) *
QU Jianling et al.: "Adaptive fault diagnosis algorithm for rolling bearings based on one-dimensional convolutional neural network", Chinese Journal of Scientific Instrument *
DONG Xun et al.: "Cost-sensitive convolutional neural network: a method for imbalanced classification of mechanical fault data", Chinese Journal of Scientific Instrument *
GAO Tianrong et al.: "Probabilistic neural network based on adaptive error-correction model", Computer Integrated Manufacturing Systems *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112364706A (en) * 2020-10-19 2021-02-12 燕山大学 Small sample bearing fault diagnosis method based on class imbalance
CN112506687A (en) * 2020-11-24 2021-03-16 四川长虹电器股份有限公司 Fault diagnosis method based on multi-period segmented sliding window standard deviation
CN112506687B (en) * 2020-11-24 2022-03-01 四川长虹电器股份有限公司 Fault diagnosis method based on multi-period segmented sliding window standard deviation
CN114034957A (en) * 2021-11-12 2022-02-11 广东电网有限责任公司江门供电局 Transformer vibration abnormity detection method based on working condition division
CN114034957B (en) * 2021-11-12 2023-10-03 广东电网有限责任公司江门供电局 Transformer vibration anomaly detection method based on working condition division
CN114831643A (en) * 2022-07-04 2022-08-02 南京大学 Electrocardiosignal monitoring devices and wearable equipment

Also Published As

Publication number Publication date
CN111665066B (en) 2021-06-11

Similar Documents

Publication Publication Date Title
CN111665066B (en) Equipment fault self-adaptive upper and lower early warning boundary generation method based on convolutional neural network
WO2023071217A1 (en) Multi-working-condition process industrial fault detection and diagnosis method based on deep transfer learning
CN110334740A (en) The electrical equipment fault of artificial intelligence reasoning fusion detects localization method
EP1960853B1 (en) Evaluating anomaly for one-class classifiers in machine condition monitoring
CN112508105B (en) Fault detection and retrieval method for oil extraction machine
CN112284440B (en) Sensor data deviation self-adaptive correction method
CN108490923A (en) The design method of small fault detection and positioning for electric traction system
CN111474475B (en) Motor fault diagnosis system and method
CN110163075A (en) A kind of multi-information fusion method for diagnosing faults based on Weight Training
CN110570013B (en) Single-station online wave period data prediction diagnosis method
CN116380445B (en) Equipment state diagnosis method and related device based on vibration waveform
US20220004163A1 (en) Apparatus for predicting equipment damage
CN108257365A (en) A kind of industrial alarm designs method based on global nonspecific evidence dynamic fusion
CN115455746B (en) Nuclear power device operation monitoring data anomaly detection and correction integrated method
KR20220062547A (en) Sensor Agnostic Mechanical Mechanical Fault Identification
Van den Hoogen et al. An improved wide-kernel CNN for classifying multivariate signals in fault diagnosis
CN116720073A (en) Abnormality detection extraction method and system based on classifier
CN112202630A (en) Network quality abnormity detection method and device based on unsupervised model
US11609836B2 (en) Operation method and operation device of failure detection and classification model
Lin et al. Performance analysis of rotating machinery using enhanced cerebellar model articulation controller (E-CMAC) neural networks
CN114046816A (en) Sensor signal fault diagnosis method based on lightweight gradient lifting decision tree
Tastimur et al. Defect diagnosis of rolling element bearing using deep learning
KR20220038269A (en) Method of making prediction relating to products manufactured via manufacturing process
CN112231849A (en) Axle temperature fault detection method based on NEST and SPRT fusion algorithm
CN115556099B (en) Sustainable learning industrial robot fault diagnosis system and method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant