CN112434729B - Intelligent fault diagnosis method based on layer regeneration network under unbalanced sample - Google Patents

Info

Publication number: CN112434729B
Application number: CN202011244156.6A
Authority: CN (China)
Prior art keywords: model, new, training, sample, output
Legal status: Active (granted; the legal status is an assumption and is not a legal conclusion)
Other versions: CN112434729A (en)
Other languages: Chinese (zh)
Inventors: 陈景龙, 李芙东, 訾艳阳
Current and original assignee: Xian Jiaotong University
Application filed by Xian Jiaotong University
Priority to CN202011244156.6A priority Critical patent/CN112434729B/en
Publication of CN112434729A publication Critical patent/CN112434729A/en
Publication of CN112434729B publication Critical patent/CN112434729B/en
Application granted


Classifications

    • G06F18/2415: Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate (under G06F18/00 Pattern recognition)
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/22: Matching criteria, e.g. proximity measures
    • G06N3/045: Combinations of networks (under G06N3/02 Neural networks; G06N3/00 Computing arrangements based on biological models)
    • G06F2218/08: Feature extraction (under G06F2218/00 Aspects of pattern recognition specially adapted for signal processing)
    • G06F2218/12: Classification; Matching

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Testing Of Devices, Machine Parts, Or Other Structures Thereof (AREA)

Abstract

The invention discloses an intelligent fault diagnosis method based on a layer regeneration network under unbalanced samples. An acceleration sensor collects raw signals from mechanical equipment in operation; fixed-length time sequences are intercepted to build a data sample set, and each sample is normalized. The samples are classified and labeled, then divided into a pre-training set, a training set and a test set. A fault diagnosis model based on a layer regeneration network is constructed and pre-trained on the pre-training set to obtain a model that recognizes the health states of the old task data well. The model is then trained on the training set with fine-tuning of the fully connected layer parameters to obtain a model that recognizes the health states of the new task data well. Finally, all network parameters are adjusted by knowledge distillation to improve the network's recognition of the old task data. The invention can save a large amount of data storage cost and helps promote the practical engineering application of intelligent diagnosis methods.

Description

Intelligent fault diagnosis method based on layer regeneration network under unbalanced sample
Technical Field
The invention relates to the technical field of mechanical equipment fault diagnosis, in particular to an intelligent fault diagnosis method based on a layer regeneration network under an unbalanced sample.
Background
Fault diagnosis methods based on deep learning face several barriers on the way from theoretical research to practical application. One of them is how a deployed deep learning model can recognize new types of data after they appear. Generally, once a model has been trained and put into practice, the types of data it can recognize are fixed. However, because actual operating conditions are complex, it is difficult to account for every failure mode in advance. When equipment produces new task data that was not considered during the training stage, the model can hardly recognize this type of data, so the model must be updated to make it capable of recognizing the new task data.
A fine-tuning method can fine-tune the model directly on the basis of the old task by reusing old-task knowledge. However, the parameters change during training, which easily causes a dramatic drop in model performance on the old task. A joint training method trains the model with all new and old task data and can serve both tasks, but its demand for data volume is very high; as task data accumulates, data storage costs grow and the model update speed drops greatly. For the problem that model parameters are difficult to update under unbalanced samples, it is necessary to research new techniques and methods for intelligent fault diagnosis of mechanical equipment that require little data while preserving recognition performance on both the new and the old task.
Disclosure of Invention
The invention provides an intelligent fault diagnosis method based on a layer regeneration network under class-imbalanced samples, which overcomes the defects of the prior art.
In order to achieve the above purpose, the invention adopts the following technical scheme.
An intelligent fault diagnosis method based on a layer regeneration network under class-imbalanced samples comprises the following steps:
step 1: collecting raw signal data from mechanical equipment in operation with an acceleration sensor, intercepting fixed-length segments of the raw signal data to obtain a data sample set, and applying time-series preprocessing to each sample in the data sample set to obtain sequence samples with identical mean and variance;
step 2: classifying and labeling the acquired sequence samples and dividing them into a pre-training set, a training set and a test set, wherein the pre-training set corresponds to the old task data and labels, the training set contains the new types of data and labels encountered in application, referred to as the new task, and the test set contains old task and new task data and labels;
step 3: constructing a fault diagnosis model based on a layer regeneration network, consisting of a feature extraction module and a new task state identification module, and training it with the pre-training set obtained in step 2 to obtain a model capable of identifying the health states of the old task;
step 4: updating the parameters of the model trained in step 3 with the training set samples from step 2, so that the model can identify the health states of the new task while the decline of its performance on the old task is mitigated;
step 5: performing fault diagnosis on the states of the test set samples from step 2 with the model trained in step 4.
Further, the time-series preprocessing in step 1 uses zero-mean normalization.
For a sample {a_1, a_2, ..., a_n} in the data sample set, the calculation formula is:

x_i = (a_i - ā) / s, i = 1, 2, ..., n

where a_i is the i-th data value of the sample; n is the interception length of the raw signal data; ā is the sample mean; s is the sample standard deviation;
the new sequence {x_1, x_2, ..., x_n} has mean 0 and variance 1 and is dimensionless.
Further, the feature extraction module in step 3 is built from a one-dimensional convolutional neural network and contains four convolutional-pooling layer pairs and three fully connected layers. Specifically, the convolution kernel size decreases with depth: the four convolutional layers use kernel sizes of 9, 7, 5 and 3, all with stride 1, and edge zero-padding keeps input and output sizes equal. The pooling layers use max pooling with sizes of 4, 2 and 2 and strides of 4, 2 and 2, respectively.
Further, the new task state identification module in step 3 consists of two fully connected layers, and its input is the output of the first fully connected layer of the feature extraction module.
Further, the parameter update training in step 4 comprises two stages: a parameter fine-tuning stage and a distillation training stage. The parameter fine-tuning stage aims to make the model converge quickly on the new task, but at the same time causes its performance on the old task to drop rapidly; the distillation training stage uses knowledge distillation to restore the model's performance on the old task.
Further, in the parameter fine-tuning stage, the convolutional layer parameters are frozen, and the fully connected layer parameters are trained with the training set sequence samples until the loss converges.
Further, during training in the parameter fine-tuning stage, the cross entropy of the model output is calculated as the loss, and the optimization objective is to minimize the cross entropy loss:

min -y log(q(x_new))

where y is the actual label of the training set data, and q(x_new) is the prediction output of the model, obtained by concatenating the last-layer outputs of the new task state identification module and the feature extraction module and applying a Softmax operation.
Further, in the distillation training stage, the optimization objective consists of two parts: minimizing the cross entropy loss and minimizing the Euclidean distance between the distillation output and the original output. Minimizing the cross entropy loss ensures the model's recognition accuracy on the new task; minimizing the Euclidean distance between the distillation output and the original output mitigates the degradation of the model's performance on the old task.
Further, for minimizing the cross entropy loss, the cross entropy between the model output and the training set data labels is calculated as the loss, and the optimization objective is:

min -y log(q'(x_new))

where q'(x_new) is the output of the model after parameter fine-tuning, followed by a Softmax operation.
Further, for minimizing the Euclidean distance between the distillation output and the original output, the output of the feature extraction module and the output of the new task state identification module are spliced into a new output;
for the model output z = {z_1, z_2, ..., z_n}, a generalized Softmax function is used:

q_i = exp(z_i / T) / Σ_j exp(z_j / T), i = 1, 2, ..., n

where T is the temperature: the larger T is, the softer the output probability distribution; q_i is the probability of the i-th state after the Softmax operation;
the Euclidean distance between the distillation output of the model being trained and the output after parameter fine-tuning participates in training:

dist(q_R(x_new)/T, q_T(x_new)/T)

where dist(·) is the Euclidean distance function, q_R(x_new) is the output of the model after parameter fine-tuning, and q_T(x_new) is the distillation output of the model being trained;
the overall optimization objective of this stage is:

min(-y log(q_T(x_new)) + λ dist(q_R(x_new)/T, q_T(x_new)/T))

where λ is a weighting factor.
Compared with the prior art, the invention has the following beneficial technical effects:
1) The invention uses a deep convolutional neural network to extract features from mechanical signals and identify the operating state. It can effectively extract sensitive features from the signals and removes the dependence of traditional feature extraction on human expertise.
2) The method serves both new and old tasks: it strengthens learning of the new task while slowing the model's performance decline on the old task. Compared with the fine-tuning method, it improves the model's performance on the old task by about 30%-40%.
3) The method needs no old task data in training; instead, a model trained on the old task mines old-task-related information from the new task and trains with it. The method therefore saves a large amount of data storage cost and helps push the intelligent diagnosis method toward practical engineering application.
4) The method lets the model continually learn from new task data while the equipment is running, updating itself with ever more data as the equipment operates.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of the structure of the detection model of the present invention;
FIG. 3 is a graph showing the detection results of an embodiment of the method of the present invention.
Detailed Description
The present invention is described in further detail below with reference to the drawings and embodiments so that those skilled in the art may better understand it. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit it. For convenience of description, only the parts related to the invention are shown in the drawings. The embodiments of the present invention and their features may be combined with each other provided they do not conflict.
An intelligent fault diagnosis method based on a layer regeneration network under class-imbalanced samples, as shown in FIG. 1, comprises the following steps:
step 1: collecting raw signal data from mechanical equipment in operation with an acceleration sensor, intercepting fixed-length segments of the raw signal data to obtain a data sample set, and applying time-series preprocessing to each sample in the data sample set to obtain sequence samples with identical mean and variance;
the data normalization preprocessing uses zero mean normalization to normalize the sequence { a } 1 ,a 2 ,...,a n And the calculation formula is as follows:
wherein a is i An i-th data value for a sample; n is the length of intercepting the original signal data; a is the sample mean; s is the sample variance. New sequence { x } 1 ,x 2 ,...,x n The mean value of 0, variance 1, and dimensionless.
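As a concrete illustration, the zero-mean normalization described above can be sketched in plain Python (the function name `zero_mean_normalize` is illustrative, not from the patent):

```python
import math

def zero_mean_normalize(a):
    """Zero-mean normalization: x_i = (a_i - mean) / std."""
    n = len(a)
    mean = sum(a) / n
    # population standard deviation, so the new sequence has unit variance
    std = math.sqrt(sum((v - mean) ** 2 for v in a) / n)
    return [(v - mean) / std for v in a]

x = zero_mean_normalize([1.0, 2.0, 3.0, 4.0])
# x has mean 0 and variance 1, and is dimensionless
```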
Step 2: classifying and labeling the acquired sequence samples, dividing the sequence samples into a pre-training set, a training set and a testing set, wherein the pre-training set corresponds to old tasks and labels, the training set corresponds to new types of data and labels encountered by a model in application, the new tasks are called new tasks, and the testing set comprises old tasks and new task data and labels;
step 3: constructing a fault diagnosis model based on a layer regeneration network, consisting of a feature extraction module and a new task state identification module, and training it with the pre-training set obtained in step 2 to obtain a model capable of identifying the health states of the old task;
the feature extraction module is constructed by a one-dimensional convolutional neural network and comprises four convolutional layer-pooling layer structures and three full-connection layers. More specifically, the size of the convolution kernels is reduced along with the deepening of the layers, the sizes of the convolution kernels used by the four convolution layers are respectively 9, 7, 5 and 3, the step length is 1, and the size of the input is equal to that of the output by adopting an edge zero padding measure; the pooling layers adopt the mode of maximum pooling, the sizes are respectively 4, 2 and 2, and the step sizes are respectively 4, 2 and 2.
The new task state identification module consists of two fully connected layers; its input is the output of the first fully connected layer of the feature extraction module from step 3. Apart from accepting the intermediate-layer output of the feature extraction module as its input, no gradients propagate between the new task state identification module and the feature extraction module.
Step 4: updating and training parameters in the model trained in the step 3 by using the training set sample in the step 2, so that the model can identify the health state of a new task and simultaneously slow down the sliding of the model in the old task performance;
the parameter update training includes two steps: parameter fine tuning stage and distillation training stage. The parameter fine tuning stage aims at enabling the model to quickly converge on a new task, but at the same time, the model can be caused to rapidly slide down on the old task. The distillation training stage utilizes knowledge distillation to promote the performance of the model in the old task.
In the parameter fine-tuning stage, the convolutional layer parameters are frozen, and the fully connected layer parameters are trained with the training set sequence samples until the loss converges. Freezing these parameters slows the decline of the model's performance on the old task. The cross entropy between the model output and the data labels is calculated as the loss, and the optimization objective is to minimize the cross entropy loss:

min -y log(q(x_new))

where y is the actual label of the data and q(x_new) is the prediction output of the model. Note that this output is obtained by concatenating the last-layer outputs of the new task state identification module and the feature extraction module and applying a Softmax operation.
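The fine-tuning loss above is the ordinary cross entropy between a one-hot label and the Softmax prediction; a minimal sketch (the helper name is ours):

```python
import math

def cross_entropy(y, q, eps=1e-12):
    """Cross entropy -sum(y_i * log(q_i)) between a one-hot label y
    and a predicted probability distribution q."""
    return -sum(yi * math.log(qi + eps) for yi, qi in zip(y, q))

# one-hot label for the 4th state; the model assigns it probability 0.7
loss = cross_entropy([0, 0, 0, 1], [0.1, 0.1, 0.1, 0.7])
# loss equals -log(0.7), roughly 0.357 (up to the eps term)
```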
In the distillation training stage, the optimization objective consists of two parts: minimizing the cross entropy loss and minimizing the Euclidean distance between the distillation output and the original output. Minimizing the cross entropy loss ensures the model's recognition accuracy on the new task; minimizing the Euclidean distance between the distillation output and the original output mitigates the degradation of the model's performance on the old task.
For the part that minimizes the cross entropy loss, the cross entropy between the model output and the data labels is calculated as the loss, and the optimization objective is:

min -y log(q'(x_new))

where q'(x_new) is the output of the fine-tuned model, followed by a Softmax operation.
For the part of the distillation training stage that minimizes the Euclidean distance between the distillation output and the original output, the output of the feature extraction module and the output of the new task state identification module are spliced into a new output.
For the model output z = {z_1, z_2, ..., z_n}, a generalized Softmax function is used:

q_i = exp(z_i / T) / Σ_j exp(z_j / T), i = 1, 2, ..., n

where T is the temperature: the larger T is, the softer the output probability distribution; q_i is the probability of the i-th state after the Softmax operation.
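The generalized (temperature-scaled) Softmax can be sketched as follows; raising T flattens the output distribution (the function name `softmax_T` is ours):

```python
import math

def softmax_T(z, T=1.0):
    """Generalized Softmax: q_i = exp(z_i / T) / sum_j exp(z_j / T)."""
    m = max(v / T for v in z)                # subtract the max for stability
    e = [math.exp(v / T - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

q1 = softmax_T([2.0, 1.0, 0.1], T=1.0)
q5 = softmax_T([2.0, 1.0, 0.1], T=5.0)
# q5 is "softer" than q1: its largest probability is smaller
```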
The Euclidean distance between the distillation output of the model being trained and the output of the model after parameter fine-tuning participates in training:

dist(q_R(x_new)/T, q_T(x_new)/T)

where dist(·) is the Euclidean distance function, q_R(x_new) is the output of the fine-tuned model, and q_T(x_new) is the distillation output of the model being trained.
Combining the two parts, the overall optimization objective of this stage is:

min(-y log(q_T(x_new)) + λ dist(q_R(x_new)/T, q_T(x_new)/T))

where λ is a weighting factor.
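Numerically, the two-part objective of this stage can be sketched as below (names are ours; `q_t` stands for the output of the model being trained and `q_r` for the fixed output of the fine-tuned model, both already Softmax probabilities):

```python
import math

def distillation_loss(y, q_t, q_r, T=2.0, lam=1.0):
    """Cross entropy on the new task plus lambda times the Euclidean
    distance between the temperature-scaled outputs q_r / T and q_t / T."""
    ce = -sum(yi * math.log(qi + 1e-12) for yi, qi in zip(y, q_t))
    dist = math.sqrt(sum(((a - b) / T) ** 2 for a, b in zip(q_r, q_t)))
    return ce + lam * dist

loss = distillation_loss([0, 0, 1], [0.2, 0.2, 0.6], [0.3, 0.3, 0.4])
```

Setting `lam` to 0 recovers the plain cross entropy of the fine-tuning stage.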
Step 5: and (3) performing fault diagnosis on the state of the test set sample in the step (2) by using the model trained in the step (4).
Using the trained model to evaluate the states of the test set samples, the model's performance on the old task is significantly improved without affecting its accuracy in recognizing the new task states.
The invention is described in further detail below in connection with a specific example.
A data set containing four rolling bearing operating states is used: normal, ball fault, inner ring fault and outer ring fault. Each operating state contains 2000 samples, 8000 samples in total, each of length 1024. 1000 samples each of the normal, ball fault and inner ring fault states serve as pre-training data (old task); 1000 samples of the outer ring fault state serve as training data (new task); the remaining 1000 samples of each of the four states serve as test data. The model is first trained with the pre-training data so that it can identify the data types contained in the old task. The model is then updated with the training data, which simulates the update process when new types of data are encountered in practice. Finally, the test data measure the model's performance on the new and old tasks. As shown in FIG. 3, using only new task data, the method improves the model's performance on the old task by more than 40% relative to fine tuning.
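The sample split of the embodiment can be expressed as a short sketch (the string identifiers are placeholders for real vibration samples):

```python
# 2000 samples per operating state, 8000 in total
states = ["normal", "ball_fault", "inner_ring_fault", "outer_ring_fault"]
samples = {s: [f"{s}_{i}" for i in range(2000)] for s in states}

# old task: 1000 samples each of the first three states
pretrain = [x for s in states[:3] for x in samples[s][:1000]]
# new task: 1000 outer-ring-fault samples
train = samples["outer_ring_fault"][:1000]
# test: the remaining 1000 samples of every state
test = [x for s in states for x in samples[s][1000:]]

print(len(pretrain), len(train), len(test))  # -> 3000 1000 4000
```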
While the foregoing describes illustrative embodiments of the present invention to facilitate understanding by those skilled in the art, the invention is not limited to the scope of these embodiments; all changes within the spirit and scope of the invention as defined by the appended claims are to be construed as protected.

Claims (3)

1. An intelligent fault diagnosis method based on a layer regeneration network under unbalanced samples, characterized by comprising the following steps:
step 1: collecting raw signal data from mechanical equipment in operation with an acceleration sensor, intercepting fixed-length segments of the raw signal data to obtain a data sample set, and applying time-series preprocessing to each sample in the data sample set to obtain sequence samples with identical mean and variance;
step 2: classifying and labeling the acquired sequence samples and dividing them into a pre-training set, a training set and a test set, wherein the pre-training set corresponds to the old task data and labels, the training set contains the new types of data and labels encountered in application, referred to as the new task, and the test set contains old task and new task data and labels;
step 3: constructing a fault diagnosis model based on a layer regeneration network, consisting of a feature extraction module and a new task state identification module, and training it with the pre-training set obtained in step 2 to obtain a model capable of identifying the health states of the old task;
the feature extraction module is built from a one-dimensional convolutional neural network and contains four convolutional-pooling layer pairs and three fully connected layers; specifically, the convolution kernel size decreases with depth: the four convolutional layers use kernel sizes of 9, 7, 5 and 3, all with stride 1, and edge zero-padding keeps input and output sizes equal; the pooling layers use max pooling with sizes of 4, 2 and 2 and strides of 4, 2 and 2, respectively;
the new task state identification module consists of two fully connected layers, and its input is the output of the first fully connected layer of the feature extraction module;
step 4: updating the parameters of the model trained in step 3 with the training set samples from step 2, so that the model can identify the health states of the new task while the decline of its performance on the old task is mitigated;
the parameter update training comprises two stages: a parameter fine-tuning stage and a distillation training stage, wherein the parameter fine-tuning stage aims to make the model converge quickly on the new task but at the same time causes its performance on the old task to drop rapidly, and the distillation training stage uses knowledge distillation to restore the model's performance on the old task;
in the parameter fine-tuning stage, the convolutional layer parameters are frozen, and the fully connected layer parameters are trained with the training set sequence samples until the loss converges;
during training in the parameter fine-tuning stage, the cross entropy of the model output is calculated as the loss, and the optimization objective is to minimize the cross entropy loss:

min -y log(q(x_new))

where y is the actual label of the training set data, and q(x_new) is the prediction output of the model, obtained by concatenating the last-layer outputs of the new task state identification module and the feature extraction module and applying a Softmax operation;
in the distillation training stage, the optimization objective consists of two parts, wherein minimizing the cross entropy loss ensures the model's recognition accuracy on the new task, and minimizing the Euclidean distance between the distillation output and the original output mitigates the degradation of the model's performance on the old task;
for minimizing the cross entropy loss, the cross entropy between the model output and the training set data labels is calculated as the loss, and the optimization objective is:

min -y log(q'(x_new))

where q'(x_new) is the output of the model after parameter fine-tuning, followed by a Softmax operation;
step 5: performing fault diagnosis on the states of the test set samples from step 2 with the model trained in step 4.
2. The intelligent fault diagnosis method based on a layer regeneration network under unbalanced samples according to claim 1, characterized in that the time-series preprocessing in step 1 uses zero-mean normalization;
for a sample {a_1, a_2, ..., a_n} in the data sample set, the calculation formula is:

x_i = (a_i - ā) / s, i = 1, 2, ..., n

where a_i is the i-th data value of the sample; n is the interception length of the raw signal data; ā is the sample mean; s is the sample standard deviation;
the new sequence {x_1, x_2, ..., x_n} has mean 0 and variance 1 and is dimensionless.
3. The intelligent fault diagnosis method based on a layer regeneration network under unbalanced samples according to claim 1, characterized in that, for minimizing the Euclidean distance between the distillation output and the original output, the output of the feature extraction module of the model and the output of the new task state identification module are spliced into a new output;
for the model output z = {z_1, z_2, ..., z_n}, a generalized Softmax function is used:

q = {q_1, q_2, ..., q_n}, q_i = exp(z_i / T) / Σ_j exp(z_j / T)

where T is the temperature: the larger T is, the softer the output probability distribution; q_i is the probability of the i-th state after the Softmax operation;
the Euclidean distance between the distillation output of the model being trained and the output after parameter fine-tuning participates in training:

dist(q_R(x_new)/T, q_T(x_new)/T)

where dist(·) is the Euclidean distance function, q_R(x_new) is the output of the model after parameter fine-tuning, and q_T(x_new) is the distillation output of the model being trained;
the overall optimization objective of this stage is:

min(-y log(q_T(x_new)) + λ dist(q_R(x_new)/T, q_T(x_new)/T))

where λ is a weighting factor.
CN202011244156.6A 2020-11-09 2020-11-09 Intelligent fault diagnosis method based on layer regeneration network under unbalanced sample Active CN112434729B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011244156.6A CN112434729B (en) 2020-11-09 2020-11-09 Intelligent fault diagnosis method based on layer regeneration network under unbalanced sample


Publications (2)

Publication Number | Publication Date
CN112434729A (en) | 2021-03-02
CN112434729B (en) | 2023-09-19

Family

ID=74699868

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011244156.6A Active CN112434729B (en) 2020-11-09 2020-11-09 Intelligent fault diagnosis method based on layer regeneration network under unbalanced sample

Country Status (1)

Country Link
CN (1) CN112434729B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113269174B (en) * 2021-07-21 2021-10-12 北京航空航天大学 Electrical actuator fault diagnosis test method based on extended convolution countermeasure self-encoder

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110162018A (en) * 2019-05-31 2019-08-23 天津开发区精诺瀚海数据科技有限公司 The increment type equipment fault diagnosis method that knowledge based distillation is shared with hidden layer
CN110647923A (en) * 2019-09-04 2020-01-03 西安交通大学 Variable working condition mechanical fault intelligent diagnosis method based on self-learning under small sample
CN110866365A (en) * 2019-11-22 2020-03-06 北京航空航天大学 Mechanical equipment intelligent fault diagnosis method based on partial migration convolutional network
WO2020073951A1 (en) * 2018-10-10 2020-04-16 腾讯科技(深圳)有限公司 Method and apparatus for training image recognition model, network device, and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110008842A (en) * 2019-03-09 2019-07-12 同济大学 A kind of pedestrian's recognition methods again for more losing Fusion Model based on depth

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020073951A1 (en) * 2018-10-10 2020-04-16 腾讯科技(深圳)有限公司 Method and apparatus for training image recognition model, network device, and storage medium
CN110162018A (en) * 2019-05-31 2019-08-23 天津开发区精诺瀚海数据科技有限公司 The increment type equipment fault diagnosis method that knowledge based distillation is shared with hidden layer
CN110647923A (en) * 2019-09-04 2020-01-03 西安交通大学 Variable working condition mechanical fault intelligent diagnosis method based on self-learning under small sample
CN110866365A (en) * 2019-11-22 2020-03-06 北京航空航天大学 Mechanical equipment intelligent fault diagnosis method based on partial migration convolutional network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Bao Ping; Liu Yunjie. Research on fault identification with an improved deep model based on generative adversarial networks under imbalanced data sets. Journal of Electronic Measurement and Instrumentation. 2019, (No. 03), full text. *

Also Published As

Publication number Publication date
CN112434729A (en) 2021-03-02

Similar Documents

Publication Publication Date Title
Li et al. A systematic review of deep transfer learning for machinery fault diagnosis
CN111914883B (en) Spindle bearing state evaluation method and device based on deep fusion network
CN110516305B (en) Intelligent fault diagnosis method under small sample based on attention mechanism meta-learning model
CN109376620A (en) A kind of migration diagnostic method of gearbox of wind turbine failure
CN112651167A (en) Semi-supervised rolling bearing fault diagnosis method based on graph neural network
CN110609524A (en) Industrial equipment residual life prediction model and construction method and application thereof
Li et al. Domain adaptation remaining useful life prediction method based on AdaBN-DCNN
CN111680788A (en) Equipment fault diagnosis method based on deep learning
CN112461537A (en) Wind power gear box state monitoring method based on long-time neural network and automatic coding machine
CN111224805A (en) Network fault root cause detection method, system and storage medium
Shang et al. A hybrid method for traffic incident detection using random forest-recursive feature elimination and long short-term memory network with Bayesian optimization algorithm
CN117034143B (en) Distributed system fault diagnosis method and device based on machine learning
CN114297918A (en) Aero-engine residual life prediction method based on full-attention depth network and dynamic ensemble learning
CN112434729B (en) Intelligent fault diagnosis method based on layer regeneration network under unbalanced sample
CN114819315A (en) Bearing degradation trend prediction method based on multi-parameter fusion health factor and time convolution neural network
Kong et al. A contrastive learning framework enhanced by unlabeled samples for remaining useful life prediction
Yang et al. Few-shot learning for rolling bearing fault diagnosis via siamese two-dimensional convolutional neural network
CN114942140A (en) Rolling bearing fault diagnosis method based on multi-input parallel graph convolution neural network
Wang et al. Fault diagnosis of industrial robots based on multi-sensor information fusion and 1D convolutional neural network
Saufi et al. Machinery fault diagnosis based on a modified hybrid deep sparse autoencoder using a raw vibration time-series signal
CN114118162A (en) Bearing fault detection method based on improved deep forest algorithm
Li et al. Transformer-based meta learning method for bearing fault identification under multiple small sample conditions
CN116842459B (en) Electric energy metering fault diagnosis method and diagnosis terminal based on small sample learning
CN116861343A (en) Bearing fault diagnosis method
CN114429197B (en) Neural network architecture searching method, system, equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant