CN114429153A - Lifetime learning-based gearbox increment fault diagnosis method and system - Google Patents
- Publication number
- CN114429153A (application CN202111677774.4A)
- Authority
- CN
- China
- Prior art keywords
- stage
- fault diagnosis
- fault
- model
- learning
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06F2218/00 Aspects of pattern recognition specially adapted for signal processing
  - G06F2218/08 Feature extraction
  - G06F2218/12 Classification; Matching
- G06F18/00 Pattern recognition; G06F18/20 Analysing; G06F18/24 Classification techniques
  - G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    - G06F18/2413 based on distances to training or reference patterns
      - G06F18/24133 Distances to prototypes
      - G06F18/24137 Distances to cluster centroids
      - G06F18/2414 Smoothing the distance, e.g. radial basis function networks [RBFN]
    - G06F18/2415 based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
- G06N3/00 Computing arrangements based on biological models; G06N3/02 Neural networks; G06N3/04 Architecture, e.g. interconnection topology
  - G06N3/045 Combinations of networks
  - G06N3/047 Probabilistic or stochastic networks
  - G06N3/08 Learning methods; G06N3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
- G06N5/00 Computing arrangements using knowledge-based models; G06N5/01 Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
Abstract
The invention discloses a lifelong learning-based method and system for incremental fault diagnosis of a gearbox, comprising the following steps. S101: acquiring vibration data of a gearbox to construct an incremental health-state data set, and dividing it into fault diagnosis tasks of different stages; S102: learning the fault diagnosis task of the initial stage with an original ResNet-32 network and constructing the initial-stage diagnosis model; S103: initializing a ResNet-32 dual-branch aggregation network with the initial-stage diagnosis model, and increasing the number of neurons in the classification layer according to the number of newly added fault types; S104: training the diagnosis model of the current stage with the selected exemplars and the fault diagnosis task data of this stage, and selecting exemplars from this stage's fault diagnosis task data after training is finished; S105: repeating steps S103-S104 in subsequent incremental stages to obtain the final fault diagnosis model for fault diagnosis. The invention aims to solve the problem that existing fault diagnosis models based on deep learning and transfer learning cannot diagnose the unexpected faults that actually occur in a gearbox.
Description
Technical Field
The invention relates to the technical field of mechanical fault diagnosis, and in particular to a lifelong learning-based method and system for incremental fault diagnosis of a gearbox.
Background
With the rapid development of modern industry, rotating machinery has become more precise and more important, and is now one of the most widely used classes of industrial equipment, so ever higher reliability is required of it. Rotating machinery serves many fields such as aviation, marine, machinery, chemical, energy and electric power, and its service conditions are increasingly complex. Performance degradation and even failure inevitably occur during operation, causing large economic losses, ever higher operation and maintenance costs, and in the worst case catastrophic casualties and irreversible harm to the environment and society. Research on health-state monitoring and fault diagnosis methods for rotating machinery is therefore of great significance for ensuring safe and reliable operation, preventing failures of key equipment, and avoiding huge economic losses and catastrophic accidents.
Modern rotating machinery is required to run at ever higher speeds, heavier loads and higher degrees of automation, and the dynamic signals it produces are correspondingly more complex. Modern condition-monitoring technology can acquire multi-measuring-point, whole-life data from complex equipment and thus obtain massive data sets, but this makes processing the dynamic signals and extracting health-state information from them very difficult. Traditional fault diagnosis methods include extracting fault characteristic frequencies from vibration signals, the short-time Fourier transform, empirical mode decomposition, sparse representation, and the like. These methods are mature, but for present-day machinery condition signals, signal-processing-based methods cannot handle large volumes of data in which the fault data are sparse, interference is strong, and variable working conditions introduce great diversity.
In recent years, with the rapid development of artificial intelligence and machine learning, more and more machine-learning-based intelligent fault diagnosis methods for rotating machinery have been proposed. Machine-learning-based fault diagnosis generally comprises signal acquisition, feature extraction, and fault identification and prediction. It greatly simplifies the diagnosis process and improves diagnostic efficiency, but most such models are shallow networks with simple structures and few layers; their effectiveness depends on the quality of the features extracted during preprocessing, and their processing capability is limited when facing large volumes of structurally complex equipment condition signals.
More recently, many researchers have used the excellent adaptive feature learning and extraction capability of deep learning to overcome the difficulty shallow models have in representing the complex mapping between signals and health conditions, with good results. However, these methods rest on two assumptions: that the training data and the test data are identically distributed, and that the training data are sufficiently numerous. In actual engineering, the operating conditions of machinery vary and faults occur unexpectedly, so the available samples rarely satisfy these assumptions, which directly degrades the diagnosis result. With the rapid development of transfer learning, and by virtue of its ability to mine and transfer knowledge across domains and distributions, transfer-learning solutions have also appeared in mechanical fault diagnosis for problems with limited labelled samples (very few or none) or with variable working conditions. However, transfer learning only serves a single target task: once the source and target domains are given, the transfer is performed once, and because of the diversity of machinery faults and operating conditions the model generalizes poorly and lacks universality when a new task appears. Moreover, transfer learning does not accumulate knowledge, so it often performs poorly on the equipment-state recognition task under the working condition of the source-domain data, which is inconsistent with actual engineering requirements.
In practice, because working conditions are complex and variable, unexpected faults frequently occur, so the number of fault types grows and a deep diagnosis model or deep transfer diagnosis model trained on pre-collected, incomplete fault data becomes invalid; the model must then be retrained to recognize the new fault types. However, training the deep model directly on the new data causes the recognition accuracy on the old fault classes to drop off a cliff, a phenomenon known as catastrophic forgetting. Catastrophic forgetting has long been an important problem in deep learning, and likewise, in the field of fault diagnosis, the catastrophic forgetting of deep diagnosis models caused by unexpected faults needs to be studied and solved, so as to establish a lifelong fault diagnosis model with higher reliability, generalization and universality.
Disclosure of Invention
The invention aims to provide a lifelong learning-based method and system for incremental fault diagnosis of a gearbox, so as to solve the problem that existing fault diagnosis models based on deep learning and transfer learning cannot diagnose the unexpected faults that actually occur in a gearbox.
To solve this technical problem, the invention provides a lifelong learning-based gearbox incremental fault diagnosis method, comprising the following steps:
S101: acquiring vibration data of a gearbox to construct an incremental health-state data set, and dividing it into fault diagnosis tasks of different stages;
S102: learning the fault diagnosis task of the initial stage with an original ResNet-32 network, constructing the initial-stage diagnosis model, and selecting exemplars from the initial-stage fault diagnosis task data;
S103: initializing a ResNet-32 dual-branch aggregation network with the initial-stage diagnosis model, where the dual-branch aggregation network adopts a cosine-normalized classifier and the number of neurons in the classification layer is increased according to the number of newly added fault types;
S104: training the diagnosis model of the current stage with the selected exemplars and the fault diagnosis task data of this stage, and selecting exemplars from this stage's fault diagnosis task data after training is finished;
during training, aggregation weights are used to represent the transfer capability of the different residual block layers, a knowledge-distillation loss function is combined to reduce the difference between the new-stage and old-stage diagnosis models on the old-stage fault diagnosis task data, and a bi-level optimization scheme is used to optimize the aggregation weights and the model parameters;
S105: repeating steps S103-S104 in subsequent incremental stages to obtain the final fault diagnosis model for fault diagnosis.
As a further improvement of the present invention, the step S101 specifically includes the following steps:
acquiring gearbox vibration signals with an acceleration sensor to construct an incremental health-state data set D;
if there are N+1 fault diagnosis tasks in total, there are N+1 learning stages, namely the initial stage that learns fault diagnosis task 0 and N incremental stages, during which the number of diagnosis tasks gradually increases;
in the nth stage, the training data of task n are D_n = {(x_i^[n], y_i^[n])}_{i=1}^{P_n}, where P_n is the number of fault data samples of task n, x_i^[n] denotes the i-th sample and y_i^[n] its health-state label;
if J_n denotes the number of old fault classes C_{0:n-1} = {C_0, C_1, …, C_{n-1}} and K_n denotes the number of new fault classes C_n, then J_{n+1} = K_n + J_n.
as a further improvement of the present invention, the step S102 specifically includes the following steps:
using the data D_0 of task 0 to train the original ResNet-32 to learn the fault classes C_0 and obtain the initial-stage diagnosis model Θ_0, where the loss function of the initial-stage diagnosis model is the classification cross-entropy loss L_ce = −Σ_c δ_c log(p_c), with δ being the true label;
after training is finished, the feature extractor F_0 in front of the classification layer is used to select a certain number of exemplars ε_0 through a herding algorithm.
As a further improvement of the invention, using the feature extractor F_0 in front of the classification layer to select a certain number of exemplars through the herding algorithm comprises the following steps:
using {x_i^c}_{i=1}^{P_c} to denote the training samples of fault class c, the class mean of c is μ_c = (1/P_c) Σ_{i=1}^{P_c} F_0(x_i^c), where P_c is the number of training samples of class c;
if the number of exemplars to be selected is t, each exemplar is obtained by e_k = argmin_x ‖μ_c − (1/k)[F_0(x) + Σ_{j=1}^{k−1} F_0(e_j)]‖, giving ε = (e_1, e_2, …, e_t).
As a further improvement of the present invention, the step S103 specifically includes the following steps:
replacing the original ResNet-32 network with a ResNet-32 dual-branch aggregation network, where the dual-branch aggregation network comprises a dynamic branch and a steady-state branch;
the dynamic branch is conventional parameter-level fine-tuning, i.e. the dynamic branch of the incremental stage is initialized with the initial-stage diagnosis model and its parameters α are fine-tuned with the task training data of each stage;
the steady-state branch is neuron-level parameter fine-tuning after the initial-stage network parameters are frozen, i.e. each neuron is given a weight β that is fine-tuned with the task training data of each stage; if the k-th convolutional layer of the steady-state branch contains Q neurons, the neuron weights β_k = (β_k^1, …, β_k^Q) act on the frozen parameters W_k of the initial model, the input of the k-th convolutional layer is x_{k−1} and its output is x_k = (W_k ⊙ β_k) x_{k−1}, where ⊙ is the Hadamard product;
the cosine-normalized classifier of incremental stage n computes the predicted probability that an input x belongs to class c as p_c(x) = softmax_c(η⟨θ̄_n^c, h̄_n⟩), where θ_n is the fully connected classification-layer parameter of incremental stage n, h_n is the feature extracted at incremental stage n, the bar denotes l2 normalization, and η is a learnable scaling parameter that rescales the cosine similarity, which is confined to the range [−1, 1];
as fault classes increase, the number of classification-layer neurons is increased to match the number of fault classes.
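For illustration, the sketch below shows one way a classification layer can be widened when new fault classes arrive while keeping the weights already learned for the old classes. PyTorch is assumed and all identifiers are illustrative, not taken from the patent.

```python
# Hedged sketch (not the patent's code): widening a classification layer when new
# fault classes appear, while preserving the weights learned for the old classes.
import torch
import torch.nn as nn

def expand_classifier(old_fc: nn.Linear, num_new_classes: int) -> nn.Linear:
    """Return a new Linear layer with extra output neurons for the new fault classes."""
    old_out = old_fc.out_features
    new_fc = nn.Linear(old_fc.in_features, old_out + num_new_classes,
                       bias=old_fc.bias is not None)
    with torch.no_grad():
        new_fc.weight[:old_out] = old_fc.weight      # keep old-class weights
        if old_fc.bias is not None:
            new_fc.bias[:old_out] = old_fc.bias
    return new_fc

# usage: after 7 initial classes, one gear-bearing compound fault class is added
fc = nn.Linear(64, 7)
fc = expand_classifier(fc, num_new_classes=1)        # now 8 output neurons
```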
As a further improvement of the present invention, representing the transfer capability of the different residual block layers with aggregation weights comprises:
using the exemplars ε_0 retained from the initial stage together with the fault diagnosis task data of the current stage to train the dual-branch aggregation network, and giving the dynamic residual block and the steady-state residual block of each residual block layer adaptive aggregation weights ω and ξ respectively, according to their different transfer capabilities;
the fault training data x^[0] pass through the dual-branch aggregation network for feature extraction; at the m-th residual block layer the dynamic residual block and the steady-state residual block each extract features, which are combined according to the aggregation weights of that layer.
As a further improvement of the invention, the loss function of the initial stage is the classification cross-entropy loss L_ce;
the loss function of the incremental stage consists of the classification cross-entropy loss L_ce and the knowledge-distillation loss L_kd = −Σ_{c=1}^{J_n} τ_c log(τ̂_c), where τ_c and τ̂_c are the temperature-softened outputs of the old model and of the new model on the old fault classes, respectively, and the distillation temperature T is typically greater than 1.
As a further development of the invention, the loss function of the incremental stage combines the classification cross-entropy loss and the knowledge-distillation loss with a balancing coefficient λ, where 0 < λ ≤ 1;
the parameters to be optimized in the incremental stage are the model parameters Θ_n and the aggregation weights ω and ξ; since updating the aggregation weights requires the model parameters Θ_n to be fixed, a bi-level optimization scheme is adopted;
a balanced data set is built by randomly sampling from the exemplars and the stage data D_n, and the aggregation weights are updated on this balanced data, where γ_2 is the learning rate of the upper-level problem.
As a further improvement of the invention, after each incremental training is finished, the performance of the model on the new and old tasks is tested with the test data of all learned tasks to verify that the model learns without forgetting, comprising:
the model Θ_n obtained by training in incremental stage n must complete the diagnosis of all learned fault classes C_{0:n}, and the test data comprise all learned fault classes, so as to verify that the model has the ability to learn without forgetting.
The lifelong learning-based gearbox incremental fault diagnosis system diagnoses gearbox faults with the lifelong learning-based gearbox incremental fault diagnosis method described above.
The invention has the following beneficial effects: in the gearbox fault diagnosis method, an acceleration sensor first acquires gearbox vibration signals to construct an incremental health-state data set, which is divided into diagnosis tasks of different stages to simulate the growth of diagnosis tasks caused by new fault types arising from unexpected faults in real scenarios;
in the initial stage, the original ResNet-32 learns the initial gearbox-bearing fault diagnosis task, simulating a fault diagnosis model trained on incomplete, pre-collected fault data in a real scenario; after training, a certain number of exemplars are selected from the initial task data through the herding algorithm and stored. In the subsequent incremental stages, the original ResNet-32 is replaced by the improved ResNet-32-based dual-branch aggregation network as the incremental-stage feature extractor, which balances the plasticity (knowledge transfer) and stability (knowledge accumulation) of the model; the fully connected classifier is changed to a cosine-normalized classifier to avoid classification bias, and the number of classification-layer neurons is increased according to the number of newly added fault types;
the model of the first incremental stage is trained jointly with the stored exemplars of the initial stage and the diagnosis task data of the current stage, which reawakens the model's memory of old knowledge and overcomes the catastrophic forgetting of the deep learning model; the loss function of the incremental stage comprises a classification cross-entropy loss and a knowledge-distillation loss, and the knowledge-distillation loss reduces the difference between the new-stage and old-stage models on the old task data, further preventing catastrophic forgetting;
the aggregation weights represent the transfer capability of the different residual block layers and balance the transfer capabilities of the steady-state branch and the dynamic branch, thereby balancing the plasticity and stability of the model; because the aggregation weights and the model parameters constrain each other during optimization, a bi-level optimization scheme is adopted to update them; after the diagnosis task of each incremental stage is trained, a certain number of exemplars of that stage's data are again selected and stored for the training of the next incremental stage;
overall, the invention constructs a lifelong learning-based gearbox incremental fault diagnosis method that adopts a dual-branch aggregation network combined with knowledge distillation and exemplar replay, solves the catastrophic forgetting problem of deep learning diagnosis models, and is applicable to continual gearbox fault diagnosis as new unexpected faults appear.
Drawings
FIG. 1 is a flow chart of a particular embodiment of the method of the present invention;
FIG. 2 shows the test stand used to generate the gearbox data of the present invention;
FIG. 3 is a gearbox fault location map of the present invention;
FIG. 4 is a diagram of a dual-branch aggregation network architecture in the model of the present invention;
FIG. 5 compares the diagnostic results of two fine-tuning methods of a deep model without lifelong learning with those of the method of the present invention.
Detailed Description
The present invention is further described below in conjunction with the drawings and the embodiments so that those skilled in the art can better understand the present invention and can carry out the present invention, but the embodiments are not to be construed as limiting the present invention.
Referring to FIG. 1, the invention provides a lifelong learning-based gearbox incremental fault diagnosis method. S101: acquiring vibration data of a gearbox to construct an incremental health-state data set, and dividing it into fault diagnosis tasks of different stages;
S102: learning the fault diagnosis task of the initial stage with an original ResNet-32 network, constructing the initial-stage diagnosis model, and selecting exemplars from the initial-stage fault diagnosis task data;
S103: initializing a ResNet-32 dual-branch aggregation network with the initial-stage diagnosis model, where the dual-branch aggregation network adopts a cosine-normalized classifier and the number of neurons in the classification layer is increased according to the number of newly added fault types;
S104: training the diagnosis model of the current stage with the selected exemplars and the fault diagnosis task data of this stage, and selecting exemplars from this stage's fault diagnosis task data after training is finished;
during training, aggregation weights are used to represent the transfer capability of the different residual block layers, a knowledge-distillation loss function is combined to reduce the difference between the new-stage and old-stage diagnosis models on the old-stage fault diagnosis task data, and a bi-level optimization scheme is used to optimize the aggregation weights and the model parameters;
S105: repeating steps S103-S104 in subsequent incremental stages to obtain the final fault diagnosis model for fault diagnosis.
The invention adopts a lifelong learning method to construct a diagnosis model capable of continual knowledge transfer and accumulation, so as to handle fault diagnosis tasks whose fault types increase under complex working conditions.
Further, the performance of the model on the new task and the old tasks is tested with the test data of all learned tasks, verifying the model's ability to learn without forgetting.
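As a minimal sketch of this evaluation protocol (PyTorch and user-supplied test loaders are assumed; neither is specified in the patent), the accuracy after stage n can be computed over the union of the test sets of all tasks learned so far:

```python
# Hedged sketch: after each incremental stage, the model is evaluated on the test
# data of ALL classes learned so far, not only the newest ones. `model` and the
# per-task DataLoaders are assumed to be provided by the caller.
import torch

@torch.no_grad()
def evaluate_all_learned_classes(model, test_loaders_by_stage, device="cpu"):
    """test_loaders_by_stage: list of DataLoaders, one per learned task (0..n)."""
    model.eval()
    correct, total = 0, 0
    for loader in test_loaders_by_stage:          # union of old and new tasks
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            pred = model(x).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / max(total, 1)                # overall accuracy on C_{0:n}
```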
Examples
This example describes the above method with reference to the specific collected experimental data.
The test stand shown in FIG. 2 was used to collect the required experimental data and construct the incremental health-state data set. To obtain the unexpected gearbox faults with compound bearing and gear failures shown in FIG. 3, 0.4 mm cracks were machined on the inner ring, the outer ring and the roller of the bearing by wire cutting to simulate local bearing faults, and half a tooth was removed from the driving gear by electric discharge machining to simulate a local gear fault.
In the experiment the motor speed was 1496 r/min and the sampling frequency was set to 25.6 kHz. The gearbox incremental data set was constructed with 11 different health states consisting of combinations of gear and bearing conditions, as listed in Table 1. The gear has two health states (normal and faulty); the bearing has four basic health states (normal, inner-ring fault, roller fault and outer-ring fault) plus three mixed-fault states obtained by combining the basic faults in pairs.
Diagnosis tasks of different stages are therefore divided according to the actual scenario: an acceleration sensor acquires the gearbox vibration signals to construct the incremental health-state data set D. Assuming there are N+1 gearbox fault diagnosis tasks in total, there are N+1 learning stages, i.e. the stage that learns diagnosis task 0 and N incremental stages, during which the number of diagnosis tasks gradually increases. In the nth stage, the training data of task n are D_n = {(x_i^[n], y_i^[n])}_{i=1}^{P_n}, where P_n is the number of fault data samples of task n, x_i^[n] denotes the i-th sample and y_i^[n] its health-state label. J_n denotes the number of old fault classes C_{0:n-1} = {C_0, C_1, …, C_{n-1}} and K_n denotes the number of new fault classes C_n, so J_{n+1} = K_n + J_n.
as listed in Table 1, in the actual scenario, the gear box health data pre-obtained through experimentation will be used as a training sample for task 0 to train the initial stage model. These health states are generally common, and therefore are more diverse and easy to learn, so seven gearbox health states where gears normally have only bearings failing are considered as failure types for task 0 learning; in order to simulate the increment of fault types caused by unexpected faults occurring in a real scene, each learned task comprises a gear-bearing mixed fault type in each increment stage. There are 200 training samples and 100 test samples per fault type. Table 1 state of health and incremental mission settings of the gearbox:
Therefore, the step S102 specifically comprises the following steps:
S102.1: the data D_0 of task 0 are used to train the original ResNet-32 to learn the fault classes C_0 and obtain the initial model Θ_0; the detailed structure of ResNet-32 is shown in Table 2. The loss function of the model is the classification cross-entropy loss L_ce = −Σ_c δ_c log(p_c), where δ is the true label. The model parameters Θ_0 of the initial stage are obtained by conventional training.
Table 2 structural parameters of the backbone network ResNet-32:
S102.2: after training is finished, the feature extractor F_0 in front of the classification layer is used to select a certain number of exemplars ε_0 through the herding algorithm. Using {x_i^c}_{i=1}^{P_c} to denote the training samples of fault class c, the class mean of c is μ_c = (1/P_c) Σ_{i=1}^{P_c} F_0(x_i^c), where P_c is the number of training samples of class c.
There are two strategies for choosing the number of exemplars: either the number of exemplars selected for each fault type is fixed at 5, or the total memory budget is fixed at 55. If t exemplars of class c are to be selected, each exemplar is obtained by e_k = argmin_x ‖μ_c − (1/k)[F_0(x) + Σ_{j=1}^{k−1} F_0(e_j)]‖, giving ε = (e_1, e_2, …, e_t).
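The following sketch illustrates this herding-style selection for one fault class, assuming the features have already been extracted by F_0; NumPy is used and all names are illustrative.

```python
# Hedged sketch of herding-style exemplar selection as described above: for each fault
# class, samples are greedily chosen so that the running mean of their features stays
# close to the class mean.
import numpy as np

def select_exemplars(features: np.ndarray, t: int) -> list:
    """features: (P_c, d) array of F_0 outputs for one class; returns t sample indices."""
    class_mean = features.mean(axis=0)
    chosen, running_sum = [], np.zeros_like(class_mean)
    for k in range(1, t + 1):
        # distance of each candidate running mean to the class mean
        gaps = np.linalg.norm(class_mean - (running_sum + features) / k, axis=1)
        gaps[chosen] = np.inf                     # do not pick the same sample twice
        idx = int(np.argmin(gaps))
        chosen.append(idx)
        running_sum += features[idx]
    return chosen

# usage with the fixed-number strategy mentioned in the text (5 exemplars per class)
feats = np.random.randn(200, 64)                  # 200 training samples, 64-d features
exemplar_ids = select_exemplars(feats, t=5)
```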
The step S103 specifically includes the following steps:
S103.1: the original ResNet-32 is replaced by the dual-branch aggregation network, whose structure is shown in FIG. 4. The dual-branch aggregation network comprises a dynamic branch and a steady-state branch.
The dynamic branch is conventional parameter-level fine-tuning, i.e. the dynamic branch of the incremental stage is initialized with the initial model and its parameters α are fine-tuned with the task training data of each stage;
The steady-state branch is neuron-level parameter fine-tuning after the initial-stage network parameters are frozen, i.e. each neuron is given a weight β that is fine-tuned with the task training data of each stage. Supposing the k-th convolutional layer of the steady-state branch contains Q neurons, the neuron weights β_k = (β_k^1, …, β_k^Q) act on the frozen parameters W_k of the initial model; the input of the k-th convolutional layer is x_{k−1} and its output is x_k = (W_k ⊙ β_k) x_{k−1}, where ⊙ is the Hadamard product. The number of learnable parameters β of the steady-state block is far smaller than that of α, which lets the steady-state residual block adapt slowly to the new task while largely preserving the old knowledge.
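A simplified sketch of this neuron-level rescaling is shown below, assuming PyTorch and reducing the steady-state block to a single convolution; bias terms are omitted for brevity and all names are illustrative.

```python
# Hedged sketch of the steady-state branch idea: the convolution weights W_k frozen
# from the initial model are only rescaled by a per-neuron (per output channel)
# learnable weight beta, i.e. x_k = (W_k ⊙ beta_k) x_{k-1}.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SteadyStateConv(nn.Module):
    def __init__(self, frozen_conv: nn.Conv2d):
        super().__init__()
        self.weight = nn.Parameter(frozen_conv.weight.detach().clone(),
                                   requires_grad=False)        # frozen W_k
        self.stride, self.padding = frozen_conv.stride, frozen_conv.padding
        # one scaling factor per output neuron (output channel), initialised to 1
        self.beta = nn.Parameter(torch.ones(frozen_conv.out_channels, 1, 1, 1))

    def forward(self, x):
        scaled = self.weight * self.beta          # Hadamard-style per-neuron rescaling
        return F.conv2d(x, scaled, stride=self.stride, padding=self.padding)

layer = SteadyStateConv(nn.Conv2d(16, 32, kernel_size=3, padding=1))
out = layer(torch.randn(4, 16, 32, 32))           # only beta (32 values) is trainable
```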
S103.2: the classifier for the initial model is a conventional fully connected classification layer byCalculating the prediction probability that the input x is class c, where θ0For the initial stage full connectivity of the classification layer parameters, h0Features extracted for the initial stage;
the cosine normalized classifier of the incremental stage n is obtained byCalculating the prediction probability that the input x is class c, where θnFully connected Classification layer parameter, h, for incremental stage nnFor the features extracted at the incremental stage n,is represented by2The norm of the number of the first-order-of-arrival,eta is a learnable scaling parameter, and the cosine similarity value is controlled at < -1,1 > through eta]Within the range. The problem of the classification bias of the new and old classes can be avoided by the cosine standardized classifier.
For the failure class increase, the number of classification layer neurons should be increased to coincide with the number of failure classes.
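A minimal sketch of such a cosine-normalized classifier is given below; PyTorch is assumed, and the weight initialization and starting value of η are illustrative choices.

```python
# Hedged sketch of the cosine-normalised classifier: both the feature vector and the
# per-class weight vectors are l2-normalised, so the logits are cosine similarities in
# [-1, 1], and a learnable scale eta stretches them before the softmax.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim) * 0.01)
        self.eta = nn.Parameter(torch.tensor(10.0))   # learnable scaling parameter

    def forward(self, h):
        h_norm = F.normalize(h, dim=1)                # l2-normalised features
        w_norm = F.normalize(self.weight, dim=1)      # l2-normalised class weights
        return self.eta * h_norm @ w_norm.t()         # scaled cosine logits

clf = CosineClassifier(feat_dim=64, num_classes=8)    # 7 old classes + 1 new class
probs = torch.softmax(clf(torch.randn(4, 64)), dim=1)
```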
The step S104 specifically includes the following steps:
S104.1: the exemplars ε_0 retained from the initial stage and the fault diagnosis task data of the current stage are used together to train the dual-branch aggregation network, and the dynamic residual block and the steady-state residual block of each residual block layer are given adaptive aggregation weights ω and ξ respectively, according to their different transfer capabilities, as shown in FIG. 4.
The fault training data x^[0] pass through the dual-branch aggregation network for feature extraction; at the m-th residual block layer the dynamic residual block and the steady-state residual block each extract features, which are aggregated with the weights of that layer.
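The sketch below shows one plausible form of this layer-wise aggregation; the exact combination rule is carried by FIG. 4 and equations not reproduced here, so a softmax-normalized weighted sum of the two branch outputs is assumed.

```python
# Hedged sketch of one residual-block layer of the dual-branch aggregation network:
# the same input goes through a dynamic block (fully trainable) and a steady-state
# block (neuron-level rescaling of frozen weights), and their outputs are combined
# with learnable aggregation weights (a softmax-weighted sum is assumed here).
import torch
import torch.nn as nn

class DualBranchLayer(nn.Module):
    def __init__(self, dynamic_block: nn.Module, steady_block: nn.Module):
        super().__init__()
        self.dynamic, self.steady = dynamic_block, steady_block
        self.agg = nn.Parameter(torch.zeros(2))       # raw weights for (omega, xi)

    def forward(self, x):
        w = torch.softmax(self.agg, dim=0)            # omega + xi = 1
        return w[0] * self.dynamic(x) + w[1] * self.steady(x)

layer = DualBranchLayer(nn.Linear(64, 64), nn.Linear(64, 64))  # stand-in blocks
out = layer(torch.randn(4, 64))
```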
S104.2: the loss function of the increment stage is classified cross entropy lossAnd knowledge of distillation losses wherein , andtemperature T is typically greater than 1 for soft tags with old models in the old failure class and hard tags with new models in the old failure class, respectively. Narrowing the new model in the old fault class C by knowledge distillation loss0:n-1The similarity distribution of the old class in the new model is approximately constrained to the similarity distribution of the old class in the old model. The loss function of the incremental phase isWherein lambda is more than 0 and less than or equal to 1.
S104.2: the unoptimized parameters of the incremental phase have model parameters ΘnAnd aggregation weights ω and ξ for which an update requires a fixed model parameter ΘnAdopting a double-layer optimization scheme;
the double-layer optimization scheme is divided intoUpper layer problemAnd lower layer problemBy passingUpdating model parameters Θn, wherein γ1Is the lower layer problem learning rate;
the update of the aggregation weights for the upper layer problem is to balance the dynamic and steady-state residual blocks using a randomly sampled data set DnTo obtainEstablishing balance dataBy passingUpdating the aggregation weights, wherein γ2Is the learning rate of the upper layer problem.
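The following sketch illustrates one alternating realization of this bi-level scheme, assuming PyTorch, plain SGD for both levels, and that the aggregation weights are kept in a separate parameter list; the actual update rules in the patent are given by equations not reproduced here.

```python
# Hedged sketch of the bi-level optimisation: model parameters Theta_n are updated on
# the training batch (lower-level problem, learning rate gamma_1); then, with the model
# parameters fixed, the aggregation weights are updated on a class-balanced batch drawn
# from exemplars and new data (upper-level problem, learning rate gamma_2).
import torch

def bilevel_step(model, agg_weights, criterion, train_batch, balanced_batch,
                 gamma1=0.01, gamma2=0.001):
    agg_ids = {id(p) for p in agg_weights}
    model_params = [p for p in model.parameters()
                    if p.requires_grad and id(p) not in agg_ids]
    model_opt = torch.optim.SGD(model_params, lr=gamma1)   # lower-level optimiser
    agg_opt = torch.optim.SGD(agg_weights, lr=gamma2)      # upper-level optimiser

    x, y = train_batch                         # lower level: update Theta_n
    model_opt.zero_grad()
    criterion(model(x), y).backward()
    model_opt.step()

    xb, yb = balanced_batch                    # upper level: update omega and xi only
    agg_opt.zero_grad()
    criterion(model(xb), yb).backward()
    agg_opt.step()
```

In a full training loop the two optimisers would normally be created once outside this function; they are built inside it here only to keep the sketch self-contained.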
The step S105 specifically includes the following steps:
The model Θ_n obtained by training in incremental stage n must be able to diagnose all learned fault classes C_{0:n}, and the test data comprise all learned fault classes, so as to verify that the model has the ability to learn without forgetting. After the 4 incremental tasks have been learned, the confusion matrices of the two fine-tuning baselines and of the method of the present invention under the two exemplar-number strategies are shown in FIG. 5. The confusion matrices of the two fine-tuning baselines reflect the catastrophic forgetting of a deep-learning diagnosis model without lifelong learning, whereas the method of the invention effectively overcomes catastrophic forgetting and realizes continual gearbox fault diagnosis as new unexpected faults appear.
In conclusion, the invention designs a method for incremental gearbox fault diagnosis based on lifelong learning. Compared with traditional deep learning methods, it solves the catastrophic forgetting problem and is better suited to actual industrial application scenarios.
The invention also provides a lifelong learning-based gearbox incremental fault diagnosis system, which uses the above lifelong learning-based gearbox incremental fault diagnosis method to diagnose gearbox faults.
The principles are similar to the above-described method and are not repeated here, but it is noted that the present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above-mentioned embodiments are merely preferred embodiments for fully illustrating the present invention, and the scope of the present invention is not limited thereto. The equivalent substitution or change made by the technical personnel in the technical field on the basis of the invention is all within the protection scope of the invention. The protection scope of the invention is subject to the claims.
Claims (10)
1. A gearbox incremental fault diagnosis method based on lifelong learning, characterized in that the method comprises the following steps:
S101: acquiring vibration data of a gearbox to construct an incremental health-state data set, and dividing it into fault diagnosis tasks of different stages;
S102: learning the fault diagnosis task of the initial stage with an original ResNet-32 network, constructing the initial-stage diagnosis model, and selecting exemplars from the initial-stage fault diagnosis task data;
S103: initializing a ResNet-32 dual-branch aggregation network with the initial-stage diagnosis model, where the dual-branch aggregation network adopts a cosine-normalized classifier and the number of neurons in the classification layer is increased according to the number of newly added fault types;
S104: training the diagnosis model of the current stage with the selected exemplars and the fault diagnosis task data of this stage, and selecting exemplars from this stage's fault diagnosis task data after training is finished;
during training, aggregation weights are used to represent the transfer capability of the different residual block layers, a knowledge-distillation loss function is combined to reduce the difference between the new-stage and old-stage diagnosis models on the old-stage fault diagnosis task data, and a bi-level optimization scheme is used to optimize the aggregation weights and the model parameters;
S105: repeating steps S103-S104 in subsequent incremental stages to obtain the final fault diagnosis model for fault diagnosis.
2. The lifelong learning-based gearbox incremental fault diagnosis method of claim 1, wherein the step S101 specifically comprises the following steps:
acquiring gearbox vibration signals with an acceleration sensor to construct an incremental health-state data set D;
if there are N+1 fault diagnosis tasks in total, there are N+1 learning stages, namely the initial stage that learns fault diagnosis task 0 and N incremental stages, during which the number of diagnosis tasks gradually increases;
in the nth stage, the training data of task n are D_n = {(x_i^[n], y_i^[n])}_{i=1}^{P_n}, where P_n is the number of fault data samples of task n.
3. The lifelong learning-based gearbox incremental fault diagnosis method of claim 2, wherein the step S102 specifically comprises the following steps:
using the data D_0 of task 0 to train the original ResNet-32 to learn the fault classes C_0 and obtain the initial-stage diagnosis model Θ_0, where the loss function of the initial-stage diagnosis model is the classification cross-entropy loss L_ce = −Σ_c δ_c log(p_c), with δ being the true label;
after training is finished, the feature extractor F_0 in front of the classification layer is used to select a certain number of exemplars ε_0 through a herding algorithm.
4. The lifelong learning-based gearbox incremental fault diagnosis method of claim 3, wherein using the feature extractor F_0 in front of the classification layer to select a certain number of exemplars through the herding algorithm comprises the following steps:
using {x_i^c}_{i=1}^{P_c} to denote the training samples of fault class c, the class mean of c is μ_c = (1/P_c) Σ_{i=1}^{P_c} F_0(x_i^c), where P_c is the number of training samples of class c.
5. The lifelong learning-based gearbox incremental fault diagnosis method of claim 1, wherein the step S103 specifically comprises the following steps:
replacing the original ResNet-32 network with a ResNet-32 dual-branch aggregation network, where the dual-branch aggregation network comprises a dynamic branch and a steady-state branch;
the dynamic branch is conventional parameter-level fine-tuning, i.e. the dynamic branch of the incremental stage is initialized with the initial-stage diagnosis model and its parameters α are fine-tuned with the task training data of each stage;
the steady-state branch is neuron-level parameter fine-tuning after the initial-stage network parameters are frozen, i.e. each neuron is given a weight β that is fine-tuned with the task training data of each stage; if the k-th convolutional layer of the steady-state branch contains Q neurons, the neuron weights β_k = (β_k^1, …, β_k^Q) act on the frozen parameters W_k of the initial model, the input of the k-th convolutional layer is x_{k−1} and its output is x_k = (W_k ⊙ β_k) x_{k−1}, where ⊙ is the Hadamard product;
the cosine-normalized classifier of incremental stage n computes the predicted probability that an input x belongs to class c as p_c(x) = softmax_c(η⟨θ̄_n^c, h̄_n⟩), where θ_n is the fully connected classification-layer parameter of incremental stage n, h_n is the feature extracted at incremental stage n, the bar denotes l2 normalization, and η is a learnable scaling parameter that rescales the cosine similarity, which is confined to the range [−1, 1];
as fault classes increase, the number of classification-layer neurons is increased to match the number of fault classes.
6. The lifelong learning-based gearbox incremental fault diagnosis method of claim 1, wherein representing the transfer capability of the different residual block layers with aggregation weights comprises:
using the exemplars ε_0 retained from the initial stage together with the fault diagnosis task data of the current stage to train the dual-branch aggregation network, and giving the dynamic residual block and the steady-state residual block of each residual block layer adaptive aggregation weights ω and ξ respectively, according to their different transfer capabilities;
the fault training data x^[0] pass through the dual-branch aggregation network for feature extraction; at the m-th residual block layer the dynamic residual block and the steady-state residual block each extract features, which are combined according to the aggregation weights of that layer.
7. The lifelong learning-based gearbox incremental fault diagnosis method of claim 1, wherein the loss function of the initial stage is the classification cross-entropy loss L_ce.
8. The lifelong learning-based gearbox incremental fault diagnosis method of claim 7, wherein the loss function of the incremental stage combines the classification cross-entropy loss and the knowledge-distillation loss with a balancing coefficient λ, where 0 < λ ≤ 1;
the parameters to be optimized in the incremental stage are the model parameters Θ_n and the aggregation weights ω and ξ; since updating the aggregation weights requires the model parameters Θ_n to be fixed, a bi-level optimization scheme is adopted.
9. The lifelong learning-based gearbox incremental fault diagnosis method of any one of claims 1-8, wherein after each incremental training is finished, the performance of the model on the new and old tasks is tested with the test data of all learned tasks to verify that the model learns without forgetting, comprising:
the model Θ_n obtained by training in incremental stage n must complete the diagnosis of all learned fault classes C_{0:n}, and the test data comprise all learned fault classes, so as to verify that the model has the ability to learn without forgetting.
10. A gearbox incremental fault diagnosis system based on lifelong learning, characterized in that it diagnoses gearbox faults using the lifelong learning-based gearbox incremental fault diagnosis method of any one of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111677774.4A CN114429153B (en) | 2021-12-31 | 2021-12-31 | Gear box increment fault diagnosis method and system based on life learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111677774.4A CN114429153B (en) | 2021-12-31 | 2021-12-31 | Gear box increment fault diagnosis method and system based on life learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114429153A true CN114429153A (en) | 2022-05-03 |
CN114429153B CN114429153B (en) | 2023-04-28 |
Family
ID=81311970
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111677774.4A Active CN114429153B (en) | 2021-12-31 | 2021-12-31 | Gear box increment fault diagnosis method and system based on life learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114429153B (en) |
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5566092A (en) * | 1993-12-30 | 1996-10-15 | Caterpillar Inc. | Machine fault diagnostics system and method |
US20190339688A1 (en) * | 2016-05-09 | 2019-11-07 | Strong Force Iot Portfolio 2016, Llc | Methods and systems for data collection, learning, and streaming of machine signals for analytics and maintenance using the industrial internet of things |
CN108376264A (en) * | 2018-02-26 | 2018-08-07 | 上海理工大学 | A kind of handpiece Water Chilling Units method for diagnosing faults based on support vector machines incremental learning |
CN109492765A (en) * | 2018-11-01 | 2019-03-19 | 浙江工业大学 | A kind of image Increment Learning Algorithm based on migration models |
CN110162018A (en) * | 2019-05-31 | 2019-08-23 | 天津开发区精诺瀚海数据科技有限公司 | The increment type equipment fault diagnosis method that knowledge based distillation is shared with hidden layer |
CN111651937A (en) * | 2020-06-03 | 2020-09-11 | 苏州大学 | Method for diagnosing similar self-adaptive bearing fault under variable working conditions |
CN112381788A (en) * | 2020-11-13 | 2021-02-19 | 北京工商大学 | Part surface defect increment detection method based on double-branch matching network |
CN112990280A (en) * | 2021-03-01 | 2021-06-18 | 华南理工大学 | Class increment classification method, system, device and medium for image big data |
CN113281048A (en) * | 2021-06-25 | 2021-08-20 | 华中科技大学 | Rolling bearing fault diagnosis method and system based on relational knowledge distillation |
Non-Patent Citations (4)
Title |
---|
- MATTHIAS DE LANGE et al.: "A Continual Learning Survey: Defying Forgetting in Classification Tasks", arXiv *
- SIYU SHAO et al.: "Highly Accurate Machine Fault Diagnosis Using Deep Transfer Learning", IEEE Transactions on Industrial Informatics *
- YANG Ruishuang et al.: "Rolling bearing fault diagnosis based on an improved convolutional neural network and LightGBM", Bearing *
- HAN Jiuqi et al.: "EEG signal classification based on neural network transfer learning and incremental learning", Proceedings of the 4th National Conference on Neurodynamics *
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115270956A (en) * | 2022-07-25 | 2022-11-01 | 苏州大学 | Cross-equipment incremental bearing fault diagnosis method based on continuous learning |
CN115270956B (en) * | 2022-07-25 | 2023-10-27 | 苏州大学 | Continuous learning-based cross-equipment incremental bearing fault diagnosis method |
WO2024021246A1 (en) * | 2022-07-25 | 2024-02-01 | 苏州大学 | Cross-device incremental bearing fault diagnosis method based on continuous learning |
WO2024060381A1 (en) * | 2022-09-20 | 2024-03-28 | 同济大学 | Incremental device fault diagnosis method |
CN116029367A (en) * | 2022-12-26 | 2023-04-28 | 东北林业大学 | Fault diagnosis model optimization method based on personalized federal learning |
CN116089883A (en) * | 2023-01-30 | 2023-05-09 | 北京邮电大学 | Training method for improving classification degree of new and old categories in existing category increment learning |
CN116089883B (en) * | 2023-01-30 | 2023-12-19 | 北京邮电大学 | Training method for improving classification degree of new and old categories in existing category increment learning |
CN116108346A (en) * | 2023-02-17 | 2023-05-12 | 苏州大学 | Bearing increment fault diagnosis life learning method based on generated feature replay |
CN117313000A (en) * | 2023-09-19 | 2023-12-29 | 北京交通大学 | Motor brain learning fault diagnosis method based on sample characterization topology |
CN117313000B (en) * | 2023-09-19 | 2024-03-15 | 北京交通大学 | Motor brain learning fault diagnosis method based on sample characterization topology |
CN117150377B (en) * | 2023-11-01 | 2024-02-02 | 北京交通大学 | Motor fault diagnosis stepped learning method based on full-automatic motor offset |
CN117150377A (en) * | 2023-11-01 | 2023-12-01 | 北京交通大学 | Motor fault diagnosis stepped learning method based on full-automatic motor offset |
CN117313251A (en) * | 2023-11-30 | 2023-12-29 | 北京交通大学 | Train transmission device global fault diagnosis method based on non-hysteresis progressive learning |
CN117313251B (en) * | 2023-11-30 | 2024-03-15 | 北京交通大学 | Train transmission device global fault diagnosis method based on non-hysteresis progressive learning |
CN117591888A (en) * | 2024-01-17 | 2024-02-23 | 北京交通大学 | Cluster autonomous learning fault diagnosis method for key parts of train |
CN117591888B (en) * | 2024-01-17 | 2024-04-12 | 北京交通大学 | Cluster autonomous learning fault diagnosis method for key parts of train |
Also Published As
Publication number | Publication date |
---|---|
CN114429153B (en) | 2023-04-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114429153B (en) | Gear box increment fault diagnosis method and system based on life learning | |
CN115270956B (en) | Continuous learning-based cross-equipment incremental bearing fault diagnosis method | |
CN110162018B (en) | Incremental equipment fault diagnosis method based on knowledge distillation and hidden layer sharing | |
CN112161784A (en) | Mechanical fault diagnosis method based on multi-sensor information fusion migration network | |
CN115758212B (en) | Mechanical equipment fault diagnosis method based on parallel network and transfer learning | |
Ma et al. | Bearing degradation assessment based on weibull distribution and deep belief network | |
Nasser et al. | A hybrid of convolutional neural network and long short-term memory network approach to predictive maintenance | |
CN109472097B (en) | Fault diagnosis method for online monitoring equipment of power transmission line | |
CN111812507A (en) | Motor fault diagnosis method based on graph convolution | |
CN112132102B (en) | Intelligent fault diagnosis method combining deep neural network with artificial bee colony optimization | |
Dong et al. | Design and application of unsupervised convolutional neural networks integrated with deep belief networks for mechanical fault diagnosis | |
CN116007937B (en) | Intelligent fault diagnosis method and device for mechanical equipment transmission part | |
Chen et al. | A novel Bayesian-optimization-based adversarial TCN for RUL prediction of bearings | |
CN116108346A (en) | Bearing increment fault diagnosis life learning method based on generated feature replay | |
Wang et al. | Fault diagnosis of industrial robots based on multi-sensor information fusion and 1D convolutional neural network | |
Zhou et al. | Differentiable architecture search for aeroengine bevel gear fault diagnosis | |
Xiang et al. | Fault diagnosis of gearbox based on refined topology and spatio-temporal graph convolutional networks | |
CN113255977A (en) | Intelligent factory production equipment fault prediction method and system based on industrial internet | |
Zou et al. | Overview of Bearing Fault Diagnosis Based on Deep Learning | |
Long et al. | Research on Testability Fault Diagnosis Based on Deep Learning | |
CN116680554B (en) | Rotary machine life prediction method based on probabilistic element learning model | |
Ma et al. | Prediction of rolling bearing performance degradation degree based on SELSTM | |
Lu et al. | Remaining Useful Life Prediction and Health Status Estimation Based on Joint-Loss Convolution Neural Networks | |
Kang et al. | Intelligent Diagnosis of Planetary Gearboxes Based on DAE-CNN | |
Li et al. | Intelligent fault diagnosis of rolling bearings based on MDF and Swin Transformer |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |